# ZAP Scan on K8s Redesign Notes
## Goals
- Users should be able to start as many ZAP scans as practically possible by achieving horizontal scaling.
- A better way of indicating a successful proxy connection (move away from the file-based approach).
- The components/services in ZAP should have more intuitive names.
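The second goal (replacing the file-based success marker) could be done with a direct probe of the proxy's listening socket. A minimal sketch, assuming a TCP readiness check; the host/port values are placeholders, not names from the actual deployment:

```python
import socket

def proxy_connected(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the proxy endpoint succeeds.

    Hypothetical replacement for the file-based indicator: instead of
    waiting for a sentinel file to appear, probe the proxy socket directly.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A probe like this could also back a Kubernetes readiness probe on the proxy container, so the scan containers simply wait on pod readiness instead of a shared file.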
## Non-Goals
- Better state management for web scans via the PG database.
## Challenges
- The webappsvc and the ZAP Pod are required to share storage, which limits ZAP scan scaling to the vertical capacity of a single node.
### Splitting Proposal
1. web-scan-manager [Manages the life cycle of the web scans]
- Single container
- Manages the ZAP state via the PGDB
- Manages the proxy connection
- Manages the lifecycle of ZAP scans
2. web-scan [Does the actual web scans][On-demand Job]
- zap-scan container -- Provided by ZAP itself [Not part of DF]
  - Create a container image.
  - Set up C2 communication (transition away from the init-container approach of holding the container start until the proxy connects).
- auth-proxy container
- web-scanner container -- [Part of DF scan][Pull the webappsvc functionalities into this]
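Because each web-scan runs as its own on-demand Job, scans no longer contend for shared node storage and can scale horizontally. A minimal sketch of the per-scan Job manifest as a plain dict; the image references are placeholders, not the actual registry paths:

```python
def web_scan_job(scan_id: str) -> dict:
    """Build an on-demand Kubernetes Job manifest for one web scan.

    Container names follow the splitting proposal; images are placeholders.
    """
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"web-scan-{scan_id}"},
        "spec": {
            "backoffLimit": 0,
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [
                        # Upstream ZAP image -- not part of DF.
                        {"name": "zap-scan", "image": "zaproxy/zap-stable"},
                        {"name": "auth-proxy", "image": "example/auth-proxy"},
                        # Functionality pulled out of webappsvc.
                        {"name": "web-scanner", "image": "example/web-scanner"},
                    ],
                }
            },
        },
    }
```

One Job per scan means the only shared dependency between scans is the PGDB, so the practical scan limit becomes cluster capacity rather than a single node's vertical capacity.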
### Responsibilities
- web-scan-manager
- Only manages the life cycle of ZAP scans.
  - REST APIs to CREATE and STOP ZAP scans
- web-scanner container
  - Driving and monitoring the scan (e.g. cancel, start, stop, and partial reports)
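The manager's CREATE/STOP responsibility can be sketched as a small lifecycle class. This is a sketch under stated assumptions: an in-memory dict stands in for the PGDB, and the endpoint paths in the comments are hypothetical, not part of this proposal:

```python
import uuid

class WebScanManager:
    """Minimal sketch of the web-scan-manager lifecycle API."""

    def __init__(self) -> None:
        self._scans: dict = {}  # scan_id -> state; stand-in for the PGDB

    def create_scan(self) -> str:
        # e.g. POST /scans -- would also launch the on-demand web-scan Job.
        scan_id = uuid.uuid4().hex[:8]
        self._scans[scan_id] = "RUNNING"
        return scan_id

    def stop_scan(self, scan_id: str) -> bool:
        # e.g. DELETE /scans/{id} -- would also tear down the Job.
        if self._scans.get(scan_id) == "RUNNING":
            self._scans[scan_id] = "STOPPED"
            return True
        return False
```

Keeping the manager to lifecycle transitions only (and leaving scan driving to the web-scanner container) keeps the two responsibilities cleanly separated.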
**Notes:** The current webappsvc will be split into the `web-scan-manager` and `web-scanner` containers.