# Testing & verifying implementation correctness
TL;DR: testing and ensuring correctness is currently hard and unreliable because we always test end-to-end (all components at once) in a live cluster. When something goes wrong (as it often does), it is tricky to figure out which aspect of the application is misbehaving.
A more granular approach would let us zoom in on a particular problematic area (routing, storing, accounting, etc.) more quickly and with a higher success rate.
## Feature flags
1. Implement feature flags in bee that would allow us to:
- turn off certain protocols on demand.
- throttle a certain protocol (artificially hold back the response for N seconds)
This would allow us to simulate a cluster containing slow, resource-starved, or hacked nodes and ensure that a chunk can get around the bad ones and reach its destination (see the feature-flag sketch after this list).
2. Implement the ability to run a cluster with some core components disabled.
- disable accounting: this way we can ensure that the base data-sync protocols work as expected
- disable blockchain access: all postage stamps are treated as valid, so we can test with this particular aspect factored out (a sketch of such a no-op stamp validator follows this list).
3. Implement the ability to run a cluster without storage. Instead of persisting chunks, each node keeps in RAM all the 'seen' addresses with their associated action (origin, forwarded, rejected, stored, etc.), as in the in-memory store sketch after this list.
- this will allow us to ensure that our data-syncing protocols work properly regardless of how fast the node's disk performs.
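
A minimal sketch of what the per-protocol flags from point 1 could look like, assuming a thin wrapper around protocol handlers; every name here (`ProtocolFlags`, `Handler`, `Wrap`) is hypothetical and does not reflect bee's actual internals:

```go
package featureflags

import (
	"context"
	"sync"
	"time"
)

// Handler is a simplified stand-in for a protocol stream handler.
type Handler func(ctx context.Context) error

// ProtocolFlags holds runtime switches per protocol name (e.g. "pushsync", "retrieval").
type ProtocolFlags struct {
	mu       sync.RWMutex
	disabled map[string]bool
	delay    map[string]time.Duration
}

func New() *ProtocolFlags {
	return &ProtocolFlags{
		disabled: make(map[string]bool),
		delay:    make(map[string]time.Duration),
	}
}

// Disable turns a protocol off at runtime.
func (f *ProtocolFlags) Disable(proto string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.disabled[proto] = true
}

// Throttle artificially holds back every response of a protocol by d.
func (f *ProtocolFlags) Throttle(proto string, d time.Duration) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.delay[proto] = d
}

// Wrap decorates a protocol handler so the flags are consulted on every call.
func (f *ProtocolFlags) Wrap(proto string, h Handler) Handler {
	return func(ctx context.Context) error {
		f.mu.RLock()
		off, d := f.disabled[proto], f.delay[proto]
		f.mu.RUnlock()
		if off {
			// Drop the request entirely, simulating a dead or misbehaving peer.
			return nil
		}
		if d > 0 {
			// Simulate a slow peer by delaying the response.
			select {
			case <-time.After(d):
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		return h(ctx)
	}
}
```

A test could then call e.g. `flags.Throttle("retrieval", 3*time.Second)` on selected nodes and assert that chunks still route around the slow peers.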
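
For point 2, factoring the blockchain out could boil down to swapping the real postage-stamp validation for one that accepts everything; the interface below is a hypothetical stand-in, not bee's actual type:

```go
package noopstamp

// StampValidator is a hypothetical stand-in for whatever interface the storage
// layer uses to decide whether a chunk's postage stamp is acceptable.
type StampValidator interface {
	Valid(chunkAddr, stamp []byte) (bool, error)
}

// AlwaysValid accepts every stamp, so data-sync behaviour can be tested with
// the blockchain dependency removed.
type AlwaysValid struct{}

// Valid always reports the stamp as acceptable.
func (AlwaysValid) Valid(_, _ []byte) (bool, error) { return true, nil }
```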
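
For point 3, a minimal sketch of the storage-less mode: instead of persisting chunks, the node only records in RAM which addresses it has seen and what it did with them (names are again hypothetical):

```go
package seenstore

import "sync"

// Action describes what a node did with a chunk address.
type Action string

const (
	ActionOrigin    Action = "origin"
	ActionForwarded Action = "forwarded"
	ActionRejected  Action = "rejected"
	ActionStored    Action = "stored"
)

// SeenStore keeps every observed chunk address in RAM, keyed by its hex form.
type SeenStore struct {
	mu   sync.RWMutex
	seen map[string]Action
}

func New() *SeenStore {
	return &SeenStore{seen: make(map[string]Action)}
}

// Record notes the action taken for a chunk address (overwriting any earlier one).
func (s *SeenStore) Record(addr string, a Action) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.seen[addr] = a
}

// Lookup reports whether an address was seen and with which action.
func (s *SeenStore) Lookup(addr string) (Action, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	a, ok := s.seen[addr]
	return a, ok
}
```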
## Infra tooling
1. Have the ability to rapidly (ideally dynamically, through the debug API) limit the neighborhood size
- so if we have a cluster of `20` nodes and limit the neighborhood size to `7`, we should be able to observe `3` neighborhoods
2. Develop a tool that would be able to:
- take a file and break it down into a set of chunks
- scan a namespace and collect all the node overlays
- iterate over the chunks and compute the target neighborhood for each one (the proximity sketch after this list shows one way to do this)
- upload the file to the network
- verify that each chunk has arrived in its respective neighborhood
- verify that the nodes that participated in forwarding have the chunk in their cache
- verify that the associated tracing IDs reflect the chunk's path through the network
3. Scale the neighborhood size down to `5`, so the cluster now has `4` neighborhoods in total
- re-run the computation for each chunk and verify that it is still present in the network
- wait for (or trigger) a pruning action on each node in the cluster
- verify that the chunks that now fall outside the storage radius have been evicted and are no longer present in the reserve (see the eviction-check sketch after this list)
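
For point 2, the target-neighborhood computation can be sketched as below. bee ships its own chunker and proximity helpers; this sketch re-implements proximity locally so it does not depend on a particular bee version, and the `depth` value and the sample addresses in `main` are made-up inputs for illustration:

```go
package main

import (
	"fmt"
	"math/bits"
)

// proximity returns the number of leading bits two equal-length addresses
// share (the Kademlia "proximity order").
func proximity(a, b []byte) int {
	for i := range a {
		if x := a[i] ^ b[i]; x != 0 {
			return i*8 + bits.LeadingZeros8(x)
		}
	}
	return len(a) * 8
}

// neighborhoodID returns the first depth bits of an address as a string; all
// addresses sharing this prefix belong to the same neighborhood.
func neighborhoodID(addr []byte, depth int) string {
	id := ""
	for i := 0; i < depth; i++ {
		bit := (addr[i/8] >> (7 - uint(i%8))) & 1
		id += fmt.Sprintf("%d", bit)
	}
	return id
}

// targetNodes returns the overlays whose proximity to the chunk address is at
// least the storage depth, i.e. the nodes expected to store the chunk.
func targetNodes(chunk []byte, overlays [][]byte, depth int) [][]byte {
	var out [][]byte
	for _, o := range overlays {
		if proximity(chunk, o) >= depth {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	// Tiny synthetic example with 4-byte "addresses" just to exercise the helpers;
	// the real tool would use 32-byte chunk addresses and overlays scanned from the cluster.
	chunk := []byte{0b1010_0000, 0, 0, 0}
	overlays := [][]byte{
		{0b1010_1111, 0, 0, 0}, // shares 4 leading bits with the chunk
		{0b0101_0000, 0, 0, 0}, // shares none
	}
	depth := 2
	fmt.Println("chunk neighborhood:", neighborhoodID(chunk, depth))
	for _, o := range targetNodes(chunk, overlays, depth) {
		fmt.Printf("expected holder: %08b...\n", o[0])
	}
}
```

The real tool would then query each expected holder (for example through its API) and assert the chunk is present, and similarly cross-check the forwarders' caches and tracing IDs.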
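
For point 3, the eviction check can reuse the `proximity` helper from the sketch above: after the depth increases (neighborhoods shrink), any chunk whose proximity to a node's overlay falls below the new depth should no longer be reported by that node's reserve. The `hasChunk` probe is a placeholder for however the tool actually queries a node:

```go
// verifyEvictions returns the chunk addresses that should have been evicted
// from the node after the depth change but are still reported as present.
// It extends the sketch above and reuses its proximity helper.
func verifyEvictions(
	overlay []byte,
	chunks [][]byte,
	newDepth int,
	hasChunk func(addr []byte) bool, // hypothetical "is it still in the reserve?" probe
) [][]byte {
	var stillPresent [][]byte
	for _, c := range chunks {
		outOfRadius := proximity(overlay, c) < newDepth
		if outOfRadius && hasChunk(c) {
			stillPresent = append(stillPresent, c)
		}
	}
	return stillPresent
}
```

An empty result means pruning behaved as expected; anything left over points at chunks that survived outside the storage radius.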
## Additional benefits
A more modular system not only improves testability, it also helps when we want to 'swap' some of these components, for example when moving to a different blockchain.