# Lighthouse PeerDAS Full node vs Supernode bandwidth comparison
A quick experiment to understand the *relative* bandwidth usage of PeerDAS supernodes and full nodes. To get numbers closer to production, we'd need a large enough network (70 - 100 nodes).
## Setup
### Local Kurtosis network
A total of 10 nodes on a single machine:
- 1 full node with 1 validator
- 1 supernode with 1 validator
- 1 full node with 62 validators
  - used to propose blocks
- 7 supernodes with no validators
  - used to fill up the mesh across all data column subnets (the default mesh peer count in Lighthouse is 8)
### Network configuration
- `data_column_sidecar_subnet_count`: 64
- `samples_per_slot`: 16
- `custody_requirement`: 4
- blob spamming enabled
- same number of blobs as Deneb (target 3 / max 6)
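As a rough sanity check on these parameters, the sketch below computes the fraction of data column subnets each node type subscribes to. This is my own simplification: the spec counts custody in subnets and sampling in columns, but with 64 subnets each subnet carries an equal share of the columns, so the fractions are comparable.

```python
# Fraction of the 64 data column subnets each node type covers,
# using the network configuration above.
DATA_COLUMN_SIDECAR_SUBNET_COUNT = 64
CUSTODY_REQUIREMENT = 4
SAMPLES_PER_SLOT = 16

# A full node must cover whichever requirement is larger:
# its custody subnets or its per-slot sampling.
full_node_fraction = (
    max(CUSTODY_REQUIREMENT, SAMPLES_PER_SLOT) / DATA_COLUMN_SIDECAR_SUBNET_COUNT
)
# A supernode custodies every subnet.
supernode_fraction = 1.0

print(f"full node: {full_node_fraction:.0%} of subnets")  # 25%
print(f"supernode: {supernode_fraction:.0%} of subnets")  # 100%
```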
### Limitations
- This is a very basic test with no simulated latency and a much smaller peer count compared to production.
- There are no syncing peers.
## Metrics
The main thing I'd like to understand here is the *relative* bandwidth usage of full nodes vs supernodes, although this is still far from accurate because the peer count is much lower than on production nodes.
### Network
From the gathered metrics, a supernode's bandwidth usage is about 10x that of a full node. The bandwidth usage on blocks and blobs for a supernode is roughly 2x that of a Deneb full node due to erasure coding. It follows that a Deneb full node is estimated to consume roughly 4-5x the bandwidth of a PeerDAS full node (with no validators attached).
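These ratios can be compared against a naive back-of-envelope model (my own simplification, not part of the measurements): erasure coding doubles the blob data, a supernode downloads all of it, and a full node downloads only its sampled fraction. Gossip mesh duplication, RPC traffic, and protocol overhead are all ignored.

```python
# Naive relative download volume per slot, normalized so that a
# Deneb full node = 1.0. Ignores gossip duplication and overhead,
# so it under-estimates the gaps the experiment measured.
EXTENSION_FACTOR = 2          # erasure coding doubles the blob data
FULL_NODE_FRACTION = 16 / 64  # samples_per_slot / subnet_count
SUPERNODE_FRACTION = 1.0      # custodies every column

deneb_full_node = 1.0
peerdas_full_node = EXTENSION_FACTOR * FULL_NODE_FRACTION   # 0.5
peerdas_supernode = EXTENSION_FACTOR * SUPERNODE_FRACTION   # 2.0

print(peerdas_supernode / deneb_full_node)   # 2.0
print(deneb_full_node / peerdas_full_node)   # 2.0
```

The 2x supernode-vs-Deneb figure lines up with the measurements, while the naive Deneb-vs-PeerDAS-full-node ratio (2x) is smaller than the ~4-5x estimated above, which suggests the ignored duplicate gossip traffic contributes a meaningful share.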
It would be interesting to run this test on a larger network though. This was tested under "perfect" network conditions, which means there's little extra block lookup / range sync activity. I suspect there would be more RPC requests to supernodes on a bigger and more realistic network, which could make this gap even bigger.
NOTE: Sam (ethPandaOps) suggested the [`attacknet`](https://ethpandaops.io/posts/attacknet-introduction) tool may be suitable for simulating network latency.
![](https://hackmd.io/_uploads/SyiNSqjPC.png)
### Block & Data Column Processing
Most of the timing metrics here are not particularly meaningful, as all ten nodes share a single heavily loaded machine.
![](https://hackmd.io/_uploads/BJ00Z5svR.png)
### KZG
We've switched from `c-kzg` to the Rust `peerdas-kzg` library. Again, the numbers here are not meaningful, but are included for completeness. For better numbers see [this](https://github.com/sigp/lighthouse/pull/5941#issuecomment-2216882009).
![](https://hackmd.io/_uploads/By4XBcswC.png)
## Machine spec
- CPU: AMD Ryzen 7 3700X (3.6 GHz, 8 cores)
- RAM: 64 GB
- Drives: 2 x 1 TB SSD