# The Impact of Increasing Blob Count: Pectra, PeerDAS and Beyond
There's been a lot of recent debate around the bandwidth increase that comes with scaling Ethereum's blob count. This document aims to outline what I expect in the upcoming DA upgrades.
Some concerns I've noticed lately:
- Will increasing blob count raise hardware and bandwidth requirements?
- Will home stakers still be able to participate in block production without outsourcing block building to MEV builders?
- Is maintaining current hardware and bandwidth requirements hurting Ethereum's competitiveness with other L1s like Solana?
I believe researchers and client teams have been working hard to design an upgrade path that balances scaling and decentralisation. A lot of decisions and research efforts so far have been focused on keeping staking accessible. From Francesco's post [From 4844 to Danksharding: a path to scaling Ethereum DA](https://ethresear.ch/t/from-4844-to-danksharding-a-path-to-scaling-ethereum-da/18046), you'll see that all the network parameters have been designed to maintain the same conditions as 4844 and what we have today on Mainnet.
Let's take a closer look at each of the upcoming scaling upgrades:
## Pectra (2025, max 6-8 blobs)
There's an anticipated demand for more blobs, and Vitalik's proposal to increase the blob count ([here](https://github.com/ethereum/pm/issues/1153#issuecomment-2377002932)) comes with [EIP-7623](https://eips.ethereum.org/EIPS/eip-7623), which aims to reduce the maximum block size. While we expect a slight increase in blob bandwidth, it could potentially be offset by a reduction in block bandwidth.
However, many stakers today are already experiencing missed blocks due to delays in publishing blobs. This is an issue we've been aware of for some time, and we've been working on a solution. Michael Sproul from our team (Lighthouse) proposed an optimisation - [fetch blobs from the EL](https://github.com/sigp/lighthouse/pull/5829) - back in May 2024 aimed at solving this issue, and recent [test results](https://blog.sigmaprime.io/peerdas-distributed-blob-building.html) with PeerDAS are looking very promising. Notably, this optimisation was originally intended for Deneb and works without PeerDAS! In short, the optimisation enables a node to fetch blobs from its execution layer: if a home staker proposer includes blobs with a block, those blobs must have been in the public mempool, meaning most nodes that run an EL with the default configuration **already have them** locally. Therefore, other nodes can vote for the block before they receive the blobs from the CL p2p network, and can help propagate the blobs too! This means the proposer is less likely to miss the block even if they're slow at uploading. The important thing is that the proposer *must* send out the block itself as soon as possible for this to work.
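To make this concrete, here's a minimal sketch of the idea in Python, assuming a locally reachable Engine API endpoint that exposes `engine_getBlobsV1` (JWT authentication and error handling are omitted for brevity; the endpoint URL, helper names and `block.blob_versioned_hashes` attribute are illustrative rather than any client's actual API):

```python
import requests  # assumes the EL's Engine API endpoint is reachable locally

ENGINE_API_URL = "http://localhost:8551"  # illustrative; real clients use a JWT-authenticated connection


def fetch_blobs_from_el(blob_versioned_hashes: list[str]) -> list | None:
    """Ask the local EL for blobs it already has in its public mempool.

    engine_getBlobsV1 returns one entry per requested hash: a blob-and-proof
    object if the EL has it, or null (None) if it doesn't.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "engine_getBlobsV1",
        "params": [blob_versioned_hashes],
    }
    resp = requests.post(ENGINE_API_URL, json=payload, timeout=1)
    resp.raise_for_status()
    return resp.json().get("result")


def can_attest_early(block) -> bool:
    """Sketch: if the EL returns every blob the block commits to, the CL can
    treat the data as available, attest to the block, and help propagate the
    blobs without waiting for them to arrive over CL gossip."""
    results = fetch_blobs_from_el(block.blob_versioned_hashes)
    return results is not None and all(r is not None for r in results)
```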
This optimisation is now being investigated and worked on by multiple client teams and *could* be shipped without a fork, potentially before Pectra.
Note that this optimisation does not actually reduce *bandwidth usage*, but it does reduce *bandwidth requirements* by making the proposer's blob propagation less time-sensitive. This benefits home stakers the most, as they become less likely to miss a block due to delays in sending blobs to peers before the attestation deadline.
Client teams are also working on a mechanism ([`IDONTWANT`](https://x.com/terencechain/status/1839661686472716340)) to stop peers from sending messages based on what they have already received. This has already been implemented by some clients, including Lighthouse. This will help reduce *bandwidth usage*.
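As a rough illustration of how this saves bandwidth, here's a Python sketch of the `IDONTWANT` idea (the class and method names are illustrative, not libp2p's or Lighthouse's actual API): on receiving a large message such as a blob or column sidecar, a node immediately tells its other mesh peers the message ID, and when forwarding, it skips peers that have already announced they have it.

```python
class MeshPeer:
    """Illustrative handle for a gossipsub mesh peer (the real logic lives in libp2p)."""

    def __init__(self, peer_id: str):
        self.peer_id = peer_id
        self.dont_want: set[bytes] = set()  # message IDs this peer told us it already has

    def send_idontwant(self, msg_id: bytes) -> None: ...        # wire I/O omitted
    def send_message(self, msg_id: bytes, data: bytes) -> None: ...


class GossipNode:
    def __init__(self, mesh: list[MeshPeer]):
        self.mesh = mesh
        self.seen: set[bytes] = set()

    def on_idontwant(self, msg_id: bytes, sender: MeshPeer) -> None:
        # Remember that this peer already has the message, so we never
        # push our (duplicate) copy to it.
        sender.dont_want.add(msg_id)

    def on_message(self, msg_id: bytes, data: bytes, sender: MeshPeer) -> None:
        if msg_id in self.seen:
            return  # duplicate, drop it
        self.seen.add(msg_id)
        # Announce IDONTWANT to the rest of the mesh straight away, so peers
        # can skip sending us their copies of this (large) message.
        for peer in self.mesh:
            if peer is not sender:
                peer.send_idontwant(msg_id)
        # Forward only to mesh peers that haven't said they already have it.
        for peer in self.mesh:
            if peer is not sender and msg_id not in peer.dont_want:
                peer.send_message(msg_id, data)
```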
With the above optimisations, here's what I expect the effect of increasing the blob count to be:
- **Bandwidth usage** will increase slightly with the increased blob count, as nodes will send/download more blobs over the course of a slot. However, it's likely that `IDONTWANT` may offset some of the increase.
- **Bandwidth requirements** will likely remain the same, and the optimisations mentioned above are expected to alleviate the missed block issues we're seeing today.
- Home stakers' performance should improve regardless of whether we increase the blob count.
As mentioned earlier, these optimisations do not require a fork, and having *some* nodes run with the optimisation would already help with blob propagation and improve the entire network! We hope more clients implement these optimisations so that we can make a more informed decision based on test data from running them across multiple clients.
## 1D PeerDAS with Subnet Sampling (2025-2026, max 12-16+ blobs)
The first iteration of PeerDAS - 1D PeerDAS with Subnet Sampling - will lead to a significant reduction in bandwidth for full nodes, given the same blob count. This allows us to increase the blob count further while keeping bandwidth usage similar to 4844.
Currently, a single validator node downloads the following for each block:
- Signed beacon block
- Up to 6 blobs, 128KB each, totalling up to 768 KB
- *On the EL side, assuming it downloads 6 blobs during the slot, that's another `128 * 6 = 768 KB`
- *Total blob* bandwidth usage over a slot then adds up to **1536 KB**
Under PeerDAS, a single validator node downloads the following for each block:
- Signed beacon block
- 8 out of 128 column sidecars, with each sidecar sized at 2KB per blob
- For 16 blobs, each column sidecar is up to 32KB, and the node downloads 8 of them, for a total of up to 256KB
- *On the EL side, assuming it downloads 16 blobs during the slot: `128 * 16 = 2048 KB`
- *Total blob* bandwidth usage over a slot then adds up to **2304 KB**
*Thanks to Dmitrii (from Teku) for pointing out the missing EL metrics in the calculation here. For actual test bandwidth metrics, see [this post](https://blog.sigmaprime.io/peerdas-distributed-blob-building.html#metrics).
So, if PeerDAS is shipped with an increased max blob count of 16, a single validator node's *bandwidth usage* ~~actually decreases~~ only increases slightly (`(2304 KB - 1536 KB) / 12s = 64 KB/s`). Note that with [validator custody](https://github.com/ethereum/consensus-specs/pull/3871), the number of data columns a node must download increases with the amount of ETH staked. The numbers above apply to a node running between 1 and 8 validators (assuming 32 ETH each), or up to 256 ETH (with MaxEB).
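For anyone who wants to sanity-check the arithmetic above, here's a small Python script that reproduces it (the sizes and counts are the ones quoted in this post, not values pulled from the spec):

```python
BLOB_SIZE_KB = 128  # size of one blob
SLOT_SECONDS = 12

# Today (Deneb): the CL downloads full blob sidecars, and the EL sees the
# same blobs as transactions in the public mempool.
deneb_blobs = 6
deneb_cl_kb = deneb_blobs * BLOB_SIZE_KB    # 768 KB of blob sidecars on the CL
deneb_el_kb = deneb_blobs * BLOB_SIZE_KB    # 768 KB of blob transactions on the EL
deneb_total_kb = deneb_cl_kb + deneb_el_kb  # 1536 KB per slot

# 1D PeerDAS with subnet sampling: the CL downloads 8 of 128 column sidecars,
# each column carrying ~2 KB per blob.
peerdas_blobs = 16
columns_downloaded = 8
column_kb_per_blob = 2
peerdas_cl_kb = columns_downloaded * column_kb_per_blob * peerdas_blobs  # 256 KB
peerdas_el_kb = peerdas_blobs * BLOB_SIZE_KB                             # 2048 KB
peerdas_total_kb = peerdas_cl_kb + peerdas_el_kb                         # 2304 KB

extra_kb_per_second = (peerdas_total_kb - deneb_total_kb) / SLOT_SECONDS  # 64 KB/s

print(deneb_total_kb, peerdas_total_kb, extra_kb_per_second)  # 1536 2304 64.0
```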
Without any optimisations, the *bandwidth requirement* for a node increases, as publishing during block proposal is time-sensitive, and there is now more blob data to send to peers. The current spec requires the proposer to publish the block and all data columns (blob data) and make them available to all nodes before the attestation deadline. At 16 blobs, that's 128 columns of roughly 32 KB each - up to 4 MB of blob data that must be propagated to *multiple* peers at the start of the slot - a significant challenge, especially for home stakers.
However, this is addressed by the `getBlobsV1` optimisation mentioned earlier, and we've been able to get resource-limited nodes to produce blocks with 16 blobs at the same success rate as higher-capacity nodes during our testing. For more details and test results, see this post: [Scaling Ethereum with PeerDAS and Distributed Blob Building](https://blog.sigmaprime.io/peerdas-distributed-blob-building.html).
## ePBS (EIP-7732)
It's worth mentioning that with [EIP-7732](https://eips.ethereum.org/EIPS/eip-7732), a block proposer would have at least 9 seconds to propagate blobs, as noted by Potuz [here](https://x.com/potuz_eth/status/1839606007170744747) - this also helps with the computation and bandwidth issue. I think this complements the optimisations mentioned above, because as the blob count increases, so do the computation and propagation times. Without distributed blob building, this places a lot of strain on a single node and introduces more variation in propagation timing depending on the node's hardware and bandwidth. For example, a high-capacity node might propagate 16 blobs in 1 second, while it *could* take a minimum staking machine 5-7 seconds to do the same. Either way, the longer propagation window is a positive side effect that will help greatly with scaling and reduce the risk of missed blocks due to propagation delays.
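To illustrate where that variation comes from, here's a back-of-envelope Python estimate of the proposer's upload time alone (it ignores KZG proof computation, and the mesh fanout and upload speeds are assumptions picked for illustration, not measured values):

```python
# At 16 blobs, the extended data is 128 column sidecars of ~32 KB each (~4 MB).
blobs = 16
column_count = 128
column_kb = 2 * blobs                        # ~32 KB per column sidecar
total_mb = column_count * column_kb / 1024   # ~4 MB of column sidecars

mesh_fanout = 4  # assumption: each column is pushed to ~4 mesh peers

for label, upload_mbps in [("high-capacity node (1 Gbps up)", 1000),
                           ("home staker (25 Mbps up)", 25)]:
    upload_mb_per_s = upload_mbps / 8
    seconds = total_mb * mesh_fanout / upload_mb_per_s
    print(f"{label}: ~{seconds:.1f}s to upload {total_mb:.0f} MB x{mesh_fanout}")
```

With these assumed numbers the gap is roughly 0.1s versus 5s, which lands in the same ballpark as the 1s versus 5-7s figures above once proof computation is added on top.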
## Peer Sampling and 2D PeerDAS (max 32 blobs and beyond)
The subsequent iteration of PeerDAS will introduce Peer Sampling and 2D erasure coding, allowing for further scaling while maintaining similar bandwidth. By that time, consumer-grade validator hardware and bandwidth should be much better than they are today, paving the way for smoother hardware upgrades for validators.
## Conclusion
So, my *opinion* on the questions raised at the beginning:
> Will increasing blob count raise hardware and bandwidth requirements?
If all teams implement the optimisations, I don't think a hardware or bandwidth upgrade will be necessary for the currently proposed increases:
- Pectra: max 6-8 blobs, *or* just a *target* blob increase
- PeerDAS: no blob count decided yet, but assuming 16 blobs (from our testing), I don't expect a hardware/bandwidth upgrade for a node running 1-8 validators, or up to 256 ETH.
> Will home stakers still be able to participate in block production without outsourcing block building to MEV builders?
With the optimisations, I believe so, at least that's what we're working towards.
> Is maintaining current hardware and bandwidth requirements hurting Ethereum's competitiveness with other L1s like Solana?
This is essentially a trade-off between data availability throughput and decentralisation. In my opinion, we should aim for *sufficient* throughput while maintaining the same level of decentralisation we have today. This doesn't necessarily mean we won't consider increasing resource requirements in the future - this really depends on the demand and our development progress. However, I believe with the upcoming upgrades, there's no urgent need to raise the bar for running a validator node *just yet*.