# The road to PeerDAS in Pectra
The core dev community has decided that PeerDAS is a top priority to ship as soon as possible in order to scale Ethereum's data throughput. Because scaling the throughput amounts to raising the in-protocol maximum blob count, the next hard fork should contain both features:
1. Deploying PeerDAS for data sampling
2. Raising the blob count
This document further explains how we can deploy PeerDAS and makes two concrete suggestions to reduce the "decision surface" when raising the blob count.
If the feedback from this document is positive, I'm happy to make PRs for the engine API and the PeerDAS validator guide to reflect the proposed changes.
## Deploying PeerDAS
It was decided on [ACDC #134](https://github.com/ethereum/pm/issues/1050) to implement PeerDAS as a feature set on top of Pectra with a separate activation fork epoch. This strategy means that we have some optionality to easily remove PeerDAS if we get to a point where the rest of the Pectra EIPs are ready to ship and we still don't have confidence in the PeerDAS implementation.
Concretely, there are two relevant fork epochs the consensus layer client must reason about: `PECTRA_FORK_EPOCH` and `PEERDAS_ACTIVATION_EPOCH`. I strongly suggest that either these two values be identical (meaning PeerDAS goes live with Pectra), or that `PEERDAS_ACTIVATION_EPOCH` be left at its maximum value while `PECTRA_FORK_EPOCH` is set to its mainnet value. Having two separate epochs means development can proceed in parallel without all of the ceremony of an entirely new fork in clients (e.g. PeerDAS in the F/Osaka fork), which would complicate the implementation and testing of PeerDAS alongside the other Pectra features.
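To make the two-epoch setup concrete, here is a minimal sketch in Python (the style of the consensus specs) of how a client could gate PeerDAS logic on the activation epoch. The function names and the specific epoch value are illustrative, and `FAR_FUTURE_EPOCH` is the usual "not scheduled" sentinel from the consensus specs:

```python
FAR_FUTURE_EPOCH = 2**64 - 1  # consensus-spec sentinel for "not scheduled"

# Hypothetical values for illustration: Pectra is scheduled, PeerDAS is not.
PECTRA_FORK_EPOCH = 123456
PEERDAS_ACTIVATION_EPOCH = FAR_FUTURE_EPOCH


def is_pectra_active(epoch: int) -> bool:
    return epoch >= PECTRA_FORK_EPOCH


def is_peerdas_active(epoch: int) -> bool:
    # Setting PEERDAS_ACTIVATION_EPOCH == PECTRA_FORK_EPOCH ships PeerDAS with
    # Pectra; leaving it at FAR_FUTURE_EPOCH keeps the PeerDAS code paths
    # dormant without defining an entirely separate fork.
    return epoch >= PEERDAS_ACTIVATION_EPOCH
```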
## Uncertainty around deployment timeline and cross-layer concerns
The above "uncoupled" strategy should streamline R&D, but it does mean it is not clear if Pectra will have a blob count increase. The current implementation of the blob count maximum has a value both on the CL (max blob commitments possible) and EL layer (max blob data gas). The fact that this decision is coupled across the CL and EL means that the uncertainty around the CL timeline imposes uncertainty around the EL timeline. This uncertainty is unwanted as scoping the EL side of Pectra is also quite complex given many parallel features with demand from different members of the Ethereum community. And as long as this coupling exists, this problem will keep presenting itself: an entire hard fork just to change this one constant across both CL and EL clients.
To uncouple the two protocol layers around the blob count, I suggest the following (h/t Dankrad for suggesting something along these lines):
:::info
- Remove the hard-coded blob gas limit from the EL
- Have the CL send the blob gas limit (or equivalently just the maximum blob commitment count) to the EL with each payload over the Engine API
:::
The blob gas limit should only change at fork boundaries, so we could imagine the CL sending it only once per fork. However, there is already precedent for sending data from the CL to the EL with each payload (cf. the parent beacon block root for EIP-4788), and sending the value with each message keeps validation self-contained rather than requiring the EL to track extra state between payloads.
By doing this, the EL no longer has to care about what the blob gas limit is and can simply follow the CL's directive here. We can then uncouple timeline discussions around raising the blob count from EL scoping discussions.
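To make the shape of this change concrete, here is a rough sketch in Python rather than the actual Engine API schema; the `max_blob_gas` field and the structure names are hypothetical, not a proposal for the spec itself:

```python
from dataclasses import dataclass

GAS_PER_BLOB = 2**17  # 131072, per EIP-4844


@dataclass
class NewPayloadRequest:
    execution_payload: dict          # usual payload fields, abridged here
    parent_beacon_block_root: bytes  # per-payload CL -> EL data already exists (EIP-4788)
    max_blob_gas: int                # hypothetical: limit supplied by the CL with each payload


def validate_blob_gas(request: NewPayloadRequest) -> None:
    # The EL no longer consults a hard-coded constant; it checks the payload
    # against whatever limit the CL sent alongside it. Equivalently, the CL
    # could send a maximum blob count and the EL would multiply by GAS_PER_BLOB.
    blob_gas_used = request.execution_payload["blob_gas_used"]
    if blob_gas_used > request.max_blob_gas:
        raise ValueError("payload exceeds the blob gas limit supplied by the CL")
```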
I think the best place to specify this behavior would be in the Pectra Engine API spec.
## On raising the blob count in Pectra
Assuming the PeerDAS implementations go well over the next few months and we are ready to ship it in Pectra, we will need to determine how to adjust the maximum blob count. We can already see in [this analysis done by Toni](https://ethresear.ch/t/big-blocks-blobs-and-reorgs/19674#blobs-3) that today's maximum blob count of 6 brings a higher rate of orphaned blocks. The likely explanation is that blocks with 6 blobs already strain nodes on the network with fewer available resources (e.g. solo stakers at home). Raising the maximum blob count further will only exacerbate this issue, and in the terminal state of Ethereum's scaling roadmap we assume blob production will be handled by specialized nodes that can handle 32 MB messages. At the same time, we do not want to completely exclude home stakers from participating in blob production, as they are an important part of the Ethereum network.
The simple answer to resolve all of these tensions is to have consensus clients support a flag setting a local maximum blob count for block production. To suggest an implementation:
:::info
`CL_BEACON_NODE --local-block-production.maximum-blob-count N`,
where `N` is the maximum number of blobs a beacon node should include in a proposer's block when building locally, regardless of the in-protocol maximum.
:::
Note that more powerful nodes, like execution builders, can either omit this flag (falling back to the in-protocol maximum) or set a higher number than a solo staker may prefer.
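As a sketch of how a consensus client might apply such a flag during local block production (the constant and function names here are illustrative, not a spec):

```python
MAX_BLOBS_PER_BLOCK = 6  # today's in-protocol maximum


def select_blobs_for_local_block(pending_blob_txs: list, local_max: int | None) -> list:
    # The effective limit is the in-protocol maximum, optionally tightened by the
    # operator's --local-block-production.maximum-blob-count setting; nodes that
    # omit the flag simply use the in-protocol maximum.
    limit = MAX_BLOBS_PER_BLOCK if local_max is None else min(local_max, MAX_BLOBS_PER_BLOCK)
    return pending_blob_txs[:limit]
```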
My understanding was that a target of 3 and max of 6 blobs would be fine to start for all nodes on the network, but the data suggests otherwise and so the time to implement this feature is with Pectra. Doing so in Pectra also helps derisk a blob count increase for the network, at the cost of a slightly more complex deployment for the solo staker (who needs to be aware of this flag and configure it in their own staking setup).
Another option, suggested by Josh Rudolf, was to default this flag to a sufficiently low number and let execution builders take on the task of customizing for a higher blob count. This option moves complexity from the solo staker to the specialized block builders, which is more in line with our resource model of these node types; however, I would point out that defaults are powerful and we don't want to accidentally block a blob count increase simply because builders are unaware of this requirement.
I think the best place to specify this possibility would be in the EIP-7594 validator guide in the consensus specs under block construction. It would be up to each consensus client how to structure the flag and the relevant documentation for their users.