blob/acc in 2025

A proposal to safely deliver a maximally impactful Fusaka.

tl;dr? the core proposal is here

Fusaka, Peerdas and blobs

The upcoming Fusaka (Fulu + Osaka) upgrade for Ethereum introduces PeerDAS, a data availability sampling technique specified in EIP-7594, which supports a theoretical 8x increase on top of today's data layer.

Following the imminent Pectra upgrade, Ethereum will support a target of 6 blobs per block, up to a maximum of 9 blobs per block. PeerDAS then suggests a target of 48 blobs per block, up to a maximum of 72 blobs per block.

Because PeerDAS is an entirely new technique for data availability, there is some concern about moving from Pectra's blob limits directly to the PeerDAS theoretical max. To mitigate this concern, Fusaka could start at a much more conservative blob count, with the implication that additional network upgrades would be needed to increase the blob count further.

Each network upgrade carries an intrinsic amount of risk and demands a lot of coordination work to carry out successfully. A progressive schedule that reaches PeerDAS's full capacity across a series of network upgrades would therefore likely take a long time. At the same time, demand for as much blob throughput as the protocol can provide is clear: we need as many blobs as possible, as soon as possible.

To resolve this tension, we have seen several proposals for a more flexible deployment strategy using "blob parameter only (BPO)" forks:

  1. https://ethereum-magicians.org/t/blob-parameter-only-bpo-forks/22623
  2. https://eips.ethereum.org/EIPS/eip-7892

The intent of these proposals is to continue scaling the data layer while avoiding the overhead we see with a completely new network upgrade.

Blob schedule for Fusaka

Here's a specific BPO mechanism to consider for Fusaka.

Assuming (target, max) of (6,9) at the time of Fusaka deployment, blob throughput in the protocol would programmatically scale on the following schedule:

  Step                    Increase   Blob count (target, max)
  T = 0                   1x         (6, 9)
  T = 2 weeks             2x         (12, 18)
  Previous T + 2 months   2x         (24, 36)
  Previous T + 2 months   2x         (48, 72)

T = 0 coincides with the deployment of Fusaka on mainnet. Each step occurs some time after that as given in the table.

This schedule gives 2 weeks to simply observe PeerDAS on mainnet with the existing blob counts. Then, blob capacity doubles every 2 months until we reach the maximum we expect PeerDAS can support. Client implementations would be configured to perform these increases automatically, without any input from a user. To be clear, there is only one network upgrade here: Fusaka.
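The table above can be sketched as a simple lookup: given how much time has elapsed since Fusaka activation, return the (target, max) blob counts in effect. This is an illustrative sketch only; the names are hypothetical, "2 months" is approximated as 60 days, and real clients would key each step to epochs or timestamps in their configuration rather than wall-clock deltas.

```python
from datetime import timedelta

TWO_WEEKS = timedelta(weeks=2)
TWO_MONTHS = timedelta(days=60)  # "2 months" approximated as 60 days

# (offset from Fusaka activation, (blob target, blob max)) per step
SCHEDULE = [
    (timedelta(0),                (6, 9)),    # T = 0: Pectra limits
    (TWO_WEEKS,                   (12, 18)),  # 2x after 2 weeks of observation
    (TWO_WEEKS + TWO_MONTHS,      (24, 36)),  # 2x again
    (TWO_WEEKS + 2 * TWO_MONTHS,  (48, 72)),  # expected PeerDAS maximum
]

def blob_limits(elapsed: timedelta) -> tuple[int, int]:
    """Return the (target, max) blob counts in effect `elapsed`
    time after Fusaka activation."""
    current = SCHEDULE[0][1]
    for offset, limits in SCHEDULE:
        if elapsed >= offset:
            current = limits
    return current
```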

This schedule is compatible with EIP-7892, and delivers the maximum impact from Fusaka PeerDAS on a reasonable timescale while giving the community time to react if we start to see degradation from excessive blob scaling. In the event we see issues, a different configuration can be shipped in clients that can delay (even indefinitely!) further increases until solutions are found.

Schedule parameters

Key parameters of this schedule are the magnitude of each increase (e.g. 2x per step) and the cadence (e.g. 2 months between steps). The concrete parameters above seem like a reasonable tradeoff between security and scale, pending further analysis as PeerDAS gets closer to mainnet. We can adjust these parameters to occupy different points along this tradeoff.
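To make the tradeoff concrete, the whole schedule can be generated from those two parameters. A minimal sketch, with hypothetical names and "2 months" approximated as 60 days:

```python
from datetime import timedelta

def make_schedule(start=(6, 9), factor=2, steps=3,
                  observe=timedelta(weeks=2),
                  cadence=timedelta(days=60)):
    """Generate (offset, (target, max)) steps for a BPO schedule.

    `factor`  - multiplier applied at each step (e.g. 2x)
    `observe` - initial observation window before the first increase
    `cadence` - time between subsequent increases
    """
    target, maximum = start
    schedule = [(timedelta(0), (target, maximum))]
    offset = observe
    for _ in range(steps):
        target *= factor
        maximum *= factor
        schedule.append((offset, (target, maximum)))
        offset += cadence
    return schedule
```

With the defaults this reproduces the table above; a more cautious configuration might use a longer `cadence` or a smaller `factor` with more `steps`.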

Timing projection

It is not yet clear when Fusaka will go live on mainnet. Many core developers support a target of Q3'25 for Fusaka. Using that target purely for the sake of example, we can project blob capacity over 2025 and into 2026.

Mid-Sept 2025: Fusaka goes live
October 2025: (12, 18) blobs (target, max)
December 2025: (24, 36) blobs
February 2026: (48, 72) blobs
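The projection above is straightforward date arithmetic. A sketch, assuming a mid-September activation date and approximating "2 months" as 60 days (which lands each step slightly earlier than the rounded months listed above):

```python
from datetime import date, timedelta

# Assumed activation date, for illustration only (Q3'25 target).
FUSAKA_LAUNCH = date(2025, 9, 15)

# Offsets from activation, with "2 months" approximated as 60 days.
STEPS = [
    (timedelta(0),                                   (6, 9)),
    (timedelta(weeks=2),                             (12, 18)),
    (timedelta(weeks=2) + timedelta(days=60),        (24, 36)),
    (timedelta(weeks=2) + 2 * timedelta(days=60),    (48, 72)),
]

for offset, (target, maximum) in STEPS:
    step_date = FUSAKA_LAUNCH + offset
    print(f"{step_date:%B %Y}: ({target}, {maximum}) blobs (target, max)")
```

Shifting `FUSAKA_LAUNCH` or the step offsets immediately yields the revised projection, which is the point: the dates fall out of the schedule parameters, not the other way around.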

Note that this projection demonstrates only one possibility, and changing the above schedule would alter these dates. Moreover, future R&D on PeerDAS may show that a schedule like this cannot maintain mainnet's security, in which case a different strategy will need to be pursued.