**Date:** 2026-04-29
**Hosts:** Kamil, Bosul
**Time:** 13:30
# Sparse Blobpool (EIP-8070)
Presentation and discussion of the Sparse Blobpool proposal (EIP-8070) for the Glamsterdam hard fork, aimed at reducing EL bandwidth consumption during blob propagation by aligning the EL's mempool blob distribution with the CL's PeerDAS custody model.
## Problem Statement
As blob count increases, EL bandwidth consumption grows disproportionately relative to the CL's. Currently:
- A single 128 KB blob announced to ~50 peers (Geth's announcement count) consumes ~6 MB of upload bandwidth
- Uploading one blob to 50 peers takes roughly an entire second
- Under EIP-7870 blob count assumptions, this becomes a serious bottleneck
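The bandwidth figures above follow from simple arithmetic; a quick sketch (the 50 Mbit/s uplink is an illustrative assumption, the blob size and peer count come from the notes):

```python
# Upload cost of announcing one full blob to every peer.
BLOB_SIZE_KB = 128   # one blob
PEERS = 50           # Geth's announcement count

upload_mb = BLOB_SIZE_KB * PEERS / 1024
print(f"upload per blob: ~{upload_mb:.1f} MB")   # ~6.2 MB

# At an assumed 50 Mbit/s home uplink, that upload takes about a second.
UPLINK_MBIT = 50
seconds = upload_mb * 8 / UPLINK_MBIT
print(f"time at {UPLINK_MBIT} Mbit/s: ~{seconds:.1f} s")   # ~1.0 s
```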
## Proposal
Instead of every EL fetching full blob data, each node probabilistically assumes one of two roles:
- **Provider** (15% probability): fetches and serves full blob data
- **Sampler** (85% probability): fetches only the cells corresponding to the CL's custody set (e.g., 8 of 128 cells)
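The role split can be sketched as follows. This is a minimal illustration of the probabilities and column counts stated above; how custody columns are actually derived on the CL side is out of scope, so the random custody set here is a stand-in:

```python
import random

PROVIDER_PROB = 0.15   # ~15% of nodes serve full blobs
NUM_COLUMNS = 128      # cells per blob
CUSTODY_COLUMNS = 8    # a sampler fetches only its CL custody columns

def assign_role(rng: random.Random) -> str:
    """Each node independently rolls its role."""
    return "provider" if rng.random() < PROVIDER_PROB else "sampler"

def columns_to_fetch(role: str, custody: set[int]) -> set[int]:
    """Providers fetch all 128 columns; samplers only their custody set."""
    if role == "provider":
        return set(range(NUM_COLUMNS))
    return custody

rng = random.Random(42)
custody = set(rng.sample(range(NUM_COLUMNS), CUSTODY_COLUMNS))
print(assign_role(rng), sorted(columns_to_fetch("sampler", custody)))
```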
### Protocol Changes
**devp2p (new `eth/72` protocol):**
- `NewPooledTransactionHashes` extended with a **cell mask** indicating which columns the sender holds (all-ones for providers)
- `PooledTransactions` no longer carries blob data
- New messages: `GetCells` and `Cells`
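One natural encoding of the cell mask is a 128-bit bitmap, one bit per column; the sketch below is an assumption for illustration, and the EIP defines the normative wire encoding:

```python
NUM_COLUMNS = 128

def encode_cell_mask(columns: set[int]) -> int:
    """Pack the set of held columns into a 128-bit mask (bit i = column i)."""
    mask = 0
    for c in columns:
        mask |= 1 << c
    return mask

def decode_cell_mask(mask: int) -> set[int]:
    return {i for i in range(NUM_COLUMNS) if mask >> i & 1}

# All-ones mask: the announcing peer is a provider holding every column.
PROVIDER_MASK = (1 << NUM_COLUMNS) - 1
assert decode_cell_mask(PROVIDER_MASK) == set(range(NUM_COLUMNS))
```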
**Engine API:**
- New `engine_getBlobsV4`: more granular than V3, allows CL to request partial blob data
- `forkChoiceUpdated` V4: new field allowing CL to communicate its custody set to the EL
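For intuition only, a request to the new method might look like the sketch below. The parameter shape and field layout are assumptions, not the spec; the normative schema lives in the `execution-apis` PR referenced later in these notes:

```python
# Illustrative JSON-RPC shape only; params layout is an assumption.
# The CL asks for blobs by versioned hash; a sparse EL may answer with
# only the cells it holds instead of whole blobs.
get_blobs_v4 = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "engine_getBlobsV4",
    "params": [["0x01aa...", "0x01bb..."]],  # placeholder versioned hashes
}
```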
## Simulation Results
Simulation run with 10,000 nodes and 20 type-3 transactions over 120 seconds:
| Network adoption | Bandwidth per node |
|---|---|
| 100% sparse blobpool | ~1.2 MB |
| 90% sparse blobpool | ~2 MB (~67% more than at full adoption) |
| 0% sparse blobpool | ~7.6 MB |
Propagation latency (p50/p95) also improves, with a notable drop above 90% adoption.
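The headline savings can be read straight off the table:

```python
# Per-node bandwidth from the simulation table (MB over the 120 s run).
baseline = 7.6   # 0% adoption (today's full-blob gossip)
full = 1.2       # 100% sparse-blobpool adoption
mixed = 2.0      # 90% adoption

print(f"full adoption saves {(1 - full / baseline):.0%}")  # ~84%
print(f"90% vs 100% adoption: {mixed / full:.1f}x")        # ~1.7x
```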
## Migration Path
For mixed-network periods:
- **Provider receiving legacy announcement**: fetch full blob immediately
- **Sampler receiving new announcement (cell mask all ones)**: fetch only partial data
- **Sampler receiving legacy announcement**: 2-second timeout waiting for an `eth/72` peer; fall back to full blob fetch if none arrives
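The sampler fallback path above can be sketched as follows. The callbacks are hypothetical stand-ins for the real fetcher machinery; only the 2-second timeout comes from the migration plan:

```python
import time

LEGACY_FALLBACK_TIMEOUT = 2.0   # seconds, from the migration plan

def handle_legacy_announcement(tx_hash, find_eth72_peer,
                               fetch_cells, fetch_full_blob):
    """Sampler path for an announcement from a pre-eth/72 peer.

    Wait up to 2 s for some eth/72 peer to announce the same tx with a
    cell mask; if none shows up, fall back to fetching the full blob.
    """
    deadline = time.monotonic() + LEGACY_FALLBACK_TIMEOUT
    while time.monotonic() < deadline:
        peer = find_eth72_peer(tx_hash)
        if peer is not None:
            return fetch_cells(peer, tx_hash)   # cheap partial fetch
        time.sleep(0.05)
    return fetch_full_blob(tx_hash)             # legacy full-blob fallback
```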
## Adversarial Considerations
| Attack type | Mitigation |
|---|---|
| **Withholders** (claim provider, fail to serve) | Add noise cells to requests to verify completeness |
| **Free riders** (only ever sample) | Heuristics to detect sampling-only nodes |
| **Non-announcers** (only request) | Existing mitigations may apply |
| **Selective signalers** (flood non-includable txs) | Wait for minimum independent peer announcements before sampling |
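The withholder mitigation works because a node mixes random "noise" columns into its request, so a fake provider cannot predict which columns it can safely drop. A minimal sketch, with function names invented here for illustration:

```python
import random

NUM_COLUMNS = 128

def build_sample_request(custody: set[int], noise: int,
                         rng: random.Random) -> set[int]:
    """Add `noise` random non-custody columns to the request so a
    withholding 'provider' cannot serve only the predictable columns."""
    pool = sorted(set(range(NUM_COLUMNS)) - custody)
    return custody | set(rng.sample(pool, noise))

def served_completely(requested: set[int], served: set[int]) -> bool:
    """A claimed provider must answer every requested column."""
    return requested <= served
```

A peer that repeatedly fails `served_completely` on requests containing noise cells can then be scored down or dropped.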
## Discussion Highlights
### Alternative: Random Full-Blob Sampling (Julio)
Suggestion to skip cells entirely and have nodes randomly download ~20% of full blobs. Counter-arguments:
- Loses the `engine_getBlobs` optimization benefit
- Relies on async execution to make latency irrelevant
- Less efficient than CL-aligned cell-level fetching
The pre-propagation philosophy — using the mempool's longer time window to ensure availability before block inclusion — was emphasized as a key motivator over PeerDAS-style on-demand fetching.
### Reed-Solomon / Erasure Coding (Alexei)
Proposal to use erasure-coded redundant chunks for propagation. Rejected for now because:
- KZG cell proof reconstruction is expensive (~100–300 ms even with c-kzg)
- Mempool data is unconfirmed; burning CPU on it is undesirable
- Could be revisited as an on-top reliability layer
### Edge Cases Raised
- **Block production**: builders/proposers must eagerly behave as providers to have full blob data available
- **CL switching / CGC changes**: validator reattachment can cause custody mismatches mid-slot. Consensus: acceptable degradation for one slot, since this is an optimization not a consensus rule
## Implementation Status (Geth — Bosul)
- ~4,000 LoC change in draft PR
- Three components: **blob fetcher**, **blob buffer**, **blobpool**
- Asynchronous transaction/cell fetching chosen for latency reasons
- Random peer selection with peer tracking for dropping
- Fallback: if no second provider found, switch to provider role
- **Nonce skips now permitted** in the blobpool (necessary because cell-fetch delay is unpredictable)
### Engine API PR
- `execution-apis` PR #774
- `engine_getBlobsV4` returns cells (clients without sparse blobpool support can compute cells on-demand)
- `forkChoiceUpdated` V4 custody set field can be ignored by non-implementing ELs
## Client Commitments
| Client | Status |
|---|---|
| Geth | Implementation in progress (draft PR) |
| Nethermind | Actively working |
| Besu | Starting implementation |
| Reth | Started working |
| CL side | Minimal changes — engine API update only; CL/EL can negotiate version support |
## Decisions / Next Steps
1. **No consensus changes required** — can be introduced between forks
2. Target **Glamsterdam devnet 2** (optional inclusion)
3. EL/CL version negotiation on engine API allows legacy fallback
4. EIP PR exists in the working group thread; spec changes posted
## Open Questions
- Whether erasure coding should be revisited as a supplementary reliability layer
- Detailed mitigation specs for the identified adversary classes
- Coordination of CL custody set changes with EL during validator reattachment scenarios