# Partial Messages / Cell Level Dissemination
[toc]
## Pre-reading
https://ethresear.ch/t/gossipsubs-partial-messages-extension-and-cell-level-dissemination/23017
libp2p/specs: <https://github.com/libp2p/specs/blob/marco/partial-messages/pubsub/gossipsub/partial-messages.md>
consensus-specs: <https://github.com/ethereum/consensus-specs/pull/4558>
## Devnet specs
- [blob-devnet-0](https://notes.ethereum.org/@ethpandaops/blob-devnet-0)
# Current states
## Current State (2026-1-22)
- Prysm and Lighthouse on latest consensus spec changes.
- Discussed tuning gossip logic to be better than standard gossipsub.
- for next time, go over edge cases we want to cover in the devnet.
## Current State (2026-1-15)
See meeting agenda (notes inlined)
## Current State (2026-1-8)
- Prysm + Lighthouse interop!
- Prysm branch ready for review
- some gossipsub-interop issues that marco still needs to debug.
## Current State (2025-12-17)
- Rust gossipsub changes in PR
- Prysm branch is mostly complete
- Cells are now being verified (potentially in batches)
- Will do a pass at cleaning up the PR today. Will be ready to review tomorrow
- Rust and Go gossipsub pass all interop tests in the gossipsub interop tester.
- Gossipsub interop tester: https://github.com/libp2p/test-plans/pull/684
- Lighthouse branch is: https://github.com/dknopik/lighthouse/tree/partial-columns
- Prysm branch is: https://github.com/MarcoPolo/prysm/tree/partial-columns
## Current State (2025-12-11)
- Go gossipsub changes merged
- Rust gossipsub changes in PR
- Prysm branch is mostly complete
- Needs to add column validation
- Needs some refactoring per review comments
- Lighthouse branch is (TODO fill this in)
- Gossipsub interop tester: https://github.com/libp2p/test-plans/pull/684
# Future Optimizations
- Moving partial column header data to a separate topic.
- The partial column header data is the same across all columns. It would be useful if we could consolidate this by having a separate topic for just this header in order to reduce duplicates.
- The downside of this approach is increased complexity on the CL side, especially around how the node has to "cache" any cells that arrive before the header in order to validate them later. The other downside is that it increases the scope of this change. I (Marco) think we can ship the initial version without this and add the optimization in a future version; a rough sketch of the kind of cell cache this would require follows below.
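As a rough illustration of the caching concern above, here is a minimal sketch in Go (all types, names, and the string group-ID key are hypothetical, not from the spec) of buffering cells that arrive before their header so they can be validated once the header shows up:
```go
package partialcolumns

import "sync"

// Cell and Header are hypothetical placeholders for the real cell and
// partial column header types.
type Cell struct {
	Index uint64
	Data  []byte
}

type Header struct{} // header fields shared across all columns would live here

// pendingCells buffers cells that arrive before the header for their group,
// so they can be validated once the header is received on the header topic.
type pendingCells struct {
	mu      sync.Mutex
	headers map[string]*Header // groupID -> header, once known
	cells   map[string][]Cell  // groupID -> cells waiting for a header
}

func newPendingCells() *pendingCells {
	return &pendingCells{
		headers: make(map[string]*Header),
		cells:   make(map[string][]Cell),
	}
}

// onCell returns the header and cells that are ready to validate. If the
// header for this group is already known, the new cell is ready immediately;
// otherwise it is buffered until onHeader is called.
func (p *pendingCells) onCell(groupID string, c Cell) (*Header, []Cell) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if h, ok := p.headers[groupID]; ok {
		return h, []Cell{c}
	}
	p.cells[groupID] = append(p.cells[groupID], c)
	return nil, nil
}

// onHeader records the header and releases any buffered cells for validation.
func (p *pendingCells) onHeader(groupID string, h *Header) []Cell {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.headers[groupID] = h
	buffered := p.cells[groupID]
	delete(p.cells, groupID)
	return buffered
}
```
A real implementation would also have to bound and prune this cache so that cells for bogus group IDs cannot grow it without limit, which is part of the extra CL-side complexity mentioned above.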
# misc
## Tasks left to do
- [x] Write the EIP
- [x] Write up Implementation recommendations in libp2p spec
- [x] Pass rust <-> go gossipsub interop tests
- [ ] Update Consensus specs PR with interop results
- [x] Kurtosis interop lighthouse & prysm
- [x] Ship GetBlobsV3
- [ ] Identify bottleneck in SSL setup
## Initial test results
:::info
Ask for credentials.
:::
- Baseline: [Results](http://94.130.19.237:3000/d/netnew/network-monitor-new?orgId=1&from=2025-11-05T22:00:00.000Z&to=2025-11-06T12:00:00.000Z&timezone=utc&var-network=fusaka-devnet-ssl-prysm-7&var-consensus=$__all&var-execution=$__all&var-group_by=instance&refresh=1m) / [Config](https://github.com/testinprod-io/fusaka-devnets/tree/main/network-configs/devnet-ssl-prysm-7)
- Partial messages: [Results](http://94.130.19.237:3000/d/netnew/network-monitor-new?orgId=1&from=2025-11-05T22:00:00.000Z&to=2025-11-06T12:00:00.000Z&timezone=utc&var-network=fusaka-devnet-ssl-part-msg-1&var-consensus=$__all&var-execution=$__all&var-group_by=instance&refresh=1m) / [Config](https://github.com/testinprod-io/fusaka-devnets/tree/main/network-configs/devnet-ssl-part-msg-2) / [Report](https://github.com/testinprod-io/fusaka-devnets/tree/main/network-configs/fusaka-devnet-ssl-part-msg-1)
Other resources:
- https://github.com/testinprod-io/fusaka-devnets/blob/main/ansible/inventories/devnet-ssl-part-msg-1/group_vars/prysm.yaml
## Onboarding path for other clients
We have the consensus specs and libp2p specs. Implementations should be straightforward, but we might want to do a pass on the implementation recommendations to explain optimizations and pitfalls to watch out for.
TODO
## Deployment strategy
1. Deploy support for partial messages on 1 column.
2. Deploy nodes that request partial messages for this column, ideally when this column is not critical to DA checks. Evaluate (a measurement sketch follows this list):
1. Latency to full column
2. Bandwidth usage.
3. Deploy support for partial messages on all columns.
4. Deploy nodes that request partial messages for all columns and monitor:
1. Latency to full column (per column).
2. Latency to pass DAS check.
3. Bandwidth usage.
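For the latency evaluations in steps 2 and 4, here is a minimal measurement sketch (assuming prometheus/client_golang; the metric name, buckets, and hook are placeholders, not taken from beacon-metrics):
```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// fullColumnLatency measures the time from first seeing any part of a column
// (header or cell) to having the complete column. The metric name is a placeholder.
var fullColumnLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "p2p_partial_column_complete_latency_seconds",
		Help:    "Time from first partial data for a column to the full column being available.",
		Buckets: prometheus.ExponentialBuckets(0.05, 2, 12), // 50ms up to ~100s
	},
	[]string{"column_index"},
)

func init() {
	prometheus.MustRegister(fullColumnLatency)
}

// ObserveColumnComplete is called once per (slot, column) when the column
// becomes complete, with the time we first saw any data for it.
func ObserveColumnComplete(columnIndex string, firstSeen time.Time) {
	fullColumnLatency.WithLabelValues(columnIndex).Observe(time.Since(firstSeen).Seconds())
}
```
A `column_index` label produces up to 128 histogram series per node, which is fine for devnet evaluation but may be worth aggregating for a wider rollout.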
### Fallback strategies
#### A bug in an implementation that can be updated
TODO
## Open Questions
- (@marcopolo) Should we delay making an IWANT request from gossip peers if we have partial
message capable peers?
- You can imagine we have 99% of a column, and we are in the process of
getting the last cell, when we receive an IHAVE for a message we haven't
seen. When we fetch it, it turns out it was the full column and we wasted
this bandwidth.
- I think probably yes, but it can be after the MVP (a rough sketch follows after this list).
- (@marcopolo) Should we omit eager pushing cells for the MVP when forwarding?
- I think yes as long as we have a path to add it later.
- (@raulk) Should the publisher use knowledge of local blobpool hits to optimistically publish only partial columns? (probably: no)
- Figure out the right metrics to share
- Make sure the gossipsub scores are consistent between peers with partial messages and those without.
- Gossip scoring should take into account a peer submitting multiple group IDs for a slot.
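For the delayed-IWANT question above, a rough sketch of what the decision could look like (names, threshold, and delay are hypothetical; this is not current go-libp2p-pubsub behavior):
```go
package gossip

import "time"

// iwantDelay decides how long to hold off an IWANT for a message we only know
// about via IHAVE. All values here are illustrative, not tuned recommendations.
func iwantDelay(havePartialPeers bool, cellsHave, cellsTotal int) time.Duration {
	if !havePartialPeers || cellsTotal == 0 {
		return 0 // no partial-message-capable peers: normal IWANT timing
	}
	// If we already hold most of the column, the remaining cells will likely
	// arrive as partial messages; waiting briefly avoids fetching what turns
	// out to be the full column and wasting that bandwidth.
	if cellsHave*100/cellsTotal >= 90 {
		return 200 * time.Millisecond
	}
	return 0
}
```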
# Meetings
### To discuss in future meeting
### 2026-1-29
- Agenda bash (1 min)
- A new section in this doc (future optimizations)
- client updates (15 min):
- EIP first draft published: https://github.com/ethereum/EIPs/pull/11176
- discussion on security considerations.
- Prysm
- Not using getBlobsV3 yet
- Considering when to send updated bitmap (always or only on complete)
- Should we make a partialDataColumn request for all cells while waiting for getBlobsV3
- Sukun:
- We should improve the getBlobs performance (abandon json?)
- rk: more fluid interactions between CL<->EL. Can the EL inform the CL which blobs are includable? With more interaction, if the CL knows which blobs are available (metadata only, no data payload), it could respond quicker.
- Lighthouse
- fixed minor edge-case bugs.
- added logging.
- all metrics except usefulness of full columns.
- added heartbeat gossip
- nimbus
- lodestar
- refactor code that got us started.
- teku (jvm)
-
- Discuss request bitmask spec change (10 min)
- Discuss not always sending partsMetadata updates to peers. (10 min) (Kasey)
- adding bitmask also helps in this case
- New Metric?: Time waiting for DA checks.
- this would be the time after receiving the block where we are blocked on attesting due to missing the required data columns.
- Discuss Edge cases we want to cover (as time allows)
- Everyone supports partial messages.
- All private blobs, how this compares with full data columns (should be similar, with more latency)
- 1 cell missing.
- 1/8th of the network DOES NOT support partial messages, these peers should still be well peered in the network and we should not observe clustering.
- Avoiding clustering: Maybe only one useful message boost per x?
- dk: do we want to test anything around support vs request?
- mm: I think we can do the simpler thing of request or nothing.
- mk: late blocks. How late can we go?
- dk: Many forks case: we have one group id per slot, if we have long periods of non-finality, we may see gossip of multiple valid blocks.
- kk: would you try to follow your head?
- dk: what if we consider both to be valid?
- kk: Prysm can simulate this by having validators split their votes (patching the impls).
### 2026-1-22
- agenda bash (1 min)
- client updates (10 min):
- Prysm: https://github.com/OffchainLabs/prysm/pull/15869
- aarsh started review
- marco added eager push data column header and version byte
- lighthouse: https://github.com/sigp/lighthouse/pull/8314
- refactor based on Pawan's feedback done.
- adds eager push data column header and version byte.
- nimbus
- lodestar
- started looking through PRs
- teku (jvm)
- spec updates (5 min): https://github.com/ethereum/consensus-specs/pull/4558
- gloas changes
- Removed/added fields from PartialDataColumnHeader as done in DataColumnSidecar.
- eager push data column header
- Aarsh Q: Why include the header if we might receive the data via full column first?
A: Full columns might not be sent anymore in the future, and they are larger and thus slower to arrive.
- version byte in group id
- gossipsub extension spec
- gossip vs mesh peers:
- rk: should we analyze the tradeoff between latency/bandwidth and try to adjust this behavior?
- rk: should we come up with a different policy than what default gossipsub uses?
- as: there is no explicit request built in; there is no explicit way to request parts, just a way to exchange which parts a peer has.
- st: staggered gossip might work better than default gossip.
- mm: keeping existing gossipsub behavior is an option for MVP.
- rk: we have to take into account how scoring affects gossip behavior.
- st: we could have a separate bitmap for prioritizing the request of parts along with availability map.
-
- discuss edge cases in devnets
- eager cell data policy discussion (10 min)
- https://hackmd.io/iwIrBvSpTzikkyWKlDh2YA
- DK: having the blob in the mempool for a long time can signal it is publicly available.
- RK: We can have the local builder adopt a policy of only allowing "well-aged" blobs that have likely disseminated in the network.
- timeline (15 min)
- Rough path:
- devnets:
- discuss edge cases
- interop only partial message clients
- Verify we see expected bandwidth savings.
- Verify backwards compatibility: mix non-partial-message clients with partial-message clients and scale blob count.
- BB: could launch a devnet once clients have the metrics in. What scale are we looking for?
- How many full/super?
- EIP:
- I'll have this drafted next week.
- libp2p specs:
- I'll do another editorial pass at this, but should be ready to merge by next week.
- Consensus Specs:
- https://github.com/ethereum/consensus-specs/pull/4558
- testnet rollout of clients?
- New client implementations can come progressively.
- as time allows:
- RK: IDONTWANT is timing dependent. If you've queued an eager push to a peer and you receive a partsMetadata from them before you've written the eager push, there is a similar issue (see the sketch below).
- follow up:
- add a sentence clarifying that you should not eagerly push cell data if you have partsMetadata from a peer.
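A minimal sketch of the race RK describes and the follow-up above (hypothetical types; the bitmap encoding is illustrative, not the spec's): when a partsMetadata update arrives from a peer, drop any still-queued eager pushes for cells that peer now reports having, analogous to how IDONTWANT cancels queued full messages:
```go
package gossip

// pendingPush is a queued eager push of one cell to one peer that has not yet
// been written to the wire. All types here are illustrative placeholders.
type pendingPush struct {
	peer      string
	groupID   string
	cellIndex int
}

// outQueue holds eager pushes queued per peer.
type outQueue struct {
	pending map[string][]pendingPush // peer -> queued, unsent pushes
}

// onPartsMetadata handles a partsMetadata update from a peer: any still-queued
// eager push for a cell the peer now reports having is cancelled before it is
// written out.
func (q *outQueue) onPartsMetadata(peer, groupID string, bitmap []byte) {
	if len(q.pending[peer]) == 0 {
		return
	}
	kept := q.pending[peer][:0]
	for _, p := range q.pending[peer] {
		if p.groupID == groupID && hasCell(bitmap, p.cellIndex) {
			continue // peer already has this cell; drop the queued push
		}
		kept = append(kept, p)
	}
	q.pending[peer] = kept
}

// hasCell reports whether bit `index` is set in a partsMetadata-style bitmap
// (bit i marking possession of cell i; the encoding is illustrative).
func hasCell(bitmap []byte, index int) bool {
	byteIdx, bitIdx := index/8, uint(index%8)
	return byteIdx < len(bitmap) && bitmap[byteIdx]&(1<<bitIdx) != 0
}
```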
### 2026-1-15
**Agenda:**
- agenda bash (1 min)
- client updates (5 min) (actual time: ):
- Prysm
- go-libp2p:
- partial message gossip: https://github.com/libp2p/go-libp2p-pubsub/pull/663
- eager push method on Partial Message interface: https://github.com/libp2p/go-libp2p-pubsub/pull/662
- lighthouse
- nimbus
- nim-libp2p implementing partial messages
- lodestar
- teku (jvm)
- rollout discussion:
- do we need more than 2 clients before rollout?
- barnabas: concerned about peer clustering
- MK: a metric for knowing if the peer is on the new vs old version
- KK: can we parameterize all p2p metrics by whether partial messages are in use or not?
- KK: clients that support partial messages will beat other peers that don't and could score better. Would be good to align with a BPO for this reason.
- PD: are you proposing we bundle this with a blob count increase? We should be testing partial message interaction on testnets, and we should see this clustering in the testnet. I would like to see this working with the current blob count before increasing it.
- RK: We can try different strategies of rolling this out, such as using it on separate non-custody columns. Running tests on mainnet is viable, but we must still go through the testnet gauntlet first.
- testnet rollout discussion (10 min)
- metrics (5 min)
- https://github.com/ethereum/beacon-metrics/pull/21/changes#r2691314901
- eager partial data column header (5 min)
- https://hackmd.io/Q9xQfN8zTtywJ75g312t8Q
- KK: Could we have an envelope type that is not SSZ, something else? Would like to reuse the existing data column sidecar component; an alternative to the special message in the header.
- BB: We need to get the consensus specs merged before the devnet goes in.
- Typical flow:
- merged consensus spec
- release in clients
- devnet from release. This might not be necessary for this work though
- eager cell data policy discussion (5 min) (next time)
- https://hackmd.io/iwIrBvSpTzikkyWKlDh2YA
- timeline (10 min) (next time)
- Rough path:
- devnets:
- interop only partial message clients
- Verify we see expected bandwidth savings.
- Verify backwards compatibility: mix non-partial-message clients with partial-message clients and scale blob count.
- BB: could launch a devnet once clients have the metrics in. What scale are we looking for?
- How many full/super?
- EIP:
- I'll have this drafted next week.
- libp2p specs:
- I'll do another editorial pass at this, but should be ready to merge by next week.
- Consensus Specs:
- https://github.com/ethereum/consensus-specs/pull/4558
- testnet rollout of clients?
- New client implementations can come progressively.
- as time allows:
- Partial Message extension code point (next time):
- Should we use an experimental code point during rollout and switch this to a canonical one in glamsterdam?
- Would allow us to use a new code point easily if we discover a bug we need to fix. Basically a way to make a "new version" of the partial message extension. This is fairly common in internet protocols.
#### Daniel's notes:
- Aarsh starting to review Marcos branch
- LH is refactoring and has support for getBlobsV3 metrics, libp2p metrics follow soon
- Nimbus: work has started implementing partial messages in nim-libp2p
- Lodestar: Raul is contributing an AI-based impl derived from other impls - libp2p interop test passes
- Teku:
### 2026-1-8
**Agenda:**
- agenda bash (1min)
- client updates:
- Prysm
- lighthouse
- lodestar
- teku (jvm)
- Metrics (30 min):
- EL:
```go
// Number of times getBlobsV3 responded with some, but not all, blobs
getBlobsRequestPartialHit = metrics.NewRegisteredCounter("engine/getblobs/partial", nil)
```
- CL:
```
beacon_engine_getBlobsV3_requests_total
# TODO counter for partial responses
beacon_engine_getBlobsV3_responses_total
beacon_engine_getBlobsV3_request_duration_seconds
# The total size of publish messages sent via rpc for a particular topic
p2p_pubsub_rpc_sent_pub_size_total
# Labels:
# topic: the topic for the publish message being sent
# is_partial: bool if the message is published using the partial messages extension
## Expectations: is_partial=false total bytes go down; is_partial=true is much less than the prior state of only having full columns
# Number of partial-message capable peers in mesh
TODO_NAME_THIS
# Labels:
# topic: the topic of the mesh
## Expectations: until all support this, we expect this mixed. Hopefully with a number of peers here.
# Number of useful cells received via a partial message
TODO_NAME_THIS
# Labels:
# topic
## Expectations: a positive number
# Number of total cells received via a partial message
TODO_NAME_THIS
# Labels:
# topic
## Expectations: Not too much higher than useful cells
# Number of useful full columns (any cell being useful) received
TODO_NAME_THIS
# Labels:
# topic
## Expectations: a low number
# How often the partial message first completed the column
TODO
# Labels:
# topic
## Expectations: hopefully higher than useful full columns
# How often we receive a partial column before the block.
TODO
# Time to verify cells during gossip verification
barnabas comment:
can we unify these metrics somehow?
can we have a label to differentiate
# Do we want to track the bitmaps sent/received?
todo think about this a bit. Could be useful in identifying if a peer's partsMetadata update logic is a bit buggy.
# Track cardinality of group ids for a given slot
TODO_NAME
## This highlights forks, bugs, and malicious behavior for a given slot.
```
- [beacon-metrics](https://github.com/ethereum/beacon-metrics) repo to add the metrics (a registration sketch follows at the end of these notes)
- Test plan strategy
- We want metrics so we can evaluate.
- launch something next week.
- We need to have a flag for partial messages.
- A/B machines with the same network.
- Kasey: Would be useful to have different degrees of private messages
- Barnabas: doable with spamoor
- kasey: messages are snappy compressed. we need entropy in the messages.
- Raul: do we want something we can use to force fragmentation in the mempool?
- Discussion around block header (15min):
- This only applies for eager pushes.
- Which we aren't doing for the MVP, so we don't need to handle this right now.
- Discussion around current behavior of data columns arriving before blocks
- daniel & marco simulation scenario follow up
- many valid head scenario:
- including the parent root might be useful
- Discuss some open questions (as time allows):
- (@marcopolo) Should we replace IHAVE with a partial message?
- I think yes.
- (@marcopolo) Should we delay making an IWANT request from gossip peers if we have partial
message capable peers?
- You can imagine we have 99% of a column, and we are in the process of
getting the last cell, when we receive an IHAVE for a message we haven't
seen. When we fetch it, it turns out it was the full column and we wasted
this bandwidth.
- I think, probably yes
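Following up on the metrics list above, a minimal registration sketch (assuming prometheus/client_golang; only the metric and label names come from the list above, and the hook is hypothetical since the exact plumbing depends on the client):
```go
package metrics

import (
	"strconv"

	"github.com/prometheus/client_golang/prometheus"
)

// pubSentBytes is the "total size of publish messages sent via rpc" counter
// from the list above, labelled by topic and whether the partial messages
// extension was used for the publish.
var pubSentBytes = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "p2p_pubsub_rpc_sent_pub_size_total",
		Help: "Total bytes of publish messages sent via RPC, by topic and partial-message usage.",
	},
	[]string{"topic", "is_partial"},
)

func init() {
	prometheus.MustRegister(pubSentBytes)
}

// RecordPublishSent would be called wherever the client writes a publish RPC
// to a peer (a hypothetical hook).
func RecordPublishSent(topic string, partial bool, sizeBytes int) {
	pubSentBytes.WithLabelValues(topic, strconv.FormatBool(partial)).Add(float64(sizeBytes))
}
```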
### 2025-12-18
**Agenda:**
- Recap
- Gossipsub interop
- Prysm ready for PR
- GetBlobsV3
-
- What it takes to implement + integrate (@marcopolo)
- A brief code walkthrough of the gossipsub implementation and the Prysm integration.
- Discussion on new metrics
- think getBlobsV3
- Path to mainnet (@raulk)
- Devnets, testnets, next BPO batch,
- Open questions
**Notes:**
- Lighthouse and Prysm interop achieved
- Prysm ready for a PR review
- Lighthouse doing some API changes for cleanup, already submitted
- Where is proof verification done in the validation pipeline within gossip?
- Discussion about unknown parent during column sidecar propagation
- Validation gossip sync/async discussion
- Noted edge cases on validating and refetching partials from other peers if invalid
- Open point: partial messages scoring, define correct behaviour baseline but not exact parameters
- Touched on PeerFeedback mechanism
- Consider OTEL traces during devnets to visualize how solution behaves in practice
- path to mainnet
- depends on where impls like nimbus/lodestar are and where they can get to.
- target sometime in march
- It would be good to think about this in january
- Matthew: expectation to get this spec merged:
- we should talk about spec finalization timelines before we talk about software finalization
- MK: Why not do this in glamsterdam? paired with sparse blobpool
- RK: We want these optimizations sooner for BPOs
- KK: (in text chat) I think we need a better name for sparse blob pool, it's easy to mix up with organic fragmentation (I did this myself earlier today).
### 2025-12-11 (kickoff)
**Agenda:**
- Intro (@raulk)
- Motivation and timeline
- Blobs traction
- What it takes to implement + integrate (@marcopolo)
- A brief code walkthrough of the gossipsub implementation and the Prysm integration.
- Path to mainnet (@raulk)
- Devnets, testnets, next BPO batch,
- Open questions