# PeerDAS L1 R&D breakout notes
[Referenced benchmarks](https://notes.ethereum.org/WcT6jxTcSkGbNJpZomut4A)
Agenda:
1. Proof distribution proposal (no objections raised):
- Proposal content:
- Cell proofs are computed by tx senders and go in the network wrapper for blob txs: light (6 KB, ~5% size increase; see the sanity check at the end of this item) but slow to compute (~200 ms)
- The blob itself is sent raw, *not extended*: the extension is heavy to send (2x size increase) but fast to compute (~2.5 ms), so receivers compute it locally
- Upon receiving a blob tx, the blob is extended (~2.5 ms) and *all* cell proofs (for the extended blob) are verified (~16 ms), *before* forwarding.
- Advantage: no need to compute proofs as part of block building or, even worse, in the critical path of distributed block building
- Disadvantage: proof verification on the EL (though the proofs are not part of the tx format itself)
- *Note: [in the previous distributed block building call](https://docs.google.com/document/d/1yWNPU7ZdDcU7HrUZlYfTb1_W2IUKxCp680QHLU1E59A/edit?tab=t.0), a concern had been expressed w.r.t. the proof verification time being too high for mempool propagation. However*:
- There are only going to be a few blob txs, not 1000s
- Each blob tx is already heavy; adding ~16 ms of verification has a low relative impact
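For reference, a quick sanity check of the sizes quoted above. It assumes the usual PeerDAS parameters (128 KB raw blobs, 128 cells per extended blob, 48-byte KZG cell proofs), which are not stated in these notes; the timings are the ballpark figures from the referenced benchmarks, not measured here:

```python
# Sanity check of the proposal's size figures (assumed PeerDAS parameters:
# 128 cells per extended blob, 48-byte KZG cell proofs, 128 KB raw blobs).

BLOB_SIZE = 128 * 1024        # raw blob, bytes
CELLS_PER_EXT_BLOB = 128      # cells in the 2x-extended blob
PROOF_SIZE = 48               # one KZG cell proof, bytes

proofs_bytes = CELLS_PER_EXT_BLOB * PROOF_SIZE   # 6144 B = 6 KB
overhead = proofs_bytes / BLOB_SIZE              # ~4.7% -> the "~5%" above

print(f"proof payload: {proofs_bytes / 1024:.0f} KB "
      f"({overhead:.1%} on top of the raw blob)")

# Trade-off: senders ship the 6 KB of proofs (slow to compute, ~200 ms)
# but not the 128 KB extension (fast to compute, ~2.5 ms), so receivers
# extend (~2.5 ms) and batch-verify all 128 proofs (~16 ms) locally.
```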
2. Sharded blob mempool:
- Rough consensus that it is probably not going to be required for the first DAS rollout. Since the mempool is pull-based, each blob is downloaded once per node, so average bandwidth consumption is only ~10 KB/s per blob (128 KB spread over a 12 s slot), and mempool propagation isn't time-critical and is spread out over the slot. Should be manageable even with a target of 16 blobs.
- If needed, simple horizontal mempool sharding can be added without already committing to a more ambitious design (e.g. vertical sharding): only pull transactions from senders that map to your shard(s), a fully local rule (sketched below).
- Do we even need explicit mempool sharding? The mempool will naturally be sharded for nodes that do not have sufficient bandwidth. Open question:
- The EL has no insight into the bandwidth requirements of the CL. Without capping usage through explicit sharding, or some form of bandwidth prioritization, how do we prevent the blob mempool from eventually saturating a node's connection and interfering with the CL?
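A minimal sketch of what such a fully local rule could look like. The shard count, hashing scheme, and all names here are illustrative assumptions, not a spec:

```python
import hashlib

NUM_SHARDS = 8  # hypothetical shard count, chosen only for illustration

def sender_shard(sender: bytes) -> int:
    """Deterministically map a tx sender address to a shard."""
    return int.from_bytes(hashlib.sha256(sender).digest()[:8], "big") % NUM_SHARDS

def should_pull(sender: bytes, my_shards: set[int]) -> bool:
    # Pull (and announce) only blob txs from senders in our shard(s).
    # Every node can evaluate this locally, with no coordination.
    return sender_shard(sender) in my_shards

# Example: a node covering shards {0, 3} decides per announced tx.
print(should_pull(bytes.fromhex("00" * 20), {0, 3}))
```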
3. Local block building:
- Tx choice:
- Without mempool sharding (explicit or otherwise), no problem here: just include all blob txs you have
- With mempool sharding (explicit or otherwise), can still just attempt to download as many txs as possible in the slot before proposing, without announcing all of them (e.g. still announce only the ones in your shard(s)). Even with 128 blobs, this needs only ~20 Mbit/s of *download*, and only in the proposer's own slot (see the estimate after this list).
- Column propagation:
- Would be more distributed (not bottlenecked by nodes that have all txs, allowing smaller contributions, etc.) if we could send cells instead of columns:
- Possibly also more compatible with long-term designs? (e.g. mempool sampling, 2D sampling, local reconstruction)
- Desire to at least look into the feasibility of doing so. Main concern is losing batch verification: ~3 ms to verify a single cell vs ~6 ms to verify a column of 16 cells, i.e. ~0.375 ms per cell when batched (an 8x efficiency gain); see the cost sketch below.
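Back-of-the-envelope for the proposer-download claim in the tx-choice bullet above, assuming 128 KB raw blobs and 12 s slots (the 128-blob and ~20 Mbit/s figures are the ones quoted in the notes):

```python
BLOB_SIZE_BYTES = 128 * 1024   # raw blob (the mempool carries unextended blobs)
NUM_BLOBS = 128
DOWNLOAD_MBIT_S = 20

total_mb = NUM_BLOBS * BLOB_SIZE_BYTES / 1e6                         # ~16.8 MB
seconds = NUM_BLOBS * BLOB_SIZE_BYTES * 8 / (DOWNLOAD_MBIT_S * 1e6)  # ~6.7 s

print(f"~{total_mb:.0f} MB of blobs, ~{seconds:.1f} s at {DOWNLOAD_MBIT_S} Mbit/s "
      f"-- fits in a 12 s slot, and only the proposer's slot needs this burst")
```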
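And the batch-verification trade-off in numbers, purely the arithmetic on the ballpark figures above:

```python
SINGLE_CELL_MS = 3.0     # verify one cell on its own (ballpark, from above)
COLUMN_BATCH_MS = 6.0    # verify a whole column as one batch (ballpark, from above)
CELLS_PER_COLUMN = 16    # one cell per blob, at 16 blobs

per_cell_batched = COLUMN_BATCH_MS / CELLS_PER_COLUMN   # 0.375 ms/cell
gain = SINGLE_CELL_MS / per_cell_batched                # 8x

print(f"batched: {per_cell_batched:.3f} ms/cell vs {SINGLE_CELL_MS} ms unbatched "
      f"({gain:.0f}x efficiency lost if cells arrive and are verified one by one)")
```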