# Fetch blobs from EL pool

Since Deneb activation, blocks with large blob counts experience the worst propagation performance. Stakers with high latency or limited bandwidth are also penalized by big blocks, since they may vote for the wrong head. All major implementations today wait for all blobs to arrive over gossip before importing the block.

```mermaid
sequenceDiagram
    User->>Proposer: Broadcast blob tx
    Note right of Proposer: Pack tx into blob
    Proposer->>Attester: Broadcast block
    Proposer->>Attester: Broadcast blob
    Note right of Attester: Import block
```

However, in most cases all blobs are already on the host machine, in the execution client's mempool. The consensus client can attempt to fetch all of a block's blobs by versioned hash from the EL mempool. If there's a hit, the consensus client does not have to wait for the blobs over gossipsub, reducing the time to head.

```mermaid
sequenceDiagram
    User->>Proposer: Broadcast blob tx
    User->>Attester: Broadcast blob tx
    Note right of Proposer: Pack tx into blob
    Proposer->>Attester: Broadcast block
    Note right of Attester: Read blob tx from EL pool
    Note right of Attester: Import block
    Proposer->>Attester: Broadcast blob
```

This idea is also expressed by dankrad in this recent post: https://notes.ethereum.org/@dankrad/self-builder-strategies#Before-PeerDAS

## Implementation

- Consensus side: https://github.com/sigp/lighthouse/pull/5829
- Execution side: https://github.com/michaelsproul/reth/pull/1
- Engine API spec: https://github.com/ethereum/execution-apis/pull/560

## After PeerDAS

The same optimization still has value after PeerDAS. If a node has seen all of a block's blob transactions, it can consider the block available before receiving the columns over gossip. Note that, compared to pre-PeerDAS, re-seeding of column sidecars is only possible for nodes that have received all the blob transactions.
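The fetch-by-versioned-hash flow described above can be sketched as follows. This is a minimal illustration, assuming the batched request shape from the linked Engine API PR (`engine_getBlobsV1`: a list of versioned hashes in, a blob-and-proof entry or `null` per hash out); the helper names and the exact response fields here are assumptions, not the spec.

```python
import json


def build_get_blobs_request(versioned_hashes: list[str], request_id: int = 1) -> str:
    """Build the JSON-RPC request a CL would send to the EL.

    Assumes the engine_getBlobsV1 method shape from the linked
    execution-apis PR: params is a single array of versioned hashes.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "engine_getBlobsV1",
        "params": [versioned_hashes],
    })


def blobs_from_el_response(results: list):
    """Decide whether the block's blobs were fully served from the EL pool.

    results[i] corresponds to versioned_hashes[i]; a null (None) entry
    means the EL mempool did not have that blob transaction. The block
    can only be imported early if *every* blob was found locally, so a
    single miss means falling back to waiting for sidecars over gossip.
    """
    if any(r is None for r in results):
        return None  # partial hit: wait for gossip instead
    return results
```

A usage sketch: on a full hit the CL imports immediately; on any miss it keeps the normal gossip path, so the optimization is strictly best-effort.

```python
req = json.loads(build_get_blobs_request(["0x01aa...", "0x01bb..."]))
full_hit = blobs_from_el_response([{"blob": "0x..", "proof": "0x.."},
                                   {"blob": "0x..", "proof": "0x.."}])
miss = blobs_from_el_response([{"blob": "0x..", "proof": "0x.."}, None])
```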