---
tags: ethereum,blobs
---

# Gossip Blobs Signature Verification Efficiency

Issue: https://github.com/sigp/lighthouse/pull/2832

## Finding the proposer index

1. `beacon_proposer_cache`
2. fall back to the snapshot cache
3. if not in the snapshot cache, load the state from the DB and re-compute

## Why are we getting cache misses?

**UPDATE**: @realbigsean found out we were actually accessing the proposer cache incorrectly, fixed [here](https://github.com/sigp/lighthouse/pull/4646). That explains why we were getting so many cache misses (the lookup only worked at epoch boundaries!). The race condition below is still possible but less likely to occur, and would be easily fixed once we have [tree-states](https://github.com/sigp/lighthouse/issues/2806).

### Proposer cache race condition

- In block processing:
  1. the snapshot is removed from the snapshot cache, but the proposer cache is not yet primed
  2. the cheap state advance happens
  3. the proposer cache is primed
- If a blob arrives between steps 1 and 3, we get a cache miss, load the state from the DB, and re-compute the proposer indices, which is inefficient.

```mermaid
sequenceDiagram
    participant gossip_block
    participant gossip_blob
    participant block_verification
    participant blob_verification
    participant proposer_cache
    participant snapshot_cache
    participant beacon_store
    title: Blob arrived first
    gossip_block->>block_verification: block received
    block_verification->>proposer_cache: get proposer index from cache (miss)
    block_verification->>snapshot_cache: get snapshot state (**removes state**)
    activate block_verification
    Note over block_verification: compute proposer indices
    gossip_blob->>blob_verification: blob received
    blob_verification->>proposer_cache: proposer cache miss
    blob_verification->>snapshot_cache: snapshot cache miss (removed earlier😭)
    blob_verification->>beacon_store: get parent block & state
    activate blob_verification
    Note over blob_verification: compute proposer indices
    block_verification->>proposer_cache: prime cache
    deactivate block_verification
    blob_verification->>proposer_cache: prime cache again
    deactivate blob_verification
```

## Possible Solutions

- Use a `Promise` to avoid computing the proposers twice, similar to `shuffling_cache` (see the sketch at the end of this note)
- Somehow not drop the snapshots - however, this would be expensive before tree-states
  - what about only dropping the snapshot *after* the proposer cache is primed?

<!--
## Code

```rust
match self.cache.get(&key) {
    item @ Some(CacheItem::Proposers(_)) => item.cloned(),
    item @ Some(CacheItem::Promise(receiver)) => match receiver.try_recv() {
        // The promise has already been resolved. Replace the entry in the cache with an
        // `EpochBlockProposers` entry and then return the proposers.
        Ok(Some(proposers)) => {
            // Insert first (cloning the pieces we pass in) so `proposers` can still be
            // returned to the caller afterwards.
            let _ = self.insert(
                epoch,
                shuffling_decision_block,
                proposers.proposers.clone().into(),
                proposers.fork.clone(),
            );
            Some(CacheItem::Proposers(proposers))
        }
        // The promise has not yet been resolved. Return the promise so the caller can await
        // it.
        Ok(None) => item.cloned(),
        // The sender was dropped without resolving the promise, so the entry is stale.
        // Remove it so a subsequent caller re-computes.
        Err(oneshot_broadcast::Error::SenderDropped) => {
            self.cache.remove(&key);
            None
        }
    },
    None => None,
}
```
-->
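
To make the `Promise` idea concrete, here is a minimal, self-contained sketch of the de-duplication it buys us. The names (`ProposerPromiseCache`, `get_or_compute`, the `Epoch`/`ProposerIndices` aliases) are made up for illustration, and it uses `std::sync::OnceLock` instead of Lighthouse's internal `oneshot_broadcast` crate; the real cache would look more like the commented-out snippet above. The only point is that concurrent callers (block verification and blob verification) that miss the cache wait on a single in-flight computation instead of each loading the state from the DB:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, OnceLock};

type Epoch = u64;
type ProposerIndices = Vec<usize>;

/// One cell per epoch; the cell's value is computed at most once, and
/// concurrent callers that miss block on the same cell instead of each
/// re-computing the proposer shuffling.
#[derive(Default)]
struct ProposerPromiseCache {
    cells: Mutex<HashMap<Epoch, Arc<OnceLock<ProposerIndices>>>>,
}

impl ProposerPromiseCache {
    fn get_or_compute<F>(&self, epoch: Epoch, compute: F) -> ProposerIndices
    where
        F: FnOnce() -> ProposerIndices,
    {
        // Grab (or create) the cell for this epoch, then drop the map lock so
        // lookups for other epochs are not blocked while we compute.
        let cell = self
            .cells
            .lock()
            .unwrap()
            .entry(epoch)
            .or_insert_with(|| Arc::new(OnceLock::new()))
            .clone();

        // Only the first caller runs `compute`; every other caller blocks here
        // until the value is ready and then clones it.
        cell.get_or_init(compute).clone()
    }
}

fn main() {
    let cache = Arc::new(ProposerPromiseCache::default());

    // Simulate block verification and blob verification racing on the same
    // epoch: the expensive computation runs exactly once.
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let cache = Arc::clone(&cache);
            std::thread::spawn(move || {
                cache.get_or_compute(5, || {
                    println!("computing proposer indices (expensive, runs once)");
                    vec![10, 11, 12]
                })
            })
        })
        .collect();

    for handle in handles {
        assert_eq!(handle.join().unwrap(), vec![10, 11, 12]);
    }
}
```

A `Promise`/`oneshot_broadcast` pair, as in `shuffling_cache` and the commented-out snippet above, achieves the same de-duplication while also letting waiters observe the sender being dropped (the `SenderDropped` arm); the `OnceLock` sketch glosses over that failure path.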