
Gossip Blobs Signature Verification Efficiency

Issue: https://github.com/sigp/lighthouse/pull/2832

Finding the proposer index

  1. beacon_proposer_cache
  2. fallback to snapshot cache
  3. if not in snapshot cache, load from DB
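A minimal sketch of this layered lookup (hypothetical types and names; the real Lighthouse caches are keyed and populated differently):

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-ins for the real caches and store.
struct ProposerCache(HashMap<u64, usize>);      // slot -> proposer index
struct SnapshotCache(HashMap<u64, Vec<usize>>); // epoch -> proposers from a cached state
struct Store(HashMap<u64, Vec<usize>>);         // epoch -> proposers recomputed from a DB state

const SLOTS_PER_EPOCH: u64 = 32;

/// Layered lookup: proposer cache, then snapshot cache, then the database.
fn proposer_index(
    slot: u64,
    proposer_cache: &mut ProposerCache,
    snapshot_cache: &SnapshotCache,
    store: &Store,
) -> Option<usize> {
    // 1. Fast path: the dedicated proposer cache.
    if let Some(&index) = proposer_cache.0.get(&slot) {
        return Some(index);
    }
    let epoch = slot / SLOTS_PER_EPOCH;
    let offset = (slot % SLOTS_PER_EPOCH) as usize;
    // 2. Fallback: derive the index from a cached state snapshot.
    // 3. Last resort: load the state from the DB and recompute (expensive).
    let proposers = snapshot_cache.0.get(&epoch).or_else(|| store.0.get(&epoch))?;
    let index = *proposers.get(offset)?;
    // Prime the proposer cache so the next lookup takes the fast path.
    proposer_cache.0.insert(slot, index);
    Some(index)
}
```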

Why are we getting cache misses?

UPDATE: @realbigsean found that we were actually accessing the proposer cache incorrectly, fixed here. That explains why we were getting cache misses so often (the lookup only works at epoch boundaries!). The race condition below is still possible but less likely to occur, and would be easily fixed once we have tree-states.

Proposer cache race condition

  • In block processing:
    1. snapshot removed from snapshot cache, but proposer cache not yet primed
    2. cheap state advance happens
    3. proposer cache is primed
  • If a blob arrives between steps 1 and 3, we get a cache miss and have to load the state from the DB and re-compute the proposer indices, which is inefficient.
[Sequence diagram, "Blob arrived first" scenario. Participants: gossip_block, gossip_blob, block_verification, blob_verification, proposer_cache, snapshot_cache, beacon_store. Flow: a block is received; block_verification gets a proposer cache miss and takes the snapshot state from the snapshot_cache (removing it); a blob is then received; blob_verification gets a proposer cache miss and a snapshot cache miss (the state was removed earlier), so it fetches the parent block & state from the beacon_store, computes the proposer indices, and primes the proposer_cache; block processing later computes the proposer indices and primes the cache again.]
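The interleaving above can be modeled with a toy sketch (all names hypothetical) that counts how many expensive DB loads the blob path triggers:

```rust
use std::collections::HashMap;

// Toy model of the race: the snapshot is removed (step 1) before the
// proposer cache is primed (step 3), so a blob arriving in between
// misses both caches and falls back to an expensive DB load.
#[derive(Default)]
struct Node {
    snapshot_cache: HashMap<u64, ()>,    // epoch -> cached state
    proposer_cache: HashMap<u64, usize>, // slot -> proposer index
    db_loads: usize,                     // counts expensive state loads
}

impl Node {
    fn verify_blob(&mut self, slot: u64, epoch: u64) {
        if self.proposer_cache.contains_key(&slot) {
            return; // fast path: proposer cache hit
        }
        if self.snapshot_cache.contains_key(&epoch) {
            return; // recompute from the cached snapshot, still cheap
        }
        // Both caches missed: load the parent state from the DB.
        self.db_loads += 1;
    }
}

fn main() {
    let mut node = Node::default();
    node.snapshot_cache.insert(0, ());

    // Block processing step 1: the snapshot state is taken (removed).
    node.snapshot_cache.remove(&0);
    // A blob arrives between steps 1 and 3: both caches miss.
    node.verify_blob(5, 0);
    assert_eq!(node.db_loads, 1);
    // Block processing step 3: the proposer cache is primed.
    node.proposer_cache.insert(5, 42);
    // A later blob for the same slot hits the fast path.
    node.verify_blob(5, 0);
    assert_eq!(node.db_loads, 1);
}
```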

Possible Solutions

  • Use a promise to avoid computing the proposer indices twice, similar to the shuffling_cache
  • Somehow avoid dropping the snapshots - however, this would be expensive before tree-states
    • What about only dropping the snapshot after the proposer cache is primed?
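A sketch of the promise idea (an assumed design, loosely modeled on the shuffling_cache approach; all names hypothetical): the first caller publishes a pending entry and computes, while concurrent callers block on the promise instead of recomputing.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};

// A cache entry is either a resolved value or a promise being computed.
#[derive(Clone)]
enum Entry {
    Ready(usize),
    Pending(Arc<(Mutex<Option<usize>>, Condvar)>),
}

#[derive(Clone, Default)]
struct PromiseCache {
    inner: Arc<Mutex<HashMap<u64, Entry>>>,
}

impl PromiseCache {
    /// Returns the cached value, running `compute` at most once per key.
    fn get_or_compute(&self, key: u64, compute: impl FnOnce() -> usize) -> usize {
        let existing = {
            let mut map = self.inner.lock().unwrap();
            match map.get(&key).cloned() {
                Some(entry) => Some(entry),
                None => {
                    // First caller: publish a pending promise, then compute
                    // outside the map lock so readers are not blocked.
                    let p = Arc::new((Mutex::new(None), Condvar::new()));
                    map.insert(key, Entry::Pending(p));
                    None
                }
            }
        };
        match existing {
            Some(Entry::Ready(v)) => v,
            Some(Entry::Pending(p)) => {
                // Another caller is computing: wait for it to resolve.
                let (lock, cvar) = &*p;
                let mut slot = lock.lock().unwrap();
                while slot.is_none() {
                    slot = cvar.wait(slot).unwrap();
                }
                slot.unwrap()
            }
            None => {
                let value = compute();
                let mut map = self.inner.lock().unwrap();
                // Replace the pending promise with the value and wake waiters.
                if let Some(Entry::Pending(p)) = map.insert(key, Entry::Ready(value)) {
                    let (lock, cvar) = &*p;
                    *lock.lock().unwrap() = Some(value);
                    cvar.notify_all();
                }
                value
            }
        }
    }
}
```

With this shape, a blob arriving while block processing is already computing the proposer indices would wait on the promise rather than loading the state from the DB a second time.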