UPDATE: @realbigsean found that we were actually accessing the proposer cache incorrectly, fixed here. That explains why we were getting so many cache misses (the lookup only works at epoch boundaries!). The race condition below is still possible, but less likely to occur, and will be easy to fix once we have tree-states.
Proposer cache race condition
In block processing:
1. The snapshot is removed from the snapshot cache, but the proposer cache is not yet primed.
2. The cheap state advance happens.
3. The proposer cache is primed.
If a blob arrives between steps 1 and 3, we get a cache miss and have to load the state from the DB and re-compute the proposer indices, which is inefficient.
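The fast/slow paths above can be sketched roughly as follows. This is a minimal illustration, not Lighthouse's actual code: `ProposerCache`, `proposer_indices`, and `load_state_and_compute` are hypothetical stand-ins.

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the real types.
type Epoch = u64;
type ProposerIndices = Vec<usize>;

struct ProposerCache {
    map: HashMap<Epoch, ProposerIndices>,
}

// Expensive fallback: in reality this is a DB read plus a full
// proposer-shuffling computation; stubbed out here.
fn load_state_and_compute(epoch: Epoch) -> ProposerIndices {
    vec![epoch as usize]
}

fn proposer_indices(cache: &ProposerCache, epoch: Epoch) -> ProposerIndices {
    match cache.map.get(&epoch) {
        // Fast path: the cache was primed in step 3.
        Some(indices) => indices.clone(),
        // Slow path: the blob landed in the window between steps 1 and 3.
        None => load_state_and_compute(epoch),
    }
}

fn main() {
    let mut cache = ProposerCache { map: HashMap::new() };
    // Blob arrives before step 3 primes the cache: slow path.
    assert_eq!(proposer_indices(&cache, 5), vec![5]);
    // After step 3 the cache is primed: fast path, no DB read.
    cache.map.insert(5, vec![5]);
    assert_eq!(proposer_indices(&cache, 5), vec![5]);
    println!("ok");
}
```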
Possible Solutions
- Use a promise to avoid computing the proposer twice, similar to the shuffling_cache.
- Somehow avoid dropping the snapshots; however, this would be expensive before tree-states.
- What about only dropping the snapshot after the proposer cache is primed?
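The promise idea amounts to de-duplicating concurrent computations: the first caller to miss inserts a "pending" marker and computes, while later callers block until the value is ready instead of recomputing. A minimal sketch using std primitives (the `PromiseCache` name and API are hypothetical, not the shuffling_cache's actual implementation):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

enum Slot<V> {
    Pending,  // someone is already computing this key
    Ready(V), // computed value
}

struct PromiseCache<K, V> {
    inner: Mutex<HashMap<K, Slot<V>>>,
    ready: Condvar,
}

impl<K: Eq + Hash + Clone, V: Clone> PromiseCache<K, V> {
    fn new() -> Self {
        PromiseCache {
            inner: Mutex::new(HashMap::new()),
            ready: Condvar::new(),
        }
    }

    /// Returns the cached value, running `compute` at most once per key
    /// even under concurrent access.
    fn get_or_compute<F: FnOnce() -> V>(&self, key: K, compute: F) -> V {
        let mut map = self.inner.lock().unwrap();
        loop {
            let pending = match map.get(&key) {
                Some(Slot::Ready(v)) => return v.clone(),
                Some(Slot::Pending) => true,
                None => false,
            };
            if pending {
                // Another caller is computing; sleep until it finishes.
                map = self.ready.wait(map).unwrap();
            } else {
                // We won the race: mark pending, compute outside the lock.
                map.insert(key.clone(), Slot::Pending);
                drop(map);
                let value = compute();
                let mut map = self.inner.lock().unwrap();
                map.insert(key, Slot::Ready(value.clone()));
                self.ready.notify_all();
                return value;
            }
        }
    }
}

fn main() {
    let cache = Arc::new(PromiseCache::new());
    let calls = Arc::new(Mutex::new(0u32));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let cache = Arc::clone(&cache);
        let calls = Arc::clone(&calls);
        handles.push(thread::spawn(move || {
            cache.get_or_compute(42u64, || {
                *calls.lock().unwrap() += 1;
                // Simulate an expensive proposer computation.
                thread::sleep(std::time::Duration::from_millis(10));
                String::from("proposer-duties")
            })
        }));
    }
    for h in handles {
        assert_eq!(h.join().unwrap(), "proposer-duties");
    }
    // Despite four concurrent callers, the computation ran exactly once.
    assert_eq!(*calls.lock().unwrap(), 1);
    println!("computed {} time(s)", *calls.lock().unwrap());
}
```

The key property for the race above: a blob arriving in the miss window would wait on the in-flight computation rather than re-loading the state from the DB itself.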