Assumptions
Problem 1: We currently create a bloom filter whose size is proportional to the number of historical DhtOps and exchange it while gossiping. Ideally we would use less space for older data that is unlikely to change.
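To make that scaling concrete, the standard bloom filter sizing formula ties the filter size directly to the number of ops being tracked. The op count and false-positive rate below are illustrative assumptions, not values from the current implementation:

```rust
/// Optimal bloom filter size in bits for `n` items at false-positive rate `p`:
/// m = -n * ln(p) / (ln 2)^2  (standard bloom filter sizing formula).
fn bloom_bits(n: usize, p: f64) -> f64 {
    -(n as f64) * p.ln() / (2f64.ln().powi(2))
}

fn main() {
    // e.g. 1,000,000 historical DhtOps at a 1% false-positive rate works out
    // to roughly 9.6 million bits, i.e. ~1.2 MB exchanged per gossip round.
    let bits = bloom_bits(1_000_000, 0.01);
    println!("{:.0} bits (~{:.1} MB)", bits, bits / 8.0 / 1e6);
}
```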
Problem 2: We create one of these bloom filters for each peer we gossip with, because each peer's DHT Arc has a unique overlap with ours. Ideally we would have fewer unique overlaps, so that we could re-use computation.
If we align our DHT Arcs along predictable boundaries and split the Arcs into chunks, we can re-use gossip data for a given chunk across every connection whose overlap includes that chunk.
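As a rough sketch of what boundary alignment could look like: the chunk size, the function names, and treating the 32-bit DHT location space as a flat range (ignoring wrap-around) are all assumptions made for illustration only.

```rust
/// Hypothetical chunk size: split the 32-bit DHT location space into
/// 4096 chunks of 2^20 locations each.
const CHUNK_BITS: u32 = 20;

/// Which chunk a DHT location falls into.
fn chunk_index(loc: u32) -> u32 {
    loc >> CHUNK_BITS
}

/// Chunks covered by an arc from `start` to `end`
/// (wrap-around of the circular location space is ignored for brevity).
fn arc_chunks(start: u32, end: u32) -> std::ops::RangeInclusive<u32> {
    chunk_index(start)..=chunk_index(end)
}

fn main() {
    // Two peers with different but overlapping arcs still share whole chunks,
    // so per-chunk gossip data computed once can serve both connections.
    let ours = arc_chunks(0x1000_0000, 0x5FFF_FFFF);
    let theirs = arc_chunks(0x3000_0000, 0x8FFF_FFFF);
    let shared: Vec<u32> = ours.filter(|c| theirs.contains(c)).collect();
    println!("{} shared chunks", shared.len());
}
```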
If we compute hashes of historical DhtOps for various time ranges, we can gossip those hashes instead of a bloom filter, and in the common case where the hashes match between peers, no further data needs to be exchanged for those ranges.
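A minimal sketch of per-range hashing and comparison, assuming a hypothetical (chunk, time-window) bucket key, a placeholder 32-byte op hash type, and XOR as a stand-in order-independent combiner:

```rust
use std::collections::{BTreeMap, BTreeSet};

/// Placeholder for the real DhtOp hash type.
type OpHash = [u8; 32];

/// Fold all op hashes in each (chunk, time-window) bucket into one digest.
/// XOR is used here only because it is order-independent; a real
/// implementation would pick a proper incremental set hash.
fn bucket_digests(ops: &[(u32, u64, OpHash)]) -> BTreeMap<(u32, u64), OpHash> {
    let mut digests: BTreeMap<(u32, u64), OpHash> = BTreeMap::new();
    for (chunk, window, hash) in ops {
        let digest = digests.entry((*chunk, *window)).or_insert([0u8; 32]);
        for (d, h) in digest.iter_mut().zip(hash.iter()) {
            *d ^= *h;
        }
    }
    digests
}

/// Buckets whose digests differ (or exist on only one side) are the only
/// ones that need a full op-level exchange; matching buckets cost nothing
/// beyond the digest comparison.
fn differing_buckets(
    mine: &BTreeMap<(u32, u64), OpHash>,
    theirs: &BTreeMap<(u32, u64), OpHash>,
) -> BTreeSet<(u32, u64)> {
    mine.keys()
        .chain(theirs.keys())
        .filter(|k| mine.get(*k) != theirs.get(*k))
        .cloned()
        .collect()
}

fn main() {
    let mine = bucket_digests(&[(7, 42, [1u8; 32]), (7, 43, [2u8; 32])]);
    let theirs = bucket_digests(&[(7, 42, [1u8; 32])]);
    // Only bucket (7, 43) differs, so only that range needs further gossip.
    println!("{:?}", differing_buckets(&mine, &theirs));
}
```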
We want to optimize gossip to accomplish a few main goals:
- Be easy to maintain (follow a single pattern in a single gossip loop).
- Minimize overhead in terms of computation and bandwidth.
- Require the most reliable nodes to do the least work: a node that has been offline knows it will have to catch up on updates when it comes back online, but nodes that stay online should only compute and gossip new data with each other, never old data.
Overview