\[ \newcommand{\fmin}{F_{\text{min}}} \]
For a given file \(F = \{b_1, \cdots, b_m\}\), we assume we have a set of storage nodes \(S\) such that each \(o \in S\) stores a subset of \(F\), which we denote \(s(o) \subseteq F\). We furthermore assume that the sets \(s(o)\) form a partition of \(F\); namely, that \(\bigcup_{o \in S} s(o) = F\) and that \(s(o) \cap s(o') = \emptyset\) for any two distinct \(o, o' \in S\). Such partitions are named slots in Codex.
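As a minimal sketch of this layout (all names here are hypothetical, chosen for illustration), blocks can be modeled as a set and slots as disjoint subsets assigned to storage nodes:

```python
# Sketch of the slot layout: each storage node o in S holds a subset s(o)
# of the file's blocks, and the slots together partition the whole file.
from itertools import combinations

blocks = set(range(12))          # F = {b_1, ..., b_m}, here with m = 12
slots = {                        # s(o) for each storage node o
    "o1": {0, 1, 2, 3},
    "o2": {4, 5, 6, 7},
    "o3": {8, 9, 10, 11},
}

def is_partition(file_blocks, slot_map):
    """True iff the slots cover the file and are pairwise disjoint."""
    covers = set().union(*slot_map.values()) == file_blocks
    disjoint = all(a.isdisjoint(b)
                   for a, b in combinations(slot_map.values(), 2))
    return covers and disjoint

print(is_partition(blocks, slots))  # → True
```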
We further assume we have a time-varying set of downloading nodes \(D(t)\) which wish to download some subset \(\fmin\) of the blocks of \(F\) satisfying \(\left|\fmin\right| = c \times \left|F\right|\), where \(c \in (0, 1]\) is the code rate. Data can be replicated over downloading nodes, and the assumption is that \(D(t)\) changes quickly whereas \(S\) remains relatively stable.[1]
Nodes in \(S\) and \(D(t)\) should form a swarm where blocks percolate from storage to downloading nodes, as well as amongst downloading nodes.
We would like to provide nodes with a peer sampling service (PSS) which allows downloading nodes to contact neighbors at random and ask for content. As we have shown before, in the absence of other downloading nodes, one would be able to retrieve all of the content after contacting \(c \times \left|S\right|\) nodes on average if partition sizes follow a uniform distribution.
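The claim about the expected number of contacts can be checked with a small sketch (function and parameter names are hypothetical): with disjoint, equally sized slots, every contact yields only fresh blocks, so exactly \(\lceil c \times \left|S\right| \rceil\) contacts suffice.

```python
import random

def contacts_to_download(num_storage_nodes, num_blocks, code_rate, rng):
    """Contact storage nodes in uniformly random order until c * |F|
    distinct blocks have been collected; return the number of contacts."""
    per_slot = num_blocks // num_storage_nodes      # uniform partition sizes
    slots = [set(range(i * per_slot, (i + 1) * per_slot))
             for i in range(num_storage_nodes)]
    rng.shuffle(slots)                              # random contact order
    needed, have, contacts = code_rate * num_blocks, set(), 0
    for slot in slots:
        if len(have) >= needed:
            break
        have |= slot
        contacts += 1
    return contacts

# Disjoint, equally sized slots: ceil(c * |S|) contacts, deterministically.
print(contacts_to_download(10, 100, 0.5, random.Random(42)))  # → 5
```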
Figure 1. A random graph.
When there are other downloading nodes, the PSS can offer, to each node, fresh samples taken over the entire swarm, allowing a node to peek into the entire connected component of the swarm instead of just its immediate neighbors like with regular Bitswap swarms. Note that the role of the PSS is to maintain a pool of samples of the swarm. It concerns itself neither with block discovery nor with downloads.
A possible workflow for a downloading node \(o\), then, could be as follows:
This does not have to be so synchronous, though. As soon as a neighbor runs out of useful information, \(o\) can pick a new sample from the PSS and try again.
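The asynchronous loop described above might look like the following sketch, where the PSS interface and the block-exchange calls are hypothetical stand-ins for whatever the real protocol provides:

```python
import random

class PSS:
    """Hypothetical peer sampling service over a fixed swarm: each call
    returns a fresh, uniformly random peer (here, the blocks it holds)."""
    def __init__(self, swarm, rng):
        self.swarm, self.rng = swarm, rng
    def sample(self):
        return self.rng.choice(self.swarm)

def download(wanted, pss, max_probes=10_000):
    """Probe the current neighbor for missing blocks; as soon as it runs
    out of useful blocks, draw a new sample from the PSS and try again."""
    have, neighbor, probes = set(), pss.sample(), 0
    while wanted - have and probes < max_probes:
        probes += 1
        useful = (neighbor & wanted) - have
        if useful:
            have.add(useful.pop())      # fetch one block per probe
        else:
            neighbor = pss.sample()     # neighbor exhausted: resample
    return have

swarm = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]   # three peers' block sets
got = download(set(range(9)), PSS(swarm, random.Random(1)))
print(sorted(got))                          # → [0, 1, 2, 3, 4, 5, 6, 7, 8]
```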
There are also possible variations on this basic protocol. If we are concerned about the cost of establishing connections, for instance, we could keep the topology stable for longer and keep re-probing existing neighbors, hoping that blocks eventually percolate over the random graph (which they should, rather quickly[2]).
There are many ways to implement a PSS, of which we will discuss a few here.
The BitTorrent DHT Protocol Spec describes a simple mechanism through which DHT nodes act as trackers. Whenever a node \(o\) wishes to join the swarm for a given file, it looks up the CID of the file and finds its (replicated) tracker. It then joins the swarm by announcing itself to the \(8\) nodes closest to the CID[3] and downloading a random sample of the nodes known to this replicated tracker. Node \(o\) must re-announce itself every \(15\) minutes so as not to be dropped by the tracker.
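The tracker side of this mechanism can be sketched as follows (class and method names are hypothetical; only the announce/sample/timeout behavior reflects the spec described above):

```python
import random, time

class ReplicatedTracker:
    """Sketch of the tracker role a DHT node plays for one CID: peers
    announce themselves and are dropped once their last announcement is
    older than the timeout (15 minutes in the BitTorrent DHT)."""
    def __init__(self, timeout=15 * 60):
        self.timeout = timeout
        self.peers = {}                   # peer id -> last announce time
    def announce(self, peer, now=None):
        self.peers[peer] = time.time() if now is None else now
    def sample(self, k, now=None, rng=None):
        """Return up to k live peers, chosen uniformly at random."""
        now = time.time() if now is None else now
        alive = [p for p, t in self.peers.items() if now - t <= self.timeout]
        return (rng or random).sample(alive, min(k, len(alive)))

tracker = ReplicatedTracker()
tracker.announce("peer-a", now=0)    # announces at t = 0, never refreshes
tracker.announce("peer-b", now=800)
print(tracker.sample(8, now=1000))   # → ['peer-b'] (peer-a timed out)
```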
It is easy to see that the replicated tracker implements a PSS: if \(o\) runs out of useful neighbors, it can always ask the replicated tracker for a new sample.
Concerns.
Gossip-based Peer Sampling[4] implements a protocol that forms bounded-degree random graphs in a decentralized fashion. The overall idea of push-pull gossip-based peer sampling is that node \(o\) keeps a set of \(d\) peer entries (where \(d \in \mathbb{N}\)) in local state, called a view, where each entry contains a node ID, its address, and an age counter which tells the node how old that entry is.
Node \(o\) then engages in a periodic, round-based protocol where, at each round, it builds a message containing:
Then:
This effectively guarantees that every node has a random sample of the swarm, and that nodes in this sample are likely to be alive.
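One push-pull exchange can be sketched as below, under simplifying assumptions that are mine, not the paper's: views are plain `{node_id: age}` dicts, both sides send their whole view plus a fresh self entry, and peer selection happens outside this function.

```python
def gossip_round(my_id, my_view, peer_id, peer_view, d):
    """One push-pull exchange: each side sends its view plus a fresh
    (age 0) entry for itself; both then keep the d youngest entries,
    aging everything by one round. Views are dicts {node_id: age}."""
    push = dict(my_view, **{my_id: 0})      # my entries + fresh self entry
    pull = dict(peer_view, **{peer_id: 0})  # peer's reply, likewise

    def merge(own, received, self_id):
        merged = dict(received)
        # Keep our own copy of an entry only if it is younger (fresher).
        merged.update({n: a for n, a in own.items()
                       if n not in merged or a < merged[n]})
        merged.pop(self_id, None)           # never keep an entry for oneself
        youngest = sorted(merged.items(), key=lambda e: e[1])[:d]
        return {n: a + 1 for n, a in youngest}

    return merge(my_view, pull, my_id), merge(peer_view, push, peer_id)

new_a, new_b = gossip_round("a", {"b": 2, "c": 5}, "b", {"a": 1, "d": 0}, d=3)
print(new_a)  # → {'d': 1, 'b': 1, 'c': 6}
```

Note that both sides end up knowing about "d", which neither could have learned without the exchange: this is how samples spread across the swarm.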
For this to work, we need the set of nodes that previously functioned as a swarm's tracker to participate in the peer sampling protocol. Peers can then join the swarm by bootstrapping their views against one of the DHT nodes and from then onwards maintain their own views by running the protocol themselves.
The main advantage of gossip-based peer sampling over a plain BitTorrent tracker is that the load on DHT nodes is bounded, irrespective of the size of the swarm. Indeed, because of the way the protocol is constructed, the average number of contacts a node receives per round is always \(1\). Random overlays generated in this way can also scale to very large sizes without ever disconnecting, even with extreme churn.
Concerns.
The excessive number of connections in regular peer sampling occurs because sampling and overlay maintenance are intertwined. There are approaches, however, which decouple the two by creating a stable random overlay and performing peer sampling through distributed random walks. It is imperative that such random walks be distributed, as they would otherwise require at least as many connections as gossip-based peer sampling, if not more (e.g.[5]).
Wormhole Peer Sampling[6] uses a network of randomly connected public nodes to which private nodes connect, forming a base stable overlay. This base overlay is rewired only very slowly, or in the event of node failures, neither of which happens frequently. Nodes then fire periodic random walks, pushing their own identifiers over the network by means of Metropolis-Hastings (i.e. degree-independent) random walks. After following a random walk for about TTL rounds, the node where an identifier lands takes it as a valid sample.
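The degree-independence of such walks can be sketched as follows (function names and the tiny overlay are illustrative, not from the paper): a Metropolis-Hastings walk accepts a proposed neighbor \(v\) from node \(u\) with probability \(\min(1, \deg(u)/\deg(v))\), which makes the walk's stationary distribution uniform over nodes regardless of their degrees.

```python
import random

def mh_step(graph, node, rng):
    """One Metropolis-Hastings step on an undirected graph: propose a
    uniform neighbor, accept with min(1, deg(u)/deg(v))."""
    candidate = rng.choice(graph[node])
    if rng.random() < min(1.0, len(graph[node]) / len(graph[candidate])):
        return candidate
    return node

def wormhole_sample(graph, start, ttl, rng):
    """Push an identifier along a TTL-round MH walk; the node where it
    lands takes it as a (near-)uniform sample of the overlay."""
    node = start
    for _ in range(ttl):
        node = mh_step(graph, node, rng)
    return node

# A small star-shaped overlay: a naive walk would over-sample the hub,
# but with the MH correction each node is landed on roughly equally often.
graph = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
rng = random.Random(7)
landings = [wormhole_sample(graph, "hub", ttl=20, rng=rng)
            for _ in range(4000)]
```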
The key to Wormhole Peer Sampling is that it works despite limited connectivity between neighbors. I still do not have a clear understanding of all of its parameters, though.
Concerns.
\(S\) is also a time-varying set though, even if just because storage nodes are not required to be always available. But we will avoid overcomplicating things for now. ↩︎
A. Demers et al., "Epidemic algorithms for replicated database maintenance". Proc. of PODC'87, 1987. ↩︎
J. Knoll, "BitTorrent Tech Talks: DHT". The BitTorrent Engineering Blog, 2013. ↩︎
M. Jelasity et al. "Gossip-based peer sampling", ACM TOCS, 2007. ↩︎
C. P. Fry and M. K. Reiter, "Really truly trackerless bittorrent", Carnegie Mellon University, Tech. Rep. CMU-CS-06-148, 2006. ↩︎
R. Roverso, J. Dowling, and M. Jelasity, "Through the wormhole: Low cost, fresh peer sampling for the Internet", Proc. P2P'13, 2013. ↩︎