# Publishing and Downloading in Codex
$$
\newcommand{\fmin}{F_{\text{min}}}
$$
## Concepts
For a given file $F = \{b_1, \cdots, b_m\}$, we assume we have a set of storage nodes $S$ such that each $o \in S$ stores a subset of $F$ which we denote $s(o) \subset F$. We furthermore assume that the $s(o)$ form a _partition_ of $F$; namely, that $\bigcup_{o \in S} s(o) = F$ and $s(o) \cap s(o') = \emptyset$ for every $o \neq o'$. Such partitions are named _slots_ in Codex.
We further assume we have a time-varying set of downloading nodes $D(t)$ which wish to download any subset $\fmin$ of the blocks of $F$ satisfying $\left|\fmin\right| = c \times \left|F\right|$, where $c \in (0, 1]$ is the _code rate_. Data can be replicated over downloading nodes, and the assumption is that $D(t)$ changes quickly whereas $S$ remains relatively stable.[^1]
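As a toy instance (numbers chosen purely for illustration): with $m = 100$ blocks, $\left|S\right| = 4$ storage nodes each holding one slot of $25$ blocks, and code rate $c = 1/2$, any $\fmin$ with $\left|\fmin\right| = 50$ blocks suffices to reconstruct the file.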
## Basic Approach
Nodes in $S$ and $D(t)$ should form a _swarm_ where blocks percolate from storage to downloading nodes, as well as amongst downloading nodes.
We would like to provide nodes with a _peer sampling service_ (PSS) which allows downloading nodes to contact neighbors at random and ask for content. As we have shown before, in the absence of other downloading nodes, one would be able to retrieve all of the content after contacting $c \times \left|S\right|$ nodes on average, provided slot sizes follow a uniform distribution.
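As a back-of-the-envelope check of that figure: slots are disjoint, so with equal-sized slots each newly contacted storage node contributes $\left|F\right| / \left|S\right|$ fresh blocks, and collecting $\fmin$ takes

$$
\frac{\left|\fmin\right|}{\left|F\right| / \left|S\right|} = \frac{c \times \left|F\right|}{\left|F\right| / \left|S\right|} = c \times \left|S\right|
$$

distinct nodes.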
<center>
<img src="https://hackmd.io/_uploads/ByUHnPWh2.png" width="70%">
<p>
<b>Figure 1.</b> A random graph.
</p>
</center>
When there are other downloading nodes, the PSS can offer, to each node, fresh samples taken over the entire swarm, allowing a node to peek into the entire connected component of the swarm instead of just its immediate neighborhood, as it would in a regular Bitswap swarm. Note that the role of the PSS is to _maintain a pool of samples of the swarm_: it concerns itself neither with block discovery nor with downloads.
A possible workflow for a downloading node $o$, then, could be as follows:
1. $o$ bootstraps the PSS (joins the swarm);
2. $o$ connects with $n_s$ nodes taken from the PSS and discovers their blocks;
3. $o$ downloads all blocks there are to download from these neighbors;
4. if $o$ already holds some valid $\fmin$ (i.e., $c \times \left|F\right|$ blocks), then $o$ stops and starts to seed; otherwise, it goes back to step $2$.
This does not have to be so synchronous, though. As soon as a neighbor runs out of useful information, $o$ can pick a new sample from the PSS and try again.
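As a minimal sketch of this loop, assuming hypothetical `pss`, `connect`, and `seed` interfaces (none of these are real Codex APIs):

```python
# Sketch only: `pss`, `connect`, and `seed` are hypothetical interfaces,
# not real Codex APIs; N_S is an arbitrary choice.
N_S = 5  # number of neighbors probed per iteration

def download(pss, min_blocks: int) -> dict:
    have: dict = {}                    # block_id -> block contents
    pss.bootstrap()                    # step 1: join the swarm
    while len(have) < min_blocks:      # loop until we hold |F_min| blocks
        for peer in pss.sample(N_S):   # step 2: n_s fresh neighbors from the PSS
            conn = connect(peer)
            useful = conn.block_ids() - have.keys()  # their blocks we lack
            for bid in useful:         # step 3: download everything useful
                have[bid] = conn.fetch(bid)
    seed(have)                         # step 4: start seeding
    return have
```

The asynchronous variant would simply replace the inner loop with per-neighbor tasks, each requesting a fresh sample from the PSS whenever its peer runs out of useful blocks.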
There are also possible variations on this basic protocol. If we are concerned about the cost of connections, for instance, we could keep the topology stable for longer and re-probe existing neighbors, hoping that blocks eventually percolate over the random graph (which they should, rather quickly[^5]).
## Implementation of the PSS
There are many ways to implement a PSS, of which we will discuss a few here.
### BitTorrent-like Trackers
The [BitTorrent DHT Protocol Spec](https://www.bittorrent.org/beps/bep_0005.html) describes a simple mechanism through which DHT nodes act as trackers. Whenever a node $o$ wishes to join a swarm for a given file, it looks up the CID of the file and finds its (replicated) tracker. It then joins the swarm by announcing itself to the $8$ nodes closest to the CID[^3], and by downloading a random sample of the nodes known to this replicated tracker. Node $o$ must re-announce itself every $15$ minutes so as not to be dropped by the tracker.
It is easy to see that the replicated tracker implements a PSS: if $o$ runs out of useful neighbors, it can always ask the replicated tracker for a new sample.
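A toy, in-memory rendition of the state a single tracker node keeps; the expiry threshold and sample size below are assumptions for illustration, not values from BEP 5:

```python
import random
import time

ANNOUNCE_INTERVAL = 15 * 60         # peers re-announce every 15 minutes
EXPIRY = 2 * ANNOUNCE_INTERVAL      # assumed drop threshold (not in BEP 5)

class ToyTracker:
    """What one tracker node stores for a single CID. In the real protocol
    this state is replicated over the 8 DHT nodes closest to the CID."""

    def __init__(self) -> None:
        self._peers: dict[str, float] = {}  # peer address -> last announce time

    def announce(self, peer: str) -> None:
        self._peers[peer] = time.monotonic()

    def sample(self, k: int = 20) -> list[str]:
        """The PSS call: return up to k random peers believed to be alive."""
        now = time.monotonic()
        self._peers = {p: t for p, t in self._peers.items() if now - t < EXPIRY}
        live = list(self._peers)
        return random.sample(live, min(k, len(live)))
```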
**Concerns.**
1. Apart from handling all swarm admissions, the load on tracker nodes is $O(n)$ in the size of the swarm, since every peer re-announces every $15$ minutes. This is made worse if a node ends up as a tracker for more than one swarm.
2. The swarm can be attacked if a malicious player announces many peer IDs for peers that then refuse to exchange blocks. This could degrade performance, or even disrupt the swarm (though that is harder).
3. If tracker nodes cannot be trusted, then the system breaks down.
### Gossip-Based Peer Sampling
Gossip-based Peer Sampling[^4] implements a protocol that forms bounded-degree random graphs in a decentralized fashion. The overall idea of _pushpull_ gossip-based peer sampling is that each node $o$ keeps a fixed-size set of $d$ peer entries (where $d \in \mathbb{N}$) in local state, called a _view_, where each entry contains a node ID, its address, and an _age_ counter which tells the node how old that entry is.
Node $o$ then engages in a periodic, round-based protocol where, at each round, it builds a message containing:
1. a random sample with $d/2 - 1$ descriptors from its current view;
2. a descriptor containing $o$'s own ID and address, with its age set to $0$.
Then:
1. Node $o$ sends that message to a random neighbor $u$ selected from the view;
2. $u$ sends back to $o$ a similarly constructed message (_pushpull_);
3. $o$ _discards_ $d/2$ descriptors, typically the ones with the highest age, and replaces them with the descriptors received from $u$;
4. the ages of all descriptors in this new view are increased by one unit.
This effectively guarantees that every node has a random sample of the swarm, and that nodes in this sample are likely to be alive.
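A compact sketch of one pushpull round following the steps above; the transport callback (`exchange`) and the deduplication rule are my own simplifications of the protocol in [^4]:

```python
import random
from dataclasses import dataclass

D = 8  # view size d; an arbitrary choice for illustration

@dataclass
class Descriptor:
    node_id: str
    address: str
    age: int = 0

def build_message(own: Descriptor, view: list[Descriptor]) -> list[Descriptor]:
    # d/2 - 1 random descriptors from the view, plus a fresh self-descriptor
    sample = random.sample(view, min(D // 2 - 1, len(view)))
    return sample + [Descriptor(own.node_id, own.address, age=0)]

def merge(view: list[Descriptor], received: list[Descriptor]) -> list[Descriptor]:
    # keep the youngest entries (discarding d/2 of the oldest) and splice in
    # the peer's descriptors, deduplicating by node ID
    survivors = sorted(view, key=lambda e: e.age)[: D - len(received)]
    merged = {e.node_id: e for e in survivors}
    merged.update({e.node_id: e for e in received})
    new_view = list(merged.values())
    for e in new_view:
        e.age += 1                     # every descriptor ages by one unit
    return new_view

def do_round(own: Descriptor, view: list[Descriptor], exchange) -> list[Descriptor]:
    u = random.choice(view)                        # a random neighbor
    reply = exchange(u, build_message(own, view))  # pushpull: send, get reply
    return merge(view, reply)
```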
For this to work, we need the set of DHT nodes that would previously have functioned as a swarm's tracker to participate in the peer sampling protocol. Peers can then join the swarm by bootstrapping their views against one of these DHT nodes and, from then onwards, maintain their own views by running the protocol themselves.
The main advantage of gossip-based peer sampling over a plain BitTorrent tracker is that the load on DHT nodes is bounded, irrespective of the size of the swarm. Indeed, because every node initiates exactly one exchange per round, the average number of contacts a node receives per round is always $1$. Random overlays generated in this way can also scale to very large sizes without ever disconnecting, even under extreme churn.
**Concerns.**
1. The requirement that nodes have to connect to a different node at every round to exchange views can be impractical due to the cost of establishing connections, particularly in NATed environments;
2. it is unclear how resilient such overlays are to attacks, particularly to nodes that deliberately inject bogus descriptors in view exchanges. These can eventually disrupt or disconnect the overlay, with the added issue that correct peers may not notice it;
3. once again, if DHT nodes cannot be trusted, the whole thing breaks down.
### Peer Sampling with Random Walks
The excessive number of connections in regular peer sampling occurs because sampling and overlay maintenance are intertwined. There are approaches, however, which decouple the two by creating a stable random overlay and performing peer sampling through distributed random walks. Note that it is imperative that such random walks be distributed, as otherwise they would require at least as many connections as gossip-based peer sampling (e.g.[^6]).
Wormhole Peer Sampling[^7] uses a network of randomly connected public nodes to which private nodes connect, forming a stable base overlay. This base overlay is rewired only very slowly, or in the event of node failures; neither happens frequently. Nodes then fire periodic random walks by pushing their own identifiers over the network by means of Metropolis-Hastings (i.e. degree-independent) random walks. After a walk has run for about TTL hops, the node where the identifier lands takes it as a valid sample.
The key to Wormhole Peer Sampling is that it works despite limited connectivity between neighbors. I still do not have a clear understanding of all of its parameters though.
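As a sketch of the degree-independent walk itself; the adjacency map, and the assumption that a node can learn a neighbor's degree when proposing a hop, are illustration choices rather than details from the paper:

```python
import random

def mh_step(node: str, neighbors: dict[str, list[str]]) -> str:
    """One Metropolis-Hastings hop: propose a uniform random neighbor and accept
    with probability min(1, deg(node)/deg(candidate)), which makes the walk's
    stationary distribution uniform over nodes (i.e. degree independent)."""
    candidate = random.choice(neighbors[node])
    if random.random() < min(1.0, len(neighbors[node]) / len(neighbors[candidate])):
        return candidate
    return node  # proposal rejected: the walker stays put for this hop

def push_sample(origin: str, neighbors: dict[str, list[str]], ttl: int) -> str:
    """Walk `origin`'s identifier for ttl hops; the node where it lands
    should adopt `origin` as a fresh sample."""
    node = origin
    for _ in range(ttl):
        node = mh_step(node, neighbors)
    return node
```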
**Concerns.**
1. On the surface this looks more complex, as it requires a network of public nodes. Yet the DHT probably also requires that nodes be publicly reachable (including from one another), so this might not be an issue after all.
2. Once again, it is unclear what happens if nodes start misbehaving and, say, corrupt random walk samples.
[^5]: A. Demers et al., "[Epidemic algorithms for replicated database maintenance](https://dl.acm.org/doi/10.1145/41840.41841)". _Proc. of PODC'87_, 1987.
[^1]: $S$ is also a time-varying set though, even if just because storage nodes are not required to be always available. But we will avoid overcomplicating things for now.
[^3]: J. Knoll, "[BitTorrent Tech Talks: DHT](https://engineering.bittorrent.com/2013/01/22/bittorrent-tech-talks-dht/)". _The BitTorrent Engineering Blog_, 2013.
[^4]: M. Jelasity et al. "[Gossip-based peer sampling](https://dl.acm.org/doi/10.1145/1275517.1275520)", _ACM TOCS_, 2007.
[^6]: C. P. Fry and M. K. Reiter, "[Really truly trackerless bittorrent](http://reports-archive.adm.cs.cmu.edu/anon/2006/CMU-CS-06-148.ps)", _Carnegie Mellon University_, Tech. Rep. CMU-CS-06-148, 2006.
[^7]: R. Roverso, J. Dowling, and M. Jelasity, "[Through the wormhole: Low cost, fresh peer sampling for the Internet](https://ieeexplore.ieee.org/document/6688707)", _Proc. P2P'13_, 2013.