# Building Something Like BitTorrent Part 3: The RPC Interface
## Specifying the Interface
We're ready to cover the actual RPC interface that Kademlia peers expose to allow for inter-peer communication. Once we've implemented and exposed this RPC interface, implementing the overall DHT API will be easy. This will be a short but important note: the RPC implementations are deceptively simple, holding little complexity yet requiring careful thought to avoid sinister bugs (one of which I'll cover at the end).
There are **4 inter-peer RPC functions** that Kademlia exposes:
- `find_node(Key key)`: returns the $k$ closest peers to `key` in the non-local peer's router
- `find_value(Key key)`: either returns the chunk associated with `key` or the $k$ closest peers to `key` if the chunk is not stored by the non-local peer
- `store(Key key, Chunk chunk)`: asks the non-local peer to store the (`key`, `chunk`) mapping locally
- `ping()`: returns the contact information of the non-local peer
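To make the shape of the interface concrete, here's a minimal sketch of how it might look as a C++ abstract class. The `Key`, `Chunk`, and `Peer` types here are assumptions standing in for whatever representations the implementation actually uses:
```
#include <array>
#include <cstdint>
#include <string>
#include <variant>
#include <vector>

// hypothetical stand-ins for the real types
using Key = std::array<uint32_t, 5>;              // 160-bit identifier
struct Chunk { std::vector<uint8_t> data; };      // stored value
struct Peer  { Key key; std::string endpoint; };  // contact information

// the four inter-peer RPCs; find_value returns either the stored chunk
// or the k closest peers, depending on whether the receiver has the chunk
class KademliaRpc {
public:
    virtual ~KademliaRpc() = default;
    virtual std::vector<Peer> find_node(const Key& key) = 0;
    virtual std::variant<Chunk, std::vector<Peer>> find_value(const Key& key) = 0;
    virtual void store(const Key& key, const Chunk& chunk) = 0;
    virtual Peer ping() = 0;
};
```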
The specification above should be clear enough once we cover two points: how "closeness" between Keys is measured, and how peers update each other's contact information.
## Closeness
For the `find_node` and `find_value` RPCs, the local peer (the "sender") asks the non-local peer (the "receiver") to return the $k$ closest peers to a given Key (recall that $k$ is the limit on peers per kbucket). We'll use the Key distance metric that we defined in Part 1: `dist(k1, k2) = k1 ^ k2`.
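As a quick illustration, comparing XOR distances on a multi-word key reduces to a word-by-word lexicographic comparison. A minimal sketch, assuming the 160-bit `Key` layout from the earlier sketch:
```
#include <array>
#include <cstddef>
#include <cstdint>

using Key = std::array<uint32_t, 5>;  // assumed 160-bit key layout

// returns true if `a` is strictly closer to `target` than `b` is, comparing
// XOR distances word by word from the most significant word down
bool closer(const Key& target, const Key& a, const Key& b) {
    for (size_t i = 0; i < target.size(); i++) {
        uint32_t da = a[i] ^ target[i];
        uint32_t db = b[i] ^ target[i];
        if (da != db) return da < db;
    }
    return false;  // equidistant
}
```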
Using the binary tree structure of the router, we can implement the $k$-closest lookup as a simple recursive algorithm:
```
closest_peers(BinaryTree tree, Key key, int count) {
    if tree is a leaf:
        return tree->kbucket[:count]
    // recall that the tree's level corresponds to the bit of the key
    // that we split the node on
    match_bit = key[tree->level]
    first_tree = match_bit ? tree->one_child : tree->zero_child
    second_tree = match_bit ? tree->zero_child : tree->one_child
    buffer = closest_peers(first_tree, key, count)
    if length(buffer) < count:
        buffer.add(closest_peers(second_tree, key, count - length(buffer)))
    return buffer
}
```
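For concreteness, here's roughly how that recursion might look in C++. This is a sketch: the `Tree` node layout and the `key_bit` helper are my assumptions about the router structure from the previous note, and `Key`/`Peer` are the stand-in types from above:
```
#include <algorithm>
#include <vector>

// assumed layout for a node of the router's binary tree (see Part 2)
struct Tree {
    int level;                   // bit of the key this node splits on
    Tree* zero_child = nullptr;  // subtree for keys with a 0 at `level`
    Tree* one_child = nullptr;   // subtree for keys with a 1 at `level`
    std::vector<Peer> kbucket;   // populated only at leaves
    bool is_leaf() const { return zero_child == nullptr; }
};

// returns the i-th most significant bit of the key (assumed helper)
bool key_bit(const Key& key, int i) {
    return (key[i / 32] >> (31 - i % 32)) & 1;
}

std::vector<Peer> closest_peers(const Tree* tree, const Key& key, size_t count) {
    if (tree->is_leaf()) {
        // take up to `count` peers from this leaf's kbucket
        size_t n = std::min(count, tree->kbucket.size());
        return std::vector<Peer>(tree->kbucket.begin(), tree->kbucket.begin() + n);
    }
    // descend into the subtree that matches the key's bit first, then top
    // up from the sibling subtree if we still need more peers
    bool match_bit = key_bit(key, tree->level);
    const Tree* first_tree = match_bit ? tree->one_child : tree->zero_child;
    const Tree* second_tree = match_bit ? tree->zero_child : tree->one_child;
    std::vector<Peer> buffer = closest_peers(first_tree, key, count);
    if (buffer.size() < count) {
        std::vector<Peer> rest = closest_peers(second_tree, key, count - buffer.size());
        buffer.insert(buffer.end(), rest.begin(), rest.end());
    }
    return buffer;
}
```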
## Updating Peer Info
Each RPC call counts as an interaction for both the sender and the receiver, so each needs to update the other's contact information in its router. This will look different for the sender and the receiver:
- sender: update the receiver's information in the router AFTER receiving a valid response (avoid the case where the router re-inserts stale contact information)
- receiver: update the sender's information BEFORE processing the RPC request
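As a sketch of where these updates sit relative to the actual RPC work (the `handle_find_node`, `find_closest`, and `rpc_find_node` names here are hypothetical, and `update_peer` is the insertion/eviction routine shown next):
```
// receiver side: update the sender's contact info BEFORE handling the RPC
std::vector<Peer> Session::handle_find_node(Peer& sender, const Key& key) {
    this->update_peer(sender.key, sender.endpoint);
    // ... then perform the actual lookup over the router
    return this->find_closest(key);
}

// sender side: update the receiver's contact info only AFTER receiving a
// valid response, so a dead receiver's stale info is never re-inserted
std::vector<Peer> Session::find_node(Peer& receiver, const Key& key) {
    // assume rpc_find_node throws (or otherwise bails out) on failure, so
    // update_peer only runs on a valid response
    std::vector<Peer> result = this->rpc_find_node(receiver.endpoint, key);
    this->update_peer(receiver.key, receiver.endpoint);
    return result;
}
```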
Recall that "updating the receiver's information" refers to the rough router insertion/eviction algorithm from the previous note. I'll include my C++ implementation of the insertion/eviction algorithm here to emphasize the importance of **synchronization** once we add concurrent RPC handling:
```
void Session::update_peer(Key& peer_key, std::string endpoint) {
    // attempt to insert peer and evict lru peer if stale
    Peer* lru_peer;
    while (true) {
        // try to insert the new peer's contact information and return if succeeded
        this->router_lock.lock();
        bool inserted = this->router->attempt_insert_peer(peer_key, endpoint, &lru_peer);
        this->router_lock.unlock();
        if (inserted) {
            return;
        }
        // get the LRU peer and check if it is still alive
        Peer other_peer;
        bool lru_ping = this->ping(lru_peer, &other_peer);
        if (lru_ping && lru_peer->key == other_peer.key) {
            // LRU peer is alive and unchanged: keep it, drop the new peer
            return;
        } else {
            // LRU peer is dead or has changed keys: evict it and retry
            this->router_lock.lock();
            this->router->evict_peer(lru_peer->key);
            this->router_lock.unlock();
            ...
        }
    }
}
```
The first half of the implementation is familiar: we just try to insert the peer into the router (take note of the synchronization; we require atomic access to the router so that concurrent usage can't corrupt its kbuckets).
However, the eviction procedure that follows has been complicated in two ways:
- if we see the LRU peer is still alive, we also check that the contact information received matches that stored in the router (this handles the case where a peer has died and rejoined with the same endpoint under the guise of a new Key)
- after we attempt to evict the peer, we do not return; rather, we retry the entire algorithm
This second difference is important. Note that **we do not retain atomic access to the router for the entire algorithm**: rather, we (i) relinquish the router lock after attempting insertion, (ii) ping the LRU peer, and (iii) re-acquire the router lock to evict the LRU peer. We cannot hold the router lock for the entire algorithm because contacting the LRU peer (calling `ping`) requires access to the router (and, regardless, holding the lock across a network round trip would stall every other RPC handler).
So, why do we need to retry the algorithm? We temporarily relinquish control over the router, and during that window another thread may have already evicted the LRU peer and filled the freed slot with a different peer. If we then blindly inserted the new peer and returned, we could overflow the kbucket; retrying instead re-checks the kbucket's state while holding the lock.
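To make that concrete, here's my guess at the contract `attempt_insert_peer` upholds (a sketch; `KBucket` and `bucket_for` are hypothetical names for the router internals from the previous note):
```
// sketch: insert the peer if it's already present (refresh) or if its
// kbucket has room; otherwise hand back the kbucket's LRU peer so the
// caller can decide whether to evict it
bool Router::attempt_insert_peer(Key& key, std::string endpoint, Peer** lru_peer) {
    KBucket& bucket = this->bucket_for(key);  // leaf kbucket covering `key`
    if (bucket.contains(key)) {
        bucket.move_to_back(key);             // refresh: mark most recently seen
        return true;
    }
    if (bucket.size() < K) {                  // K = kbucket capacity
        bucket.push_back(Peer{key, endpoint});
        return true;
    }
    *lru_peer = &bucket.front();              // full: report the LRU peer
    return false;
}
```
Nothing about this check survives the moment we release the lock: by the time `update_peer` re-acquires it, another thread may have run the same routine and changed the bucket, which is exactly why the loop re-runs the insertion attempt rather than trusting its earlier view of the kbucket.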
## The Ping Bug!
I noted above that we need to run the insertion/eviction algorithm for each of the RPC functions (as both sender and receiver); however, I'm going to make an exception for one RPC because of a neat bug that I discovered. The intensity of the frustration this bug caused was matched only by the glory of finally identifying and fixing it.
An edge-case scenario that induces this bug proceeds as follows:
- Peer $P_1$ joins the DHT
- Peer $P_2$ joins the DHT under key/endpoint $(K_1, E_1)$
- $P_2$ contacts $P_1$
- $P_1$ successfully inserts $(K_1, E_1)$ in its router
- $P_2$ leaves the DHT
- Peer $P_2$ rejoins the DHT under key/endpoint $(K_2, E_1)$
- $P_2$ contacts $P_1$ via RPC X. In $P_1$'s handling of X:
  - $P_1$ unsuccessfully attempts to insert $(K_2, E_1)$ in its router
  - $P_1$ retrieves the LRU peer $(K_1, E_1)$ (i.e., $P_2$'s old contact information)
  - $P_1$ pings $E_1$. In $P_1$'s call to ping:
    - $P_1$ unsuccessfully attempts to insert $(K_2, E_1)$ in its router
    - $P_1$ retrieves the LRU peer $(K_1, E_1)$ (i.e., $P_2$'s old contact information)
    - $P_1$ pings $E_1$. In $P_1$'s call to ping:
      - $...$
This is an edge case because it occurs precisely when (i) $P_2$ dies and rejoins, and (ii) $P_2$'s old contact information is the LRU peer in the corresponding kbucket. **The ultimate issue is that every RPC handler (including `ping`) may include a call to `ping`** whenever the receiver needs to check if the LRU peer is still alive. In the (not-so-rare) case where two different keys share the same endpoint (e.g., the rejoin scenario above), the call to `ping` $E_1$ may itself trigger another call to `ping` $E_1$. Infinite recursion!
The way I solved this was simply **removing the call to the router insertion/eviction algorithm from the sender's side of `ping`**. This should be fine, since the vast majority of calls to ping come from the insertion/eviction algorithm itself (so we end up updating the pinged peer's contact information anyway). That's a negligible price for fixing a disastrous bug that occurs with non-negligible probability.
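In code, the fix amounts to the sender-side `ping` skipping the router update entirely. A minimal sketch, where `rpc_ping` is a hypothetical transport call:
```
// sender-side ping: unlike the other RPCs, deliberately do NOT call
// update_peer here. update_peer's eviction path calls ping, so a router
// update inside ping would re-enter update_peer and recurse forever.
bool Session::ping(Peer* peer, Peer* responder) {
    return this->rpc_ping(peer->endpoint, responder);
}
```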
Why is it non-negligible?
First, the probability of $K_1$ and $K_2$ (as used above: the keys the peer uses when it first joins and when it rejoins) landing in the same kbucket is $> 1/4$. This bound counts only the case where the first bit of each key is the opposite of $P_1$'s Key's first bit (in which case both get inserted into the kbucket at level 1 of the router tree), which has probability $(1/2)^2 = 1/4$; collisions in deeper kbuckets only add to it.
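Spelling that bound out (assuming keys are drawn uniformly at random), with $X^{(0)}$ denoting the first bit of key $X$:

$$
\Pr[\text{same kbucket}] \;\ge\; \Pr\left[K_1^{(0)} \ne P_1^{(0)} \text{ and } K_2^{(0)} \ne P_1^{(0)}\right] \;=\; \frac{1}{2} \cdot \frac{1}{2} \;=\; \frac{1}{4},
$$

and the inequality is strict because $K_1$ and $K_2$ can also land together in a deeper kbucket on $P_1$'s side of the tree.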
Second, if we assume $P_2$ rejoins the DHT before its old contact information is evicted from $P_1$'s router, then that stale entry is guaranteed to survive until one of the following occurs:
- $P_1$ contacts $P_2$ via the old contact information (e.g., during a search)
- $P_2$ contacts $P_1$ (e.g., during a search)
- $P_1$ attempts to evict the old contact information
Any one of these situations will trigger the infinite recursion above. In particular, the first two will eventually occur given enough random searches, and the third will eventually occur given enough peer churn. Additionally, $P_1$ will never refresh the old contact information after $P_2$ leaves the DHT, so the old contact information will eventually be pushed to the LRU position of the kbucket.
Finally, I can speak from empirical evidence: before I identified this bug, tests for the search algorithms (which I'll cover in the next note) would occasionally descend into infinite recursion (evidenced by logs full of endless `PING` calls) in a maddening but reliable fashion.
In addition to the fix above, finding this bug exposed another layer of uncertainty in Kademlia, one that I've hinted at several times above and in prior notes but that only became apparent when the bug surfaced: **keys may be unique, but the peers (specifically, the *endpoints*) they refer to may not be.** The leave-and-rejoin scenario is sufficient evidence of this, and it needs to be factored into the overall design of the DHT. For me, that included adding contact information to all RPC responses (not just pings) so that stale router entries get overwritten.