# Swarm Networking
Swarm appears to rely on DHT replication for storage. As usual, Blake and Rodrigues[^blake_03] come to mind: this is not a good idea unless churn is under control, and that is the framing with which I started reading.
**DHT.** Swarm uses a forwarding Kademlia in which connections are postulated to be "quasi-permanent". I was again under the impression that this would be at odds with a dynamic system (churn means stable connections are not a reality), and it is, but it turns out that if "active acknowledgements" are added onto a forwarding Kademlia, the expected latency of recursive routing can actually be smaller than that of iterative routing.
This is argued both analytically[^wu_06] and empirically[^heep_10]. The lookup variant described in [^wu_06], however, requires nodes along a query path to contact the initial sender _directly_, which is at odds with the "backwarding" (forwarding responses backwards along the query path) that Swarm employs to preserve sender privacy. I am therefore not convinced that this will perform well under churn. In fact, multihop backwarding will probably be a big problem under churn, and will fail to shorten lookup paths when timeouts occur.
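To make the contrast concrete, here is a minimal sketch (toy routing tables and XOR distances, none of it Swarm's actual code) of the two ways a recursive lookup can deliver its response. Under backwarding, every node on the request path must still be reachable when the response travels back; a direct reply depends on the final node alone.

```python
# Toy recursive Kademlia lookup: node ids are ints, `nodes` maps a node id
# to the peer ids it knows, and XOR is the distance metric.
def route(nodes, origin, key):
    """Greedily forward a request toward `key`, recording the path taken."""
    path, current = [origin], origin
    while True:
        closest = min(nodes[current], key=lambda n: n ^ key)
        if closest ^ key >= current ^ key:
            return path  # no closer peer known: `current` answers
        path.append(closest)
        current = closest

def respond_direct(path):
    # wu_06-style: the terminal node replies straight to the originator,
    # using a single hop but revealing the originator's address.
    return [(path[-1], path[0])]

def respond_backward(path):
    # Swarm-style "backwarding": the reply retraces the request path, so
    # every intermediate hop must still be alive on the way back.
    return list(zip(path[::-1], path[-2::-1]))

nodes = {0b000: [0b100], 0b100: [0b000, 0b110],
         0b110: [0b100, 0b111], 0b111: [0b110]}
assert route(nodes, 0b000, 0b111) == [0b000, 0b100, 0b110, 0b111]
```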
**DISC.** Data is stored as $4\text{KB}$ chunks, which are either regular content-addressed chunks or "single-owner" chunks. Content-addressed chunks are what one expects, whereas "single-owner" chunks decouple the chunk's key from its content. At first I thought "single-owner" chunks were a form of mutable data, but the book states:
"_(...) if the owner of the private key signs two different payloads with the same identifier and uploads both chunks to Swarm, the behaviour of the network is unpredictable._
Since the identifier is part of the address, I take this to mean that single-owner chunks are not meant to be mutable, in which case I do not really see their point (maybe I will as I read on).
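A hedged sketch of how I understand the two addressing schemes; the hash function (`sha3_256` standing in for Swarm's actual chunk hash) and the field layout are my assumptions, not the normative spec:

```python
from hashlib import sha3_256  # stand-in for Swarm's actual chunk hash

def content_address(payload: bytes) -> bytes:
    # a content-addressed chunk's address is a function of its payload
    return sha3_256(payload).digest()

def single_owner_address(identifier: bytes, owner: bytes) -> bytes:
    # a single-owner chunk's address binds an identifier to an owner, not
    # to a payload: two differently signed payloads under the same
    # identifier land on the same address, hence the "unpredictable"
    # behaviour quoted above
    return sha3_256(identifier + owner).digest()
```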
**Encryption.** Chunks are encrypted with a symmetric key $s$, which may be provided externally. For each of the $128$ $32$-byte segments that make up a $4\text{KB}$ chunk, Swarm derives a separate key by computing $k_i = H(s\ ||\ i)$, where $0 \leq i \leq 127$ is the index of the segment and $H$ outputs a $256$-bit ($32$-byte) string. Encryption then consists of XOR'ing $k_i$ with the contents of segment $i$.
Since segments have separate keys, they can be decrypted separately: one could reveal the key $k_i$ for segment $i$ without revealing $s$, allowing a third party to learn the contents of that segment alone. This looks a bit too granular to me, but apparently Swarm needs this for on-chain computations of some sort (again, maybe this will make sense as I read on).
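A minimal sketch of the scheme as described; the concrete hash and the byte encoding of the segment index are my assumptions:

```python
from hashlib import sha3_256  # stand-in for whatever H Swarm actually uses

SEGMENT = 32  # bytes per segment

def segment_key(s: bytes, i: int) -> bytes:
    # k_i = H(s || i); the 8-byte big-endian encoding of i is an assumption
    return sha3_256(s + i.to_bytes(8, "big")).digest()

def crypt_segment(s: bytes, i: int, segment: bytes) -> bytes:
    # XOR with k_i; XOR is its own inverse, so this encrypts and decrypts
    return bytes(a ^ b for a, b in zip(segment, segment_key(s, i)))

def crypt_chunk(s: bytes, chunk: bytes) -> bytes:
    # apply the per-segment keystream across the whole chunk
    return b"".join(
        crypt_segment(s, i, chunk[i * SEGMENT:(i + 1) * SEGMENT])
        for i in range(len(chunk) // SEGMENT)
    )
```

Handing out `segment_key(s, i)` discloses segment $i$ and nothing else, which is the selective-disclosure property described above.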
**Replication.** Swarm replicates a chunk over the $r$ nodes closest to its "intended" address. Judging from the Swarm code, nodes attempt to keep between $8$ and $18$ peers per Kademlia $k$-bucket, and will attempt to sync with all the peers in the highest proximity-order bucket ($8$ to $18$ peers).
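"Proximity order" here is the number of leading address bits two nodes share; a small sketch of the computation (my own formulation, not Swarm's code):

```python
def proximity_order(a: bytes, b: bytes) -> int:
    """Number of leading bits two addresses share; higher means closer."""
    for i, (x, y) in enumerate(zip(a, b)):
        d = x ^ y
        if d:
            return i * 8 + 8 - d.bit_length()
    return 8 * len(a)  # identical addresses

assert proximity_order(b"\xf0", b"\xf1") == 7  # differ in the last bit
assert proximity_order(b"\x00", b"\x80") == 0  # differ in the first bit
```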
Because a node will itself be in the proximity neighborhood of at least as many peers, the number of peers it connects to for sync is about twice as large ($16$ to $36$). There are several things about the synchronization approach that worry me.
**1. Churn.** Nodes entering or leaving the network will trigger full syncs in all of their neighborhoods. Although this does not necessarily mean that everything needs to be copied to the replacement nodes every time[^1], I do not see this working well under heavier churn - there will be a lot of data copying going on. In fact, this is _exactly_ the situation in Blake and Rodrigues' paper[^blake_03].
**2. Storage guarantees.** Nodes adopt an eviction strategy in which chunks deemed to have "lower value" are evicted in favour of chunks with "higher value". This means there are no storage guarantees - at any time, a node might decide to purge your data because something of higher value came along. Then the text says things like:
"_If chunks are synced in the order they are stored, this may not result in the node always having the most profitable (most often requested) chunks_"
If this is correct, it says that valuable chunks are popular chunks, which means unpopular content will likely disappear from the system over time. That makes it wholly inadequate for private content, and suited instead to popular, Bittorrent-like use cases - except that retrieval performance is likely nowhere near Bittorrent's. I need to read the incentives section first, but this alone would make the system quite restrictive.
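A toy model of what such value-based eviction implies; using request count as the value function is my assumption:

```python
# Toy value-based eviction: when the store is full, the lowest-value chunk
# is purged, so unpopular content silently disappears.
class Store:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.value = {}  # chunk address -> value (here, request count)

    def request(self, addr):
        self.value[addr] = self.value.get(addr, 0) + 1

    def put(self, addr):
        self.value.setdefault(addr, 0)
        if len(self.value) > self.capacity:
            victim = min(self.value, key=self.value.get)
            del self.value[victim]  # no guarantee it was not your data
```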
**3. Order-by-value syncs.** The book starts by saying that chunks are totally ordered within a peer by a timestamp indicating when that peer stored them. This hints at a very simple approach for checking whether a given peer is in sync with another: assign an integer to every chunk (totally ordering them), and peers syncing with you just have to keep track of the last integer they saw.
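A hedged sketch of that single-integer ("cursor") scheme; the names are mine, not Swarm's:

```python
# Each peer keeps chunks in insertion order; a downstream peer stays in
# sync by remembering one integer (a cursor) per upstream peer.
class ChunkLog:
    def __init__(self):
        self.log = []  # chunks, in the order this peer stored them

    def put(self, chunk):
        self.log.append(chunk)  # a chunk's position is its integer

    def since(self, cursor: int):
        # everything the downstream peer has not seen, plus the new cursor
        return self.log[cursor:], len(self.log)

def pull_sync(cursors: dict, peer_id, upstream: ChunkLog):
    # one round of pull sync against a single upstream peer
    new_chunks, cursors[peer_id] = upstream.since(cursors.get(peer_id, 0))
    return new_chunks
```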
But later on, the book says:
"_If chunks are synced in the order they are stored, this may not result in the node always having the most profitable (most often requested) chunks. Thus it may be advisable to sync chunks starting with the most popular ones according to upstream peers and finish syncing when storage capacity is reached. In this way, a node's limited storage will be optimised._"
Popularity is dynamic, so the ordering will change over time, and the single-integer sync mechanism no longer works. In fact, at that point nodes are trying to solve an optimization problem - which subset of my neighbors' chunks should I keep to maximize profit? - and trying to do so over a synchronization protocol. This does not sound simple or efficient at all.
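A tiny illustration, building on the cursor sketch above, of why a dynamic ordering breaks it:

```python
# Once the stream is re-sorted by (changing) popularity, a previously
# exchanged cursor no longer identifies a stable prefix of the peer's log.
log = ["a", "b", "c", "d"]
cursor = 2                                   # downstream has seen ["a", "b"]
popularity = {"a": 1, "b": 7, "c": 3, "d": 9}
log.sort(key=popularity.get, reverse=True)   # -> ["d", "b", "c", "a"]
assert log[:cursor] != ["a", "b"]            # the synced prefix is gone
```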
[^1]: The book flip-flops a bit about this, but pull sync apparently orders records in a stream by insertion order, so checking whether you are in sync with a peer amounts to transmitting and comparing an integer. If a node disconnects and reconnects, the re-sync cost can therefore be small -- just check an integer with a bunch of nodes and conclude there is nothing to do.
[^wu_06]: Di Wu, Ye Tian and Kam-Wing Ng, "Analytical Study on Improving DHT Lookup Performance under Churn," Sixth IEEE International Conference on Peer-to-Peer Computing (P2P'06), Cambridge, 2006, pp. 249-258, doi: 10.1109/P2P.2006.4.
[^heep_10]: B. Heep, "R/Kademlia: Recursive and topology-aware overlay routing", 2010 Australasian Telecommunication Networks and Applications Conference, New Zealand, 2010, pp. 102-107, doi: 10.1109/ATNAC.2010.5680244.
[^blake_03]: Charles Blake and Rodrigo Rodrigues, "High Availability, Scalable Storage, Dynamic Peer Networks: Pick Two," 9th Workshop on Hot Topics in Operating Systems (HotOS IX), 2003. https://www.usenix.org/events/hotos03/tech/full_papers/blake/blake.pdf