Ideal pubsub

The goal of this work is to derive a theoretically optimal pubsub algorithm and estimate its properties with respect to the bounding conditions: network bandwidth and latency.

It would be helpful to know the theoretical maximum speed of message dissemination within p2p networks and to understand how far any specific pubsub algorithm (such as GossipSub, Episub or any other) is from the maximum possible performance.

Special thanks to Tanguy Menduist for good ideas, discussions and review

Model assumptions/simplifications

  • All nodes have the same bandwidth
  • All connections have the same latency
  • Considering dissemination of just a single message (i.e. no other messages are disseminated simultaneously)
  • Any node may send a message to any node (i.e. either any node is connected to any node or at least connected to a subset large enough)
  • For calculations let's assume a bandwidth of 10 bits/sec == 1 byte/sec (i.e. 10 bits per byte)

Abstracting over TCP complexity

Real network protocols (such as TCP) are complex and not that straightforward to estimate (here are some comments regarding TCP complexity), thus we adopt a simpler approach for our model:

Let's assume an abstract networking stack with perfect bandwidth utilization. I.e. a message of size msgSize bytes at a given bandwidth (bytes per second) is 'transmitted to the wire' in throughput-time := msgSize / bandwidth seconds. Sending the same message simultaneously to N receivers would take N * throughput-time = N * msgSize / bandwidth
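Under this abstraction throughput-time is plain arithmetic. A minimal sketch (the helper name is ours; the 10-bits-per-byte conversion follows the model assumption above, not any real networking API):

```kotlin
// Sketch of the abstract bandwidth model from this section.
// Model convention: 10 bits == 1 byte.
fun throughputTimeMs(msgSizeBytes: Long, bandwidthBitsPerSec: Long): Long {
    val bytesPerSec = bandwidthBitsPerSec / 10
    return msgSizeBytes * 1000 / bytesPerSec
}

fun main() {
    // 1 Mb message over a 10Mbits/s link: 1_000_000 bytes / 1_000_000 bytes/s = 1000 ms
    println(throughputTimeMs(1_000_000, 10_000_000))     // 1000
    // sending the same message to 5 receivers occupies the link 5x longer
    println(5 * throughputTimeMs(1_000_000, 10_000_000)) // 5000
}
```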

Message transmission model

Let's consider the transmission-time, which is the period between:

  • the sending node initiating the send of a message to the receiving node, i.e. writing the first message bit to the wire
  • the receiving node receiving the last bit of the message

Let's assume:

transmission-time = throughput-time + connection-latency

Where:

  • throughput-time - introduced above: how long it takes to 'push' all the message bytes to the network through the sending peer's outbound bandwidth
  • connection-latency - how long it takes a single byte to travel over the network from the sending node to the receiving node
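In code the sum is trivial, but it is worth noting that the First deliver column of the result tables below equals exactly this value (the helper name is illustrative, not from the simulator):

```kotlin
// transmission-time = throughput-time + connection-latency
fun transmissionTimeMs(throughputMs: Long, latencyMs: Long): Long =
    throughputMs + latencyMs

fun main() {
    // 1 Mb at 10Mbits/s (1000 ms throughput-time) with 50 ms connection latency
    println(transmissionTimeMs(1000, 50)) // 1050
}
```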

What about the receiver's inbound bandwidth?

In our model we assume the same bandwidth (both inbound and outbound) for all nodes, and we assume that a node receives a message just once. Thus the available inbound bandwidth of a receiving node is equal to or greater than the available outbound bandwidth of a sending node at any moment during message transmission.

What about message validation delay?

Message validation delay can be treated as part of the message transmission latency without breaking model accuracy: let's define latency = connection-latency + validation-delay and use just the resulting latency parameter in the model.

Intuitions behind 'ideal pubsub'

Single delivery per node

In ideal pubsub every node should receive a message just once. While any subsequent duplicate delivery wouldn't make things worse for the receiving node, the node sending that duplicate could do better: it could utilize its bandwidth by sending the message to some other node which hasn't received it yet.

Once it receives a message, a node should utilize 100% of its outbound bandwidth

Once a node receives a message it utilizes 100% of its outbound bandwidth for transmitting the message to other peers.

For model simplicity let's assume that any node may send a message to any other arbitrary node in the network. For relaying a message let's assume that a sending node atomically acquires another node which isn't acquired yet (and thus hasn't received and is not currently receiving the message).

Optimal outbound bandwidth utilization

With the networking assumptions above it looks obvious that the optimal bandwidth utilization is to send a message strictly sequentially to N receivers rather than in parallel (or in any combination other than strictly sequential).

If throughput-time := msgSize / bandwidth, when sending sequentially:

  • message to receiver-1 would be sent after throughput-time
  • message to receiver-2 would be sent after 2 * throughput-time
  • …
  • message to receiver-N would be sent after N * throughput-time

Compared to parallel sending, where the message is sent to all receivers after N * throughput-time, the sequential approach yields an average sending time of (N + 1) / 2 * throughput-time ≈ (N / 2) * throughput-time.
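The comparison can be sketched numerically (a toy calculation under the model above, not part of the simulator):

```kotlin
// Average completion times for sending one message to n receivers,
// sequentially vs in parallel, in milliseconds.
fun sequentialAvgMs(n: Int, throughputMs: Long): Double =
    (1..n).map { it * throughputMs }.average() // i-th receiver done at i * throughput-time

fun parallelAvgMs(n: Int, throughputMs: Long): Double =
    (n * throughputMs).toDouble()              // everyone done at n * throughput-time

fun main() {
    // n = 10 receivers, throughput-time = 1000 ms
    println(sequentialAvgMs(10, 1000)) // 5500.0  ~ (n + 1) / 2 * throughput-time
    println(parallelAvgMs(10, 1000))   // 10000.0
}
```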


Ideal pubsub algorithm. Case 1: atomic message

For the initial effort of building the optimal pubsub algorithm let's assume that the whole message should be received and validated by a node prior to broadcasting it further.

Considering the intuitions above let's try to develop the optimal algorithm simulation (expressed in Kotlin):

class Node {
    // function call indicating a message is delivered to this `Node`
    suspend fun deliver() = coroutineScope {
        // time to transmit a single message through the node bandwidth
        val throughputDuration: Duration = 
                params.bandwidth.getTransmitTime(params.messageSize)
                
        while (true) {
            // while there are nodes without delivered message, acquire next one
            val receivingNode: Node = acquireNode() ?: break
            // simulate throughput delay
            delay(throughputDuration)
            // now a message is considered sent and 'asynchronously' flies over the network
            // for the `latency` period until it is delivered to the receiving node,
            // while this node continues sending to the next peer
            async {
                // simulate `latency`
                delay(params.latency)
                // `receivingNode` received a message and starts broadcasting it 
                // asynchronously
                receivingNode.deliver()
            }
        }
    }
}
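The coroutine code above needs a simulated scheduler to run. The same delivery schedule can also be reproduced with a small deterministic event-driven sketch (our own re-derivation, not the simulator's code): a priority queue holds pending delivery times, every delivered node starts its own sequential sends, and every sender re-schedules its next delivery one throughput-time later.

```kotlin
import java.util.PriorityQueue

// Deterministic sketch of Case 1: a node delivered at time t completes
// delivery to its i-th receiver at t + i * throughput + latency.
// Returns (first deliver, last deliver) in ms.
fun simulateAtomic(nodes: Int, throughputMs: Long, latencyMs: Long): Pair<Long, Long> {
    val deliveries = PriorityQueue<Long>()
    deliveries.add(throughputMs + latencyMs) // publisher's first delivery
    var delivered = 0
    var first = 0L
    var last = 0L
    while (delivered < nodes - 1) {          // publisher already has the message
        val t = deliveries.poll()
        delivered++
        if (delivered == 1) first = t
        last = t
        deliveries.add(t + throughputMs)             // sender's next receiver
        deliveries.add(t + throughputMs + latencyMs) // new node starts relaying
    }
    return first to last
}

fun main() {
    // matches the 10Mbits/s / 0s row below: first 1000 ms, last 14000 ms
    println(simulateAtomic(10_000, 1000, 0))
    // matches the 100Mbits/s / 0s row below: first 100 ms, last 1400 ms
    println(simulateAtomic(10_000, 100, 0))
}
```

With zero latency the number of message holders doubles every throughput-time, so the last delivery in a 10K-node network lands at ceil(log2(10000)) = 14 throughput-times.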

Here are some results collected by this algorithm for a network of 10K nodes and a message of size 1Mb (for complete data please refer to Source code and simulation results):

| Bandwidth | Latency | First deliver (ms) | Last deliver (ms) | Hops |
|-----------|---------|--------------------|-------------------|------|
| 10Mbits/s | 0s | 1000 | 14000 | 13 |
| 10Mbits/s | 10ms | 1010 | 14060 | 13 |
| 10Mbits/s | 50ms | 1050 | 14300 | 13 |
| 10Mbits/s | 100ms | 1100 | 14600 | 13 |
| 100Mbits/s | 0s | 100 | 1400 | 13 |
| 100Mbits/s | 10ms | 110 | 1460 | 13 |
| 100Mbits/s | 50ms | 150 | 1700 | 11 |
| 100Mbits/s | 100ms | 200 | 2000 | 9 |
| 1024Mbits/s | 0s | 9 | 126 | 13 |
| 1024Mbits/s | 10ms | 19 | 187 | 9 |
| 1024Mbits/s | 50ms | 59 | 384 | 6 |
| 1024Mbits/s | 100ms | 109 | 580 | 5 |
| Bandwidth | Latency | First deliver (ms) | Last deliver (ms) | Hops |
|-----------|---------|--------------------|-------------------|------|
| 10Mbits/s | 50ms | 1050 | 14300 | 13 |
| 25Mbits/s | 50ms | 450 | 5900 | 13 |
| 50Mbits/s | 50ms | 250 | 3100 | 12 |
| 100Mbits/s | 50ms | 150 | 1700 | 11 |
| 1024Mbits/s | 50ms | 59 | 384 | 6 |

[source]

Conclusion

If Ethereum needed to broadcast solid blocks of size 1Mb, then the minimal theoretical requirement on network bandwidth would be 50Mbits/s, under the assumption that block dissemination time should be at most 4 seconds.

Ideal pubsub algorithm. Case 2: decoupled message

NOTE: the algorithm described below turned out to be NOT optimal (see explanation here)

What if a message could be split into several parts (equal parts for simplicity) and every part could be transmitted and validated independently?

In that case the optimal pubsub algorithm could do better: a node may start relaying a message as soon as it receives and validates its first part.


Basically it just speeds up relaying a message to the first peer at each hop. The optimal algorithm simulation would look like:

class Node {
    // function call indicating a message is delivered to this `Node`
    suspend fun deliver() = coroutineScope {
        // time to transmit a single message through the node bandwidth
        val throughputDuration: Duration = 
                params.bandwidth.getTransmitTime(params.messageSize)

        val firstThroughputDuration: Duration = 
                throughputDuration / params.decoupledChunkCount
                
        // compared to the previous algorithm, only the first send
        // completes faster
        delay(firstThroughputDuration)
        
        while (true) {
            // while there are nodes without delivered message, acquire next one
            val receivingNode: Node = acquireNode() ?: break
            // now a message is considered sent and 'asynchronously' flies over the network
            // for the `latency` period until it is delivered to the receiving node,
            // while this node continues sending to the next peer
            async {
                // simulate `latency`
                delay(params.latency)
                // `receivingNode` received a message and starts broadcasting it 
                // asynchronously
                receivingNode.deliver()
            }
            // simulate throughput delay
            delay(throughputDuration)
        }
    }
}

The chart below gives some intuition on how dissemination speed improves with message decoupling level:

EDIT: the red line shows the new fixed ideal pubsub compared to this (legacy) approach (blue line)

image

[source]

Parts dissemination question

Assuming all message parts can be independently verified, does it make any sense to disseminate parts differently (i.e. send different parts to different peers)?

Under this model the intuition was that disseminating message parts differently would not yield any gains compared to the algorithm above. But this was still an open question.

The assumption turned out to be incorrect: the ideal pubsub could do better.

image
(src)

The approach on the left illustration was described in this write-up and was assumed to be optimal.
With both approaches (left and right), at moment t2 we have disseminated 3 message parts. However, with the left approach only 3 peers may utilize their outbound bandwidth to relay a message part at moment t2, while with the right approach 4 peers may start relaying a message part.

It is not yet clear at which point the publisher should start disseminating Part-2 and the following parts, but it looks fairly obvious that at least at the step following t2 the right approach would perform better.

image

The third option looks even better: there are still 4 peers who may utilize their outbound bandwidth, but there are 2 message parts (instead of just 1) being disseminated starting from t2.

Conclusion

Message decoupling may potentially yield a good dissemination speed gain. However, the speed gain looks logarithmic in the number of message parts, so there is not much sense in aiming for a high decoupling level.
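The diminishing returns can be sanity-checked by replaying the delivery schedule of this decoupled algorithm (first send after throughput-time / chunks, each following send one full throughput-time later) with a small event-driven sketch. This is our own approximation of the algorithm above, not the simulator's code:

```kotlin
import java.util.PriorityQueue

// Sketch of Case 2: a node delivered at time t completes its first relay at
// t + throughput / chunks + latency, and each subsequent relay one full
// throughput-time later. Returns the last-deliver time in ms.
fun lastDeliverMs(nodes: Int, throughputMs: Long, latencyMs: Long, chunks: Int): Long {
    val firstOffset = throughputMs / chunks
    val deliveries = PriorityQueue<Long>()
    deliveries.add(firstOffset + latencyMs)    // publisher's first delivery
    var delivered = 0
    var last = 0L
    while (delivered < nodes - 1) {
        val t = deliveries.poll()
        delivered++
        last = t
        deliveries.add(t + throughputMs)            // sender's next receiver
        deliveries.add(t + firstOffset + latencyMs) // new node starts relaying
    }
    return last
}

fun main() {
    // 10K nodes, 1000 ms throughput-time, zero latency
    val whole = lastDeliverMs(10_000, 1000, 0, 1)
    val parts10 = lastDeliverMs(10_000, 1000, 0, 10)
    val parts100 = lastDeliverMs(10_000, 1000, 0, 100)
    // more parts help, but each 10x increase in part count buys less
    println(listOf(whole, parts10, parts100))
}
```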

Ideal pubsub algorithm. Case 2.5: infinitely decoupled message

What if a message could be decoupled to the limit, where every individual byte can be validated (or validation is omitted entirely) and relayed immediately? Then a node may 'stream' a message onward simultaneously while receiving it.

With this speculative assumption the optimal algorithm simulation would look as follows:

class Node {
    // function call indicating a message is delivered to this `Node`
    suspend fun deliver() = coroutineScope {
        // time to transmit a single message through the node bandwidth
        val throughputDuration: Duration = 
                params.bandwidth.getTransmitTime(params.messageSize)

        // a message is transferred to the first peer at the same time it is received 
        // so no delay at all for the first message relay
                
        while (true) {
            // while there are nodes without delivered message, acquire next one
            val receivingNode: Node = acquireNode() ?: break
            // now a message is considered sent and 'asynchronously' flies over the network
            // for the `latency` period until it is delivered to the receiving node,
            // while this node continues sending to the next peer
            async {
                // simulate `latency`
                delay(params.latency)
                // `receivingNode` received a message and starts broadcasting it 
                // asynchronously
                receivingNode.deliver()
            }
            // simulate throughput delay
            delay(throughputDuration)
        }
    }
}

Some results

| Bandwidth | Latency | First deliver (ms) | Last deliver (ms) | Hops |
|-----------|---------|--------------------|-------------------|------|
| 10Mbits/s | 0s | 1000 | 1000 | 9999 |
| 10Mbits/s | 50ms | 1050 | 4700 | 73 |
| 10Mbits/s | 100ms | 1100 | 5800 | 47 |
| 100Mbits/s | 0s | 100 | 100 | 9999 |
| 100Mbits/s | 50ms | 150 | 1050 | 18 |
| 100Mbits/s | 100ms | 200 | 1500 | 13 |
| 1024Mbits/s | 0s | 9 | 9 | 9999 |
| 1024Mbits/s | 50ms | 59 | 353 | 6 |
| 1024Mbits/s | 100ms | 109 | 560 | 5 |

[source]

Nice edge case

A nice edge case appears with zero latency: the message is transmitted to the whole network in the time it takes to transfer it to a single peer. A long pipe is effectively formed where every node forwards the bytes to the next node as they come in.
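A back-of-envelope check of the pipe intuition (our own illustration; it assumes each node re-transmits each byte immediately upon arrival, adding one byte-time of delay per hop):

```kotlin
// Time (ms) at which the k-th node in the zero-latency pipe receives the
// last byte of the message: the full throughput-time plus one byte-time
// of forwarding delay per preceding hop.
fun pipeLastByteMs(hop: Int, msgBytes: Long, bytesPerMs: Long): Double =
    msgBytes.toDouble() / bytesPerMs + (hop - 1).toDouble() / bytesPerMs

fun main() {
    // 1 Mb message at 10Mbits/s (1000 bytes/ms): the first peer finishes at 1000 ms
    println(pipeLastByteMs(1, 1_000_000, 1000))      // 1000.0
    // ... and even the 10000-th node in the pipe finishes just under 1010 ms
    println(pipeLastByteMs(10_000, 1_000_000, 1000))
}
```

This matches the zero-latency rows of the table above, where the last deliver time is (up to per-byte granularity) equal to the first deliver time.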

Source code and simulation results

Simulation source code

Simulation source: https://github.com/Nashatyrev/jvm-libp2p-minimal/blob/experiment/ideal-pubsub/tools/simulator/src/main/kotlin/io/libp2p/simulate/main/IdealPubsub.kt

To run simulation:

> git clone https://github.com/Nashatyrev/jvm-libp2p-minimal -b experiment/ideal-pubsub
> cd jvm-libp2p-minimal
> ./gradlew :tools:simulator:run -PmainClass=io.libp2p.simulate.main.IdealPubsubKt

To change simulation parameters, edit the IdealPubsub.kt source and run gradlew again.

Results

For convenience the results of the simulation are presented as Google spreadsheet here:

https://docs.google.com/spreadsheets/d/11zAtnjiVZqwp5St0Yi-t2siETXAC1RKkZINV6mA5nIg/edit#gid=411569334

There are a lot of parameter combinations, so for more convenient viewing there is a pivot table where subsets of parameters can be filtered (you may need to make an editable copy for that).