# Ideal pubsub (Revised)

The goal of this work is to come up with a theoretically optimal pubsub algorithm and to estimate its properties with respect to the bounding conditions: network bandwidth and latency. It would be helpful to have the theoretical maximum speed of message dissemination within p2p networks, and to understand how far any specific pubsub algorithm (like GossipSub, Episub or any other) is from the maximum possible performance.

This write-up is the revised and fixed version of the initial effort: [Ideal pubsub](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view)

The initial effort relies on an *incorrect* [assumption](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Parts-dissemination-question), which leads to incorrect conclusions and simulation results.

Please refer to the original paper on:
- [Model assumptions](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Model-assumptionssimplifications)
- [Intuitions behind ideal p2p message dissemination](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Intuitions-behind-%E2%80%98ideal-pubsub%E2%80%99)

## Spoiler

The resulting formula for the optimal message dissemination time is the following:

```
time = message_size / bandwidth + 2 * latency
```

...with an error which can be neglected when we consider the dissemination of a large message across a large network.

## Ideal pubsub with decoupled message (Revised)

Just to recall:
- The message is decoupled into `part_count` parts of equal size
- _Any_ part can independently be sent from _any_ peer to _any_ peer
- Message dissemination is complete when _all_ peers obtain _all_ message parts
- In _ideal_ pubsub no peer should receive any duplicate message parts
- Once a peer obtains at least 1 message part, it should ideally utilize its outbound bandwidth at 100% until message dissemination is complete (all parts delivered to all peers)
- As shown in [Optimal bandwidth utilization](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Optimal-outbound-bandwidth-utilization), it is suboptimal to send 2 or more parts in parallel. The same applies to the receiving side.
- It is worth mentioning that in our model a node may simultaneously send and receive data without compromising efficiency

### The general idea behind ideal dissemination

- During the initial phase, disperse any available message parts across all nodes as fast as possible. The goal is for each node to have something to transmit. So, contrary to the wrong [assumption](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Parts-dissemination-question) made before, we need to transfer every next part to a node which has no parts yet
- According to the intuitions described [here](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Parts-dissemination-question), the next goal is to try to keep the number of distinct message parts balanced across all nodes as much as possible. During the initial phase, the best strategy the publisher may pursue is to send different message parts to different nodes

### Zero latency case

Let's consider the simple case first -- zero network latency. Message parts are of equal size, and the input/output bandwidth of all nodes is the same in our model. Thus we may split the dissemination process into rounds of equal duration (`message_size / part_count / bandwidth`)

![image](https://hackmd.io/_uploads/rkAMQc43T.png)

Let's say the *initial phase* lasts while at least one node has zero message parts. It's easy to see that the initial phase takes `log2(node_count)` rounds: the number of nodes holding at least one part doubles every round. At the end of this phase `node_count - 1` message parts have been transferred across the network: every node (except the publisher) holds exactly 1 message part.

The *initial phase* is then followed by the *main phase*. During this phase every node has something to transmit, and it both sends and receives data at any single moment. Let's now assume there is an algorithm which may match all nodes as senders and receivers such that every node transmits a message part to a receiver node which doesn't have that part yet (in fact, such an algorithm was [prototyped](https://github.com/Nashatyrev/jvm-libp2p-minimal/blob/f5092ab8fb499f363407c25c38d9594d0ef6e450/tools/simulator/src/main/kotlin/io/libp2p/simulate/main/IdealPubsub2.kt#L89-L228) and yields the theoretically possible performance for parameters with `2^N` values).

During every round of the *main phase* every node receives a missing message part, thus the main phase ends in `part_count - 1` rounds (1 message part was received during the *initial phase*).

The whole dissemination process would then take

$$
rounds = log_2(\texttt{node\_count}) + \texttt{part\_count} - 1
$$

$$
round\_time = (\texttt{message\_size} / \texttt{part\_count}) / \texttt{bandwidth}
$$

$$
time = \texttt{message\_size} / \texttt{bandwidth} * \frac{log_2(\texttt{node\_count}) + \texttt{part\_count} - 1} {\texttt{part\_count}}
$$

$$
time = \texttt{message\_size} / \texttt{bandwidth} * (\frac{log_2(\texttt{node\_count}) - 1} {\texttt{part\_count}} + 1)
$$
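As a sanity check, the closed-form expression above is easy to evaluate. Here is a minimal Kotlin sketch (all names are illustrative and not taken from the simulator code):

```kotlin
import kotlin.math.log2

/**
 * Ideal dissemination time for the zero-latency case, following
 * the formula above. Sizes are in bytes, bandwidth in bytes/s,
 * the result is in seconds.
 */
fun zeroLatencyTime(
    messageSize: Double,
    bandwidth: Double,
    nodeCount: Int,
    partCount: Int
): Double {
    val rounds = log2(nodeCount.toDouble()) + partCount - 1
    val roundTime = messageSize / partCount / bandwidth
    return rounds * roundTime
}

fun main() {
    // 1 MByte message, 100 Mbit/s (12.5e6 bytes/s), 10000 nodes
    for (partCount in listOf(1, 16, 1024)) {
        println("part_count=$partCount -> ${zeroLatencyTime(1e6, 12.5e6, 10_000, partCount)} s")
    }
}
```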
For `part_count = 1` the time would be `message_size / bandwidth * log2(node_count)`, which matches the intuitions and [estimations](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Ideal-pubsub-algorithm-Case-1-atomic-message) for disseminating a non-decoupled message.

When `part_count -> ∞` the time tends to `message_size / bandwidth`, which matches [these simulations](https://hackmd.io/WvGbtlgrT22RJpJ5sV248Q?view#Some-results) for `latency = 0`.

### Non-zero latency case

To get a better intuition it might be easier to continue thinking in terms of rounds, where one round is the transfer time of a single message part (i.e. `message_size / part_count / bandwidth`), and to assume the latency is a multiple of a round. E.g. on the diagram below the latency is 2 rounds: the message transmission period is 1 round and then the part 'flies' over the network for the next 2 rounds. The publisher transmits `Part-1` during the first round (`t0-t1`), then it 'flies' to `Peer-1` during the next 2 rounds (`t1-t3`), and at the moment `t3` `Peer-1` receives `Part-1`.

When `Peer-1` receives `Part-1` at the moment `t3`, it immediately starts sending it to `Peer-5`, which receives the part 3 rounds later, at the moment `t6`.

![image](https://hackmd.io/_uploads/BkiTG37n6.png)

Let's apply the same reasoning to the more general case with network `latency >= 0`:

- The goal of the *initial phase* is to make all the nodes 'active' as fast as possible, so they can start transferring message parts to the nodes which don't have those parts yet. Let's denote the number of such active nodes at time `t` as the function `active_nodes(t)`.
  E.g. on the diagram above the initial phase ends at the moment `t6`, when all 5 peers have at least one message part.
- Let's also assume a perfect scheduling algorithm exists that is able to utilize 100% of both the outbound and inbound bandwidth of the nodes, i.e. it instantly obtains the global network state and may plan transfers such that:
  - every node transmits a part in every round (during the *main phase*)
  - every node receives a part in every round (during the *main phase*)
  - no duplicate parts are sent to a node, considering the parts still 'flying' through the network

  E.g. on the diagram, at the moment `t4` the algorithm should be aware that:
  - `Peer-1` should NOT send `Part-1` to `Peer-5`, because at the moment `t7`, when the part arrives, `Peer-5` would already have that part delivered
  - `Peer-1` should NOT send `Part-1` to `Peer-3`, because it would collide with `Part-2` scheduled for sending from the `Publisher` node

  A code sketch of these constraints is given below, after the note.

*Note*: the algorithm for the zero-latency case [was adapted](https://github.com/Nashatyrev/jvm-libp2p-minimal/blob/402b8a5db911e195901ae7de519e82ec277ac5c3/tools/simulator/src/main/kotlin/io/libp2p/simulate/main/ideal/IdealPubsub2.kt#L115) for the generic case, but it doesn't yield the same 100% performance for latencies > 0 as it does for zero latency. This doesn't mean that no such perfect algorithm exists; however, finding one is probably an NP-hard problem. As the goal of this write-up is to estimate the upper bound of dissemination performance, we will assume that such a perfect algorithm exists.
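To make the scheduling constraints concrete, here is a minimal round-based Kotlin sketch of the duplicate/collision checks such a scheduler must perform. All names are illustrative; the linked prototype, not this sketch, is the actual implementation:

```kotlin
// Round-based model: a part sent at round `sentAt` occupies the sender's link
// for 1 round and then 'flies' for `latencyRounds`, arriving at
// sentAt + 1 + latencyRounds (matching the diagram: sent at t0, latency 2,
// arrives at t3).
data class Transfer(val part: Int, val to: Int, val arrivesAt: Int)

class ScheduleState(private val latencyRounds: Int) {
    private val owned = mutableMapOf<Int, MutableSet<Int>>() // node -> parts it holds
    private val inFlight = mutableListOf<Transfer>()

    private fun partsOf(node: Int) = owned.getOrPut(node) { mutableSetOf() }

    /**
     * A send is valid only if the receiver neither holds the part nor has it
     * in flight (no duplicates), and no other part arrives at the receiver
     * in the same round (otherwise its inbound bandwidth would be exceeded).
     */
    fun canSend(part: Int, to: Int, sentAt: Int): Boolean {
        val arrivesAt = sentAt + 1 + latencyRounds
        val duplicate = part in partsOf(to) || inFlight.any { it.to == to && it.part == part }
        val collision = inFlight.any { it.to == to && it.arrivesAt == arrivesAt }
        return !duplicate && !collision
    }

    fun send(part: Int, to: Int, sentAt: Int) {
        require(canSend(part, to, sentAt)) { "invalid schedule" }
        inFlight += Transfer(part, to, sentAt + 1 + latencyRounds)
    }

    /** Delivers all the transfers arriving at round [round]. */
    fun deliver(round: Int) {
        inFlight.filter { it.arrivesAt == round }.forEach { partsOf(it.to) += it.part }
        inFlight.removeAll { it.arrivesAt == round }
    }
}
```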
#### Math formulas for arbitrary `latency`

With the above reasoning and an abstract function `active_nodes(t)` we may now calculate the total throughput of all the nodes in the network at any time `t`:

$$
global\_throughput(t) = \texttt{bandwidth} * active\_nodes(t)
$$

Having the throughput function, we can calculate how many bytes were *sent* by the moment `t`:

$$
sent(t) = \int_0^t global\_throughput(\tau) \, d\tau
$$

For arbitrary `latency` the number of *delivered* bytes can be calculated as follows:

$$
delivered(t) = sent(t - \texttt{latency})
$$

Considering that no duplicate information is delivered to any node, we may assume (with some negligible optimism) that dissemination is complete when the total number of delivered bytes across the network is equal to `message_size * node_count`.

##### Error

There is a minor inaccuracy in the above calculations though. At the very end of the dissemination process there are a few nodes that are still missing a part, while all the others are complete. The function `delivered(t)` relies on the global bandwidth, while the remaining few peers would obviously utilize a smaller bandwidth while receiving their last missing message parts. Thus the formula above is slightly optimistic, but the error tends to 0 when `part_count` tends to infinity.

#### Function `active_nodes(t)`

##### Math formula

Let's try defining the function `active_nodes(t)` via a math formula based recursively on the `delivered(t)` function. For this purpose we make the assumption that if `N * part_size` bytes are delivered across the network, then there are `N` active nodes which have received 1 message part:

$$
active\_nodes(t) =
\begin{cases}
t < 0: & 0\\
t \geq 0: & min(1 + \lfloor \frac {delivered(t)} {\texttt{part\_size}} \rfloor, \texttt{node\_count}) \\
\end{cases}
$$

Most likely the resulting system has no feasible analytic solution. However, the derived formulas may help to get some intuition on how the parameters affect the resulting time `t`.

##### Algorithmic function

The above approach suffers from the same accuracy issue as [described above](#Error), but in this case the inaccuracy amplifies over time. Thus let's derive the precise algorithmic function `active_nodes(t)`: [ActiveNodeCountFunc.kt](https://github.com/Nashatyrev/jvm-libp2p-minimal/blob/402b8a5db911e195901ae7de519e82ec277ac5c3/tools/simulator/src/main/kotlin/io/libp2p/simulate/main/ideal/ActiveNodeCountFunc.kt#L94)
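For illustration only, the recursive system above can also be stepped numerically. The sketch below is a much-simplified stand-in (the step size, names and structure are illustrative assumptions, not taken from the repository):

```kotlin
import kotlin.math.floor

// Discrete approximation of the recursive system above:
//   global_throughput(t) = bandwidth * active_nodes(t)
//   sent(t)              = integral of global_throughput over [0, t]
//   delivered(t)         = sent(t - latency)
//   active_nodes(t)      = min(1 + floor(delivered(t) / part_size), node_count)
// Returns the time when delivered(t) reaches message_size * node_count.
// Units: bytes, bytes/s and seconds.
fun disseminationTime(
    messageSize: Double, bandwidth: Double, latency: Double,
    nodeCount: Int, partCount: Int, dt: Double = 1e-4
): Double {
    val partSize = messageSize / partCount
    val totalToDeliver = messageSize * nodeCount
    val latencySteps = (latency / dt).toInt()
    val sent = mutableListOf(0.0) // sent(i * dt)
    var i = 0
    while (true) {
        val delivered = if (i >= latencySteps) sent[i - latencySteps] else 0.0
        if (delivered >= totalToDeliver) return i * dt
        val activeNodes = minOf(1.0 + floor(delivered / partSize), nodeCount.toDouble())
        sent += sent[i] + bandwidth * activeNodes * dt
        i++
    }
}
```

Note that this steps the *math formula* variant and therefore inherits its accumulating inaccuracy; the linked `ActiveNodeCountFunc.kt` computes the precise algorithmic variant.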
##### Convergence

The [math formula](#Math-formula) actually converges to the [algorithmic function](#Algorithmic-function) when the `bandwidth * latency` product grows (a Long Fat Network) and/or when `part_count` grows:

![image](https://hackmd.io/_uploads/SJJu_YO06.png)

[Source data](https://docs.google.com/spreadsheets/d/11zAtnjiVZqwp5St0Yi-t2siETXAC1RKkZINV6mA5nIg/edit#gid=810366959)

### Simulation results

The model simulation class is located here: [DisseminationFunctionSimulation.kt](https://github.com/Nashatyrev/jvm-libp2p-minimal/blob/ff2f16fea3251a9814ddb4078c0a43eff0f33682/tools/simulator/src/main/kotlin/io/libp2p/simulate/main/ideal/DisseminationFunctionSimulation.kt)

Simulation results for some set of parameters can be found in this [spreadsheet](https://).

#### Data slices

##### Message parts count

![image](https://hackmd.io/_uploads/H1IHpQj06.png)

[Source](https://docs.google.com/spreadsheets/d/11zAtnjiVZqwp5St0Yi-t2siETXAC1RKkZINV6mA5nIg/edit#gid=739051846)

The chart above shows how quickly the dissemination time approaches the ideal limit (the red line) as `part_count` grows (the X axis has a logarithmic scale).

##### Latency

![image](https://hackmd.io/_uploads/HyaR3Mo0p.png)

[Source](https://docs.google.com/spreadsheets/d/11zAtnjiVZqwp5St0Yi-t2siETXAC1RKkZINV6mA5nIg/edit#gid=739051846)

The dissemination time grows almost linearly with latency for `part_count = 1024`...

![image](https://hackmd.io/_uploads/HJHtyedyR.png)

[Source](https://docs.google.com/spreadsheets/d/11zAtnjiVZqwp5St0Yi-t2siETXAC1RKkZINV6mA5nIg/edit#gid=739051846)

...and it tends to a strictly linear dependency with the multiplier `2` when `part_count` tends to infinity (`part_count = 1M`).

##### Bandwidth

![image](https://hackmd.io/_uploads/r1bIxVoRa.png)

[Source](https://docs.google.com/spreadsheets/d/11zAtnjiVZqwp5St0Yi-t2siETXAC1RKkZINV6mA5nIg/edit#gid=739051846)

This chart shows how the dissemination *speed* grows with bandwidth. On the first chart, with `latency = 0`, it grows linearly; on the second chart, with `latency = 500ms`, the latency quickly becomes the prevailing factor and the dissemination *time* approaches its lower limit of `2 * latency`.

## Conclusion

It seems pretty clear that the ideal dissemination time tends to its minimum when `part_count` tends to infinity. The [simulation slice](#Message-parts-count) shows that the time approaches its optimum quite rapidly as `part_count` grows.

When `part_count` tends to infinity, `part_size` tends to zero. From the `active_nodes(t)` [formula](#Math-formula) it can be concluded that `active_nodes(t) = 1` while `delivered(t) == 0`, and that it becomes equal to `node_count` almost immediately after `delivered(t)` turns positive (due to the very small `part_size`). `delivered(t)` becomes `> 0` at `t = latency`. When all nodes are active, it takes exactly `message_size / bandwidth` to send all the data and exactly `message_size / bandwidth + latency` to deliver it.

The reasoning above yields the formula for the dissemination time when `part_count -> ∞`:

```
time = message_size / bandwidth + 2 * latency
```
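The same reasoning can be restated as the sum of the three periods it describes (a formula-form restatement of the derivation above, nothing new):

$$
\lim_{\texttt{part\_count} \to \infty} time =
\underbrace{\texttt{latency}}_{\text{initial phase}} +
\underbrace{\texttt{message\_size} / \texttt{bandwidth}}_{\text{all nodes sending}} +
\underbrace{\texttt{latency}}_{\text{last parts in flight}}
$$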
### Error

However, there is still a small inaccuracy. During the initial phase (until `t = latency`) there is just one node (the publisher) sending the data, and thus at the end of the initial phase some (small) fraction of the total data has already been sent:

```
initial_sent = bandwidth * latency
```

The error coefficient `e` can then be expressed as the ratio of `initial_sent` to the total amount of data to be sent:

```
e = (bandwidth * latency) / (message_size * node_count)
```

So the refined ideal pubsub message dissemination time formula can be expressed as:

```
time = (message_size / bandwidth + 2 * latency) * (1 - e)
```

To get an intuition on the error magnitude, let's take sample parameter values (close to the real ones) and estimate the error:

```
bandwidth = 100 Mbit/s
latency = 100 ms
message_size = 1 MByte = 8 Mbit
node_count = 10000

e = (100 Mbit/s * 100 ms) / (8 Mbit * 10000) = 0.000125
```

There is a class of cases (which are out of interest for our estimations) when the `bandwidth * latency` product is quite high, or the message is very small, or there are just a few nodes in the network. In those cases the error becomes significant and should be taken into account. Extreme cases when `e > 1` mean that the publisher would broadcast the message to all peers during the *initial phase* on its own, and the resulting time can be calculated as follows:

```
time = message_size * node_count / bandwidth + latency
```
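Putting the refined formula and the extreme case together, here is a final Kotlin sketch (illustrative names; units are bytes, bytes/s and seconds):

```kotlin
// Refined ideal dissemination time, following the formulas above.
fun idealDisseminationTime(
    messageSize: Double, bandwidth: Double, latency: Double, nodeCount: Int
): Double {
    val e = (bandwidth * latency) / (messageSize * nodeCount)
    return if (e > 1.0) {
        // Extreme case: the publisher alone broadcasts the message
        // to all peers during the initial phase.
        messageSize * nodeCount / bandwidth + latency
    } else {
        (messageSize / bandwidth + 2 * latency) * (1 - e)
    }
}

fun main() {
    // The sample values above: 1 MByte, 100 Mbit/s (12.5e6 bytes/s), 100 ms, 10000 nodes
    println(idealDisseminationTime(1e6, 12.5e6, 0.1, 10_000))
    // ≈ (0.08 + 0.2) * (1 - 0.000125) ≈ 0.28 s
}
```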