# Maximizing PoS blockchain throughput

*Disclaimer: This is a work in progress.*

Smart contract platforms like Ethereum have network parameters that dictate how often a block is to be produced, how much computation can be performed in a block, and how big a block can be. The selection of these parameters determines the maximum amount of computation per second that the overall network can perform. We want to maximize that number by choosing the right parameters.

Protocol execution consists of cycles called rounds. Each round has a leader (validator), responsible for proposing a block at the beginning of that round. By *block time*, denoted by $\Delta$, we mean the length of a round. Block time is spent first executing transactions and including them in a new block, and then propagating that block throughout the network. Ideally, a leader should have enough time to receive and verify his predecessor's block before the previous round ends, so that he can declare it as the parent of his block when his round begins. If that is not the case, the blockchain forks.

To prevent validators from creating arbitrarily large blocks or blocks with arbitrarily many operations, certain limits are imposed on blocks at the protocol level. These protocol parameters are:

1. **Block time:** *Defined above.*
2. **Block size limit:** Maximum size of a block in bytes.
3. **Block load limit:** Maximum number of operations that can be executed by the transactions in a block.

The focus of this work is to enable principled reasoning toward our goal of maximizing the efficiency of our platform. Specifically, we strive to maximize the network power $P_n$, the average number of operations per second that the network can perform. At the same time, we strive to minimize *time-to-finality*, a quantity directly proportional to block time. Thus, if it turned out that throughput increased with increasing block time, we would have a trade-off between overall network performance and user experience[^1].

We separate block time into four phases:

1. **Execution phase:** The leader executes transactions from the transaction pool sequentially. When the block's size and load approach the limits, the leader deems the block complete and includes relevant information such as the parent block, Merkle root, etc.
2. **Propagation phase:** The newly created block is propagated to the other validators.
3. **Verification phase:** The other validators verify the block by executing the transactions themselves.
4. **Voting phase:** Validators exchange messages and arrive at a consensus on the block.

We assume that the execution and verification phases last an equal amount of time, denoted by $\delta_e$, whereas the durations of the propagation and voting phases are denoted by $\delta_p$ and $\delta_v$ respectively. Then, block time is calculated as

$$
\Delta = 2\delta_e + \delta_p + \delta_v.
$$

Here, we assume that $\delta_e$ depends on the block load limit, $\delta_p$ depends on the block size limit, and $\delta_v$ is a constant. We give the following reasons:

- $\delta_e$ can be thought of as linearly dependent on the maximum number of operations performed by a block.
- $\delta_p$ pertains to bandwidth, and is hence affected only by block size.
- $\delta_v$ is independent of these parameters, because a block's content or execution time does not affect the number or size of the messages that must be exchanged in order to finalize it.
- Other factors, such as the number of validators, are outside the scope of this formulation.
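To make the decomposition concrete, here is a minimal Python sketch of the block time formula. The phase durations below are illustrative assumptions, not measurements from any network.

```python
# Minimal sketch of the block time decomposition above. All phase
# durations are illustrative assumptions, not measurements.

def block_time(delta_e: float, delta_p: float, delta_v: float) -> float:
    """Block time = execution + propagation + verification + voting,
    where verification is assumed to take as long as execution."""
    return 2 * delta_e + delta_p + delta_v

if __name__ == "__main__":
    delta_e = 0.2  # execution (= verification) phase, seconds (assumed)
    delta_p = 0.5  # propagation phase, seconds (assumed)
    delta_v = 0.3  # voting phase, seconds (assumed)
    bt = block_time(delta_e, delta_p, delta_v)
    print(f"block time = {bt:.2f} s")
    print(f"fraction of the round spent executing = {delta_e / bt:.2%}")
```

The last line foreshadows the key quantity below: only the execution phase performs new work, so the fraction $\delta_e/\Delta$ determines how much of the network's raw power is actually usable.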
A single validator has processing power $P$, defined as the number of operations it can perform per second. We assume that all validators have the same processing power, and that the network is used at full capacity, i.e. block load is at the limit, which we denote by $w$:

$$
w := P \delta_e
$$

To find the network's overall processing power, $P_n$, we note the following fact: in each round, the leader dictates which transactions are to be executed. Therefore, $P_n$ can be found by factoring out the time that is not spent executing new transactions, i.e. the propagation, verification and voting phases:

$$
P_n = P\frac{\delta_e}{\Delta} = \frac{w}{2w/P+\delta_p+\delta_v}
$$

where we substituted the quantities defined above.

We make the simplifying assumption that propagation time depends only on the block size $v$, i.e. $\delta_p = \delta_p(v)$. To substitute it in $P_n$, however, we need to have it in terms of $w$, and we cannot express block size in terms of block load through an algebraic relationship: there is no law dictating the maximum number of operations that a program of a given number of bytes can encode. To solve this problem, we use on-chain data to obtain *operation density*, an empirically observed quantity defined as the average number of operations encoded per byte of transaction:

$$
\rho := \frac{w}{v}
$$

Then we can define network power as a function of block load:

$$
\boxed{
P_n(w) = \frac{w}{2w/P+\delta_p(w/\rho)+\delta_v}
}
$$

Before we can move on to finding maximizers of $P_n$, we have to explore the relationship between block size and propagation time.

## Block propagation

Let $F(v,t)$ be a cumulative distribution function which gives the ratio of the nodes that have received a block of size $v$, $t$ seconds after it is mined. Being a CDF over $t$, it satisfies the condition

$$
F(v, t_1) \leq F(v, t_2) \text{ for all } v \text{ such that } t_1 \leq t_2.
$$

Moreover, we *assume* that $F$ satisfies the following condition:

$$
F(v_1, t) \geq F(v_2, t) \text{ for all } t \text{ such that } v_1 \leq v_2.
$$

That is, at any given time, the ratio of nodes that have received a smaller block is at least as large as the ratio that have received a larger block. In other words, we assume that a smaller block always propagates faster than a larger one.

Then the probability (or rate) of a blockchain fork is calculated as[^2]

$$
P_F = 1 - \exp\left(-\lambda\int_0^\infty(1-F(v, t))\,dt\right)
$$

Substituting $\lambda = 1/\delta_p$ and rearranging the equation, we can express propagation time $\delta_p$ in terms of block size $v$:

$$
\boxed{
\delta_p(v) = \frac{-\int_0^\infty(1-F(v, t))\,dt}{\ln (1 - P_{F})}
}
$$

**Corollary:** Propagation time $\delta_p$ is monotonically increasing in block size $v$.

*Proof:* By the second condition on $F$ above, the integrand $1-F(v,t)$ is non-decreasing in $v$ for every $t$, so the integral, and hence $\delta_p$, is non-decreasing in $v$.

This simplifies the problem of deducing whether $P_n$ is increasing or decreasing as $w$ goes to infinity. In other words, if we want to keep the forking rate (a.k.a. orphan, uncle, or stale rate) constant, then increasing the block size or load limit would certainly mean that we would have to increase block time. This is somewhat obvious, but it is useful to express it mathematically.
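As a sanity check on the boxed formula, the sketch below numerically recovers $\delta_p(v)$ for a toy CDF, $F(v,t) = 1 - e^{-\phi t/v}$, chosen only because it satisfies both conditions above. The bandwidth $\phi$ and target fork rate $P_F$ are made-up values.

```python
# Numerically evaluates the boxed formula for delta_p(v) under an
# assumed toy CDF F(v, t) = 1 - exp(-phi * t / v). PHI and P_F are
# illustrative values, not measured network parameters.

import math
from scipy.integrate import quad

PHI = 1e6   # assumed effective bandwidth, bytes per second
P_F = 0.05  # target fork rate

def F(v: float, t: float) -> float:
    """Ratio of nodes that have received a block of size v after t seconds."""
    return 1.0 - math.exp(-PHI * t / v)

def delta_p(v: float) -> float:
    """Propagation time keeping the fork rate at P_F: the integral of
    (1 - F(v, t)) over t, divided by ln(1 / (1 - P_F))."""
    tail, _ = quad(lambda t: 1.0 - F(v, t), 0.0, math.inf)
    return tail / math.log(1.0 / (1.0 - P_F))

if __name__ == "__main__":
    for v in (1e5, 1e6, 1e7):  # block sizes in bytes
        # For this toy CDF the integral is v / PHI, so delta_p is linear in v.
        print(f"v = {v:>10.0f} B -> delta_p = {delta_p(v):8.3f} s")
```

For this particular $F$ the integral evaluates to $v/\phi$, so $\delta_p$ comes out linear in $v$; a CDF with a heavier tail would yield a different growth rate, which is exactly what the next section explores.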
## Overall trend of network processing power

Propagation time's rate of change with respect to $v$ determines whether network processing power increases or decreases as we allow for bigger blocks. To illustrate this, we consider the following examples:

### Constant propagation time in terms of block size

If $\delta_p(v) = a$ where $a$ is a constant, then $P_n$ is monotonically increasing, and the theoretical maximum is obtained as $w$ goes to infinity:

$$
\lim_{w\to\infty} P_n = P/2
$$

![](https://i.imgur.com/TqNsX7h.png)

This is a purely theoretical case; in reality, propagation time certainly depends on block size.

### Linear propagation time in terms of block size

If $\delta_p(v) = av + b$ where $a$ and $b$ are some constants, then $P_n$ is monotonically increasing, and the theoretical maximum is obtained as $w$ goes to infinity:

$$
\lim_{w\to\infty} P_n = \frac{\rho P}{2\rho + Pa}
$$

The profile looks the same as in the constant case. If propagation time has linear complexity, any increase in block load $w$ is an improvement in terms of network processing power. The bigger the block, the higher the throughput.

### Quadratic propagation time in terms of block size

If $\delta_p(v) = av^2 + bv + c$ where $a$, $b$ and $c$ are some constants, then $P_n$ attains a maximum at $w^\ast = \sqrt{\rho^2(\delta_v + c)/a}$. It is monotonically increasing for $w<w^\ast$, and monotonically decreasing for $w > w^\ast$.

![](https://i.imgur.com/3UVX0Mz.png)

$P_n$ approaches zero as $w$ goes to infinity. Thus, we have the opposite of the cases above: beyond the maximum, the bigger the block, the lower the throughput.

I gave these examples to make clear when we would have a trade-off between throughput and time-to-finality, and when we would not. The bottom line is: if propagation time has a complexity less than or equal to $O(n)$, we have a trade-off situation at hand. On the other hand, if propagation time has a complexity greater than $O(n)$, we may or may not have a trade-off situation, depending on the parameters.

## Modeling block propagation

### Simplistic gossip protocol

We define a simplistic gossip protocol as follows:

- We have $N_n$ nodes. Each node has an individual bandwidth of $\phi$ bps.
- Node 1 creates a block, and immediately sends it to $N_p$ randomly sampled peers.
- Transmitting the block to the $N_p$ peers takes $v/\phi$ seconds.
- The process is repeated by each new node that receives the block. This goes on until everyone has received the block.

The resulting "time" vs. "nodes which have received the block" curve follows the logistic equation:

$$
F(t) = \frac{(N_p+1)^{t\phi/v}}{N_n + (N_p+1)^{t\phi/v} - 1}
$$

Following the formulation above, the given CDF yields the following propagation time:

$$
\delta_p(v) = v\;\underbrace{\left(\frac{\ln(N_n)}{\phi\ln(N_p+1)\ln(1/(1-P_F))}\right)}_{a}
$$

This corresponds to the linear case above, with $a$ equal to the bracketed coefficient and $b = 0$. If we actually had such a block propagation scheme, we would have a trade-off between throughput and time-to-finality.

### Bitcoin's block propagation (similar to ours)

The gossip protocol described above is not used in blockchains. Block propagation in blockchains involves more messages: miners announce the blocks they have received and send requests to download the ones they don't have, in order to minimize bandwidth usage:

![](https://i.imgur.com/dy2EJS2.png)

Deriving an explicit formula for this type of block propagation is not so easy, and perhaps not possible. However, even if we cannot derive such a formula, we can still run simulations and fit approximate models. TBD
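As a starting point for such simulations, here is a minimal sketch that simulates the simplistic gossip protocol from the previous subsection; extending it with announce/request messages would approximate the scheme pictured above. All parameter values are illustrative.

```python
# Minimal simulation sketch of the simplistic gossip protocol above.
# Extending it with announce/request messages would approximate the
# Bitcoin-style scheme. All parameter values are illustrative.

import math
import random

N_N = 1_000  # number of nodes
N_P = 8      # peers sampled per node
PHI = 1e6    # per-node bandwidth, bytes per second
V = 1e5      # block size, bytes

def simulate() -> list[float]:
    """Return the sorted times at which nodes receive the block."""
    hop = V / PHI    # time to transmit the block to sampled peers
    recv = {0: 0.0}  # node 0 creates the block at t = 0
    frontier = [0]
    while frontier:
        nxt = []
        for node in frontier:
            for peer in random.sample(range(N_N), N_P):
                if peer not in recv:
                    recv[peer] = recv[node] + hop
                    nxt.append(peer)
        frontier = nxt
    return sorted(recv.values())

if __name__ == "__main__":
    times = simulate()
    print(f"nodes reached: {len(times)} / {N_N}")
    print(f"t50 = {times[len(times) // 2]:.3f} s, tmax = {times[-1]:.3f} s")
    # Characteristic spread time implied by the logistic model above.
    model = (V / PHI) * math.log(N_N) / math.log(N_P + 1)
    print(f"logistic model spread time ~ {model:.3f} s")
```

From many such runs one could fit an empirical $F(v,t)$ and feed it into the boxed formula for $\delta_p(v)$, which is the kind of approximate model the simulations above are meant to produce.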
[^1]: In other words, how fast the network can guarantee a user that a transaction is finalized and will not revert. The lower that duration, the better.

[^2]: See [Fork Rate-Based Analysis of the Longest Chain Growth Time Interval of a PoW Blockchain](https://ieeexplore.ieee.org/document/8946241).