# Notes on Arbitrum: reviewing the sequencer

Arbitrum is a *classic rollup* on Ethereum that uses a Sequencer to collect user transactions, order them, and submit batches of transactions to L1. Today, the Sequencers for Nitro and Nova operate as centralized entities maintained by the Arbitrum Foundation. Despite this centralization, Arbitrum maintains a trust-minimized security model and censorship resistance.

![](https://i.imgur.com/XkFKdCj.png)

In the common case, the Arbitrum Sequencer receives user transactions and orders them with a FCFS (first-come, first-served) policy, producing the input to Arbitrum's execution stage. Upon receiving a transaction, the Sequencer executes it and delivers the user a receipt, or soft confirmation. Shortly thereafter, the Sequencer collects enough transactions into a batch and posts them to L1 by calling the Inbox method `addSequencerL2Batch`. Only the Sequencer has the authority to call this method, which is what allows it to provide instant soft confirmations. The transactions gain L1 finality once the batch is finalized.

If the Sequencer acts maliciously, users can submit an L2 message via L1 to the delayed inbox. After 24 hours, `forceInclusion` can be called, moving the L2 message from the delayed inbox to the core `SequencerInbox`, where it is then finalized.

### Current Drawbacks

* Latency races
* Centralized Sequencer

## Latency Races

Recently the Arbitrum team proposed adding a "time boost" to the FCFS sequencing policy. This was a reaction to MEV searchers running 100,000-150,000 WebSocket connections to the Arbitrum Sequencer. Many of these sockets were opened purely to spam the network with transactions to be included first. At first the Arbitrum team proposed a proof-of-work-like mechanism, "add relay connection nonce", which would have required searchers to expend resources hashing nonces with Keccak-256.
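The abandoned nonce-grinding idea can be sketched roughly as follows. This is an illustrative reconstruction, not the actual PR's code; `hashlib.sha3_256` stands in for Keccak-256, which uses different padding.

```python
import hashlib

def connection_score(client_id: bytes, nonce: int) -> int:
    """Interpret the digest of (client_id || nonce) as an integer;
    lower scores would rank the connection higher.
    (sha3_256 stands in for Keccak-256; the padding differs.)"""
    digest = hashlib.sha3_256(client_id + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def grind_best_nonce(client_id: bytes, attempts: int) -> int:
    """Brute-force `attempts` nonces and return the lowest-scoring one."""
    return min(range(attempts), key=lambda n: connection_score(client_id, n))
```

Because the scores behave like uniform random numbers, driving the expected best score down requires proportionally more hashing work, which is the operating cost searchers objected to.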
Clients with the lowest nonces would then have had their transactions broadcast first. This policy received significant pushback from searchers and community members and was not implemented. It was unpopular because it would have increased searchers' operating overhead by tens of thousands of dollars daily, spent on GPUs and electricity for mining low nonces. Since then, the team has pivoted to the aforementioned "time boost" proposal.

### The stated goals of Arbitrum's new ordering policy are:

* Secret mempool - user transactions are visible only to the sequencer
* Low latency - every transaction arrives within a short time interval, 500 ms
* Short-term bidding - transactions can gain an advantage by bidding for position
* Decentralized - the policy works with a decentralized sequencer

If a transaction's priority fee is $F$, it gets a time boost computed by this formula:

$$\Delta t = \frac{gF}{F + C}$$

* $g$ is the maximum time boost, 500 ms
* $C$ is a constant (to be decided)
* $F$ is the transaction's priority fee

In exchange for the time boost, the transaction's L2 gas fee is increased by $F$. As $F$ grows relative to $C$, the boost exhibits diminishing returns: each additional unit of fee buys less additional boost.

Every transaction received is timestamped by the sequencer. A transaction can choose to pay a priority fee, which gives it a boost of up to 500 ms. The 500 ms parameter is a tradeoff between incentivizing searchers to buy a boost and minimizing the latency impact on non-boosted transactions. Upon receiving transactions, the Sequencer applies the time boost formula above and sequences transactions in increasing order of adjusted timestamp. This increases latency for non-boosted transactions because the sequencer must wait to see whether boosted transactions arrive within the boost interval.

## Looking Ahead to Decentralized Sequencing

Arbitrum is building a fair ordering protocol that can be used in tandem with a decentralized Sequencer quorum.
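Before turning to that, the time boost policy above can be sketched numerically; the value of $C$ below is illustrative, since the proposal leaves it undecided.

```python
G_MAX_MS = 500.0  # g: maximum time boost (500 ms)

def time_boost_ms(fee: float, c: float) -> float:
    """Time boost delta_t = g*F / (F + C) for priority fee F."""
    return G_MAX_MS * fee / (fee + c)

def adjusted_timestamp_ms(arrival_ms: float, fee: float, c: float) -> float:
    """The sequencer orders transactions by arrival time minus boost."""
    return arrival_ms - time_boost_ms(fee, c)

# With an illustrative C = 1.0, the diminishing returns are visible:
# F = 1 -> 250.0 ms, F = 3 -> 375.0 ms, F = 9 -> 450.0 ms;
# the boost approaches but never reaches the 500 ms cap.
```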
One iteration of this protocol is called Themis, a scheme for introducing first-in, first-out fair ordering of transactions at the coarseness of network latency rather than at the scale of block time.

Let's unpack this a bit. In the context of distributed systems and p2p networks, *latency* is the delay between sending and receiving messages. Coarseness here means that latency is not uniform and fluctuates, leading to inconsistency in the responsiveness of the network. So when we say that Themis orders transactions at the coarseness of network latency rather than at the scale of block time, it implies that transactions can often be ordered faster than if they had to wait for a predetermined block time. This can lead to faster transaction processing and improved performance in certain scenarios, since it takes advantage of the variability of network latency rather than relying on fixed time intervals.

The **Themis** protocol presents one possible solution.

>Themis is a scheme for introducing fair ordering of transactions into (permissioned) Byzantine consensus protocols with at most $f$ faulty nodes among $n \geq 4f + 1$. Themis enforces the strongest notion of fair ordering proposed to date. It also achieves standard liveness, rather than the weaker notion of previous work with the same fair ordering property. Themis has the same communication complexity of the underlying protocol; it is a minimum modification to any existing partially synchronous leader-based protocol.

Existing fair ordering protocols require replicas to gossip transaction orderings to everyone else as their first step, resulting in $O(n^2)$ or $O(n^3)$ communication complexity. Themis, on the other hand, requires replicas to send their local orderings only to the current leader, achieving linear communication $O(n)$ in the optimistic case. The leader uses $n-f$ of these orderings to compute a fair proposal. The leader then computes a SNARK and provides it with the block proposal.
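As a toy illustration of the leader's job, the sketch below aggregates the collected replica orderings by sorting transactions on their median rank. This is a simplified stand-in: Themis's actual fair-ordering computation, and its handling of missing or adversarial orderings, is considerably more involved.

```python
from statistics import median

def aggregate_fair_order(orderings: list[list[str]]) -> list[str]:
    """Toy stand-in for the leader's fair proposal: sort transactions
    by their median position across the n - f replica orderings the
    leader collected. Ties break lexicographically here; the real
    protocol is far more careful."""
    txs = set().union(*orderings)

    def median_rank(tx: str) -> float:
        return median(o.index(tx) for o in orderings if tx in o)

    return sorted(txs, key=lambda tx: (median_rank(tx), tx))
```

The intuition this captures: a single replica that saw (or claims to have seen) a transaction out of order cannot move it far in the aggregate, because the median is robust to a minority of outliers.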
Replica nodes check the proofs to ensure fairness; everything else can be taken as-is from the underlying protocol. Themis can be paired with HotStuff or any other BFT consensus protocol. With the emergence of [Espresso's decentralized sequencing service](https://hackmd.io/@EspressoSystems/EspressoSequencer), which builds a modified version of HotStuff, the two protocols could be combined to fill in the ordering and leader election modules of the current centralized Arbitrum Sequencer.

One relevant question is how much decentralization of the sequencer quorum is enough to satisfy social consensus. Will it come at the cost of increased network latency? Both questions are out of scope for this analysis but may be explored in future articles.

## Analysis

For any rollup using a centralized Sequencer, there is a trade-off between fast soft confirmations for users (UX) and decentralization. Can you have both? Can you have a low-latency, high-throughput rollup that pushes networking latency close to its physical limits, i.e., ~250 ms for a global round trip? It has been suggested that decentralized sequencing is achievable at $2$-$4\Delta$, depending on how strong the synchrony assumptions are for the sequencer. However, if the Sequencer uses a responsive protocol like HotShot, it can optimistically be very fast, [not the worst case $\Delta$](https://twitter.com/benafisch/status/1635417195449831428?s=46&t=vk76MlVgPdVEkmtoT5AOrA). Here $\Delta$ refers to a bound on message delay in a decentralized sequencing system, with the actual latency depending on the strength of the [synchrony assumptions](https://medium.com/mechanism-labs/synchrony-and-timing-assumptions-in-consensus-algorithms-used-in-proof-of-stake-blockchains-5356fb253459) and the efficiency of the protocol used.
This would seem to satisfy Arbitrum's goals for time-boosted ordering while maintaining forward compatibility with a granular fair ordering protocol like Themis, which is compatible with consensus protocols like HotStuff. As Ethereum's data availability capabilities expand, both out of protocol with middleware like EigenDA and in protocol with full data sharding (danksharding), rollups will no longer be bottlenecked by Ethereum's DA throughput, only by the execution time of the rollup itself.

## References

* [How rollups *actually actually* work](https://drive.google.com/file/d/1KOEKNDGLBiLbaUDnIxCV6L1aBJblGPJs/view) [rollup sorcerer, 2023]
* [The Sequencer](https://developer.arbitrum.io/inside-arbitrum-nitro/#the-sequencer) [Arbitrum Docs, 2023]
* [Sequencing, Followed by Deterministic Execution](https://developer.arbitrum.io/inside-arbitrum-nitro/#sequencing-followed-by-deterministic-execution) [Arbitrum Docs, 2023]
* [Transaction Ordering Policy](https://research.arbitrum.io/t/transaction-ordering-policy/127) [Felton, 2023]
* [Thoughts on Arbitrum's Proposal to Score Connections by PoW](https://research.arbitrum.io/t/thoughts-on-arbitrums-proposal-to-score-connections-by-pow/8121?u=apriori) [zkstonks, 2023]
* [Add a relay client connection nonce PR #1504](https://github.com/OffchainLabs/nitro/pull/1504) [PlasmaPower, 2023]
* [Time boost: a new transaction ordering policy proposal](https://research.arbitrum.io/t/time-boost-a-new-transaction-ordering-policy-proposal/8173) [Felton, 2023]
* [Themis: Fast, Strong Order-Fairness in Byzantine Consensus](https://eprint.iacr.org/2021/1465) [Kelkar, Deb, Long, Juels, Kannan, 2021]
* [The Espresso Sequencer](https://hackmd.io/@EspressoSystems/EspressoSequencer) [Espresso Systems, 2023]