# Real Tradeoffs of Ethereum L2 Rollups
The [recent debates](https://x.com/vitalikbuterin/status/1970258498996048247?s=61) about [centralization](https://x.com/justin_bons/status/1970496163137695886?s=61) in Layer 2 (L2) [rollups](https://x.com/maxresnick1/status/1970250612399825075?s=61) prompted me to share my position. Rollups are a brilliant and necessary solution for scaling Ethereum today, but they come with real compromises. Any claim of a perfectly trustless, fair, and secure L2 rollup should be met with skepticism. In this post, I'll share my personal view on L2 rollup tradeoffs and what I believe users must consider when engaging with them.
## Why Ethereum Can't Just "Go Faster" Today
First, let's establish a core principle:
**All computer systems are bound by the laws of physics.**
These systems suffer from limited communication and computation [resources](https://hpbn.co/primer-on-latency-and-bandwidth/). For **communication**, data is physically moved from point to point across network channels with constrained capacity, along two dimensions:
1. **Latency**: The time it takes to transfer bytes from point A to B, ultimately limited by the speed of light.
2. **Bandwidth**: The maximum number of bytes a channel can carry per second.
For **computation**, every task is a series of instructions that a processor must execute. Each of these instructions consumes a finite number of **CPU cycles**. A processor has a physical limit on how many cycles it can perform per second, and complex operations like cryptographic signature verification consume a significant number of them. These resources--network bandwidth, CPU cycles, and memory--are finite. No software algorithm can create more of them out of thin air. Therefore, how a system deals with these hard physical limits depends entirely on its design.
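To make these limits tangible, here is a back-of-the-envelope sketch in Python. Every constant below (the distance between nodes, link speed, transaction size, signature-verification cost) is an illustrative assumption, not a measurement:

```python
# Back-of-the-envelope sketch of the physical limits discussed above.
# All numbers are illustrative assumptions, not measurements.

SPEED_OF_LIGHT_FIBER_M_S = 2e8      # light in optical fiber travels at roughly 2/3 of c
DISTANCE_M = 10_000_000             # assume ~10,000 km between two nodes
BANDWIDTH_BYTES_S = 12_500_000      # assume a 100 Mbit/s link
TX_SIZE_BYTES = 150                 # assume a small signed transaction
TXS = 10_000                        # assume a batch of 10,000 transactions
CYCLES_PER_SIG_VERIFY = 300_000     # assume one signature check costs ~300k cycles
CPU_HZ = 3e9                        # assume a 3 GHz core

latency_s = DISTANCE_M / SPEED_OF_LIGHT_FIBER_M_S          # time for the first byte to arrive
transfer_s = (TXS * TX_SIZE_BYTES) / BANDWIDTH_BYTES_S     # time to push all bytes through the link
verify_s = (TXS * CYCLES_PER_SIG_VERIFY) / CPU_HZ          # time for one core to verify every signature

print(f"one-way latency : {latency_s * 1000:.1f} ms")   # bounded by the speed of light
print(f"batch transfer  : {transfer_s * 1000:.1f} ms")  # bounded by bandwidth
print(f"sig verification: {verify_s * 1000:.1f} ms")    # bounded by CPU cycles
```

No matter how clever the software, these three numbers only get worse as the data has to reach more machines in more places.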
### Architectural Tradeoffs
Traditional computer systems are based on a **centralized** paradigm, where computation and storage resources belong to a single physical infrastructure, usually managed by one organization. In this context, resources can be scaled both [horizontally and vertically](https://www.mongodb.com/resources/basics/horizontal-vs-vertical-scaling), serving very large demand with excellent performance. This model is powerful, but it comes with critical trade-offs: you are placing your trust in a single organization. These systems suffer from a [**single point of failure** and inherit serious **privacy concerns**](https://arxiv.org/pdf/1704.08065).
**Decentralized systems** are designed specifically to solve these problems by removing the need for central organizations. However, this introduces its own challenges, as the system's physical constraints are amplified by the scale and geographical distribution of its nodes. To overcome this, we enter the world of complex coordination. By distributing control across thousands of independent participants, they offer powerful guarantees that centralized systems cannot: enhanced **service continuity** (the network runs even if parts of it fail or some participant disappears), **robustness** against powerful adversaries (an adversary must corrupt more parties to tamper with the system), and **censorship resistance** (no single entity can block users' transactions).
### The complexity of coordination
The price for the benefits of decentralization is complexity. To get thousands of parties who don't know or trust each other to agree on a single source of truth, you need a highly sophisticated and resource-intensive **coordination machinery**.
> *This is the core overhead of decentralization.*
Each participant operates under the same rules while locally maintaining a copy of the shared state. This is coordinated through consensus, where nodes continuously exchange messages across the network to reach agreement. The tradeoff is clear:
*We add complexity to gain trustless and reliable execution at the cost of efficiency.*
### The EVM case
To see this tradeoff in action, let's look at a familiar component: the **Ethereum Virtual Machine (EVM)**. The EVM is the software that maintains the shared state of all accounts and updates it by applying a deterministic State Transition Function (STF).

*Centralized vs Decentralized architecture of a simplified EVM system*
- **Centralized EVM**. We could have a single, powerful server running the EVM software. Such a "single-node Ethereum" enjoys exceptional performance because it has **zero coordination overhead**. Its speed is only limited by communication latency for the user's transaction to reach the server, CPU cycles to process the STF locally, and the latency to send a response back. This is fast, and it's exactly how applications on traditional web2 systems work!
- **Decentralized EVM**. On the Ethereum Mainnet, every full node must independently download all transactions, massively multiplying the total **communication cost** (bandwidth and latency) needed to propagate the same bytes across the globe. Then, each node must recompute the STF, multiplying the total **computation cost** (CPU cycles) spent by the entire network to validate the same work. This universal replication and verification is a massive bottleneck, limited by the capacity of the network's weakest participants.
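To make this concrete, here is a toy Python sketch of a state transition function (plain balance transfers, nothing like the real EVM): both architectures compute the same result, but the decentralized one pays the computation cost once per node.

```python
# A toy state transition function (STF): the shared state is a map of balances,
# and a transaction moves value from one account to another.
# This is a deliberately simplified model, not the real EVM semantics.

def apply_tx(state: dict, tx: dict) -> dict:
    sender, receiver, amount = tx["from"], tx["to"], tx["value"]
    assert state.get(sender, 0) >= amount, "insufficient balance"
    new_state = dict(state)
    new_state[sender] -= amount
    new_state[receiver] = new_state.get(receiver, 0) + amount
    return new_state

genesis = {"alice": 100, "bob": 0}
tx = {"from": "alice", "to": "bob", "value": 10}

# Centralized: one server applies the STF exactly once.
central_state = apply_tx(genesis, tx)

# Decentralized: every full node downloads the same tx and re-executes the same STF.
NUM_NODES = 10_000  # illustrative network size
replicas = [apply_tx(genesis, tx) for _ in range(NUM_NODES)]

assert all(r == central_state for r in replicas)  # same result everywhere...
print(f"STF executions: 1 (centralized) vs {NUM_NODES} (decentralized)")  # ...at N times the cost
```

Getting the same answer everywhere is exactly the guarantee we want; the price is that the whole network repeats the same work.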
Moreover, coordination is not straightforward. In a fully decentralized and permissionless setting, nodes may fail or behave arbitrarily. Therefore, consensus must provide strong security guarantees that further increase the communication and computation needed to reach agreement. For example, the Ethereum mainnet is built on two pillars:
1. **Safety**: All honest participants must eventually agree on the same finalized ledger. Once a transaction is final, it cannot be reverted.
2. **Dynamic Availability**: The network must remain available and able to process transactions, even if some participants are malicious or offline.
Upholding these principles across a massive global network requires a tradeoff. To ensure everything works correctly, Ethereum deliberately sacrifices performance to provide world-class security. This results in two major bottlenecks directly tied to our physical limits:
- **Performance**: The overhead of large-scale coordination manifests as immense, redundant communication (propagating bytes to everyone) and redundant computation (spending CPU cycles on the same task across thousands of nodes). This slows down block processing, resulting in low transaction throughput and a long wait time for transactions to be irreversibly confirmed.
- **Costs**: Ethereum's block space is a scarce public resource. With high demand, users are forced into a constant bidding war to get their transactions included, driving up costs, as the rough fee estimate below illustrates.
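As a rough illustration of how those costs materialize, here is the fee arithmetic for a plain ETH transfer under EIP-1559; the base fee, tip, and ETH price are assumptions chosen just for the example.

```python
# Rough fee estimate for a simple ETH transfer under EIP-1559.
# Base fee, priority fee (tip), and ETH price are illustrative assumptions.

GAS_USED = 21_000                 # gas consumed by a plain ETH transfer
BASE_FEE_GWEI = 30                # assume a current base fee of 30 gwei
PRIORITY_FEE_GWEI = 2             # assume a 2 gwei tip to get included
ETH_PRICE_USD = 3_000             # assumed ETH price

fee_eth = GAS_USED * (BASE_FEE_GWEI + PRIORITY_FEE_GWEI) * 1e-9  # gwei -> ETH
print(f"fee: {fee_eth:.6f} ETH (~${fee_eth * ETH_PRICE_USD:.2f})")

# When blocks stay full, the base fee can rise by up to 12.5% per block,
# so sustained demand quickly prices out low-value transactions.
```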
These are the fundamental scalability limits of Ethereum, generated by its decentralized design. And at our current technological stage, the primary way to scale beyond them is to build on top of the L1 with L2 rollups.
## L2 Rollups: The Necessary Compromise
You might be wondering why I spent so much time on network channels and CPU cycles in a post about L2 rollups…
In the last section, we established that to achieve its world-class decentralization and security, Ethereum must accept the overhead of coordination, a direct consequence of these hard physical limits.
**L2 rollups do not get a free pass.** The same laws of physics apply to them. There is no magical software that can erase the cost of communication and computation.
> *There is no free lunch in computer systems.*
The primary goal of an L2 rollup is to offload transaction volume from the L1 by creating a new, more efficient execution environment, thereby providing a better **user experience** (UX) with **faster confirmations** (soft finality) and **lower fees**. Rollups achieve this impressive performance by trading away some of Ethereum's decentralization.

*A simplified model of an L2 rollup, showing how a single sequencer provides fast soft-finality to users. The sequencer then batches transactions and submits the updated state to the L1 contract for slower, final settlement. (Note: The proof-based validation of the state is omitted for simplicity.)*
In practice, rollups offload transaction computation to external, often centralized, components that then settle their results back to the Ethereum L1 using smart contracts. This external infrastructure must be **simpler to be faster**.
The majority of rollups in production today use a single server running the rollup **sequencer**, which orders and confirms transactions instantly. This is vastly more efficient than achieving global consensus, but it introduces a single point of control.
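Here is a minimal sketch of that single-sequencer flow, matching the simplified model in the diagram above; the class and method names are hypothetical and not any specific rollup's API.

```python
# Minimal sketch of a single-server sequencer: instant soft confirmations,
# periodic batch settlement to an L1 contract. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Sequencer:
    batch: list = field(default_factory=list)
    batch_size: int = 100  # assume we settle every 100 transactions

    def submit(self, tx: dict) -> str:
        # Order and soft-confirm immediately: no consensus, no global propagation.
        self.batch.append(tx)
        receipt = f"soft-confirmed #{len(self.batch)}"
        if len(self.batch) >= self.batch_size:
            self.settle_to_l1()
        return receipt

    def settle_to_l1(self) -> None:
        # Compress the batch and post it, plus the new state root, to the L1 contract.
        # Hard finality only comes once the L1 itself has finalized this submission.
        print(f"posting batch of {len(self.batch)} txs to the L1 rollup contract")
        self.batch.clear()

seq = Sequencer()
for i in range(150):
    seq.submit({"from": "alice", "to": "bob", "value": i})
```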
Rollup designs therefore include a fallback execution path for sequencer failures. We can distinguish two paths:
1. The "*Happy Path*": You interact with the fast, often centralized sequencer, enjoying near-instant confirmations and very low fees. This path provides the speed and UX of a traditional web application, making Ethereum scalable for mass adoption. This is the primary reason to use a rollup.
2. The "*Unhappy Path*" (*[Escape Hatch](https://community.starknet.io/t/starknet-escape-hatch-research/1108)*): This is your emergency fallback. If the sequencer fails, goes offline, or actively censors you, a well-designed rollup allows you to bypass it and interact directly with the L1 smart contract to force through your transaction or withdraw your assets. This path is slow, expensive, and has poor UX: you lose all the benefits of the L2. However, it inherits the security of the L1 and guarantees your funds [cannot be stolen](https://x.com/l2beat/status/1853756976510984623?s=61) or permanently frozen. A toy sketch of such a forced-inclusion mechanism follows this list.
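Here is that toy sketch of a forced-inclusion queue on the L1. The delay, names, and interface are illustrative assumptions; real escape-hatch designs differ in the details.

```python
# Toy model of a forced-inclusion queue on the L1. All names and the delay
# are illustrative assumptions, not a specific rollup's contract interface.

FORCE_DELAY_BLOCKS = 7_200  # assume ~1 day at 12s L1 blocks before forcing is allowed

class L1RollupContract:
    def __init__(self):
        self.queue = []  # transactions users submitted directly on the L1

    def enqueue(self, tx: dict, current_block: int) -> None:
        # Unhappy path: the user pays L1 gas to queue the tx, bypassing the sequencer.
        self.queue.append({"tx": tx, "queued_at": current_block})

    def force_include(self, current_block: int) -> list:
        # If the sequencer ignored the queue for long enough, inclusion can be forced.
        due = [e["tx"] for e in self.queue
               if current_block - e["queued_at"] >= FORCE_DELAY_BLOCKS]
        self.queue = [e for e in self.queue
                      if current_block - e["queued_at"] < FORCE_DELAY_BLOCKS]
        return due

contract = L1RollupContract()
contract.enqueue({"withdraw": "all", "user": "alice"}, current_block=1_000)
print(contract.force_include(current_block=1_000 + FORCE_DELAY_BLOCKS))  # tx can now be forced
```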
It's crucial, however, to understand what the escape hatch doesn't solve. It is a mechanism for asset safety, not a solution for the day-to-day functionality of the rollup. Risks related to **fair transaction ordering** and **MEV**, where the sequencer can reorder transactions for its own benefit, persist on both paths.
> *..so, what's the point?*
The point is to gain the massive performance and UX benefits of the “best case scenario” for most users, most of the time. In return, users retain the L1’s security as an unbreakable, last-resort guarantee for asset safety. **This is the pragmatic trade-off: accepting centralization risks for major scalability gains.**
## Decentralizing the Sequencer
> ⚠️ *This post focuses on the infrastructure complexity of rollups managed by external sequencing components, to highlight the tradeoffs decentralization introduces. Other approaches, such as [based sequencing](https://ethresear.ch/t/based-rollups-superpowers-from-l1-sequencing/15016), are also valid paths to decentralize L2 rollups on Ethereum. However, they come with their own UX tradeoffs and additional trust assumptions in [preconfirmation mechanisms](https://ethresear.ch/t/based-preconfirmations/17353), which are important but outside the scope of this discussion.*
[Decentralized designs](https://arxiv.org/pdf/2310.03616) have also been explored. They distribute sequencing responsibilities across multiple independent parties, for example by coordinating through consensus.
The more decentralized the sequencer is, the worse your rollup's performance will become. Every step towards a more decentralized sequencer introduces more coordination overhead, more communication across the network, and more computational work, eroding the very speed and cost advantages that make L2s attractive in the first place.

The diagram above provides a simplified sketch of three common architectures for L2 sequencing. While the [design space](https://arxiv.org/pdf/2310.03616) is [much larger](https://medium.com/rockaway-blockchain/decentralized-sequencer-be2af56b46bf), these examples clearly illustrate how each approach makes a different compromise:
1. **Centralized:** As the green "fast" arrow shows, this model offers the best performance, with near-instant soft-finality. However, as we've discussed, it concentrates power in a single entity, creating a single point of failure and significant risks related to censorship and MEV.
2. **Multi-sig Sequencing**: This approach introduces a committee of signers (*m-out-of-n*), providing distributed authorization and preventing a single operator from pushing a malicious state update. While an improvement, it is not a complete solution. It's often limited to a small, fixed set of parties, is vulnerable to liveness issues if signers go offline, and does little to solve MEV or fair ordering if a single leader still proposes the blocks. The result is a slight performance hit for a moderate security gain.
3. **Sovereign Consensus**: This represents a deeper level of decentralization, where sequencers use a formal BFT consensus protocol to agree on transaction ordering. This provides robust liveness and far stronger guarantees against censorship and MEV. The trade-off, however, is stark. The coordination required for consensus is slow and resource-intensive. In a typical BFT network designed to tolerate `f` failures with `n = 3f+1` nodes, every node added increases the communication and computation overhead, making confirmations significantly slower and approaching the very bottlenecks L2s were designed to escape. The sketch after this list gives a rough sense of how quickly that overhead grows.
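Here is that sketch: a rough sense of how coordination overhead grows as sequencers are added, assuming a classic PBFT-style all-to-all voting phase. Modern protocols reduce the constants, but the trend is the point.

```python
# Rough sense of how BFT coordination overhead grows with the number of sequencers.
# Assumes a classic PBFT-style all-to-all voting phase; modern protocols reduce the
# constant factors, but more nodes still means more messages and more verification.

def bft_overhead(f: int) -> dict:
    n = 3 * f + 1           # nodes needed to tolerate f Byzantine failures
    quorum = 2 * f + 1      # votes required to commit a block
    messages = n * (n - 1)  # one all-to-all exchange in a single voting round
    return {"faults_tolerated": f, "nodes": n, "quorum": quorum, "messages_per_round": messages}

for f in (1, 3, 10, 33):
    print(bft_overhead(f))
# messages_per_round grows roughly quadratically: 12, 90, 930, 9900 ...
```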
The more a rollup architecture relies on centralizing its components, the greater its immediate performance gains will be. Conversely, the more a rollup decentralizes, the more it runs into the same coordination problems and physical limits that constrain the L1, and its performance advantages diminish.
Each of these design choices comes with its own set of security guarantees and, crucially, reintroduces a degree of the communication and computation overhead that the centralized model was designed to avoid.
The vastness of this design space makes it incredibly difficult for users and developers to compare the nuanced guarantees of different decentralization solutions. This highlights an urgent need for **a risk framework that can help identify, quantify, and navigate these complex trade-offs.**
## A Pragmatic View on L2 Rollups
I am not against L2 rollups.
They are crucial for scaling Ethereum's capacity, enabling parallel execution environments, and making the ecosystem usable for millions of people. However, users must be aware that when they use an L2 rollup, they are not getting the same experience as interacting with the L1 directly. They are not using a fully trustless infrastructure. They are **explicitly trading some of Ethereum's core decentralization and trustlessness guarantees for speed and convenience.**
A well-designed L2 rollup might fully inherit the security of the L1, meaning your funds cannot be stolen by the L2 operator, thanks to cryptographic proofs and escape hatches. But other crucial properties can be weaker:
1. **Practical Liveness**: Your ability to transact might depend on a centralized operator. If their server goes down, the L2 network may halt, or users may be forced to use the slow and expensive escape hatch to access their funds.
2. **Censorship Resistance**: A centralized sequencer can theoretically choose to ignore or delay your transactions.
3. **Fairness**: The sequencer has the power to reorder transactions for its own benefit (e.g., MEV).
### Closing Thoughts
My personal view is that blockchains were created to let information move freely without control of sovereign authorities (as I also stated in the introduction of my [PhD thesis](https://eprints.soton.ac.uk/457412/1/PhDThesis_Stefano_DeAngelis_PhDComputerScience_Cybersecurity_04042022_final.pdf) a few years ago).
The ultimate goal should be to scale the L1 itself: a truly decentralized infrastructure, maintained by a global community of millions, operating at network speed with minimal coordination overhead.
This is the ultimate promise of blockchains: a world computer that is truly decentralized, trustless, open, and permissionless. Rollups are a vital bridge to get us there, but not the final solution.