Understanding rollup economics from first principles by Barnabé Monnot (Feb, 2022)
Rollups guarantee correctness of the off-chain execution, as well as availability of the data behind the execution. Players in the rollup game: Users transact on L2, operators interface between them and the base layer, where data is eventually published.
Costs as "energy sinks" in a rollup system. It highlights that running such a system incurs costs, including L2 operator costs, L1 data publication costs, and congestion costs. These costs are associated with different parties in the ecosystem.
Revenues as "energy sources" in a rollup system.
💡 Transaction value = User value + MEV
A full view of the system, with inflows representing revenue (transaction value + issuance) and outflows representing costs (L2 operator costs, L1 data publication costs, congestion costs). Fees transfer value between parties.
The operator must pay the L1 data publication fee to the base layer. They must pay for it exactly when they publish the data, and at the rate quoted by the base layer. When fees are dynamic, priced in a fee market, L2 congestion costs are also immediate. Users observe the current demand for rollup blockspace and adjust their fees given the available supply. For instance, rollups may want to deploy an EIP-1559-style market mechanism on top of their network to govern inclusion of L2 transactions. An L2 basefee is then available to enable easy estimation of current L2 congestion costs by the users.
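To make the mechanism concrete, here is a minimal sketch of an EIP-1559-style basefee update rule as an L2 might deploy it. The parameters mirror Ethereum L1's defaults (target gas, 1/8 adjustment quotient); the names and values are illustrative, not any specific rollup's implementation.

```python
# Minimal EIP-1559-style basefee update, run once per L2 block.
# Parameters mirror Ethereum L1 defaults; values are illustrative.
TARGET_GAS = 15_000_000     # target gas used per L2 block
MAX_CHANGE_DENOMINATOR = 8  # basefee moves at most 12.5% per block

def next_basefee(basefee: int, gas_used: int) -> int:
    delta = basefee * (gas_used - TARGET_GAS) // (TARGET_GAS * MAX_CHANGE_DENOMINATOR)
    return max(basefee + delta, 0)

# A full block (gas_used > TARGET_GAS) raises the basefee, signalling
# congestion; an underfull one lowers it. Users read the current basefee
# to estimate L2 congestion costs before transacting.
```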
Rollup operators must receive revenues at least equal to their costs to operate sustainably, and issuance is one mechanism to maintain this balance: operators whose fee revenue does not cover their operating costs can receive a share of newly created tokens to offset the shortfall. An operator that remains too unprofitable for an extended period may nonetheless find it unsustainable to continue and leave the system. In that case, its share of the issuance can be redistributed among the remaining operators, helping them cover their costs and keeping the system budget-balanced overall.
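A minimal sketch of this budget-balance condition; the function names and figures below are invented for illustration, only the accounting identity comes from the text.

```python
# Per-operator budget balance: revenues (fees + issuance) must cover costs.
def operator_balance(fees: float, issuance_share: float,
                     l2_op_cost: float, l1_data_cost: float) -> float:
    return fees + issuance_share - l2_op_cost - l1_data_cost

# An operator running at a loss...
print(operator_balance(fees=1.0, issuance_share=0.0,
                       l2_op_cost=0.5, l1_data_cost=0.75))  # -0.25: unsustainable
# ...made whole by an issuance share, e.g. one redistributed from an exiting operator.
print(operator_balance(fees=1.0, issuance_share=0.5,
                       l2_op_cost=0.5, l1_data_cost=0.75))  # 0.25: sustainable
```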
Users pay the operator at the time they transact, yet operators must pay for the data publication at the rate quoted by the base layer, which is variable! Derivatives or futures contracts can be used to keep the balance non-negative despite this variability in L1 data publication costs; the natural instrument is a simple L1 basefee future. In the future, protocols may want to reduce the uncertainty in data cost further, for example using blockspace derivatives. At transaction time, the user is charged a fee defraying the cost of locking in a future price for the publication of data on the base layer. By reducing pessimistic overpayment, savings are passed on to the users. Investigating the optimal design of such derivatives remains an open question.
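A toy illustration of this mechanism, with made-up prices and a hypothetical 500-byte batch share — nothing here is a proposed contract design.

```python
# Without a hedge, the operator must quote a pessimistic data fee to cover
# basefee volatility; with an L1 basefee future, the user pays the locked price.
CALLDATA_GAS = 16 * 500     # gas for this user's ~500-byte share of a batch

pessimistic_basefee = 120   # gwei: worst-case quote absent a hedge (assumed)
locked_basefee = 80         # gwei: price fixed via a basefee future (assumed)

unhedged_fee = pessimistic_basefee * CALLDATA_GAS
hedged_fee = locked_basefee * CALLDATA_GAS  # plus a small premium for the contract

print(unhedged_fee - hedged_fee)  # overpayment avoided, passed on to the user
```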
The discussion touches on what to do with congestion fees. In Ethereum, these fees are currently burned, but the text suggests redirecting them towards public goods funding to compensate for negative externalities.
Meanwhile, assuming all operators must pay an equal amount in L2 operator costs and that L1 data publication costs are charged precisely to the user, we obtain:

📌 Operator profit = congestion fees + MEV
In a world where operators compete in an efficient market to win the right to propose a block, the operators must bid away the entirety of their profits, i.e., exactly the congestion fees and the MEV available in their batch. This is value that “slipped” into the system: the first from users protecting against losses due to congestion, the second from ripple effects caused by the initial transaction. This value was never anyone’s to begin with, so why shouldn’t it be captured and redistributed somehow?
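In code form, a stylized version of this competitive-bidding argument (the values are purely illustrative):

```python
# With operator costs covered by user fees, an operator's profit from winning
# a batch is the congestion fees plus the MEV it contains. In an efficient
# auction, competing operators bid this surplus away.
congestion_fees = 2.0   # ETH, illustrative
mev = 3.0               # ETH, illustrative

winning_bid = congestion_fees + mev  # competition drives bids to the full surplus
# The captured value can then be redistributed — e.g., burned or directed
# to public goods funding, as discussed above.
```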
Users are willing to pay up to their value for inclusion in transaction fees. Externalities are denoted by dashed rectangles.
The economics of rollup fees by Alex Beckett (May, 2022)
📌 average cost = total cost / number of transactions per batch
Rollups can achieve positive network effects with regard to transaction fees: as more users and transactions are added to the rollup, the average cost per transaction can decrease, because the marginal cost is lower than the average cost.
The phrase “Rollups get cheaper with more users” is correct when the average cost decreases while increasing the number of transactions per batch. This positive network effect can only be sustained until the marginal cost equals the average cost. Beyond this point, the cost structure follows a standard short-run cost curve.
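A quick numeric sketch of that claim, with assumed (not sourced) per-batch and per-transaction costs:

```python
# Batch cost model: a fixed cost per batch (state root / proof posting)
# plus a marginal cost per transaction (its share of calldata).
FIXED_BATCH_COST = 0.05    # ETH per batch, assumed
MARGINAL_TX_COST = 0.0001  # ETH per transaction, assumed

def average_cost(n_txs: int) -> float:
    return (FIXED_BATCH_COST + MARGINAL_TX_COST * n_txs) / n_txs

for n in (10, 100, 1_000, 10_000):
    print(n, round(average_cost(n), 6))
# Average cost falls toward the marginal cost as batches fill: while
# marginal < average, adding users makes everyone's transactions cheaper.
```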
This cost structure makes rollups an attractive option for scalability and cost-efficient blockchain operation, in contrast to traditional monolithic blockchains, where fees tend to increase as the user base grows. Rollups keep costs low at first, but once they become crowded, costs can rise just as on a regular blockchain. The biggest driver of rising rollup costs is the cost of handling transaction data, which grows significantly as more transactions are added. To combat this, hybrid rollups like validiums and volitions have emerged, aiming to lower costs by moving some of the data off-chain.
Rollups are Real — Rollup Economics 2.0 by Davide Crapis (Aug, 2023)
The rollup ecosystem is exploring methods of aggregation to reduce costs and increase efficiency. Shared sequencing services, batch posting, and shared provers are examples of aggregation techniques that can optimize data processing and reduce expenses.
A direction that the rollup ecosystem could take is to have more independent rollups that are closely aligned with the L1. We have not seen many implementations yet, but there are at least two interesting architectures: rollup cooperatives and rollup federations.
Shared provers that aggregate many SNARKs into a single bigger proof before posting to L1 are one of the most exciting scaling unlocks, especially because they can do these aggregations recursively, offering big gains in efficient utilization of the L1 data market, at the cost of more offchain computation. One thing that seems clear is that, sooner or later, rollups will choose to adopt shared services, either as part of a federation or of an economic union.
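To see why aggregation is such a large unlock, consider a back-of-the-envelope comparison; the gas figure is assumed for illustration only.

```python
# Recursive aggregation: a shared prover folds k rollup proofs into one,
# so only a single proof is verified (and paid for) on L1.
C_L1_VERIFY = 300_000  # L1 gas per onchain SNARK verification, assumed

def onchain_gas(k: int, shared_prover: bool) -> int:
    return C_L1_VERIFY if shared_prover else C_L1_VERIFY * k

print(onchain_gas(10, shared_prover=False))  # 3,000,000 gas: 10 separate proofs
print(onchain_gas(10, shared_prover=True))   # 300,000 gas: one aggregated proof
# The L1 saving is paid for with extra offchain computation by the prover.
```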
Rollup cooperatives involve economic integration among rollups. They may share services like sequencing or batch posting to reduce costs and improve interoperability.
“A cooperative is a group of entities that share or work together to achieve a common objective, such as an economic benefit or saving.” — Wikipedia
Economic flows in rollup communities that adopt shared services (ShS).
Examples of such services include the Espresso sequencer (a shared service for sequencing and posting), shared batch posting on its own, or shared proving.
Rollup federations go beyond economic integration and involve political integration. They share services and governance mechanisms, often connected through a shared bridge to the Ethereum base layer.
“A federation is a group of states that give part of their sovereignty to a central governing authority that enforces certain laws and regulations.” — Wikipedia
Rollup federations share both services and a bridge to the base layer.
For example, Optimism Superchain, Polygon 2.0, StarkWare SHARP, zkSync Hyperchains, and other related projects share similar architectural patterns. We distill this pattern in the following figure; to isolate the effects, we make the realistic assumption that federated rollups automatically opt into shared services and do not incur direct data publication costs.
When rollups think about setting up decentralized services (for sequencing, proving, or validation), they will need to run a consensus protocol. This is where ecosystems with enough scale see the opportunity to "upgrade" their native token into a productive asset, which is what Polygon 2.0 plans to do with POL.
The native token is an important economic tool to help bootstrap an L2 ecosystem/economy. Emissions refer to the creation and distribution of new tokens within the ecosystem. These emissions can serve multiple purposes, such as rewarding service operators and funding various projects or public goods that benefit the ecosystem as a whole.
However, when the native token is used to support decentralization via some native proof-of-stake protocol, security may degrade with more dilution: creating more tokens can reduce the value of existing ones, weakening the economic security of the stake. Even when the native token is only used for governance, excessive dilution may push more budget-constrained holders to sell, potentially concentrating ownership in a small number of entities and skewing the ecosystem's governance and decision-making. To address these challenges, a well-calibrated token emission schedule is crucial: the rate at which new tokens are created and distributed should align with the actual demand and growth of the ecosystem, balancing incentives for participation against excessive dilution.
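For intuition, a tiny sketch of how a constant emission schedule dilutes holders over time; the supply and emission figures are invented.

```python
# Annual dilution under a constant emission schedule.
supply = 10_000_000_000          # initial token supply, assumed
EMISSION_PER_YEAR = 500_000_000  # new tokens minted each year, assumed

for year in range(1, 6):
    dilution = EMISSION_PER_YEAR / supply  # fraction of existing holders' share lost
    supply += EMISSION_PER_YEAR
    print(year, f"{dilution:.2%}")
# A schedule calibrated to ecosystem growth would instead taper emissions
# as the marginal value of bootstrapping incentives declines.
```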
Another important consideration is that making the L2 economy more dependent on the native token (vs. ETH) also makes it less robust to certain failure modes, since exiting to L1 may not be a meaningful option. An ecosystem that relies heavily on its native token rather than on ETH forgoes the security benefits ETH provides as "outside money": in the limit, the L2 is still secured by Ethereum the protocol, but loses the security provided by ETH the asset.
L3 systems are an additional layer built on top of L2 rollups, providing an additional source of L2 fees. These systems have their own budget constraints and can generate revenue from fees, subscriptions, or other mechanisms. They generally target applications that need low execution costs and easy deployment and are willing to trade off security: think games, social media, and NFT products that do not need to bootstrap their own economy of services or attract/secure large amounts of liquidity.
Economic flows with L3 systems.
There are different flavors of these, including L3s, validiums, and rollup-as-a-service (RaaS) platforms. For example, Arbitrum Orbit is a platform that enables L3 chains that settle on Arbitrum L2 (One or Nova), with some configurability, such as selecting an Arbitrum-authorized Data Availability Committee (DAC) versus Ethereum L1 as the data availability layer. StarkNet and other zk-rollup projects have also been experimenting with enabling L3s. Extreme examples in the ease-of-deployment direction are AltLayer and Caldera, whose no-code solutions deploy "customizable" rollups and give users the agency to make their own security-efficiency trade-off.
This is essentially an added layer on top of L2s. From the L2 rollup's perspective it is an additional source of L2 fees, while the L3 is a new entity in the rollup ecosystem with its own budget balance constraint:

📌 L3 revenues (fees, subscriptions, …) ≥ L3 operator costs + publication fees paid to the L2
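Mirroring the operator balance above, a minimal sketch of this constraint — the function name, fee categories, and numbers are all assumptions for illustration.

```python
# L3 budget balance: the L3 pays publication fees to its L2 (or a DAC)
# rather than to L1, and must cover them with its own revenues.
def l3_balance(user_fees: float, subscriptions: float,
               l2_publication_fees: float, operator_costs: float) -> float:
    return user_fees + subscriptions - l2_publication_fees - operator_costs

# Sustainable when the balance is non-negative:
assert l3_balance(1.0, 0.25, 0.5, 0.5) >= 0
```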
Zero-Knowledge Proof Pricing by Tarun Chitra and Georgios Konstantopoulos (March, 2020)
Focus: Incentives for "proof miners" at the timescale of generating a single proof
Design Question: What allocation function must be used for a proof system and circuit in order for proof generation to be "fair"?
Fairness Criteria: