## **Decentralizing Rollup Sequencers: Towards A Rollup-Centric Ethereum**

### **Centralized Sequencers and Their Challenges**

Rollups have emerged as an off-chain solution to enhance Ethereum's scalability. They operate through entities known as sequencers, which collect transactions from users, create and execute blocks, and then have the results verified on Ethereum Layer 1. Depending on the verification method used on Ethereum, rollups are classified into two types: Optimistic and Validity (or ZK) Rollups.

- **Optimistic Rollups:** User transactions and execution results are submitted to Ethereum, and a third party can verify and challenge these results through Ethereum.
- **Validity Rollups:** These leverage cryptographic zero-knowledge proofs to demonstrate correct execution without requiring a third party. The proof is verified on Ethereum along with the results.

Both methods ensure that the sequencer has correctly processed the transactions, with their security essentially inherited from Ethereum. This means a sequencer cannot arbitrarily manipulate results. This is why many Layer 2s still use centralized sequencers, which, from a scalability perspective, is not problematic. However, recent discussions focus on decentralizing rollup sequencers, not only to enhance security but also to foster the transparency needed to attract a broader user base.

Several key issues arise from the centralization of sequencers: liveness, censorship, MEV, and governance.

- **Liveness:** If a centralized sequencer halts, the entire system stops. Ensuring robust liveness is crucial to prevent such failures.
- **Censorship:** With complete control over processing user transactions, a centralized sequencer could arbitrarily censor transactions, violating the principle of openness and fairness.
- **Harmful MEV:** This includes practices like frontrunning and sandwich attacks, where the sequencer could harm user transactions for its own benefit.
- **Governance and Fees:** The decision-making process around transaction fees is monopolized by the sequencer, possibly leading to users paying unfairly high fees.

These issues underscore the growing importance of decentralizing rollup sequencers, as decentralization could significantly mitigate these challenges. But how can this decentralization be effectively achieved?

### **Monolithic vs. Modular Approach in Decentralizing Sequencers**

![image](https://hackmd.io/_uploads/ryhqlvHH6.png)

When considering the decentralization of sequencers in rollups, there are two critical aspects to keep in mind.

First, rollups are inherently a scalability solution. Decentralization should not reduce TPS and undermine the fundamental scalability purpose of rollups.

Second, with the development environment for rollups improving, it's becoming easier to build products on rollups, especially those requiring high scalability. This is supported by Rollup-as-a-Service (RaaS) platforms (like [AltLayer](https://altlayer.io/), [Caldera](https://caldera.xyz/), [Lumoz](https://lumoz.org/), [Stackr](https://www.stackrlabs.xyz/)) and rollup frameworks (such as [Madara](https://www.madara.zone/), [OP Stack](https://stack.optimism.io/), [Polygon CDK](https://polygon.technology/polygon-cdk), [ZK Stack](https://zkstack.io/)). These create an environment where developers can easily build their own rollups without significant hurdles.

There are two main approaches to decentralizing sequencers in rollups: monolithic and modular.

- **Monolithic:** Multiple sequencers perform the same sequencer tasks and reach consensus on a single execution result, which is then verified on Ethereum. However, this consensus process introduces unwanted latency and is undesirable from a scalability perspective.
  Additionally, securing a sufficient number of sequencers for consensus across many rollups seems impractical.
- **Modular:** This breaks down the sequencer's tasks into smaller, more manageable components. It can offer more flexibility and efficiency in supporting diverse rollup needs, but it requires a more complex coordination and integration system.

Both approaches have their pros and cons. The monolithic approach, while much simpler, faces scalability and optimization challenges due to the consensus required among its sequencers. The modular approach, while potentially more efficient and flexible, requires sophisticated coordination and might introduce system complexity.

For decentralizing rollup sequencers, the modular approach appears more viable. It divides the sequencer tasks among specialized entities, each optimized for efficiency to meet scalability needs. Broadly, these tasks fall into three key roles:

- **Builder:** Receives transactions from users and builds the most efficient (profitable) block.
- **Proposer:** Executes the block and submits results to Ethereum at an optimal cost.
- **Prover:** Rapidly generates proofs for result verification.

By focusing on a specific task, each entity can be optimized more effectively, offering scalability benefits. Additionally, sharing these entities across various rollups will enhance the decentralization of sequencers. A similar approach is seen in Ethereum's PBS (Proposer-Builder Separation). [Radius](https://twitter.com/radius_xyz) adopts a similar strategy, with the ultimate design goal of having Ethereum validators verify that each modularized entity keeps its promises. Details about this design will be covered in a separate article.

### Pre-confirmations

A crucial consideration in the modular approach is pre-confirmations, a concept that significantly benefits the user.
In rollups, for a user's transaction to achieve finality, the proposer's submitted execution results must be verified and stored on Ethereum. This finality virtually cannot be violated, ensuring the security provided by Ethereum (except in rare cases like reorgs).

While relying on the security of Ethereum, rollups have the flexibility to offer users faster confirmation times, often faster than Ethereum's 12-second slot time. For example, in current rollup systems where a centralized sequencer handles the roles of builder, proposer, and prover, the sequencer can guarantee that transactions will be stored on Ethereum, achieving pre-confirmation for the user. If a pre-confirmation fails to be met, verification through Ethereum can prevent malicious behavior by the sequencer.

Pre-confirmations are important for the user experience because they give users an early "guarantee" (though not finalization) of their transaction's processed state, enabling them to move forward with their next transactions.

In the modular approach, when can users receive pre-confirmations? Only after the final block is selected by the proposer. Here's how it works:

1. **Builder's Role:** The Builder creates a bundle with the user's transaction.
2. **Proposer's Role:** The Proposer selects the final block including the bundle, and only then do users receive pre-confirmations for their transactions.

This separation of roles between the Builder (who selects transactions) and the Proposer (who finalizes these selections) introduces latency and uncertainty. This is not ideal from a scalability and user experience perspective, whereas with a centralized sequencer, a single entity could pre-confirm transaction inclusion leading up to finalization.
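As a rough illustration of this two-step flow (the function names and data shapes here are hypothetical, not taken from any rollup implementation), pre-confirmations can only be produced once the proposer has fixed the block:

```python
def builder_bundle(user_txs):
    """Step 1 (Builder): collect user transactions into a bundle."""
    return list(user_txs)

def proposer_select(bundles):
    """Step 2 (Proposer): fix the final block from candidate bundles.
    Only at this point can pre-confirmations be issued, which is where
    the extra latency of the modular flow comes from."""
    block = [tx for bundle in bundles for tx in bundle]
    preconfirmations = {tx: pos for pos, tx in enumerate(block)}  # tx -> position
    return block, preconfirmations
```

A user submitting a transaction learns its position only after `proposer_select` runs, whereas a single sequencer could answer immediately on receipt.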
![image](https://hackmd.io/_uploads/BJg5WZvSrT.png)

## **Radius: A New Modular Approach**

One potential solution to the latency problem is to bring pre-confirmation down to the transaction level, comparable to that of a centralized sequencer. Here, proposers take on the role of builders when building a block. As a result, users receive pre-confirmations from the proposer that their transaction will be included in a block and stored on Ethereum.

However, combining the roles of builder and proposer into one (the "sequencer") increases the risk of censorship and sandwich attacks due to the current MEV economy. It also means giving up the additional network benefits that specialized builders can offer, a concern also seen with Ethereum's PBS.

Radius is exploring two approaches to mitigate these issues. First, we introduce an encrypted mempool to eliminate sequencer censorship and sandwich attacks. Second, we allocate a part of the blockspace to builders for MEV Auctions, allowing rollups to earn additional revenue. The encrypted mempool ensures that sandwich and frontrunning attacks are prevented during MEV Auctions. This blog will focus on the first approach: the encrypted mempool. The topic of MEV Auctions in a divided blockspace will be explored in a separate blog post.

### Encrypted Mempool

![Untitled](https://hackmd.io/_uploads/S1ntbwSr6.jpg)

Radius leverages the encrypted mempool to prevent sequencer censorship and sandwich attacks. It enables users to submit transactions in encrypted form. The sequencers commit to these transactions without knowing their contents, providing users with protection against censorship and MEV attacks. The commitment, itself a form of pre-confirmation, informs users of the execution order of their transactions in the upcoming block.

An important aspect of the encrypted mempool is ensuring that sequencers cannot decrypt the transactions until they have issued pre-confirmations to the user.
A well-known method for the encrypted mempool is threshold encryption. In threshold encryption, the symmetric key is divided into several pieces (shares) and distributed to a key-holding committee during the encryption stage. To decrypt, a certain threshold of the total shares is required to reconstruct the symmetric key. This requires a degree of trust in the key-holding committee as a third party: the trust assumption is that a threshold of committee members will share their portions of the key in a timely manner.

In contrast, Radius adopts time-lock puzzle (delay) encryption instead of the threshold method. In time-lock encryption, the solution to the time-lock puzzle serves as the symmetric key for encryption. Here's how it works:

1. **User-Generated Puzzle:** Users create a time-lock puzzle.
2. **Sequencer's Role:** The sequencer must solve this puzzle to decrypt the transaction.

![image](https://hackmd.io/_uploads/ByGn-vBrp.png)

An advantage of time-lock encryption is that it eliminates the need for trust in a third party, as decryption relies solely on the sequencer solving the time-lock puzzle. When users create the time-lock puzzle, they set a specific time duration (T) and already know the solution. The sequencer, unaware of this solution, must expend computational resources to solve the puzzle. This inherently introduces a delay, as the sequencer needs to spend time (T) solving the puzzle. During this delay, while the sequencer is working to decrypt the transaction, it gives the user an order commitment, a promise about the transaction's inclusion order in the next block.

(Currently, this process is implemented in our testnet. In our testnet video, available on [Radius's Twitter](https://twitter.com/radius_xyz/status/1724082176818573399), viewers can watch users creating encrypted transactions using the time-lock puzzle.)
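To make the mechanism concrete, here is a minimal sketch of a classic RSA-style time-lock puzzle in the spirit of Rivest, Shamir, and Wagner's construction (not Radius's production scheme, and with toy parameters far too small to be secure). The user, knowing the factors of n, derives the symmetric key with one shortcut exponentiation, while the solver is forced into T sequential squarings:

```python
import hashlib

def user_create_puzzle(p, q, t, a=2):
    """User side: knowing the primes p and q, the puzzle solution
    a^(2^t) mod n is computed with a single shortcut exponentiation."""
    n = p * q
    phi = (p - 1) * (q - 1)
    solution = pow(a, pow(2, t, phi), n)   # fast path via Euler's theorem
    key = hashlib.sha256(solution.to_bytes(32, "big")).digest()
    return (n, t, a), key                  # publish the puzzle, keep the key

def sequencer_solve(puzzle):
    """Sequencer side: without phi(n), the only route is t sequential
    squarings, which is what enforces the time delay T."""
    n, t, a = puzzle
    x = a % n
    for _ in range(t):
        x = (x * x) % n                    # inherently sequential work
    return hashlib.sha256(x.to_bytes(32, "big")).digest()
```

Both sides derive the same symmetric key, but the user pays one modular exponentiation while the sequencer pays t sequential squarings; choosing t sets the delay T.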
### Order Validation

In Radius, the sequencer gives users a signed order commitment, which is a form of pre-confirmation. For this pre-confirmation to be fully effective, there needs to be a mechanism ensuring the sequencer keeps its commitment. This is where order validation comes into play.

The order commitment, once signed by the sequencer, gives users some assurance about the state of their transaction before it achieves finality on Ethereum. To ensure that the sequencer adheres to this commitment, an order validation mechanism allows users to verify whether their transactions have been included in the promised order. After the sequencer submits the block's execution results and proof to Ethereum, users can check whether their order commitment has been kept. If not, users can submit a claim to Ethereum along with their order commitment, and smart contracts can then trustlessly verify the claim by comparing the proof with the order commitment. To minimize data storage costs in smart contracts, only the list of encrypted transaction hashes, not the entire transactions, can be stored. Additionally, a sequencer found to be malicious through a claim will be slashed. These slashing conditions ensure that the sequencer keeps its order commitments rather than behaving maliciously.

![image](https://hackmd.io/_uploads/HyPaZwBSa.png)

(Note: The order validation design described here is still naive. We're currently researching methods for cost optimization, which will be discussed in another article.)

### **Using ZKP to Protect Sequencers**

With the encrypted mempool and the order validation method, Radius can provide users with faster pre-confirmations at the transaction level rather than the block level. This also brings users censorship resistance and MEV resistance, as transactions are encrypted when they receive pre-confirmations.
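The order validation check described above is simple to state. In this minimal sketch (hypothetical names; hashes stand in for the encrypted transactions), each signed commitment is compared against the hash list stored on Ethereum, and any mismatch is grounds for a claim:

```python
import hashlib

def tx_hash(raw_tx: bytes) -> str:
    """Hash of an encrypted transaction, as stored on-chain."""
    return hashlib.sha256(raw_tx).hexdigest()

def find_violations(commitments, block_hashes):
    """commitments: (encrypted_tx_hash, promised_position) pairs signed by
    the sequencer; block_hashes: the ordered hash list stored on Ethereum.
    Returns every commitment the block failed to honor (empty list means
    the sequencer kept its word)."""
    return [
        (h, pos) for h, pos in commitments
        if pos >= len(block_hashes) or block_hashes[pos] != h
    ]
```

In the real protocol this comparison happens trustlessly in a smart contract; slashing is triggered when the returned list is non-empty.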
However, this may introduce a potential attack vector on the sequencer. Consider a scenario where a malicious user sends arbitrary data instead of a properly encrypted transaction. The sequencer, unable to see the encrypted contents, would start solving the time-lock puzzle for decryption. If the solution found doesn't lead to the correct symmetric key, the sequencer ends up wasting computational resources. Even if the encryption is valid, if the integrity of the transaction itself isn't guaranteed, the sequencer's pre-confirmation could result in wasted blockspace. Such attacks with invalid encrypted transactions could waste sequencer resources and destabilize the network.

![image](https://hackmd.io/_uploads/Sk7kfPSH6.png)

To prevent this, sequencers need to verify the integrity of encrypted transactions and of the encryption process, including time-lock puzzle generation, without actually decrypting anything. Imposing fees for invalid transactions is one solution, but Radius has chosen a cryptographic zk proof method, which allows the sequencer to validate a transaction's integrity without revealing its contents.

With Radius, users generate a zk proof during the encryption process to verify three things:

1. The solution to the time-lock puzzle was used as the encryption key.
2. The encryption was correctly executed.
3. The transaction includes a valid signature and nonce, and the sender has enough balance to pay the transaction fee.

By verifying this proof, the sequencer can validate the encrypted transaction without actually decrypting it, effectively preventing the attacks described above. Once the proof is verified, the sequencer can issue pre-confirmations and solve the time-lock puzzle to obtain the decryption key.
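Condition 3 amounts to a plaintext validity predicate that the zk proof attests to. A sketch of that statement, checked here directly in Python (the actual zk circuit is not shown; the boolean field stands in for real signature verification):

```python
from dataclasses import dataclass

@dataclass
class DecodedTx:
    signature_ok: bool   # stand-in for real signature verification
    nonce: int
    fee: int

def tx_statement_holds(tx: DecodedTx, account_nonce: int, balance: int) -> bool:
    """The plaintext statement the user's zk proof attests to: a valid
    signature, the expected nonce, and enough balance to cover the fee.
    The sequencer verifies the proof of this statement without ever
    decrypting the transaction."""
    return tx.signature_ok and tx.nonce == account_nonce and balance >= tx.fee
```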
![image](https://hackmd.io/_uploads/SJEefPBHa.png)

Generating the proof can be computationally intensive, especially in resource-limited environments like web browsers, and can negatively impact user experience. Radius developed its own cryptographic scheme, PVDE (Practical Verifiable Delay Encryption), that keeps the time required to generate a zk proof constant (O(1), less than 1 second) regardless of the size of the time-lock puzzle, thereby minimizing the impact on UX.

(Currently, this process has been implemented in our testnet. In our testnet video on [Radius's Twitter](https://twitter.com/radius_xyz/status/1724082176818573399), there is a demonstration of users creating encrypted transactions right in their browsers, along with the generation of zero-knowledge proofs (zkp) for these transactions.)

## **Key Features of Radius: Shared Sequencing Layer**

### **Liveness Guarantee**

Radius demonstrates how issues that may arise with pre-confirmations can be resolved using an encrypted mempool, an order validation protocol, and zk proofs. To ensure sequencers operate in a stable environment, we're also exploring methods to guarantee sequencer liveness.

One primary method is a leader-based consensus algorithm. In this system, a random set of sequencers forms a sequencer committee responsible for sequencing a particular rollup for a specified epoch. A leader is randomly chosen from this committee (using a Verifiable Random Function, VRF) to issue pre-confirmations for that epoch.

When a sequencer (in the committee, but not the leader) receives an encrypted transaction and its corresponding proof from a user, it first verifies the proof. Once verified, the encrypted transaction is passed on to the leader.
The leader then issues an order commitment for the transaction as a pre-confirmation and sends it back to the sequencer, which forwards it to the user and starts solving the time-lock puzzle for decryption. The leader collects the decrypted transactions, creates a block, and after consensus, the final block is submitted to Ethereum.

![image](https://hackmd.io/_uploads/HJAWzwSST.png)

The consensus in this process could introduce additional overhead, compromising the scalability needs of rollups. To address this, we are exploring block-building methods that do not require consensus, such as a deterministic rule-based method. In the deterministic approach:

1. A block is divided into multiple bundles.
2. Sequencers within the sequencer committee are randomly assigned to include transactions in specific bundles.
3. An aggregator then collects these bundles from the sequencers.
4. Using the hash values of these bundles, the aggregator performs a deterministic shuffle and builds the final block to be submitted to Ethereum.

This minimizes the need for consensus among sequencers while keeping the block easy to verify against the deterministic rule. It can, however, be susceptible to withholding attacks by sequencers against the aggregator. Our ongoing research focuses on effective countermeasures against such attacks to ensure the integrity and reliability of this block-building process.

![image](https://hackmd.io/_uploads/SJRzfvBBp.png)

### **Enhancing Interoperability**

One recurring challenge in rollups has been fragmentation. This can be uniquely addressed through the sequencing layer. When multiple rollups share a single sequencer, the blocks determined by this sequencer execute across these rollups. This allows each rollup to communicate with others sharing the same sequencer, enabling what we call atomic inclusion across multiple rollups.
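The hash-derived shuffle from the deterministic block-building steps above can be sketched as follows. Deriving the seed from the sorted bundle digests is an illustrative choice, not Radius's exact rule; the point is that anyone holding the bundles can recompute and verify the final order:

```python
import hashlib
import random

def deterministic_block(bundles):
    """Aggregator side: derive the shuffle seed from the bundle hashes
    themselves, so the resulting order is fixed by the bundle contents
    and verifiable by any observer."""
    digests = [hashlib.sha256(b).digest() for b in bundles]
    seed = hashlib.sha256(b"".join(sorted(digests))).digest()
    order = list(range(len(bundles)))
    random.Random(seed).shuffle(order)       # seeded, hence reproducible
    return [bundles[i] for i in order]
```

Because the seed depends on every bundle, no single sequencer can predict or bias the final ordering before all bundles are fixed, though withholding a bundle still stalls the process, which is the attack mentioned above.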
Consider a flashloan transaction spanning multiple rollups on Ethereum: it must be included in the blocks of both Rollup A and Rollup B. Such transactions carry an all-or-nothing condition, meaning they must either all execute successfully or all fail. The sequencer controls the inclusion of these transactions in the upcoming blocks of Rollups A and B. Using the previously mentioned order validation mechanism, we can enforce the inclusion of these transactions and guarantee atomic inclusion for the flashloan transaction.

![image](https://hackmd.io/_uploads/rJgEMPHrp.png)

A key challenge in rollup interoperability comes from the misconception that inclusion in a rollup's block guarantees successful execution. In other words, atomic inclusion does not necessarily ensure atomic execution. Transactions included in a block may still be reverted due to state changes during execution. For example, if a sequencer intentionally breaks the pre-confirmation linked to a flashloan transaction, accepting the slashing risk, other sequencers that have already executed the flashloan transaction may incur losses. To maintain atomicity under the all-or-nothing condition, sequencers that have completed their operations must roll back to the state before the flashloan transaction was executed. A shared sequencer can quickly detect this situation before finality, allowing a quicker rollback and minimizing costs.

![image](https://hackmd.io/_uploads/SJMBzwHSa.png)

The sequencing layer stores the latest block state across connected rollups and oversees rollup activities that need cross-rollup communication, such as flashloan transactions. Essentially, the layer functions as a sequencing-specific rollup, ensuring reliable data storage. If it detects the failure of an atomic flashloan transaction, it can easily determine the state to which each connected rollup should roll back. Nevertheless, achieving such interoperability is complex and involves various considerations.
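The all-or-nothing condition with rollback can be sketched as follows. The structure is hypothetical (each rollup leg modeled as a pair of execute/rollback callbacks), not the sequencing layer's actual interface:

```python
def execute_atomically(tx_id, legs):
    """legs: {rollup_name: (execute_fn, rollback_fn)}. Enforces the
    all-or-nothing condition: if any leg fails, every leg that already
    executed is rolled back before finality."""
    done = []
    for name, (execute, rollback) in legs.items():
        if execute(tx_id):
            done.append(name)
        else:
            for prev in done:
                legs[prev][1](tx_id)   # roll back the completed legs
            return False
    return True
```

In the real system the sequencing layer plays the coordinator role here: because it stores the latest state of every connected rollup, it can detect a failed leg before finality and tell each rollup which state to roll back to.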
We remain committed to researching and exploring this area to improve the overall process.

### **Towards a Rollup-Centric Future, the Modular Way**

![image](https://hackmd.io/_uploads/rJgIGDrSa.png)

As numerous rollups emerge in the future, decentralized sequencers become essential for transparency and stable rollup operations. Based on a modular approach to decentralizing sequencers, Radius provides efficient decentralized sequencing with censorship and MEV resistance using an encrypted mempool, while addressing the issues of fragmentation and rollup interoperability.

We continue to research, develop, and collaborate with various teams on an ideal PBS design for rollups. You can find our collaborations on the [Radius Ecosystem](https://www.theradius.xyz/ecosystem) page. Together, we're excited to contribute to the realization of a rollup-centric Ethereum ecosystem.