Vanilla Based Rollup is a conceptual design for a rollup. It suggests "Vanilla Based Sequencing" - a form of decentralised sequencing in which L1 block proposers are eligible to act as the L2 sequencer during their L1 slot. The design is geared towards the practical problems of rollups: low initial sequencer participation, user experience features on par with centralised sequencing designs, and mission-aligned revenue sources for the various actors. Next, the design explores a practical and viable mechanism for participation and orchestration of the duties of L1 proposers acting as L2 sequencers. It suggests converging the various block-building pipelines by extending the existing interfaces and introducing a new piece of software - GMEV-Boost. Last, the design outlines the mechanism for participation by L1 proposers in a vanilla based rollup.
Vanilla Based Rollup is a conceptual design for a rollup. As the name suggests, its roots are in the based rollup concept. Like the original based rollup concept, vanilla based rollups are sequenced by the L1 proposer set, thus retaining its advantages - liveness, decentralisation, sovereignty, L1 alignment, etc.
Vanilla based rollups extend the based rollups through the introduction of "Vanilla Based Sequencing". This is a form of decentralised sequencing aimed at addressing practical problems of original based sequencing and enabling user experience on par with centralised sequencing. The main differentiator between based and vanilla-based sequencing is the mechanism for the selection of an L2 sequencer if the current L1 proposer has not opted to be a sequencer for the rollup.
The Vanilla Based Rollup concept is also geared toward supporting preconfirmations. Current rollups with centralised sequencing offer quick preconfirmation of inclusion and execution. This is now the standard that users have come to expect for their transactions. The design suggests a preconfirmation design and implementation mechanism for vanilla based rollups.
Furthermore, this paper explores how vanilla based rollups can be implemented. It suggests extending existing API interfaces and software components in order to implement the vanilla based rollup concept in practice. The two main development suggestions are GMEV-Boost - a drop-in, backwards-compatible replacement for MEV-Boost - and the Conditions API - a backwards-compatible extension to the Ethereum External Builder API.
Last, the design suggests the mechanism for participation of L1 proposers in a vanilla based rollup. It outlines the mechanisms for registration, activation and deactivation of L2 sequencers and provides the foundation for implementing sequencer selection verification.
Vanilla Based Sequencing is a design for a decentralised sequencing mechanism for rollups, led by the L1 proposers. Its main goals are:
In this design, L2 sequencing is performed by L1 proposers who have opted in to participate as L2 sequencers and to accept punishment for misbehaviour. The default L2 sequencer is the current slot's L1 proposer. If the current L1 proposer has not opted in to be part of the rollup protocol, another opted-in L1 proposer is chosen at random to replace them.
The L2 sequencers have monopoly power over the rollup sequences during their L1 slot and can provide services (i.e. preconfirmations) to users with quality on par with centralised sequencing.
The majority of the current Ethereum rollup landscape consists of rollups with centralised and/or permissioned sequencing. These centralised sequencing layers both introduce trust assumptions and improve user experience.
The goal of a decentralised sequencing protocol would be to address the negatives, while maintaining or improving the positives that centralised sequencing offers.
Current rollups with centralised sequencers offer superior UX compared to both L1 and rollups with decentralised sequencers.
Firstly, due to their constant monopoly, they offer quick preconfirmation of inclusion and execution. This is now the standard that users have come to expect for their transactions.
Secondly, rollups with currently centralised sequencers fulfil the promise of the rollup-centric future, offering orders-of-magnitude cheaper transactions compared to L1.
A goal for decentralised sequencing design would be to enable a rollup protocol to offer equal if not better UX compared to centralised sequencing in terms of preconfirmation time, guarantees and cost of transaction.
Current rollups with centralised sequencers generate revenue from the transaction fees in their rollups.
A goal for a decentralised sequencing design would be to enable all the participating actors, including the rollup protocol itself, to be profitable.
Rollups with centralised sequencing cannot offer synchronous composability with other rollups and are forced to remain fragmented unless they opt into a single centralised shared sequencer. This can lead to multiple rollups having a single trusted centralised sequencer and further exacerbating the downsides of centralised sequencing.
A goal for a decentralised sequencing design would be to enable composability with other rollups without the requirement to trust a single centralised sequencing layer.
Rollups with centralised sequencing rely on a single centralised sequencer to be live in order for the system to be operational.
A goal for decentralised sequencing design would be to derive its liveness guarantees from the Ethereum L1 liveness guarantees.
Rollups with centralised sequencing require users to trust that they will not be censored. This makes the centralised sequencer a single point of failure and subjects the rollup, among other things, to geopolitical risks.
A goal for a decentralised sequencing design would be to remove the need for trust in a single party and to ensure a sizeable and diverse set of sequencers, so that within an acceptable timeframe a censored transaction will reach an honest sequencer willing to sequence it.
The major decision of a decentralised sequencing design is the selection of the next L2 sequencer(s). Two groups of design ideas are currently being explored by the rollup teams - “Free for all” and “Leader election”.
"Free for all" designs see the role of the sequencer as completely open at any time: any user can act as a sequencer and submit a sequencing transaction on L1. The order of inclusion in the L1 block of the (possibly multiple) sequencing transactions is decided by the L1 proposer and the L1 block building pipeline.
The leader election designs see a single sequencer elected to have monopoly over the sequencing rights for some timeframe (usually denominated in L1 blocks). Two subgroups of leader election design ideas are being explored by various rollup researchers - "External Sequencing" and "Based Sequencing".
External sequencing performs the leader election through a consensus algorithm external to L1. The rollup has its own set of participants that have opted in to be part of the consensus algorithm (with its crypto-economic incentives - i.e. staking) for selection of the L2 sequencer.
Based sequencing assigns the leader role (L2 sequencer) to the current L1 proposer. In order to be part of the rollup protocol's selection process, L1 proposers need to opt in to additional slashing conditions for their L1 stake.
This document outlines a design iteration over the based sequencing concept that aims to fulfil all the outlined goals of decentralised sequencing. The main difference between the original based sequencing concept and the vanilla based sequencing concept is the protocol behaviour when the current L1 slot proposer has not opted-in to be an L2 sequencer.
An important design consideration is addressing the "cold start" problem - achieving the desired goals of decentralised sequencing even early on, when the participation of L1 proposers is expected to be low.
The following sections will review the three crucial design suggestions of "vanilla based sequencing" design.
Vanilla based sequencing's L2 sequencer selection starts with the same general idea as original based sequencing - the L2 sequencer is the L1 proposer. For an L1 proposer to become eligible to be a rollup's L2 sequencer, they need to opt into slashing conditions - punishment for misbehaviour as a sequencer in the rollup.
The design recognises that only a subset of the L1 proposers will opt in to be L2 sequencers for the rollup. This nuance requires the mechanism to define the sequencer selection in two possible scenarios:
In the primary selection case, the current L1 proposer has opted in to be an L2 sequencer (depicted in green above). The L1 proposer is automatically assigned as L2 sequencer for the duration of this slot by the rollup protocol. During this period, this proposer is able to provide the best security and timeliness guarantees - that the sequencing transaction(s) will be included in this block and exactly in this block (if no reorgs happen). As L2 sequencer, the L1 proposer is able to earn additional revenue from the transaction fees and extracted value of the rollup transactions.
One drawback of the original based sequencing concept is the appearance of “sequencing gaps” - an L1 slot whose proposer has not opted-in to be L2 sequencer. These gaps lead to an unpredictably long sequencing time - the next opted-in L1 proposer might be beyond the current L1 lookahead. Such gaps result in rollup service degradation.
In the fallback selection case, the current L1 proposer has not opted in to be an L2 sequencer (depicted in blue above). In order to avoid sequencing gaps, an L1 proposer is drawn at random from the other opted-in L1 proposers and assigned to be the L2 sequencer. As L2 sequencer, this L1 proposer is able to earn additional revenue from the transaction fees and extracted value of the rollup transactions.
Unlike in the primary selection case, the fallback L2 sequencer does not have monopoly over the L1 slot itself. This necessitates that the L2 sequencer employ strategies to maximise the chances of their sequencing transaction being included in the assigned slot and exactly in the assigned slot. Such strategies might include, but are not limited to, sending the sequencing transaction to the public L1 transaction mempool with an increased priority fee (tip).
In the diagram above we’ve equated one rollup slot - the time that a single L2 sequencer has monopoly rights of sequencing - to one L1 slot.
Depending on the usage and mechanics of the rollup, some rollups might want to temporarily or permanently lower the sequencing frequency by extending the rollup slot to span multiple L1 slots.
In a rollup slot spanning multiple L1 slots, the sequencer selection criteria stay the same but are applied considering only the last L1 slot of the rollup slot.
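The selection rules above (primary, fallback, and multi-L1-slot rollup slots) can be sketched as follows. The function names, the sorted candidate list, and SHA-256 as the randomness source are illustrative assumptions of this sketch, not part of the design:

```python
import hashlib

def select_sequencer(rollup_slot_l1_slots, proposer_for_slot, opted_in, seed):
    """Select the L2 sequencer for a rollup slot (a list of consecutive L1 slots).

    Selection considers only the last L1 slot of the rollup slot:
    - primary: its L1 proposer, if they have opted in as an L2 sequencer;
    - fallback: a pseudo-random draw from the opted-in L1 proposers.
    """
    last_slot = rollup_slot_l1_slots[-1]
    proposer = proposer_for_slot[last_slot]
    if proposer in opted_in:
        return proposer, "primary"
    # Fallback: a deterministic pseudo-random draw, so every party can
    # recompute and verify the same selection off-chain from the lookahead.
    candidates = sorted(opted_in)
    digest = hashlib.sha256(f"{seed}:{last_slot}".encode()).digest()
    index = int.from_bytes(digest, "big") % len(candidates)
    return candidates[index], "fallback"
```

Because the fallback draw is deterministic in the seed and slot number, the selection can be known in advance from the L1 lookahead and verified by any observer.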
Similarly to any type of sequencing, vanilla based sequencing requires sequencers to be punished for misbehaviour. Such a punishment mechanism can vary and is a design decision of the rollup protocol. Several popular options are:
In the context of vanilla based sequencing, the only important requirement is that such a punishment mechanism exists in order to disincentivise sequencers from misbehaving.
Most of the selection mechanics are performed off-chain and verified on-chain. This verification is part of the on-chain sequencing process and belongs to either the smart contract that handles sequencing for the rollup or the finality mechanism.
The first option is to perform the selection verification in the L1 rollup smart contracts. The contracts need to verify the eligibility of the sequencer to be the current L2 sequencer. Because the L1 execution layer does not have access to the current L1 proposer, this check needs to be performed as a separate transaction in a subsequent L1 block, or an optimistic challenge mechanism needs to be employed.
The second option is for the rollup to include verification of the correct sequencer selection within its finality mechanism - validity or fraud proof.
In both cases, the selection can be known in advance based on the current L1 lookahead, and can be efficiently verified offchain.
The mechanisms of opting-in and selection verification are the subject of an additional research document.
Being a sequencer for one or multiple rollups increases the sophistication requirements on opted-in L1 proposers. An important factor for the quality of service of any vanilla based sequencing rollup is a high L1 proposer participation rate. Increased sophistication requirements and high participation are in a proverbial "tug of war".
To resolve this tension, a transaction list building delegation mechanism is proposed on top of the vanilla based sequencing rollup. The vanilla based sequencing design can be fully functional without this delegation mechanism, but would then require increased sophistication of the L1 proposers.
In a MEV-boost manner, the opted-in L1 proposers are offered the ability to delegate their transaction list building to a secondary block-building pipeline.
The list building pipeline is a subject of an additional research document.
Current centralised sequencer rollups offer a superior UX compared to L1 and to decentralised sequencing designs. Such a UX is now becoming the minimum standard expected by users. One major component of this UX is the ability to quickly pre-confirm the inclusion and/or execution of a transaction to its sender. It is naturally an important requirement for vanilla based sequencing to strive to reach and surpass the expected UX.
Two types of preconfirmations are expected of the system.
First type of preconfirmation is transaction inclusion preconfirmation. This preconfirmation guarantees the inclusion of a transaction in the subsequent rollup slot. These are useful for use cases like simple transfers.
Second type of preconfirmation is the stronger execution state preconfirmation. It allows specifying the desired values of parts of the state of the rollup pre or post execution of the transaction. These are useful for more complex use cases like DEX trades and/or arbitrage.
Both preconfirmation types require the sequencer to commit to the inclusion of a certain user transaction. The main difference comes in the ordering of the transactions. Within the sequence, an inclusion-preconfirmed transaction can be located anywhere. A state-preconfirmed transaction requires a specific ordering of the transaction list up to that transaction.
Inclusion preconfirmations require simple checks of transaction validity - account balance, nonce, etc.
Execution state preconfirmations require more sophistication to commit to and to price. Transactions prior to the target one can change the pre-execution state and make the desired post-state invalid, thus rendering the whole preconfirmation invalid. In practice this means that the sequencer must maintain and commit to an ordered list of transactions at the top of the block.
To increase usability and enable wallets to hide away the complexity of retries in case of rejection or preconfirmation reneging, a `deadline` field is suggested. Such a field enables wallets to retry without the user re-signing the transaction.
Both types of preconfirmations place certain constraints on the transaction list that the sequencer can sequence. Such constraints limit, to varying degrees, the value that the sequencer can extract from the sequence. Therefore both preconfirmation types require an additional payment from the user to the proposer in exchange for the guarantee.
The mechanics of preconfirmations and their pricing are the subject of a separate research document.
Transaction fees and MEV are the two major value sources of a rollup. Both are captured at sequencing time (assuming the sequenced transactions can be finalised).
To ensure that the protocol is generating revenue and is not forced into altruism, a portion of the sequencing revenue is suggested to be captured by the rollup. The specific proportions and mechanics are design decisions of the rollup protocols themselves.
A simple example mechanism could see the rollup embedding a commission fee of Z% over the L2 sequencer's balance increase. Such a commission fee is aligned with the success of the protocol: more transactions indicate a high quality of service and lead to increased revenue for both the rollup and the sequencers.
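The Z% commission split can be illustrated with simple integer arithmetic. The function name and the percentage-based split are assumptions made for the sketch; the actual proportions are a design decision of each rollup:

```python
def split_sequencing_revenue(balance_increase_wei: int, commission_pct: int):
    """Split an L2 sequencer's balance increase (fees plus extracted value)
    between the rollup protocol (its Z% commission) and the sequencer.
    Integer wei arithmetic avoids floating-point rounding."""
    rollup_cut = balance_increase_wei * commission_pct // 100
    sequencer_cut = balance_increase_wei - rollup_cut
    return rollup_cut, sequencer_cut

# e.g. a 1 ETH balance increase with a hypothetical 5% commission
rollup_cut, sequencer_cut = split_sequencing_revenue(10**18, 5)
```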
The rollup design naturally lends itself to become part of a wider universal synchronous composability (USC) mechanism.
Assuming multiple rollups use the vanilla based sequencing design, USC can be achieved in the L1 slots whose proposer has opted in to be an L2 sequencer for two or more rollups. In these slots the L1 proposer becomes a shared L2 sequencer. This shared L2 sequencer can offer additional cross-rollup services like atomic messaging and super transactions.
Regardless of the selection type, the sequencers have several ways to violate the protocol. In order to disincentivise the violations and misbehaviour the rollups are required to embed punishment mechanisms (discussed in the Sequencer Selection section). In case of violations, the sequencers are expected to be punished.
Below is a short list of violations and faults specific to vanilla based sequencing that the actors are expected to be punished for. This list is by no means exhaustive, and rollups should adjust it to their specific design.
This violation is characterised by the L2 sequencer failing to get the L1 sequencing transaction included within their rollup slot. This fault can be objectively proven by the L1 smart contract.
It is important to note that in primary sequencing the cause can only be attributed to the L1 proposer or its delegation pipeline. In the context of fallback sequencing, the liveness fault can also be caused by the L2 sequencer's inability to guarantee inclusion of the sequencing transaction on time, due to their lack of monopoly over the L1 slot.
This difference might change the severity of the punishment of the L2 sequencer. Furthermore, this is a risk that fallback sequencers must account for and mitigate as much as possible - for example through an increased L1 priority fee on the sequencing transaction.
This violation is characterised by the L2 sequencer reneging on their preconfirmation commitment. The specific way it can be proven to the L1 smart contracts is a design decision of the rollup but using a signed commitment is strongly suggested.
The following are two lists to help the reader differentiate what is strictly required for a rollup to be considered as using vanilla based sequencing, and what is a design decision to be made by the rollup. Both lists are likely to change over time and are subject to community consensus.
Vanilla based sequencing increases the security of the rollup protocol through decentralisation of the sequencers group. By involving the L1 proposers as L2 sequencers the design provides the highest possible timeliness guarantees.
A major focus of vanilla based sequencing is the ability to support UX on par with, if not better than, centralised sequencing. This is achieved through preconfirmations and the enablement of composability with other rollups.
No actor in the vanilla based sequencing design is asked to be altruistic. All actors, including the rollup protocol itself, generate revenue for the services they are providing.
Vanilla based sequencing is a neutral design concept that is easily extendable to synchronous composability. Due to the reuse of L1 proposer as L2 sequencer, the L1 proposer can enable composability between rollups.
The liveness guarantees of any rollup come from the liveness of its sequencers as a whole. Due to the reuse of the L1 proposer as L2 sequencer, the vanilla based sequencing concept inherits Ethereum's liveness guarantees.
Rollups with centralised sequencing require users to trust that they will not be censored. This makes the centralised sequencer a single point of failure and subjects the rollup, among other things, to geopolitical risks.
Unlike centralised sequencing, vanilla based sequencing rollup censorship resistance increases with the increase of the set of L1 proposers opting in to be L2 sequencers. Due to their equivalence, the trust assumptions towards the L2 sequencers are similar to the ones placed on L1 proposers themselves. The diversity of the L1 proposers group and the clear economic incentives to opt-in makes no single sequencer a long-term single point of failure and lowers geopolitical and technological risks.
One major component of UX where centralised sequencers excel is the ability to quickly pre-confirm the inclusion and/or execution of a transaction to its sender. It is naturally an important requirement for vanilla based sequencing to strive to reach and surpass the expected UX.
This document outlines the design and mechanics to support preconfirmations in Vanilla Based Rollups. It introduces two new transaction types in order to support the two preconfirmation types:
In order to facilitate predictable fees, both transaction types will have their own predictable pricing - a specific preconfirmation premium percentage on top of the EIP1559 base fee.
In order to achieve UX on par with centralised sequencing, the two transaction types include a `deadline` field to enable wallets to retry preconfirmation requests without involving the user.
In order to incentivise correct behaviour by sequencers, the design outlines a signature-based commitment to preconfirmations - a signature over the transaction hash and the L1 block in which the committed transaction should appear.
The main goals of the preconfirmations are in line with the overall vanilla based sequencing goals. Preconfirmations are an important pillar of the UX of a rollup and are becoming the standard expected by the users. The design aims to keep and improve this standard through:
As suggested in the Vanilla Based Sequencing design, two types of preconfirmations are expected of the system.
The first type of preconfirmation is transaction inclusion preconfirmation. This preconfirmation guarantees the inclusion of a transaction in the subsequent rollup slot. These are useful for scenarios like simple transfers.
The second type of preconfirmation is the stronger execution state preconfirmation. It allows specifying the desired values of parts of the state of the rollup after execution of the transaction. These are useful for more complex use cases like DEX trades and/or arbitrage.
In order to support the two types of preconfirmations the design suggests introducing two new EIP2718 transaction types specific to vanilla based sequencing.
The two transaction types serve as preconfirmation requests and require the sequencer to respond with a preconfirmation commitment or rejection. These two transactions will enable the sequencers and wallets alike to differentiate the request and its expected properties. Having two separate transactions offers several distinct advantages.
Firstly, two transaction types will enable the two types of preconfirmations to have separate tailor-made fields for their use case. Secondly, two transaction types will enable the transaction pricing mechanism to work separately and find the correct pricing for each. Lastly, having the transaction types separate allows the rollups to choose to support one or both of them.
Both transaction types must include a `deadline` field. The deadline indicates the latest possible L1 slot in which this preconfirmation can be honoured.
The goal of the `deadline` field is to further optimise the UX and enable wallets and/or nodes to hide complexity away from the user. There are two scenarios where the `deadline` field can optimise UX.
Firstly, a preconfirmation request arriving late in the sequencer's rollup slot might be rejected by the sequencer due to insufficient time to simulate and commit to the transaction. An inferior UX (especially considering hardware wallets) would be to ask the user to sign the transaction again. Having a `deadline` field allows the wallet to reuse the same signed transaction and send it to the next sequencer for preconfirmation.
Secondly, a reneged preconfirmation has an inferior UX. Having a `deadline` field enables a wallet that has detected preconfirmation reneging to re-submit the preconfirmation transaction to the next sequencer.
In both cases, the `deadline` field enables the wallet software to hide away the complexity of resubmission, without requiring further action by the user. On the sequencer side, the `deadline` introduces a simple check of whether the deadline has passed before committing to the request.
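That sequencer-side check is indeed minimal; a sketch with assumed names:

```python
def may_commit(deadline_l1_block: int, current_l1_block: int) -> bool:
    """A sequencer should only commit to a preconfirmation request whose
    deadline (the latest L1 block it may be honoured in) has not passed."""
    return current_l1_block <= deadline_l1_block
```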
Both transaction types will have separate transaction pricing. The mechanism of pricing will be similar to EIP1559. This type of pricing enables easy price discovery by wallets and keeps the UX on par with centralised sequencing.
Preconfirmed transactions are fundamentally a higher-priced service compared to type 2 or legacy transactions. Firstly, they require additional processing to handle. Secondly, they carry an opportunity cost for the sequencer due to block building restrictions.
In order to account for this, each new transaction type will be priced as a percentage `premium` on top of the type 2 base fee. This way the base fees for all transactions will be constantly discovered and dynamically adjusted by the total usage of the protocol in order to provide stable, predictable transaction fees.
The `premium` parameter for each transaction type must be embedded in the protocol and be a percentage increase on top of the base fee.
Let `inclusion_preconfirmation_fee_premium` and `execution_preconfirmation_fee_premium` be the targeted preconfirmation transaction premiums. The respective `base_fee_per_gas` that a transaction must pay for each of the two transaction types can be calculated as `*_preconfirmation_base_fee_per_gas = base_fee_per_gas * (1 + *_preconfirmation_fee_premium)`.
Similarly to EIP1559, if the gas used by the block exceeds the block target the base fee for all types of transaction goes up, and vice versa.
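The premium-based pricing can be sketched as follows. The concrete premium values below are hypothetical, since the protocol-embedded premiums are a design decision of each rollup:

```python
# Hypothetical premiums - the actual protocol-embedded values are a rollup design choice.
INCLUSION_PRECONFIRMATION_FEE_PREMIUM_PCT = 10  # 10%
EXECUTION_PRECONFIRMATION_FEE_PREMIUM_PCT = 25  # 25%

def preconfirmation_base_fee(base_fee_per_gas: int, premium_pct: int) -> int:
    """base_fee_per_gas * (1 + premium), using integer percent arithmetic."""
    return base_fee_per_gas * (100 + premium_pct) // 100

# With a 20 gwei base fee:
inclusion_fee = preconfirmation_base_fee(20_000_000_000, INCLUSION_PRECONFIRMATION_FEE_PREMIUM_PCT)
execution_fee = preconfirmation_base_fee(20_000_000_000, EXECUTION_PRECONFIRMATION_FEE_PREMIUM_PCT)
```

As the underlying base fee moves with block fullness, both preconfirmation fees move with it while keeping their fixed relative premium.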
The inclusion preconfirmation transaction follows the same structure as the Ethereum EIP1559 type 2 transaction with the following additional field:

- `deadline` - `uint256` indicating the latest L1 block in which this preconfirmation request is valid to be committed and included.

As the only field differentiating the inclusion preconfirmation from a type 2 transaction is the `deadline` field, the main changes needed to support inclusion preconfirmation transactions are within the dapps and wallets.

On the dapp side, the new transaction type needs to be supported, suggesting a deadline field. On the wallet side, the process of showing and signing the transaction must include the new field.
The execution preconfirmation transaction follows the same structure as ethereum EIP1559 type 2 transaction.
The following fields are added on top of the type 2 fields:
- `deadline` - `uint256` indicating the latest L1 block in which this preconfirmation request is valid to be committed and included.
- `storage_list` - list of objects mapping an `address` to a list of tuples `(bytes32, bytes32)` of a storage slot and its expected value.
- `code_list` - list of tuples `(address, bytes32)` of an account address and its expected code hash.
- `balance_list` - list of tuples `(address, uint)` of an account address and its expected balance.

If left unbounded, all three constraining lists can greatly expand the transaction size. This is undesirable, as it can lead to increased bandwidth and processing requirements for the nodes.
In order to combat this, the number of items in each list must be limited by the protocol. The specific limit is subject to a decision by the rollup protocols themselves.
`storage_list` example:
[
[
"0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae", // Account address
[
[// Tuple (slot, value)
"0x0000000000000000000000000000000000000000000000000000000000000003",
"0x000000000000000000000000000000000000000000000000000000000000000a"
],
[
"0x00000000000000000000000000000000000000000000000000000000000000e7",
"0x000000000000000000000000000000000000000000000000000000000000000b"
]
]
],
[
"0xbb9bc244d798123fde783fcc1c72d3bb8c189413",
[
[
"0x0000000000000000000000000000000000000000000000000000000000000002",
"0x000000000000000000000000000000000000000000000000000000000000000c"
],
[
"0x00000000000000000000000000000000000000000000000000000000000001e9",
"0x000000000000000000000000000000000000000000000000000000000000000d"
]
]
]
]
`code_list` example:
[
[
"0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae", // Account Address
"0x0000000000000000000000000000000000000000000000000000000000000be9" //Code Hash
],
[
"0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae",
"0x0000000000000000000000000000000000000000000000000000000000000cd8"
]
]
`balance_list` example:
[
[
"0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae",
"0x0000000000000000000000000000000000000000000000000000000000000abc"
],
[
"0xde0b295669a9fd93d5f28d9ec85e40f4cb697bae",
"0x0000000000000000000000000000000000000000000000000000000000000def"
]
]
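Given the three list layouts above, checking an execution preconfirmation's constraints against a state snapshot can be sketched as follows. The dict-based state shape and function name are assumptions made for illustration:

```python
def constraints_hold(state, storage_list, code_list, balance_list) -> bool:
    """Check the storage_list / code_list / balance_list constraints of an
    execution preconfirmation against a state snapshot.

    `state` maps address -> {"storage": {slot: value},
                             "codehash": ..., "balance": ...}.
    """
    for address, slots in storage_list:
        account = state.get(address, {})
        for slot, expected_value in slots:
            if account.get("storage", {}).get(slot) != expected_value:
                return False
    for address, expected_codehash in code_list:
        if state.get(address, {}).get("codehash") != expected_codehash:
            return False
    for address, expected_balance in balance_list:
        if state.get(address, {}).get("balance") != expected_balance:
            return False
    return True
```

A sequencer would run such a check against the simulated post-state before committing, and a finality mechanism could run the same check to prove a reneged commitment.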
The process of composing the execution preconfirmation transaction is ultimately the responsibility of the user-facing layers - the dapp and wallet developers. Two user personas can be defined as the archetypical users of execution preconfirmations - regular dapp users (unsophisticated users wanting to constrain the end result - e.g. the outcome of a trade) and sophisticated users (e.g. arbitrage bots).
For regular dapp users, the intent of the user is only well understood within the context of the dapp. Within that context, it is only possible to correctly constrain the desired execution outcome via the dapp application itself.
For sophisticated users, the intent is complex, and it is acceptable to assume that they can take care of correctly constraining their desired execution outcome.
The preconfirmation transactions act as a request for preconfirmation from the user to the L2 sequencer. This request needs to be responded to by committing or rejecting the request.
Preconfirmations continue to reuse the standard way of submitting a transaction to an RPC endpoint - the `eth_sendRawTransaction` method. As part of the specification of this RPC call, it can return either a transaction hash or the zero hash.
A zero hash response must be treated as an indication of preconfirmation request rejection. A non-zero hash response must be treated as an indication of preconfirmation request acceptance.
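The acceptance/rejection convention amounts to a single check (names assumed for the sketch):

```python
ZERO_HASH = "0x" + "00" * 32

def preconfirmation_accepted(send_raw_transaction_response: str) -> bool:
    """Interpret an eth_sendRawTransaction response for a preconfirmation
    request: the zero hash signals rejection, any other hash acceptance."""
    return send_raw_transaction_response != ZERO_HASH
```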
A user receiving a non-zero response should then obtain the preconfirmation commitment. This can be done through a newly introduced RPC method - `eth_getCommitment`. The response of this method must be `signed_commitment` - bytes representing a signed commitment.
signed_commitment = sign(tx_hash, l1_blocknumber)
The constituents of the commitment are:
- `tx_hash` - the transaction hash of the preconfirmation transaction.
- `l1_blocknumber` - the L1 block number that should contain the sequencing transaction that includes the preconfirmed transaction.

One way a sequencer can renege on their promise is to omit the transaction they have promised to include. This is a punishable offence and requires a renege detection mechanism.
While the specific mechanism of renege detection and proving (e.g. ZKP, optimistic challenge, attestations) is ultimately a design decision of the rollup, the `signed_commitment` provides a strong foundation for it.
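A minimal sketch of producing and checking such a commitment follows. A real implementation would sign with the sequencer's secp256k1 key over a keccak256 digest; here an HMAC over the same message stands in for the signature scheme, purely for illustration:

```python
import hashlib
import hmac

def signed_commitment(sequencer_key: bytes, tx_hash: str, l1_blocknumber: int) -> bytes:
    """signed_commitment = sign(tx_hash, l1_blocknumber), with HMAC-SHA256
    standing in for the sequencer's real signature scheme."""
    message = f"{tx_hash}:{l1_blocknumber}".encode()
    return hmac.new(sequencer_key, message, hashlib.sha256).digest()

def commitment_valid(sequencer_key: bytes, commitment: bytes,
                     tx_hash: str, l1_blocknumber: int) -> bool:
    """A renege proof presents a valid commitment for (tx_hash, l1_blocknumber)
    whose L1 block's sequencing transaction omitted tx_hash."""
    expected = signed_commitment(sequencer_key, tx_hash, l1_blocknumber)
    return hmac.compare_digest(commitment, expected)
```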
A rational sequencer has a natural incentive to withhold commitments until the end of their monopoly slot in order to maximise their profit. This would effectively render preconfirmations slow and hurt the UX of the rollup.
While this kind of behaviour is hard to detect and punish, this document suggests an optional "robustness incentive" approach. Such an approach could see additional revenue (e.g. a share of the rollup revenue) offered to sequencers that consistently provide high quality service.
In order for a sequencer to commit to an execution preconfirmation, they need to ensure that the state prior to the execution of the transaction allows for the specified resulting conditions to occur. During the rollup slot, the pre-state conditions are subject to change by other transactions, and more importantly by other execution preconfirmations. This means that in order to avoid committing to conflicting preconfirmations, the sequencer needs to process the execution preconfirmation transactions synchronously rather than asynchronously.
While this synchronous processing can introduce a bottleneck, recent developments in execution-client EVM implementations offer processing speeds high enough that synchronous processing of these transactions won't affect the overall performance of the sequencer and the system.
Processing execution preconfirmations will not introduce a bottleneck at least initially. Sequencers are technically able to process execution preconfirmations at a rate of ~200MGas (>1k TPS) (excluding L1 sequencing and proving). That's ~40x more than the current demand based on L2Beat data.
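The synchronous check-and-commit behaviour described above can be sketched with a toy model. Balances stand in for full EVM state here, and all names are illustrative; the point is that each new execution preconfirmation is checked against the effects of every previously committed one before the sequencer commits.

```python
class PreconfSequencer:
    """Toy model: execution preconfirmations are applied one at a time against
    a speculative state, so each new request is checked against the effects of
    all previously committed preconfirmations. Real pre-state conditions are
    full EVM state, not balances."""

    def __init__(self, balances: dict[str, int]):
        self.speculative = dict(balances)  # state after all committed preconfs
        self.committed: list[tuple[str, str, int]] = []

    def request_preconf(self, sender: str, to: str, value: int) -> bool:
        # Synchronous check-and-commit: no other request is processed in between.
        if self.speculative.get(sender, 0) < value:
            return False  # would conflict with an earlier commitment -> reject
        self.speculative[sender] -= value
        self.speculative[to] = self.speculative.get(to, 0) + value
        self.committed.append((sender, to, value))
        return True
```

Processing the requests asynchronously instead would allow two conflicting preconfirmations to both pass the pre-state check, which is exactly what the synchronous design rules out.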
eth_getCommitment RPC call

The goal of this document is to outline how the external block building pipeline for L1 and the various L2 block building pipelines can converge within the context of Vanilla Based Sequencing, and to showcase the additions and modifications required to existing software in order for this pipeline convergence to be implemented.
The goal of based sequencing is to enable the L1 proposers to be L2 sequencers. An implicit requirement for based sequencing rollup is to have a mechanism for the submission of L2 sequencing transactions into the existing L1 pipeline. The actor that receives this submission needs to orchestrate the inclusion of these transactions into the L1 block. In this section, we refer to this actor as the “orchestrator”.
There are two well-suited actors to be chosen as the orchestrator. On one side there are the actors that build the blocks - the L1 builders, and on the other are the L1 proposers.
The L1 builders have the natural advantage of already being sophisticated and the L2 sequencing transactions can become a part of their block building pipeline with little-to-no modification - as if the L2 sequencer is another searcher.
Choosing the L1 builders as orchestrator, however, introduces and/or exacerbates several risk factors.
Firstly, in practice, the group of L1 builders is small in size and requires huge technological and business sophistication to enter. This makes the creation and operation of a new based rollup dependent on a small group of players. Unfortunately, this risk has previously materialised within the ecosystems of the major blockchain network development stacks - a new network needs to spend considerable resources to persuade major validators to join in and support their upcoming network. A similar scenario is a distinct possibility with L1 builders as orchestrator and can hurt the rollup's sovereignty - a desired characteristic for any rollup.
Secondly, the geopolitical risks of the small group of builders are transferred to the rollup. Any materialised geopolitical risk (e.g. state-induced censorship) is transferred to the rollup itself. This could be particularly problematic for privacy-oriented rollup sequencers, who would be subject to liveness fault risks through no fault of their own. An example of this would be the state in which the major builders operate sanctioning the rollup transactions.
The second approach that this document explores is assigning the L1 proposer as the orchestrator. This approach requires more modifications of the existing software and interactions but is ultimately based on the wide, diverse, decentralised group of Ethereum L1 proposers.
The L1 proposer as the orchestrator enables permissionless creation and operation of a vanilla based rollup. The rollup team only needs the rollup to appeal to a diverse set of rational actors - the L1 proposers. Furthermore, they can operate L1 validators themselves and be their own bootstrap sequencers - an approach practically impossible with builders. This protects the rollup's sovereignty.
Furthermore, this approach prevents additional expansion of control and consolidation of power to the L1 builders. An approach where the L1 proposer requires the inclusion of the L2 sequencing transactions, and the L1 builders are required to oblige or lose their opportunity to build the current block, gives the control of sequencing to the proposers. Such a system utilises the security properties of PBS - as long as there is one block builder willing to meet the requirements, the system will operate as expected, and even if no builder wants to build this block, the standard fallback to local block building can be used.
The architecture suggests a new software GMEV-Boost - a drop-in replacement software to MEV-Boost. Its main goal is to enable the convergence of L2 pipelines into the L1 pipeline and enable L1 proposers to specify validity conditions towards the external building pipeline.
The architecture introduces the concept of primary pipeline and secondary pipelines.
There is a single primary pipeline - the L1 pipeline. The goal of the primary pipeline is to build and broadcast L1 blocks. It is currently in production and is implemented via MEV-Boost. GMEV-Boost will be fully compatible with MEV-Boost, and stakers wanting to only service the L1 pipeline can use GMEV-Boost as a drop-in replacement for MEV-Boost.
The system design supports multiple secondary pipelines. The goal of the secondary pipelines is to produce L2 sequencing transactions. The produced sequencing transactions are then communicated as conditions to the GMEV-Boost software of the proposer. GMEV-Boost then handles their combination and propagation to the primary pipeline for inclusion in the L1 block.
In order to facilitate these communications two new APIs are introduced - Conditions API and Pipelines API.
The Conditions API is an extension to the Ethereum Builder API. Its goal is to enable the proposer, through GMEV-Boost, to communicate to the external builders the validity conditions they have for the externally built blocks. In practice, this means that the Conditions API must be implemented within the relays in order to enable builders and proposers to communicate these requirements with each other.
The communication between the proposer and the relay needs to be an authenticated one. This is in order for the relay to be able to authenticate validity conditions submissions coming from the proposer and mitigate impersonation attacks.
This authentication is in the form of cryptographic signatures by the proposer, which necessitates the GMEV-Boost software having access to the validator keys of the proposer. As GMEV-Boost is run and controlled by the stakers themselves, access to the keys can be provided via the same mechanism the validator software uses - i.e. a keystore file or Web3Signer - and no third party gets access to the validator keys.
The Pipelines API serves the purpose of enabling communication between the L2 rollup nodes and the GMEV-Boost. It enables the secondary pipelines to both discover the connected (to this GMEV-Boost instance) validators and communicate the individual conditions each of the pipelines has.
In order for the GMEV-Boost software to trust requests from the secondary pipeline, the communication between the two components needs to be authenticated. In a similar manner as the authentication between Consensus and Execution Layer clients, the rollup nodes need to authenticate with the GMEV-Boost via JWT tokens.
Once individual conditions are submitted, it is the responsibility of the GMEV-Boost software to combine and aggregate the secondary pipelines' individual conditions into combined conditions for the primary pipeline.
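The combination step could look roughly like the following sketch. The ordering policy used here - pipeline submission order with duplicates dropped - is an assumption for illustration, not part of the specification.

```python
from dataclasses import dataclass, field

@dataclass
class Conditions:
    top: list[str] = field(default_factory=list)   # txs requiring top-of-block
    rest: list[str] = field(default_factory=list)  # txs that may go anywhere

def combine(individual: list[Conditions]) -> Conditions:
    """Aggregate each secondary pipeline's individual conditions into the
    combined conditions submitted to the primary pipeline."""
    combined = Conditions()
    seen: set[str] = set()
    for cond in individual:
        for tx in cond.top:
            if tx not in seen:
                seen.add(tx)
                combined.top.append(tx)
        for tx in cond.rest:
            if tx not in seen:
                seen.add(tx)
                combined.rest.append(tx)
    return combined
```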
The goal of the primary pipeline is to produce an L1 block. It is a pipeline already in production through the MEV-Boost software and the external block building APIs.
The existing pipeline is extended to allow the specification of validity conditions from proposers to block builders - through the relays. Through this API the proposer can express their requirement for inclusion of the L2 sequencing transactions within the L1 block in order to ensure their duties are fulfilled.
Specification of the Conditions API can be found in a later section.
- The proposer, through GMEV-Boost, submits their validity conditions to the relays via the new /eth/v1/builder/conditions/{slot}/{parent_hash}/{pubkey} endpoint. The relays authenticate the new validity conditions, record them and discard all existing bids.
- The builders poll for the current conditions via the new /relay/v1/builder/conditions endpoint. Whenever there is a change in the conditions_hash of the slot that the builder is building for, the builders need to construct a new bid that complies with the indicated conditions.

The rest of the external block-building pipeline interactions remain the same.
An important consideration for proposers is the timing of the submission of validity conditions. The proposer needs to account for the time needed by the block building pipeline to produce a block after the validity conditions have changed. In practice, this means that the proposer must place a validity condition submission deadline or risk missing a slot.
The goal of a secondary pipeline is to produce and submit the L2 sequencing transactions to the proposer for inclusion within an L1 block. Each pipeline is represented by the rollup node and is authenticated via JWT authentication between the rollup node and the GMEV-Boost software as part of the Pipelines API.
Specification of the Pipelines API can be found in a later section.
- On startup, the rollup node queries the /gmev/v1/validators endpoint of the Pipelines API. This enables the rollup nodes to know when they are selected as a sequencer (primary or fallback selection).
- When selected, the rollup node submits the produced sequencing transactions as individual conditions via the /gmev/v1/conditions endpoint of the Pipelines API.

The subsequent lifecycle of the sequencing transaction is in the scope of the GMEV-Boost software.
The goal of the GMEV-Boost software is to be an orchestrator and convergence point between the various secondary pipelines and the primary one. There are several important responsibilities of the GMEV-Boost software.
Firstly, it implements the Pipelines API. On one hand, this provides a way for the secondary pipelines to discover the validators registered with this staker. On the other, it is the method for secondary pipelines to submit their individual conditions.
Secondly, it is responsible for accepting or rejecting individual conditions based on the current validator status. For example, if there is no validator chosen to be a proposer in the current (or subsequent) slots, the secondary pipelines are not supposed to be submitting individual conditions.
Thirdly, it has the duty to facilitate the conditions combination, signing and submission processes. It aggregates the various individual conditions and prepares the data for signing and submission, facilitates the production of the authentication signature and lastly calls the relays with the newly combined validity conditions.
Lastly, it is GMEV-Boost's responsibility to cover synchronisation edge cases. Such a case might be a late submission of validity conditions, where GMEV-Boost might need to fall back to local block building as no bids can arrive on time.
The GMEV-Boost component is an ideal point for further extension of the based rollups ecosystem with desired features like Universal Synchronous Composability.
The Conditions API introduces one new endpoint in the Builder API implemented by the relays and one new endpoint in the Relay API.
class ValidatorConditionsV1(Container):
top: List[Transaction, MAX_TOP_TRANSACTIONS],
rest: List[Transaction, MAX_REST_TRANSACTIONS]
Note: L1 transactions that are sequencing L2 transactions do not require top-of-the-block positioning; however, the equivalent pipeline and APIs can be re-used for external L2 block building pipelines. This can enable the sequencer to specify execution preconfirmations as top-of-the-block conditions for the L2 block builders.
class SignedValidatorConditionsV1(Container):
message: ValidatorConditionsV1,
conditions_hash: Hash32,
signature: BLSSignature
Endpoint: POST /eth/v1/builder/conditions/{slot}/{parent_hash}/{pubkey}
Description: Sets the validity conditions for this slot by its proposer. Requires signed message.
Request: SignedValidatorConditionsV1
Notes:
- The request can be JSON or SSZ encoded. If JSON, the content type should be application/json. If SSZ, the content type should be application/octet-stream.
- The request body can be compressed with gzip. Compression is optional.
- conditions_hash is a keccak hash over the SSZ encoded message. It is used for quick identification by the caller if the validity conditions have changed.
- The signature is performed over the conditions_hash.

Example Request:
"message": {
"top": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8dddddd"
],
"rest": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8eeeee",
]
},
"conditions_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
Endpoint: GET /relay/v1/builder/conditions
Description: Get a list of indicated conditions for validators scheduled to propose in the current and next epoch. Builders are expected to be polling these regularly in order to satisfy requirements.
Notes:
Example Response:
[
{
"slot": "1",
"validator_index": "1",
"entry": {
"message": {
"top": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8dddddd"
],
"rest": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8eeeee",
]
},
"conditions_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
"signature": "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"
}
}
]
GMEV-Boost offers two new endpoints for use by the secondary pipelines. Both endpoints require authentication via JWT token.
class ValidatorDataV1(Container):
validator_index: uint256,
pubkey: string
Endpoint: GET /gmev/v1/validators
Description: Get the validators registered with this GMEV-Boost instance
Response: List[ValidatorDataV1]
Notes:
Example Response:
[
{
"validator_index": "42",
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
}
]
Endpoint: POST /gmev/v1/conditions
Description: Submits conditions for the current slot
Request: ValidatorConditionsV1
Notes:
- The request can be JSON or SSZ encoded. If JSON, the content type should be application/json. If SSZ, the content type should be application/octet-stream.
- The request body can be compressed with gzip. Compression is optional.

Example Request:
"message": {
"top": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8dddddd"
],
"rest": [
"0x02f878831469668303f51d843b9ac9f9843b9aca0082520894c93269b73096998db66be0441e836d873535cb9c8894a19041886f000080c001a031cc29234036afbf9a1fb9476b463367cb1f957ac0b919b69bbc798436e604aaa018c4e9c3914eb27aadd0b91e10b18655739fcf8c1fc398763a9f1beecb8eeeee",
]
}
The Builder API will have minimal modifications to its current behaviour. The only optional change would be in the bid submission endpoint /relay/v1/builder/blocks
where an optional field conditions_hash
can be added to the request.
This field will enable the builders to specify which conditions_hash their bid was built against and will cover race-condition edge cases where conditions have changed in flight.
"message": {
...
"value": "1",
// Optional
"conditions_hash": "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"
},
...
}
The Relay API will have minimal modifications to its current behaviour. The only optional change would be in the validator registration endpoint /eth/v1/builder/validators
where an optional field conditions_version
can be added to the request.
An omission or value of 0
indicates no support for conditions submission, whereas a non-zero value indicates the version of the Conditions API.
Example Request:
{
"message": {
...,
// Optional
"conditions_version": "1",
"pubkey": "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
},
"signature": "..."
}
The main modification for the rollup nodes is the requirement to use the Pipelines API.
Endpoint: /gmev/v1/validators
Queried on startup in order to acquire the list of registered validators within the GMEV-Boost instance. This is required in order for the Rollup node to determine when they are selected to be a sequencer and start the list building process.
Endpoint: /gmev/v1/conditions
Used for the submission of sequencing transactions. When one of the registered validators within GMEV-Boost is selected to be the sequencer via the primary selection mechanism, the rollup node, in addition to publicly broadcasting the sequencing transactions, should also submit them to GMEV-Boost via the Submit Conditions endpoint.
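The duty check on the rollup-node side can be sketched as follows. The validator list stands in for the response of /gmev/v1/validators; how the node obtains the proposer lookahead is out of scope here.

```python
def sequencing_duty(registered: list[dict], proposer_index: int) -> bool:
    """Sketch of the rollup-node check: after fetching /gmev/v1/validators on
    startup (stubbed here as `registered`), the node submits conditions for a
    slot only when that slot's L1 proposer is one of the registered
    validators."""
    return any(int(v["validator_index"]) == proposer_index for v in registered)
```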
In order for an L1 proposer to become eligible to be selected as a sequencer of the rollup, they must register themselves as an L2 sequencer. The registration process includes:
- registering the sequencer in the sequencer registry contract
- activating the sequencer once the rollup-specific activation rules (e.g. stake) are met

Both these actions are performed onchain in L1 through smart contracts.
The sequencer registry contract is the contract keeping track of the currently registered sequencers, their metadata and their status.
interface ISequencerRegistry {
struct ValidatorProof {
uint64 currentEpoch;
uint64 activationEpoch;
uint64 exitEpoch;
uint256 validatorIndex;
bool slashed;
uint256 proofSlot;
bytes sszProof;
}
struct Sequencer {
bytes pubkey;
bytes metadata;
address signer;
uint256 activationBlock;
uint256 deactivationBlock;
}
/**
* Registers the sequencer without activating them.
*
* Authorised operation. Requires a signature over a digest by the validator via their BLS pubkey.
* Requires EIP-2537 in order to verify the signature.
*
* Rollup contracts will use the signer address when enforcing the primary or secondary selection.
* The signature must authenticate the validator and must be over an authorisation hash.
* The authorisation hash must be verifiable by the contract and must include a nonce in order to guard against replay attacks.
* The nonce derivation is a decision of the implementer but can be as simple as an incremental counter.
* The implementation MUST check if the authHash is a keccak256(protocol_version,contract_address,chain_id,nonce).
*
* @param signer - the secp256k1 wallet that will be representing the sequencer via its signatures
* @param metadata - metadata of the sequencer - including but not limited to version and endpoint URL
* @param authHash - the authorisation hash - keccak256(protocol_version,contract_address,chain_id,nonce). The authorisation signature was created by signing over these bytes.
* @param signature - the signature over the authHash performed by the validator key
* @param validatorProof - all the data needed to validate the existence of the validator in the state tree of the beacon chain
*/
function register(
address signer,
bytes calldata metadata,
bytes32 authHash,
bytes calldata signature,
ValidatorProof calldata validatorProof
) external;
/**
* Changes the sequencer signer and/or metadata.
*
* Authorised operation. Similar requirements apply as in `register`
*
* @param signer - the new wallet that will be representing the sequencer via its signatures
* @param metadata - the new metadata of the sequencer - including but not limited to version and endpoint URL
* @param authHash - the authorisation hash - keccak256(protocol_version,contract_address,chain_id,nonce). The authorisation signature was created by signing over these bytes.
* @param signature - the signature over the authHash performed by the validator key
*/
function changeRegistration(address signer, bytes calldata metadata, bytes32 authHash, bytes calldata signature)
external;
/**
* Activates the sequencer finalising the registration process
* Implementers must make sure that the sequencer meets the activation (i.e. stake) requirements before changing their status.
*
* @param pubkey - the validator's BLS12-381 public key - 48 bytes
* @param validatorProof - all the data needed to validate the existence of the validator in the state tree of the beacon chain
*/
function activate(bytes calldata pubkey, ValidatorProof calldata validatorProof) external;
/**
* Deactivates the sequencer.
*
* Authorised operation. Similar requirements apply as in `register`.
* Implementers of the staking process must make sure that the sequencer is no longer active before withdrawal disbursal
*
* @param authHash - the authorisation hash - keccak256(protocol_version,contract_address,chain_id,nonce). The authorisation signature was created by signing over these bytes.
* @param signature - the signature over the authHash performed by the validator key
*/
function deactivate(bytes32 authHash, bytes calldata signature) external;
/**
* Forcefully deactivates a sequencer.
*
* The caller must provide proof that the validator is no longer active or has been slashed.
*
* @param pubkey - the validator's BLS12-381 public key - 48 bytes
* @param validatorProof - all the data needed to validate the existence and state of the validator in the state tree of the beacon chain
*/
function forceDeactivate(bytes calldata pubkey, ValidatorProof calldata validatorProof) external;
/**
* Used to get the eligibility status of the sequencer identified by this pubkey
* @param pubkey - the validator's BLS12-381 public key - 48 bytes
*/
function isEligible(bytes calldata pubkey) external view returns (bool);
/**
* Returns the saved data for the sequencer identified by this pubkey
* @param pubkey - the validator's BLS12-381 public key - 48 bytes
*/
function statusOf(bytes calldata pubkey) external view returns (Sequencer memory metadata);
/**
* Used to get the activation status of the sequencer with this signer address
* @param signer - the associated signer address of a sequencer
*/
function isEligibleSigner(address signer) external view returns (bool);
/**
* Returns the data for a sequencer by its index
*/
function sequencerByIndex(uint256 index)
external
view
returns (address signer, bytes memory metadata, bytes memory pubkey);
/**
* Number of Blocks after activation that the sequencer becomes eligible for sequencing
*/
function activationTimeout() external view returns (uint8);
/**
* Number of Blocks after deactivation that the sequencer becomes ineligible for sequencing
*/
function deactivationPeriod() external view returns (uint8);
/**
* Returns the total count of eligible sequencers at this block number
*/
function eligibleCountAt(uint256 blockNum) external view returns (uint256);
/**
* Returns the protocol version used for authorising the digests.
*/
function protocolVersion() external view returns (uint8);
}
Some of the functions in the registry must only be triggered by the L1 validators themselves. A signature - signature - produced through their BLS keypair is used for authentication of the validator. The signature is over 32 bytes - authHash.
The implementations can choose what authHash is. A suggestion is keccak256(protocol_version,contract_address,chain_id,nonce,function_identifier,params_hash) where:
- protocol_version - version of the authorisation protocol
- contract_address - the address of the contract intended to be authorising this message - the registry contract
- chain_id - the EIP-155 chain id of the intended network - e.g. mainnet
- nonce - a number used once as an anti-replay mechanism
- function_identifier - the function identifier of the intended function
- params_hash - keccak256 hash of the function parameters - e.g. signer and metadata
The implementations must verify that the signature over authHash was produced by the 48-byte BLS12-381 public key - pubkey - of the validator. Note that on-chain BLS signature verification is not possible until EIP-2537 is included (currently scheduled for Pectra).
Next, the contract must check the existence of the validator in the current validator set. The implementations can achieve that via a two-step process:
- Obtaining the beaconRoot through the contract at BEACON_ROOTS_ADDRESS, introduced with EIP-4788 in the Dencun upgrade, for the slot specified through the proofSlot parameter.
- Verifying that the pubkey belongs to an active validator through SSZ Merkle inclusion multiproofs with the beaconRoot and the provided validatorProof, and performing the checks of the is_slashable_validator function specified in the consensus spec.

An L1 proposer can start the registration process by triggering the register method. This method requires authorisation and should comply with the authorisation protocol outlined above.
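The validator-existence verification can be illustrated with the single-leaf Merkle branch check from the consensus spec, which SSZ inclusion multiproofs generalise to several leaves. This is a sketch only; generalised indices and the validator container layout are omitted.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def is_valid_merkle_branch(leaf: bytes, branch: list[bytes], depth: int,
                           index: int, root: bytes) -> bool:
    """Single-leaf Merkle branch check as in the consensus spec: fold the leaf
    up the tree, hashing with each sibling on the side indicated by the index
    bits, and compare the result to the trusted root (here, the beaconRoot)."""
    value = leaf
    for i in range(depth):
        if (index >> i) & 1:
            value = sha256(branch[i] + value)
        else:
            value = sha256(value + branch[i])
    return value == root
```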
Upon successful check, the implementation must create an entry for the sequencer - Sequencer. It should associate its signer address and metadata bytes with the pubkey. The signer is the address that the new sequencer will use to operate. It will be the expected sender address when sequencing and committing to preconfirmations. The metadata is information about the sequencer and how external parties can connect to it. It should include, but is not limited to, version and endpoint URL.
A registered sequencer must activate before becoming eligible for selection. The separation between registration and activation allows the rollup protocol to embed specific activation rules. An example of such activation rules can be a minimum stake posted by the sequencer or activation delay.
The activation is performed via the activate method. Implementers must check that the activation rules are met. Similarly to registration, a mandatory check is that the validator is active. Additional checks might include calling a staking contract to check if a stake has been posted. If all checks pass, the implementations must set the sequencer's activationBlock to the current block number and include it in the list of sequencers.
Note: While the sequencer is active it is not yet eligible for selection as activation timeout needs to pass. Its role is similar to the activation period enforced by the beacon chain.
In case a registered sequencer wants to change their registration - i.e. updating their metadata - they can call the authorised changeRegistration method. The implementations must associate the new signer and metadata with the pubkey.
In case a registered sequencer wants to stop being a sequencer, they can call the authorised deactivate method. The implementations must change the sequencer's deactivationBlock to the current block number so that interfacing contracts can enforce withdrawal timeouts.
Note: While the sequencer is deactivated it can still be selected by the selection algorithms until the deactivation timeout period passes. Deactivated selected sequencers will miss their slots.
Having the sequencers be active L1 validators is an important property of vanilla based sequencing. Within primary selection, it ensures the highest level of censorship resistance over sequencing transaction inclusion.
One possible corner-case scenario sees an opted-in sequencer exit the Ethereum validator set or be slashed. In order to cover this case, a publicly available function forceDeactivate is provided. The caller can supply proof that the validator is no longer active. The implementations must change the sequencer's deactivationBlock to the current block number.
In order to ensure a predictable lookahead sequencer queue, the randomness selection algorithm for fallback selection requires a stable total count of eligible validators during the lookahead period. This requires the enforcement of activation and deactivation timeouts. Choosing a timeout of X blocks effectively specifies the lookahead queue of proposers.

Implementations need to make sure that eligibleCountAt(uint256 blockNum) accounts for both the activationBlock of newly activated sequencers and the activation timeout. Similarly, implementations of the fallback mechanism must keep considering sequencers eligible until the deactivation timeout has passed after their deactivationBlock.
All rollups have their own sequencing contracts. The following paragraphs list modifications to the rollup contracts that need to be done in order to support vanilla based sequencing.
The rollup contract must implement the fallback selection for sequencers - selecting a random sequencer from the opted-in proposers.
In order for this selection to be replicated offchain and provide sufficient lookahead to sequencers and followers alike, the selection must be based on a property known in advance.
An example could be taking the blockhash of the X-th parent block of the current block as a seed to a randomness algorithm. Using blockhash is a good candidate as one of its constituents is the beacon chain randomness beacon, making it hard to manipulate. The parameter X specifies the lookahead provided by the rollup to the following nodes.

In order to select the sequencer, the SequencerRegistry contract provides two methods - eligibleCountAt and sequencerByIndex. Through these two methods, a random active sequencer from the currently eligible sequencer set is selected. Similarly to the beacon chain proposer selection algorithm, if a deactivated sequencer is selected, the algorithm must retry until an active eligible one is selected.
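The deterministic fallback selection, as replicated off-chain by following nodes, can be sketched as follows. The seed derivation and retry scheme mirror the description above, but their details (hash function, index derivation) are assumptions of this sketch.

```python
import hashlib

def fallback_sequencer(seed_blockhash: bytes,
                       eligible_count: int,
                       is_active) -> int:
    """Deterministic fallback selection: the blockhash of the X-th parent
    seeds a hash-based index draw over eligibleCountAt, retrying (as
    beacon-chain proposer selection does) while a deactivated sequencer is
    drawn. `is_active(index)` stands in for a registry lookup."""
    nonce = 0
    while True:
        digest = hashlib.sha256(
            seed_blockhash + nonce.to_bytes(8, "little")).digest()
        index = int.from_bytes(digest[:8], "little") % eligible_count
        if is_active(index):
            return index
        nonce += 1
```

Because the seed is a past blockhash, every honest follower computes the same sequencer index, giving the predictable lookahead the design requires.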
One of the major requirements of vanilla based rollups is to implement the primary and fallback sequencer selection algorithms. These are used by the sequencers to identify if and when they are selected to be the sequencer in one of the two modes.
An intuitive approach can see the L1 smart contracts requiring sequencing transactions from sequencers that are 1) not current L1 proposers in case of primary selection and 2) not the selected sequencer in case of fallback selection. Such an approach, however, is currently not implementable within the Ethereum L1 - there is a lack of information within the execution layer about the current block proposer.
Firstly, the current proposer cannot be proven via SSZ multiproof due to BEACON_ROOTS_ADDRESS containing the beacon tree hash root of the parent block - thus the information about the current slot is not yet available.
Secondly, the address available via the COINBASE
opcode cannot be deterministically linked with the proposer. Since the merge this opcode returns the fee recipient set by the block builder, but this is not necessarily the proposer.
Both of these obstacles prohibit the rollup contracts from rejecting sequencing submissions in-flight. Implementation of such an approach requires an L1 hard fork to expose the current block proposer within the execution layer.
In order to tackle this, it is suggested to implement an optimistic selection - performing the selection algorithm off-chain while allowing for proving and/or punishing submissions by ineligible sequencers afterwards. There are several arguments for this choice.
First, an honest following node will always know off-chain who is the valid sequencer and whose sequence they should apply to their state.
Second, such an optimistic approach “optimises the happy path”. If the sequencers are acting honestly, only one sequencer will submit a sequencing transaction - the selected one.
In order to account for malicious cases two workarounds can be chosen.
The first option is to introduce a challenge period (a mechanism already present in optimistic rollups). If a sequencer acts maliciously and submits a sequencing transaction while not selected, their submission can be proven ineligible (through an SSZ Merkle multiproof), the sequencer punished, and the challenger rewarded.
The second option is to introduce a validation function in the subsequent blocks (or alongside the proving transaction for zk rollups). This transaction aims to provide eligibility proof (through SSZ Merkle multiproof) for the sequencer - proving that they were the correctly selected sequencer.
Both options introduce drawbacks to the sequencing flow but are implementable immediately. It is suggested that if/when Ethereum L1 provides a way for the execution layer to access the current block proposer, the implementations switch to a safer validation approach.
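The first option (the challenge period) can be sketched roughly as below. This is an illustrative model, not an implementation: the challenge period length is an arbitrary placeholder, and proven_sequencer stands in for the result of verifying an SSZ multiproof of who the slot's proposer actually was.

```python
from dataclasses import dataclass, field

CHALLENGE_PERIOD = 50  # in L1 blocks; illustrative value, set by the rollup


@dataclass
class Submission:
    sequencer: str
    l1_block: int
    challenged: bool = False


@dataclass
class OptimisticSequencing:
    """Sketch of the challenge-period option: submissions are accepted
    in-flight and can be proven ineligible afterwards."""

    submissions: list = field(default_factory=list)

    def submit(self, sequencer, l1_block):
        # No in-flight eligibility check - the execution layer
        # cannot see the current block proposer.
        self.submissions.append(Submission(sequencer, l1_block))

    def challenge(self, idx, current_block, proven_sequencer):
        """`proven_sequencer` stands in for the sequencer established by
        verifying an SSZ Merkle multiproof against the beacon root."""
        sub = self.submissions[idx]
        if current_block > sub.l1_block + CHALLENGE_PERIOD:
            raise ValueError("challenge period elapsed")
        if sub.sequencer == proven_sequencer:
            raise ValueError("submission was eligible")
        sub.challenged = True  # slash the submitter, reward the challenger
        return True
```

The second option would invert this flow: instead of waiting for a challenge, the sequencer itself supplies the eligibility proof in a subsequent block.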
Regardless of the chosen optimistic selection option, the two selection types require their distinct ways of proving eligibility/ineligibility. Within the context of the primary selection, the proof must be an SSZ multiproof that the sequencer is/is not the proposer of the slot that the sequencing transaction came in. Within the context of the fallback selection, the selection verification can be achieved deterministically by the L1 smart contracts by performing the deterministic fallback selection algorithm outlined in the previous section.
Many rollups require a stake from their sequencers. This stake is normally used as a crypto-economic incentive for correct behaviour.
While staking is not a requirement for a rollup to be “Vanilla Based” (bonds can be utilized as well), the suggested SequencerRegistry interface is designed with staking in mind.
Firstly, the separation between registration (performed via register) and activation (performed via activate) provides a useful entry point for staking contracts. The rollup staking contract can permit the triggering of activate only if the staking requirements are met.
Secondly, the deactivationBlock field within the Sequencer data enables the staking contracts to enforce withdrawal conditions.
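The register/activate split can be illustrated with the following sketch. The min_stake parameter and the stake-lookup callback are hypothetical additions for illustration; only the register/activate separation and the deactivationBlock field come from the suggested interface.

```python
from dataclasses import dataclass


@dataclass
class Sequencer:
    addr: str
    active: bool = False
    deactivation_block: int = 0  # mirrors the deactivationBlock field


class SequencerRegistry:
    """Sketch of how the register/activate split gives a staking
    contract its entry point. `stake_of` and `min_stake` are
    illustrative stand-ins, not part of the suggested interface."""

    def __init__(self, stake_of, min_stake):
        self.stake_of = stake_of
        self.min_stake = min_stake
        self.sequencers = {}

    def register(self, addr):
        # Registration carries no staking requirement by itself.
        self.sequencers[addr] = Sequencer(addr)

    def activate(self, addr):
        # Activation is only permitted once the staking
        # requirements are met.
        if self.stake_of(addr) < self.min_stake:
            raise PermissionError("insufficient stake")
        self.sequencers[addr].active = True
```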
In this section, you can find several important notes and considerations for implementing a rollup staking mechanism.
Firstly, vanilla based sequencing is not dependent on the asset and amount being staked. It is the design decision of the rollup which asset and how much of it should be staked in order to meet the activation criteria. Furthermore, the rollup designers can opt for a more complex staking mechanism enabling stake delegation and/or validator restaking (i.e. EigenLayer-style restaking and LST).
Secondly, it is important to consider the time to finality when releasing the stake of deactivated sequencers. As most misbehaviour is detected around finality (via fraud proofs or proofs of invalidity), a grace period should be enforced after deactivation before the stake becomes withdrawable.
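The grace-period condition is a simple check against deactivationBlock; the sketch below assumes a hypothetical block-denominated constant that each rollup would size to exceed its time to finality.

```python
# Illustrative value: must exceed the rollup's time to L1 finality,
# so fraud/validity proofs can still slash a deactivated sequencer.
FINALITY_GRACE_BLOCKS = 64


def stake_withdrawable(deactivation_block, current_block):
    """A deactivated sequencer's stake stays locked for a grace period
    after deactivationBlock; zero means never deactivated."""
    return (
        deactivation_block != 0
        and current_block >= deactivation_block + FINALITY_GRACE_BLOCKS
    )
```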
An important part of the flow of interaction with a vanilla based rollup is the discovery of the subsequent sequencers. This section outlines the mechanism for discovering and communicating with the subsequent sequencers.
In order for a wallet or a user to discover the current sequencer and the sequencers in the lookahead queue, they need to get several pieces of information from the L1 beacon chain and the rollup SequencerRegistry.
First, the wallet needs to determine the known L1 proposers for the next X slots - with X being a parameter set by the rollup team, ideally a single epoch (32 slots).
Second, the wallet needs to get the list of current eligible opted-in L1 validators and cross-check it against the next X L1 proposers. Any match can be considered a primary selected sequencer - the L1 proposer will be the L2 sequencer in this slot.
For slots where the L1 proposer has not opted in to be a sequencer, the wallet needs to run the deterministic fallback selection algorithm as specified in the “Deterministic Fallback Selection” section above. Keep in mind that these sequencers will be revealed X slots ahead of time. The randomly selected L2 sequencers will be the sequencers for the respective slots.
Last, the wallet must obtain the metadata information for the sequencer they want to communicate with.
As part of the metadata, the sequencers must communicate an RPC endpoint. This is the endpoint for transaction submission to the sequencer, and the requests it services should include, but are not limited to:
As slots change, the sequencers' RPC URLs will change too. This is a complexity that is to be handled by the wallet software.
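The discovery steps above can be sketched end-to-end. All four inputs are illustrative stand-ins: next_proposers for a beacon-API proposer-duties query, opted_in for the eligible opted-in validator set, fallback_select for the deterministic fallback selection, and registry_metadata for the SequencerRegistry metadata lookup.

```python
def build_lookahead(next_proposers, opted_in, fallback_select, registry_metadata):
    """Assemble the sequencer lookahead a wallet needs.

    `next_proposers` maps slot -> L1 proposer for the next X slots,
    `opted_in` is the set of eligible opted-in validators, and
    `fallback_select(slot)` runs the deterministic fallback selection.
    All names are hypothetical stand-ins for beacon API and
    SequencerRegistry queries.
    """
    lookahead = {}
    for slot, proposer in next_proposers.items():
        if proposer in opted_in:
            # Primary selection: the L1 proposer is the L2 sequencer.
            sequencer = proposer
        else:
            # Fallback selection for slots with no opted-in proposer.
            sequencer = fallback_select(slot)
        # Resolve the RPC endpoint to submit transactions to this slot.
        lookahead[slot] = (sequencer, registry_metadata(sequencer)["rpc"])
    return lookahead
```

A wallet would rebuild this map as the lookahead window advances, re-pointing its transaction submission at the endpoint for the current slot.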