# Aztec X BlockScience Report: RfP - Sequencer Selection

![](https://hackmd.io/_uploads/rkkXn9lJT.png)

*Authors: [BlockScience](https://block.science/), August 2023*

## Executive Summary

This report evaluates the proposed ‘Fernet’ and ‘B52’ sequencer selection protocols for the Aztec network, and provides a recommendation based upon an in-depth analysis of the two proposals. These recommendations are intended to assist the Aztec Labs team in selecting a single proposal to serve as the foundation for the detailed design, implementation and testing of Aztec’s sequencer selection protocol.

As part of this analysis, an array of requirements was distilled from expert knowledge leveraged during discussions between BlockScience and the Aztec Labs team, and both the Fernet and B52 proposals were evaluated against these requirements. The two designs share enough similarities that their associated assessments were also similar on several margins, but by utilizing requirements analysis it was possible to identify more precisely where the trade-offs lie between the two proposals.

At its core, the Fernet proposal leverages pseudo-randomness to manage a trade-off between 1) the certainty required for sequencers (and, implicitly, provers) to operate profitably, and 2) sufficient uncertainty to mitigate incentives to exploit the network. Fernet is a relatively simple proposal wherein the role of a sequencer as block builder is conditional upon their (pseudo-random) selection, while the mechanism for proving block validity remains unspecified. Fernet’s simplicity is both its biggest strength and its biggest weakness. Simplicity renders the protocol legible to users, reduces the potential for edge cases in the incentive space, and allows sequencers more flexibility in how they approach operations (e.g. doing their own proofs versus enlisting provers).
The downsides include sensitivity to exploitation of the random oracle used alongside the Verifiable Random Function (VRF) to generate the scores for proposer rankings.

The B52 proposal takes inspiration from the concept of proposer/block-builder separation (PBS), aiming to bring MEV auctions directly into the Aztec protocol. It is more precise to say that B52 *enforces* separation between the role of block builders (as proposers) and provers (as builders). [Flashbots’ MEV-boost](https://docs.flashbots.net/flashbots-mev-boost/introduction) in the Ethereum network provides evidence that PBS is beneficial to users, helping to e.g. mitigate transaction censorship, and incorporating PBS at the protocol level is intended to combat centralization within a service provider market. Where Fernet is silent on how provers and block builders relate, B52 is explicit, but explicitly structuring a market that formally separates the roles of block builders and provers comes at the cost of a significant increase in complexity. Furthermore, it is not yet clear whether Sybil attacks can be reliably prevented when the same actor orchestrates block-building and proving activity, even when those activities are computationally disjoint.

After careful consideration, BlockScience recommends that Aztec proceed with the **Fernet** proposal for its sequencer selection protocol, but that insights from the design of B52 be incorporated into open source tools facilitating a prover market. Where B52 would attempt to enshrine separation as protocol-enforced *rules*, this recommendation acknowledges the value of PBS but suggests addressing it via software-enshrined *norms*. This reduces complexity, shifting the burden of exhibiting preferences for PBS onto the market actors. At the same time, the degree of empirical separation between these functions should be carefully monitored and reported as a means of identifying and mitigating potential centralization risks.
![Table 1: Assessment chart.](https://hackmd.io/_uploads/BJJ0vBxJ6.png)

*Table 1: Assessment chart. Categories described below.*

**Assessment Chart - Categories**

The following categories were adopted to structure and analyze various risks within the system. This framework was decided upon by BlockScience, but resulted from extended discussions and prioritizations of potential risks provided by the Aztec Labs team.

**Technical** relates to the "mechanism design" of the protocol; what an actor is technically able to do.

**Economic** relates to the "market design" of the protocol; what an actor is incentivized to do.

**Desired Property** means this is a behavior we want the system (humans + implementation) to exhibit.

**Vulnerability** means this is a behavior we do not want the system (humans + implementation) to exhibit.

**Color code:**

* <u>Red:</u> The proposal is at risk of not meeting requirements.
* <u>Yellow:</u> The proposal is potentially at risk of not meeting requirements.
* <u>Green:</u> The proposal fulfills the requirements.

**Certainty:**

* <u>High:</u> The analysis is highly certain to be accurate.
* <u>Medium:</u> The analysis is likely to be accurate.
* <u>Low:</u> The analysis has high uncertainty surrounding it.

**Note:** A low certainty rating may merit more analysis if a change in the color code would alter the decision. Conversely, a low certainty rating may be irrelevant if a change in the color code would not change the decision. Our efforts were directed precisely towards increasing the certainty on the items we deemed most relevant to forming a reliable recommendation.

---

**Table of Contents**

[TOC]

---

## Extended Recommendation Rationale

After careful consideration, BlockScience recommends that Aztec proceed with the **Fernet** proposal for its sequencer selection protocol, but that insights from the design of B52 be incorporated into open source tools facilitating a prover market.
The most concrete difference between Fernet and B52 is the approach to soliciting proofs. Fernet takes the form of a *constraint satisfaction problem*; the protocol is agnostic to how the sequencer constructs or solicits proofs for the transactions included in a block, but it nonetheless enforces that such proofs must be present. Each round, sequencers receive a score generated by a VRF; the highest-ranking sequencer who proposes a fully proven block wins the block rewards, and runners-up may receive “uncle rewards” as a means of ensuring liveness.

Even if one is agnostic to how transactions are selected and proofs are constructed, it is prudent to ensure that the friction associated with taking these actions is lowered. To this end, the current recommendation includes developing a peer-to-peer (p2p) network for soliciting proofs, and compensating provers for submission of valid proofs, as contemplated in the B52 proposal, or finding a similarly suitable mechanism through another RfP process. However, rather than attempting to enforce prover/block-builder separation by *mandating* the use of this p2p network, the network should be viewed as an optional “secondary labor market”. In this view, provers are akin to specialized “subcontractors“ for sequencers.

Fernet can be likened to the “free market” solution to a computational labor market problem, whereas B52 is more similar to a “nationalized” solution to that same problem. Specifically, B52 is opinionated about precisely how the blocks and proofs are constructed and ultimately relies upon prover commitments as the basis for the election of a block. The effort to enshrine MEV in the protocol is best understood as working ahead of existing and upcoming proposals to incorporate the Flashbots MEV-boost architecture directly into the Ethereum protocol. While research supporting such proposals is compelling, it has not yet been deployed to production.
In practice, [empirical evidence suggests](https://ethresear.ch/t/empirical-analysis-of-builders-behavioral-profiles-bbps/16327) that MEV-boost as an option in the Ethereum computational labor market is beneficial; many but not all actors choose to use it, and that choice may be conditional on the content of the transactions themselves. In light of these observations, the BlockScience team feels it is prudent to align the Aztec design more closely with the extant Ethereum transaction market, rather than a proposed future one.

An additional benefit of Fernet as a “free market” solution is its *adaptive capacity*; should the most cost-effective way for sequencers to reliably produce valid blocks change due to external conditions, the market can be expected to adapt. Furthermore, the costs associated with completing and orchestrating proofs provide a lower bound for transaction costs on Aztec; these costs are estimated to be higher in B52 than in Fernet. Finally, in Fernet it is possible that innovative sequencers may find more efficient ways to orchestrate provers, without needing to advocate for protocol changes to bring those benefits into production. Ultimately, protocols are as much defined by what they do **not** mandate as by what they do mandate. Viewing a protocol as a constraint satisfaction problem, rather than as a computing procedure, can be helpful for unlocking the collective intelligence of the network participants.

Fernet is not without its weaknesses. In particular, Fernet relies heavily on a source of pseudo-randomness for its simplicity. While the uncertainty introduced by the scoring and ranking system helps to align incentives and address centralization and censorship risks, it also means that all of these properties rely on the integrity of the entropy source. As currently architected, Fernet relies on RANDAO as its random oracle. Potential exploits of RANDAO have been documented, but have not yet been observed in production.
The BlockScience team took these concerns seriously and endeavored to better understand this vulnerability. One important finding is that the VRF, which uses the random oracle output as an input, serves as a critical threat mitigation vector. From a zoomed-out view, the chain of information flows from the random oracle, to the VRF, to each sequencer deciding whether to build a block based on their private score, to the final block being selected as the highest-scoring valid proposal. The concern is less about whether one might bias the RANDAO outputs and more about whether it is possible to systematically bias this process end to end. It is our opinion that this risk can be mitigated by the downstream steps. Furthermore, should the RANDAO scheme prove to be unsafe, a new random oracle could be selected while leaving the remainder of this logical process intact. Along this line, a future differentiating feature for Aztec could be to provide an Aztec-native, high-quality random oracle, which could eventually serve as a substitute for RANDAO.

Finally, it is important to note that “free market” solutions are not magic. Markets are information processing and resource allocating systems. Aztec’s privacy-enabled Layer 2 network may be understood as a computational labor market. The demand side is made up of actors who wish to purchase the service of canonizing (public and private) transactions, while the supply side is made up of service providers who 1) assemble the would-be-canonized transactions into fully proven blocks, and then 2) canonize them by registering them on the Layer 1 blockchain. Regardless of which proposal is chosen, it is understood that orchestrating this labor may be costly; there are capacity constraints and costs carried over directly from the Layer 1 blockchain, in addition to Layer-2-native prover-sequencer coordination costs.
Although it is clear that private transactions are an important addition to the Web3 space, it is not yet clear what kind of pricing premium private transactions will garner. The task of structuring the Aztec Network’s incentives to facilitate an equitable, decentralized, and censorship-resistant market equilibrium is non-trivial. Building upon this analysis, the BlockScience team believes that Fernet is the stronger foundation from which to design, implement and orchestrate a robust market for provably valid private (and public) transactions.

# Report: Request for Proposals

This report is an analysis resulting from an ongoing collaboration between Aztec and BlockScience. It serves as an additional, third-party evaluation of the “Fernet” and “B52” proposals to guide towards a future [sequencer selection protocol](https://medium.com/aztec-protocol/aztec-sequencer-selection-finalists-122dbc6bb4). This analysis is the result of research on the Aztec protocol, the rollup and L2 landscape, and the sequencer role in particular, as well as the two aforementioned proposals. Parallel to this research, the Aztec and BlockScience teams held regular meetings discussing specifics of the Aztec protocol overall and the sequencer selection protocol in particular. From these talks, the BlockScience team derived requirements and prioritizations for analysis, resulting in the structure of this report.

## Sequencer Decentralization

Aztec, like most L2 rollup systems, has relied upon a centralized sequencer to canonize transactions. The term **canonization** is introduced to characterize the service the Aztec network provides: verifying, proving and ordering transactions, bundling them into rollup blocks, and having them accepted as canonical by registering them on L1. While decentralization of this role is a relevant factor for most L2 rollups, it is especially so for Aztec due to censorship risks arising from its status as a privacy-preserving solution.
First, privacy-preservation comes with increased complexity for sequencing compared to other L2s. The specialization required might decrease the pool of service providers available for sequencing. Second, privacy-preservation can come with increased scrutiny by other actors, most notably regulatory bodies, making sequencers a target for censorship. These risks, which are less inherent to non-privacy-preserving rollup solutions, surface a higher need to decentralize main protocol roles and specifically raise the requirement for permissionless sequencing.

## Uncertainty

In conducting this report, it has become clear that there are complicating factors to fully evaluating the proposals as-is, such as the absence of data on demand- and supply-side considerations, as well as non-finalized research on proof sizes, fee markets, etc. Especially uncertain are considerations of coordination costs and transaction fees / MEV, where no final decisions can be made without additional data, especially in the absence of similar-enough comparisons in the market. Uncertainty is addressed by choosing a path, in this case a specific sequencer selection proposal, and then following that path by iteratively adding details to the designs (often by implementing and testing) while continuing to reduce uncertainty via all practical means (such as building simulations, deploying test nets, and doing market research).

## Fernet

The “Fernet” proposal, authored by Santiago Palladino (Aztec Labs), aims for randomized sequencer selection. To be eligible for selection, sequencers need to stake assets on L1 (Ethereum). In each round, staked sequencers privately calculate their round-based score, derived from a VRF. The proposed VRF uses a SNARK of a hash over the sequencer's private key, the current block number and a random beacon value from RANDAO.

<u>**1. Proposal Phase:**</u> *Duration: 2 L1 blocks (24 seconds)*

Sequencers are assumed to only commit a block proposal if they determine their score for a given round as likely to be either winning or eligible for an uncle reward. Block proposals consist of the sequencer's VRF output, the commitment to (i.e. a hash of) the transaction ordering in the proposed block, and the identifier of the previous block.

<u>**2. Proving Phase:**</u> *Duration: 50 L1 blocks (10 mins) - subject to change*

After the proposal phase ends, no more proposals can be made for this round. While prover coordination is out of scope for this proposal, provers will likely build proofs for blocks with the highest scores and with block content known to the provers. After a block proof has been built, it can be submitted on L1, as long as it is valid. Valid proofs need to refer to an eligible proposal and include the VRF proof from the sequencer. After a valid proof is submitted, anyone can submit the block content on L1, likely as EIP-4844 blobs.

<u>**3. Finalization Phase:**</u>

Once the proving phase has ended, anyone can finalize the highest-scoring block, provided the block has been proven, references the previous canonical block, and has had its contents revealed. Once a block is finalized, rewards are paid out (likely on L2) to the sequencer who proposed it, as well as to the provers involved in generating the proof.

Further details can be found in [the proposal](https://hackmd.io/0FwyoEjKSUiHQsmowXnJPw?view) itself. (An updated version of Fernet can be found [here](https://hackmd.io/@aztec-network/fernet).)

![Fernet_Process Diagram](https://hackmd.io/_uploads/HJ7m_SxJT.png)

## B52

The “B52” proposal, authored by Joe Andrews (Aztec Labs), aims to enshrine MEV in the sequencer selection protocol by incentivizing sequencers to build the most profitable blocks. Sequencers propose blocks on L2, explicitly offering a fee to provers and an amount of native tokens to burn.
Provers, who need to stake on L1, then commit to proving parts of proposed blocks. Proposals are ranked through a score consisting of the number of prover signatures (each weighted with a VRF output, to reduce the risk of sequencers monopolizing the prover role) and the amount to be burned.

<u>**1. Proposal Phase:**</u> *Duration: N seconds + potential offset*

Sequencers build blocks and propose them on L2, including a commitment to the ordering, the fee for provers and an amount of native tokens to be burnt.

**1.1. Voting and Commitment Phase:** Provers vote on blocks by committing to parts of the block they would build proofs for. Votes (broadcast on the L2 p2p network) include the hash of the respective block, the height and index of the proof the prover commits to building, the prover's VRF output for this block, and a signature. Sequencers collect votes and build a block transcript, likely maximizing the total number of unique votes, while submitting the transcript hash on L1.

<u>**2. Proving Phase:**</u>

**2.1. Reveal Phase:** Once the Proposal Phase has ended, sequencers need to reveal their block content (which can be done on L2) and the content of the block transcript, including the specific provers.

**2.2. Acceptance and Ranking Phase:** Once a complete block proof has been produced, the sequencer can submit it on L1, where the smart contract checks whether the block proof is correct and the score proof is correct, and then ranks the block based on its score.

<u>**3. Finalization Phase:**</u> Once the Acceptance Phase has ended, the highest-scoring block can be marked as canonical and finalized with an L1 transaction updating the state roots and issuing the rewards.

More detail can be found in [the proposal](https://hackmd.io/VIeqkDnMScG1B-DIVIyPLg) itself.
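For illustration, the B52 ranking rule described above can be sketched as a score over prover votes and burned tokens. The aggregation function, the `burn_coefficient` parameter and all names below are assumptions of this sketch; the proposal does not fix exact formulas here.

```python
from dataclasses import dataclass

@dataclass
class ProverVote:
    prover_id: str
    vrf_weight: float  # the prover's VRF output for this block, normalized to (0, 1]

def block_score(votes: list[ProverVote], burn_amount: float,
                burn_coefficient: float = 1.0) -> float:
    """Illustrative B52-style score: sum of VRF-weighted unique prover votes
    plus a term for the native tokens the sequencer offers to burn."""
    # Each prover counts once; its VRF weight is fixed per block, so
    # duplicate votes from the same prover add nothing.
    unique_weights = {v.prover_id: v.vrf_weight for v in votes}
    return sum(unique_weights.values()) + burn_coefficient * burn_amount
```

Weighting each vote by a per-block VRF output illustrates the stated rationale: an actor who Sybil-splits the prover role gains no guaranteed score advantage, since the weights it receives are randomized.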
![B52_Process Diagram](https://hackmd.io/_uploads/SyTVureJa.png)

# Extended Discussion of Recommendations

## Technical: Desired Property

### Permissionless Sequencer Role

*<p style="text-align: center;">Any actor who adheres to the protocol can fill the role of sequencer.</p>*

**Fernet**

The Fernet sequencer selection protocol requires sequencers to register on L1 by staking funds, followed by a waiting period. In each round, sequencers can privately calculate their score and, if they consider the score high enough, submit a proposal for a block on L1. There are no technical restrictions on who can take on the sequencer role or propose a block in any given round.

**B52**

The B52 sequencer selection protocol requires no registration or staking for sequencers, but instead determines a per-round score through a function of prover signatures (incentivized through a prover fee determined by the sequencer) and an amount of native tokens to burn (again determined by the sequencer). There are no technical restrictions on who can take on the sequencer role or propose a block in any given round.

### Permissionless Protocol Use

*<p style="text-align: center;">Any actor who adheres to the protocol can submit transactions.</p>*

**Fernet**

The Aztec protocol requires users to build zk-proofs for private function calls, pay for fees and have funds on L2. There are no technical restrictions on on/off-boarding funds to/from L2 or on sending transactions on L2. The Fernet sequencer selection protocol also places no restrictions on users and whether they can submit transactions to sequencers. While individual sequencers might be able to block specific transactions from being included in their blocks, they have no affordances to block or disallow users from submitting transactions to the network.

**B52**

The Aztec protocol requires users to build zk-proofs for private function calls, pay for fees and have funds on L2.
There are no technical restrictions on on/off-boarding funds to/from L2 or on sending transactions on L2. The B52 sequencer selection protocol also places no restrictions on users and whether they can submit transactions to sequencers. While individual sequencers might be able to block specific transactions from being included in their blocks, they have no affordances to block or disallow users from submitting transactions to the network.

### Elegant Reorg Recovery

*<p style="text-align: center;">The protocol has affordances for recovering its state after an Ethereum reorg.</p>*

The effect of network failure on data processing systems, and how to build systems that are resilient in the face of such failures, is well studied. Eric Brewer proposed the CAP Theorem in 1998 [(Brewer, 1998)](https://web.archive.org/web/20080625005132/http://www.ccs.neu.edu/groups/IEEE/ind-acad/brewer/index.htm), which poses a trilemma among Consistency, Availability and network Partition tolerance in any shared data system. Given that network partition is always a risk, a data system distributed across a network must choose between maintaining availability or maintaining consistency. In 2011, Nathan Marz proposed the Lambda architecture as a "way to beat" the CAP theorem [(Marz, 2011)](http://nathanmarz.com/blog/how-to-beat-the-cap-theorem.html), which uses parallel stream and batch processing subsystems to achieve real-time availability from the streaming subsystem and eventual consistency from the batch subsystem. While Lambda doesn't really beat CAP, it does place the consideration at the right architectural level, namely the platform level rather than the application level.
Three years later, Jay Kreps described the Kappa architecture, which had many of the properties of the Lambda architecture with regard to the CAP Theorem, but was a pure streaming solution alleviating the need for the complexities of running parallel streaming and batch subsystems [(Kreps, 2014)](http://radar.oreilly.com/2014/07/questioning-the-lambda-architecture.html). The common thread running through these architectures is the presence of an immutable copy of logs that enables data reprocessing to recover a distributed processing system from failures.

Limited by Ethereum’s blob storage retention policy, Aztec's L1 (the Ethereum network) contains such an immutable copy of Aztec's logs as recently as the last finalized epoch. If this retention policy is sufficient for Aztec’s needs, that leaves the intervals between the last finalized epoch and the current safe block, and between the current safe block and the current head block, as gaps in immutable log storage. At this time, Aztec prioritizes efficiency over resilience to L1 reorganization, and relying on the Ethereum copy of Aztec state between the last finalized epoch and the current safe block is considered a reasonable risk. If in the future this prioritization changes, then secondary log storage will need to be considered. Such a change in priorities could be the result of factors such as a significant increase in the rate with which value flows through the Aztec network. Fernet and B52 do not differ significantly on this issue and, if the risk were considered unreasonable, would both need to consider log keeping and reorg detection mechanisms.

### Practical Soft Finality

*<p style="text-align: center;">Prior to "hard finality", sequencers have enough information to begin working on later blocks (predicated on the assumption that the posted block will be accepted on L1).</p>*

Aztec's reliance on an L1 to secure its network needs to be reflected in:

1. **Aztec protocol should measure time in L1 block time.** The duration of phase intervals should be specified in numbers of L1 blocks. Doing so provides pacing for Aztec that naturally accounts for stress on the L1 network. In several instances, times are given in seconds. Best case, these representations of world time are only informal, i.e. non-normative. Worst case, representations of world time become part of the sequencing protocol.
2. **Aztec protocol should define an L1 Safe Block Height to indicate Aztec hard finality.** Aztec should maintain a notion of a "safe block height" for its L1 chain based on total attestations for a block reaching some threshold, such as two thirds of all active validators. To be conservative, Aztec may also want to consider making this threshold variable based on a model of Ethereum network health using metrics such as the rate of skipped slots.
3. **Aztec protocol should define a Work-In-Progress (WIP) limit based on its Safe Block definition.** Aztec should limit the number of work-in-progress (WIP) roll-ups. For the purposes of this section, in-progress roll-ups are the sum of those that are in the proposal or proving phase but not yet included in the L1 chain, added to those included in the L1 chain but not yet past the Safe Block Height. Work on a new roll-up should start only if the WIP count is less than the limit.

These recommendations taken together allow Aztec to operate within acceptable risk of data being lost to L1 reorganization. Soft finality comes with assumptions of “normal network conditions". This in turn assumes a means to monitor network performance and to recognize when conditions are normal and when they are not. Fortunately, block explorers for both the Ethereum and Aztec networks exist to do just this. Analysis of incorporating network condition KPIs into L1 reorganization risk, as well as the sensors and oracles to provide these metrics, may be of interest for future work.
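Taken together, these recommendations amount to a simple admission rule for starting new roll-ups. A minimal sketch follows; the threshold, limit and all names are illustrative assumptions of this sketch, not protocol parameters.

```python
from dataclasses import dataclass

ATTESTATION_THRESHOLD = 2 / 3  # e.g. two thirds of active validators (assumed value)
WIP_LIMIT = 1                  # maximum in-progress roll-ups (assumed value)

@dataclass
class L1Block:
    height: int
    attesting_fraction: float  # share of active validators that attested to this block

def safe_block_height(blocks: list[L1Block]) -> int:
    """Highest L1 height whose block has reached the attestation threshold."""
    return max((b.height for b in blocks
                if b.attesting_fraction >= ATTESTATION_THRESHOLD), default=0)

def may_start_new_rollup(inclusion_heights: list, blocks: list[L1Block]) -> bool:
    """Admission rule: a roll-up is WIP if it is not yet included on L1
    (entry is None) or was included above the current Safe Block Height."""
    safe = safe_block_height(blocks)
    wip = sum(1 for h in inclusion_heights if h is None or h > safe)
    return wip < WIP_LIMIT
```

Note that time never enters this rule in seconds: pacing emerges from L1 block heights and attestation progress, as recommended above.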
These recommendations affect Fernet and B52 equally. Fernet’s option to start the proposal phase for an N+1 roll-up prior to the end of the proving phase is accounted for.

## Technical: Vulnerability

### L2 Chain Halt

*<p style="text-align: center;">A failure in the block proposal process leads to an invalid state requiring a manual restart before new blocks can be produced again.</p>*

From the discussion of disaster recovery above, an immutable copy of logs is necessary to enable distributed data reprocessing to recover cleanly from failures. In addition, inspection of logs can help determine the minimally disruptive point at which the L2 chain can be restarted. Only the L1 chain interval between the head block and safe block poses an unacceptable risk of loss of L2 logs recorded on the L1 chain. By allowing only one WIP Aztec block during this interval, the harm resulting from having to restart from logs is minimized. Aztec’s vulnerability to an L2 chain halt affects B52 and Fernet equally, as long as the recommended reliance on the inclusion of relevant transactions in safe Ethereum blocks is implemented.

### Denial of Service

*<p style="text-align: center;">An actor can exercise their affordances to clog or otherwise prevent other actors from using the system.</p>*

If a denial of service attack results in a chain halt, then the means of recovery has already been discussed (cf. the disaster recovery topic). In addition, the ability to recover from disaster also opens up the tactic of halting and restarting the network to potentially defend against a denial of service attack. This tactic could be investigated further, if it is of interest. Short of causing a chain halt, denial of service attacks can be detected through monitoring of network client performance. Monitoring at the individual client level is outside the scope of this analysis.
### Sensitivity to Randomness Exploits

*<p style="text-align: center;">The protocol is sensitive to exploits due to (pseudo)randomness.</p>*

The introduction of randomness in Aztec’s sequencer design proposals comes at the cost of introducing this vulnerability. One option, in principle, would be to stick to deterministic protocols. This section therefore first addresses the question: what benefits of the pseudorandomness scheme make it worth bearing the risk that such a scheme could be exploited?

**Why is randomness important?**

In computational markets, where various entities contribute resources, ensuring a fair, unbiased, and unpredictable allocation or selection mechanism becomes imperative. This unpredictability prevents malicious actors from gaming the system and guarantees that opportunities to contribute are evenly distributed over time. Pseudorandom number generation provides a mechanism to introduce this unpredictability in a replicable and verifiable manner.

**What are some risks associated with incorporating randomness into a protocol?**

Exploited pseudorandomness can be more detrimental than a deterministic market design because it creates an illusion of fairness and unpredictability while, in reality, outcomes can be manipulated by those with the knowledge or means to exploit the system. This deceptive semblance can lead participants to believe they're operating in an unbiased environment, even as power and benefits are quietly centralized. Furthermore, such exploitation can introduce vulnerabilities and systemic risks that wouldn't be present in a transparently deterministic system.

**Pseudorandomness in the Ethereum Network**

Deterministic state machines, such as the Ethereum network, cannot produce their own entropy. Therefore, they must construct mechanisms to induce network participants to provide entropy while reducing those same participants' capacity to exploit their private information.
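One standard mechanism of this kind is a commit-reveal scheme: participants first publish commitments to hidden values, then reveal them, and the reveals are mixed into a shared entropy pool. The sketch below is a deliberately simplified hash-based illustration, not the beacon-chain RANDAO construction (which mixes in BLS signatures rather than hash preimages).

```python
import hashlib
import secrets

def commit(preimage: bytes) -> bytes:
    """Commit phase: publish only the hash; the preimage stays private."""
    return hashlib.sha256(preimage).digest()

def mix(pool: bytes, revealed: bytes, commitment: bytes) -> bytes:
    """Reveal phase: check the reveal against its commitment, then XOR
    its hash into the shared entropy pool."""
    digest = hashlib.sha256(revealed).digest()
    assert digest == commitment, "reveal does not match commitment"
    return bytes(a ^ b for a, b in zip(pool, digest))

# Two participants contribute entropy without seeing each other's preimages.
pool = bytes(32)
preimages = [secrets.token_bytes(32) for _ in range(2)]
commitments = [commit(p) for p in preimages]   # everyone commits first
for p, c in zip(preimages, commitments):       # then everyone reveals
    pool = mix(pool, p, c)
```

Even in this toy form, the scheme's well-known weakness is visible: the last participant to reveal can compute the final pool before deciding whether to withhold, which is the asymmetric information underlying the RANDAO manipulation analyses discussed next.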
The Ethereum network gets its pseudorandom number generation from [RANDAO](https://eth2book.info/capella/part2/building_blocks/randomness/) which is a commit reveal scheme, and a group of actors exercising its affordances to produce a source of entropy on-chain. Actors engaged in supplying inputs to this scheme have potentially exploitable asymmetric information, especially if they coordinate. A detailed assessment of what this looks like in the case of RANDAO can be found in [this analysis from eth research](https://ethresear.ch/t/selfish-mixing-and-randao-manipulation/16081). As of the time of this report, there is no empirical evidence that such an attack has taken place in the past. Nonetheless, it is prudent to consider the extent to which the sequencer proposals are sensitive to this threat. **Fernet** The Fernet sequencer selection proposal leverages a VRF and a random beacon to produce a score, then ranks sequencers by the score[^1]. Of all sequencers proposing a block, the one with the highest score gets priority. [^1]: This scheme is derived from the [Irish Coffee proposal](https://discourse.aztec.network/t/proposal-sequencer-selection-irish-coffee/483#vrf-specification-4) from the Expresso team. Practically, Fernet’s pseudorandom scoring and ranking protocol helps to address the tradeoff between two requirements 1. **Certainty enables efficient sequencer operations:** On one hand, sequencers want certainty about their right to propose a block so they can allocate resources to building the block they will be proposing. This will play into the economics of sequencer operations; specifically, a sequencer may be able to keep its costs down by only doing expensive computational labor when the probability of winning a block is high[^2]. 2. 
**Uncertainty reduces incentives to exploit the sequencer role:** On the other hand, users of the network want sequencers to have some uncertainty, since uncertainty diminishes sequencers' incentive to expend an even greater amount of resources to find and extract excess value while filling the role of block sequencer; if it turns out the sequencer does not have the highest score, their extra effort is wasted.

[^2]: Note that more detailed analysis of the specific economic policies is deferred to later market design work, but it is worth noting that rewards for uncle blocks may play a role in balancing these incentives. A redundancy scheme might include rewarding the next N highest-scoring sequencers who produced valid blocks, even if those blocks are not used. Uncle rewards are alluded to in [Fernet](https://hackmd.io/0FwyoEjKSUiHQsmowXnJPw?view#:~:text=These%20problems%20can,is%20still%20present) and [B52](https://hackmd.io/VIeqkDnMScG1B-DIVIyPLg?view#How-it-works:~:text=The%20smart%20contract%20will%20incentivise%20an%20uncle%20block%20for%20liveness%20guarantees) respectively.

It is important to note that this approach has two components with distinct sensitivities:

**1. Random Beacon:** Any actor participating in Aztec who is also participating in RANDAO (or in any scheme serving in the role of *random beacon*) may in principle bias the distribution of the random scoring, giving actors privileged information about when they will win the top rank slot, in turn undermining the benefits of introducing randomness.

**2. The Verifiable Random Function:** In addition to a VRF's cryptographic properties, its statistical properties are relevant. The probability distribution that the score is drawn from will have a bearing on the effectiveness of this proposal. Precisely what matters is the conditional probability distribution given knowledge of one's own score but not the scores of others.
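
To make the certainty/uncertainty trade-off concrete, consider a toy model of our own (not part of the Fernet specification) in which scores are i.i.d. uniform on [0, 1]. A sequencer that knows only its own score s among N participants holds the top score with probability s^(N-1), and can condition its spending on that probability:

```python
# Toy model: scores assumed i.i.d. uniform on [0, 1]. This distributional
# assumption is ours, made purely for illustration.

def p_win(own_score: float, n_sequencers: int) -> float:
    """P(own score is the highest | own score), for i.i.d. uniform scores."""
    return own_score ** (n_sequencers - 1)

# A sequencer might only perform expensive block-building work when this
# probability clears some internal threshold.
for s in (0.5, 0.9, 0.99):
    print(f"own score {s}: P(win) among 100 sequencers = {p_win(s, 100):.6f}")
```

Even a score of 0.9 wins only rarely among 100 sequencers, illustrating why a low-scoring participant may rationally skip the work, and why some residual uncertainty blunts the incentive to over-invest in extraction.
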
There are, however, real risks associated with manipulating the random beacon, and the choice of a specific VRF offers resistance to gaming on that margin. Practically, this security requirement supersedes economic considerations on the conditional probability distribution and, after further analysis, we believe it provides enough leeway to select a VRF that is sufficiently insensitive to the identified RANDAO exploit that Fernet acceptably passes this requirement. The remaining uncertainty in this analysis emanates from the fact that there is room for changes in the Fernet proposal with respect to the specific choice of random beacon and VRF whilst preserving the protocol architecture. The possibility remains that conflicting requirements may arise during development or testing.

**B52**

The B52 sequencer selection proposal takes inspiration from the concept of proposer/block-builder separation (PBS), aiming to bring MEV auctions directly into the Aztec protocol. It is more precise to say that B52 enforces separation between the role of block builders and provers. A role for randomness was only introduced later in the proposal's development, following the observation that B52 was susceptible to a 51% attack whereby a sequencer with sufficient stake could guarantee selection as the block proposer, and a positive feedback loop would ensure their advantage only grew relative to the other sequencers. The mechanism of randomness in B52 is the same setup as Fernet, relying on a VRF and a random oracle to produce a score; scores are used for rankings, which are then used as inputs to other stages of the protocol. Unlike Fernet, which uses this scheme to determine which sequencer will get to produce a block, B52 uses this scheme as part of prover orchestration and incentivization. Provers get to vote on blocks, but which block is selected from the many valid next blocks is determined in part by each prover’s randomly assigned weight.
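
A minimal sketch of this weighted-voting step, with a hash used as a stand-in for the VRF; the function names and data shapes here are our own illustration, not B52's actual interfaces:

```python
import hashlib

def prover_weight(prover_id, epoch_randomness):
    """Illustrative stand-in for a VRF output: a hash-derived pseudorandom weight."""
    digest = hashlib.sha256(prover_id.encode() + epoch_randomness).digest()
    return int.from_bytes(digest[:8], "big")

def select_block(votes, epoch_randomness):
    """Pick the candidate block whose voters carry the most total weight.

    `votes` maps a candidate block id to the list of prover ids voting for it.
    """
    def total_weight(block_id):
        return sum(prover_weight(p, epoch_randomness) for p in votes[block_id])
    return max(votes, key=total_weight)

randomness = b"beacon-output-for-some-epoch"  # placeholder beacon value
votes = {
    "block-A": ["prover-1", "prover-2"],
    "block-B": ["prover-3"],
}
winner = select_block(votes, randomness)
print(winner)
```

Because the weights are redrawn from the epoch randomness, no single prover can reliably dominate the vote across epochs, which is the centralization-resistance property the randomness is meant to provide.
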
B52 is susceptible to the same class of vulnerabilities described in the Fernet section, but it is arguably less sensitive to these attacks, because a prover has little incentive to change which transactions they construct proofs for, conditioned on knowledge of their scores. However, it is still possible that a low-scoring prover may choose not to build proofs, given the unlikelihood of being selected and compensated for providing a proof. B52 provides a reasonable starting point for the development of a prover market, but this discussion is addressed as part of the economic requirements. On the specific topic of sensitivity to randomness exploits, B52 is considered to be relatively safe. Due to the way in which randomness is used within the protocol, there is relatively little advantage to exploiting it for economic gain or to censor transactions. Certainty in this analysis is rated medium, and could be increased by a formal mathematical sensitivity analysis.

### Occam's Razor: Complexity Minimization

*<p style="text-align: center;">The protocol becomes harder to understand and to maintain as a result of more "moving parts" than required.</p>*

[Occam’s Razor](https://en.wikipedia.org/wiki/Occam%27s_razor) suggests that the simplest explanation is the best explanation – in protocol design, one might argue that the simplest solution is the best solution. This sounds good, but practically it is not quite so simple. In truth, the goal is to find the simplest solution capable of regulating the complexity inherent in the system being orchestrated by the protocol. Thus, the lower bound on complexity can be attributed to Ashby’s [Law of Requisite Variety](https://en.wikipedia.org/wiki/Variety_(cybernetics)); it is not always obvious where that lower bound lies, but requirements analysis such as this document can help identify the simplest viable solution.
A balanced protocol design ensures that the mechanisms are robust enough to account for the nuances of real-world interactions while still being transparent and understandable to its participants. The ultimate goal is to promote fairness, efficiency, and trust. Here’s why this balance matters:

**Too Simple:** Overly simplistic mechanisms may overlook important subtleties. This might cause:

* **Unintended Incentives:** Simplicity could breed game-ability, where participants find ways to exploit the system that weren't anticipated by its designers.
* **Inefficiencies:** Failing to account for important factors or participant preferences can prevent a market from achieving an acceptable equilibrium state.
* **Perceived Unfairness:** If mechanisms don't differentiate sufficiently among situations, it could seem like they're creating or amplifying inequality, e.g. perceptions of censorship if particular classes of transactions are systematically underrepresented.

**Too Complex:** Overly intricate designs can be just as problematic:

* **Reduced Participation:** Participants may be overwhelmed or discouraged by the complexity and choose not to engage. This may be exacerbated by increased operating costs and/or uncertainty about upsides to participation.
* **Opacity:** A more complicated mechanism might be less transparent, leading to mistrust. Participants might suspect they're being taken advantage of.
* **Corner Cases:** Complexity increases the likelihood of unexpected outcomes or loopholes. Edge cases also tend to concentrate behavior and may lead to emergent centralization.

For the purpose of this analysis we consider three principles:

* **Legibility:** Strive to make the rules and mechanisms understandable. Even if there is complexity behind the scenes, the interface and rules that participants interact with should be as clear as possible.
* **Variety:** Design mechanisms that can be adjusted over time.
This relates to the modularity of the design and the degrees of freedom which allow actors in the network to adapt to changes in their environment.
* **Safety:** Provide guardrails and/or computationally enforce rules which should not be violated under any circumstances, e.g. enforcing the presence of proofs for each transaction to be included in any block.

**Fernet** is very *legible*: it has a simple and clear architecture which is relatively intuitive. Fernet also exhibits high *variety*, potentially at the expense of *safety*. It places less emphasis on rules and more emphasis on the behavior of market participants; this expands the expressive potential, and thus the adaptive capacity, of the network. Additionally, the protocol design is relatively modular. While the emphasis on market behavior over rules may reflect a reduction in *safety*, this concern is largely ameliorated by the enforcement of the “constraint satisfaction” rules. The protocol still strictly enforces the presence of proofs for all transactions. The degrees of freedom offered by the protocol contribute to variety without significantly infringing on safety. Fernet is a well-balanced design, offering a viable blend of legibility, variety, and safety.

**B52** is more complex than Fernet, but it is articulated relatively clearly for its complexity. It is less *legible* than Fernet, in that it takes a larger investment of attention to develop an intuition for how it works, especially if one is not already deeply familiar with the Ethereum MEV landscape. The B52 proposal provides a much stricter set of rules regarding how provers are orchestrated and blocks are built. The enforcement of these rules reduces *variety* but leans into *safety*. The increase in safety may appeal to some participants, giving them increased confidence in the protocol, but given the associated increase in technical complexity, that gain would come on faith for most participants.
B52’s fitness on this requirement hinges on the perceived importance of proposer/block-builder separation; if PBS is a hard requirement, then B52 would be well balanced. If PBS is not a hard requirement, the extra complexity is likely unjustified.

## Economic: Desired Property

### Low Cost for Public Function Calls

*<p style="text-align: center;">Users making public function calls experience costs significantly below the cost for the same function calls in Layer 1.</p>*

Due to the way that finality is achieved via rollup verification on the L1, it is never going to be faster to do a public call on the L2 than on L1. Since the L2 cannot compete on speed, it must compete on cost: the fee paid by users to successfully submit their public function call on L2 will generally need to be lower than the fee paid on L1 directly. A component of this fee must cover the cost associated with creating and submitting the block of L2 transactions to L1; the more efficient this step, the lower the fee can be for users when submitting an individual public call on L2. This may be thought of as the protocol ‘passing on the savings’ of an efficient transaction block workflow. Between the two proposals, Fernet does not describe the prover coordination problem, which will likely require a component of user fees to incentivize. The B52 proposal explicitly allocates a share of user fees to provers as part of the sequencer’s incentive mechanism to attract potential provers (*ex ante* increasing the chance of a sequencer’s proposal being accepted via prover voting), but does not specify the magnitude of the fee required to include a transaction within an L2 block. This is important, because a user’s public function call directly on L1 will compete with other L1 transactions for inclusion in a block, and hence the fee offered by a user will reflect the probability of block inclusion conditional upon those transactions.
By contrast, the fee offered by a user to incentivize inclusion in the L2 block will depend upon the transactions submitted on L2, which will include transactions (private function calls) that are not possible to submit on L1. Thus, it is not possible to evaluate the relative benefits to the user of using an L2 solution vs. a direct L1 solution without further data on the expected use cases of public function calls on L2 (e.g. a dependency of a public function call upon a previously-submitted private function call, or upon a private function call present in the same block but executed earlier).

### Low Cost for Private Function Calls

*<p style="text-align: center;">Users making private function calls experience costs at or below the cost of function calls on Layer 1.</p>*

Because a completely private function call directly on L1 is only possible with an anonymizing service, ascertaining which proposal would require user fees significantly below the use of such a service depends entirely upon the cost of providing the L2/L1 solution as compared with the cost of that service. This comparison requires *market research* of competing services, against which each proposal can be evaluated across a selection of use cases. To the extent that one proposal is likely to outperform the other in a use-case-agnostic submission of a private function call, it is expected that the B52 proposal, by tying price discovery to sequencer selection through the fee share offered to provers, may more easily align its user fees to competing services than the Fernet proposal. In the Fernet proposal, sequencers must stake on L1 *prior* to observing the transactions that can be collected into a block, while leaving the price discovery process between sequencers and provers undetermined.
### Censorship Resistance

*<p style="text-align: center;">It’s expensive to maintain sufficient control of the sequencer role so as to discriminate on transactions.</p>*

An advantage of the Fernet proposal is its reliance on a VRF for sequencer selection. In order to censor a transaction, an entity must be able to ‘guarantee’ that they are repeatedly selected as sequencer while the transaction to be censored is awaiting processing. The VRF selection process will prevent such a guarantee, *provided that sequencer concentration of power is not too high* (cf. the ‘Cartel Monopoly’ vulnerability discussed below). The B52 proposal relies upon a different mechanism for sequencer selection, which depends instead upon the combination of prover signature collection, a burn commitment, and a VRF, with the latter operating as a method of preventing a concentration of power *within the prover network*. But, because it is difficult to identify sequencers who are also provers (cf. below), there is still scope for manipulation of sequencer selection by committing to a high fee payment to those provers who are already under the sequencer’s control (or act as part of a cartel). While a VRF may attenuate the monopolization of prover power by one particular prover, it does not prevent a cartel of provers under the control of a sequencer from successfully – and repeatedly – voting for a censoring proposal.

## Economic: Vulnerability

### Cartel Monopoly

*<p style="text-align: center;">How expensive is it for actors to capture the sequencer role?</p>*

The main friction introduced to prevent the concentration of sequencer power in a single entity (or a single cartel of several entities) is to significantly increase the risk of loss following an attempt to control the sequencer role. The Fernet proposal exposes sequencers to the risk of L1 stake loss, which provides a channel for risk management by creating a workflow around e.g.
observing sequencer concentration and penalizing sequencers accordingly. The downside to this approach is that actively monitoring the ecosystem for e.g. ‘Sybilized’ sequencers, or for patterns of activity that imply sequencer concentration, may be very expensive to develop and (especially) to maintain[^3]. Thus, it is unclear whether a staking system ultimately provides censorship resistance without a clear understanding of the necessary monitor-identify-enforce mechanism that must be implemented. By contrast, the B52 proposal does not expose sequencers to staking risk — instead, sequencers must incentivize provers, who must stake on L1 to participate. This sequencer-prover relationship may unfortunately reinforce the concentration of sequencer power (as discussed in “Censorship Resistance” above), since it is not immediately clear how to counter an entity co-opting both sequencers and provers, ultimately paying the prover fee ‘to themselves’. The aforementioned challenges regarding how to implement a monitor-identify-enforce mechanism remain, but are now elevated to observing *both* sequencers and provers. Thus—without specifying an explicit method for disincentivizing a concentration of sequencer power—the Fernet proposal appears to provide a less complex framework for implementing such disincentivization, with a likely dependence on seizure of the L1 stake in the event of excessive sequencer concentration of power.

[^3]: For example, there is a ‘who watches the watchmen’ problem that arises from monitoring the monitors; this is part of a complex institutional design problem that is built upon (and is not usually solved by) the technical substrate being monitored.

### Miner-Extractable-Value (MEV)

*<p style="text-align: center;">Is the proposal at risk due to MEV exploitation?</p>*

The exploitation of MEV may or may not be considered a negative externality to the decentralization of block construction.
MEV exploitation may negatively impact users, as in front-running/back-running, time-bandit, or sandwich attacks. On the other hand, cross-market arbitrage opportunities are often useful in aiding price discovery and resolving imbalances between different exchanges. From the perspective of the sequencer, MEV is a potential profit opportunity and is thus balanced against the block rewards (BR) and transaction fees—note that here we break out fees from MEV, although technically they are part of the MEV calculation. The reason for treating fees separately is that the idealized principle behind transaction fees is to provide a method of price discovery whereby users who value inclusion of a transaction within a block the highest are able to signal their value by paying a higher fee for an increased chance of inclusion. Other MEV opportunities, which rely upon the actual trading activity contained within a transaction, do not facilitate price discovery, and so a separate treatment of fees helps to highlight that modality within the landscape of incentive structures that users and sequencers collectively respond to. Ultimately, the exploitation of MEV is understood within the broader context of the ***market design* problem**, in which users demand a service (timely inclusion of transactions in blocks) and sequencers supply the service in return for compensation. This immediately implies two requirements in order to fully understand the ability of either proposal to address MEV, and how this affects the equilibrium of the transactions market: 1. The *demand side* of the market must be understood, i.e. a. the type of transactions (public or private) that a user will use the Aztec network to fulfill; b. the volume of such transactions according to type; c. the responsiveness of transaction demand to fees; d. the responsiveness of transaction demand to externalities generated from MEV (e.g. a worse price due to slippage incurred from front-running); and e.
The cost and quality of the next best available service that the user can engage instead of the Aztec network to fulfill transactions. 2. The *supply side* of the market must be understood, i.e. a. the cost of supplying sequencer infrastructure to efficiently propose blocks, whether *internally*, i.e. using one's own infrastructure as a sequencer ‘entity’, or *externally*, i.e. using third-party specialized intermediaries who sell transaction ordering services to extract MEV (cf. e.g. a [recent description](https://bloxroutelabs.medium.com/mev-relays-for-ethereum-2-0-980016c72563) of the searcher-relayer-validator workflow for Ethereum prior to the merge); b. the responsiveness of cost to the type and volume of transactions (e.g. the potential additional expense to the sequencer of including a private transaction instead of a public transaction); and c. the opportunity cost of the sequencer, i.e. the next best available return on resources currently devoted to sequencing.

These are stipulated as requirements for the following reasons:

1. For the user, a *trade-off* exists between the expected time taken to have a transaction included in a block and the cost associated with its inclusion. As an example of a fictitious polar case, offering a fee of infinity will guarantee inclusion almost surely, while offering a fee of minus infinity (i.e. the sequencer pays the user an infinite amount to include the transaction in a block) will guarantee censorship. Deriving the marginal condition where the additional time delay of inclusion is just offset by the additional return from a lower fee, *given the potential MEV landscape*, will identify the equilibrium transaction flow from users.
2. For the sequencer, a *trade-off* exists between the expected revenue earned from exploiting MEV and generating transaction fees, and the cost associated with the discovery and exploitation of MEV given the infrastructure devoted to block proposal and sequencing.
As another polar case, if a public transaction provides an arbitrage opportunity with infinite upside, and the (marginal) cost of discovering this opportunity is zero, then the sequencer will generate infinite expected profits by exercising an MEV strategy, *if no other sequencer observes the opportunity*. On the other hand, if the mempool contains only private transactions with very low fees, the rest of the transaction metadata *does not provide MEV-exploitable opportunities* (such as might be identified by, say, sender-recipient identification in the transaction message), and the (marginal) cost of searching the mempool and ordering transactions is very high, then the sequencer may not find it in their interest to participate in block production at the current block height. Deriving the marginal condition where the additional cost of exploiting MEV is just offset by its additional revenue (in expected terms), *given the potential MEV landscape*, will identify the equilibrium sequencer block inclusion behavior.

A *market equilibrium* is: 1. the ratio of private to public transaction types included in a block, 2. the volume of transactions included in a block, and 3. the net fees associated with the transactions included, such that both trade-offs are balanced for both users and sequencers. Note that net fees, and hence the market equilibrium, by necessity also depend upon provers, insofar as they are compensated from transaction fees offered by the sequencer (B52) or by other means (Fernet, TBD). The overall objective, naturally, is to provide a protocol infrastructure that admits a market equilibrium where: 1. users find it in their best interest to participate, submitting a mixture of private and public transactions that fulfill their timeliness and cost expectations; 2. sequencers find it in their best interest to participate, regularly competing for the right to propose transaction orders that fulfill their profit expectations; 3.
Provers find it in their best interest to prove blocks that are proposed by the sequencer responsible for the current block (their own equilibrium decision is currently out of scope for the sequencer discussion, but it is important to note its required inclusion in the assessment of the success or failure of the sequencer selection problem).

It is against this definition of a successful market equilibrium that, ideally, the two proposals would be assessed. Unfortunately, this is not possible given the current uncertainties regarding both sides of the market (users and sequencers), and hence a proper understanding of the impact of MEV on the market by proposal is unavailable at present[^4]. Instead, a more qualitative (necessarily incomplete) assessment can be made based upon structural differences between the two proposals. The largest single difference between the two is that the B52 proposal explicitly includes a value discovery channel from the sequencer to the prover network, and hence to the rest of the ecosystem. This value discovery uses: 1. a fee division proposal between sequencers and provers, in addition to 2. a commitment by the sequencer to burn the protocol’s native token. The fee division proposal provides a means for understanding the value of the block as proven, working backwards to reveal the underlying cost differences between sequencers and provers in providing the ‘value’ of the proven block. Meanwhile, the commitment to burn the native token reflects the *collective network value* of the service a sequencer provides, as burning more of the token reduces its available supply and hence increases scarcity (whether this truly increases the value of the token depends upon the utility of the token for e.g. staking and other services). By contrast, the Fernet proposal does not articulate the prover engagement mechanism.
On the one hand this is a strength, as learnings from B52 and other proposals could be leveraged ex post to design an internally-consistent mechanism to incentivize provers. On the other hand, because the sequencer selection mechanism and the prover engagement mechanism are not designed at the same time, it is possible that the available design space for the prover network may be overly restricted by the sequencer selection mechanism. Whether this is actually true is unknown, and so Aztec’s stakeholders may judge whether this risk is worth undertaking when compared with B52’s explicit prover network incentivization mechanism.

[^4]: Addressing the market design problem is a necessary next step after a proposal has been selected. BlockScience has extensive experience in market and associated mechanism design that can be leveraged after proposal selection, if desired.

### Coordination Overhead

*<p style="text-align: center;">Cost to coordinate a new round of sequencers is making transactions infeasibly expensive.</p>*

To the best of our knowledge, and based on explicit assumptions, Fernet is likely to be around ~8% cheaper than B52 when summing up all L1 costs. There is, however, considerable uncertainty in the costing function and numbers, and it could well turn out that B52 is cheaper, principally if the average number of proposers is relatively high (e.g. more than 150 proposers), in which case the recommendation on this property is reversed. The cost to coordinate one round of sequencing includes high uncertainty about precise gas costs. Additionally, the cost to store transaction payloads - needed by participants to execute state updates for new blocks - carries high uncertainty regarding potential implementation, with both proposals mentioning EIP-4844 blobs as a potential solution. To resolve uncertainties around gas costs, we propose collaboratively iterating on the assumptions made for the baseline model.
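
The sensitivity of this comparison to the number of proposers can be illustrated with a toy linear cost model. All gas figures below are hypothetical placeholders chosen only to show how the cheaper proposal can flip as proposer count grows; they are not the estimates underlying the ~8% figure above:

```python
# Toy L1 cost model. Every gas number here is a HYPOTHETICAL placeholder,
# not an estimate from the underlying gas-cost analysis.

FIXED_GAS_FERNET = 500_000        # per-block overhead (placeholder)
FIXED_GAS_B52 = 700_000           # per-block overhead (placeholder)
PER_PROPOSER_GAS_FERNET = 3_000   # cost scaling with proposer count (placeholder)
PER_PROPOSER_GAS_B52 = 1_500      # (placeholder)

def l1_cost(fixed, per_proposer, n_proposers):
    """Total L1 gas for one round, as a linear function of proposer count."""
    return fixed + per_proposer * n_proposers

def cheaper(n_proposers):
    """Which proposal has lower total L1 gas at this proposer count."""
    fernet = l1_cost(FIXED_GAS_FERNET, PER_PROPOSER_GAS_FERNET, n_proposers)
    b52 = l1_cost(FIXED_GAS_B52, PER_PROPOSER_GAS_B52, n_proposers)
    return "Fernet" if fernet < b52 else "B52"

for n in (50, 100, 150, 200):
    print(f"{n} proposers -> {cheaper(n)} cheaper")
```

With these placeholder slopes the ranking flips somewhere between 100 and 150 proposers, which is the qualitative behavior that makes the recommendation on this property conditional on expected proposer counts.
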
As some of the costs around proofs, verification of proofs, and transaction sizes are unknown at this point, no precise recommendation can be made on them. Additionally, we identified issues around the option of storing transaction payloads in EIP-4844 blobs and raised some specific concerns. As this affects both proposals equally, it does not alter the recommendation for either proposal. However, we provide extended analysis and documentation in the following section.

# Extended Analysis

Evaluating the proposals with respect to both coordination costs and block reorg risk required deeper analysis than can be provided in this summarized report. Reaching the recommendations above required diving deep into various components and identifying sensitivity to design decisions. We provide additional documentation below, which also serves to identify where uncertainty might warrant further analysis.

## Costs of coordination and blob storage

Attached to the recommendations on costs for coordination of sequencers on L1 are currently several uncertainties around proof sizes, payload sizes of transactions, fees and block rewards, computational costs to verify specific proofs, and more. While we identify the most probable values from our understanding and attach estimates for specific costs, these are likely to be refined iteratively. Additionally, identifying costs required diving into blob gas and pricing, which provide additional uncertainty as critical-path elements. While workarounds for not using blobs are available, by instead falling back to earlier solutions using calldata, they are not ideal for optimistic system trajectories. We analyzed the currently available specification for EIP-4844 and identified several risks associated with Aztec using blobs in their current form. Additionally, we identified ambiguity around both demand- and supply-side costs associated with sequencing and proving.
We provide functional forms of our understanding of these costs and recommend, if uncertainty about these costs risks blocking a decision, collaboratively iterating on the functional forms and the likely gas costs to increase the certainty of our recommendations. The documents containing this analysis can be found here: [L1 Gas Costs for the B52 and Fernet Proposals](https://hackmd.io/mVgeixccT_mNPDLz6tH0-w?view)

## Block Reorganization and Finality

As an L2 rollup to Ethereum, Aztec is critically dependent on the security and consistency of Ethereum. While a complete breakdown of Ethereum would likely be catastrophic for all systems dependent on it, minor hiccups such as reorgs of non-finalized blocks could become bigger risks if Aztec's sequencer selection is synchronized without properly aligning the security assumptions of L1 with the requirements on L2. To arrive at the recommendations given above, we conducted further research, documented here: [Finality Interaction Effects](https://hackmd.io/@blockscience/Bytl3pLon)

# About BlockScience

*<p style="text-align: center;">[BlockScience®](https://block.science/) is a complex systems engineering, R&D, and analytics firm. By integrating cutting-edge research, applied mathematics, and computational engineering, we analyze and design safe and resilient socio-technical systems. With deep expertise in Blockchain, Token Engineering, AI, Data Science, and Operations Research, we provide engineering, design, and analytics services to a wide range of clients including for-profit, non-profit, academic, and government organizations.</p>*

*<p style="text-align: center;">Our R&D occurs in iterative cycles between open-source research and software development, and application of the research and tools to client projects. Our client work includes design and evaluation of economic and governance mechanisms based on research, simulation, and analysis.
We also provide post-launch monitoring and maintenance via reporting, analytics, and decision support software. With our unique blend of engineering, data science, and social science expertise, the BlockScience team aims to diagnose and solve today’s most challenging frontier socio-technical problems.</p>*

*<p style="text-align: center;"> Find us on [Twitter](https://twitter.com/block_science), [Ghost](https://blockscience.ghost.io/page/3/), [Medium](https://blockscience.ghost.io/) and [YouTube](https://www.youtube.com/channel/UCPePNv3dJN--aKhFGOa0Rjg). </p>*