---
title: Fair Sequencer
tags: EVM, L2, sequencer, Ethereum, VRF
breaks: false
---
Authors: Mantle Research team (Aodhgan, Franck, Andreas, AFK, Arun)
# Rollup sequencers
Rollups (L2s) provide off-chain execution of transactions in _batches_ and post the result of a batch to L1 or a data availability layer. The result is a significant increase in throughput from around 15 transactions per second on L1 to theoretically thousands of transactions per second for a typical L2 (reportedly up to 40k TPS for Arbitrum, and up to 100k TPS for zkSync).
The off-chain execution is managed by the _sequencer_. At the moment, all the major rollups operate their own sequencers and that comes with some risks:
- the sequencer is a **single point of failure**: if the sequencer goes down or is malicious, the whole rollup is impacted;
- the rollup is **not decentralised**: the sequencer controls the selection and ordering of transactions, which opens the door to censorship, manipulation or [MEV](https://ethereum.org/developers/docs/mev) via ordering of transactions.
## Current trend
Solutions have emerged to remedy these problems, and serious research efforts (e.g. [Espresso](https://www.espressosys.com), [Astria](https://www.astria.org)) are delving into the design of _decentralised_ and _shared_ sequencers. Although the idea is appealing, it comes at a cost: decentralising the sequencer may create opportunities but remains complex to implement (relying on a consensus protocol), and several key problems pertaining to decentralising the sequencer (e.g. rewards, revenue sharing) are still open. In this context, it is not surprising that no major rollup operator offers decentralised or shared sequencers in production yet.
In this article we propose an alternative light-weight practical solution to provide _fair sequencing_ for rollups. We set out to provide a solution for L2 sequencers that has the following properties:
- **fairness**: it protects users from front and back-running (e.g. sandwich attacks) and more generally [MEV attacks](https://chain.link/education-hub/maximal-extractable-value-mev),
- **censorship resistance**: once a transaction is in the mempool, we guarantee it is going to be processed in a *bounded number* of blocks,
- **modularity**: the components in our architecture are independent and in charge of logically distinct tasks,
- **attributability**: every step from _fair ordering_ to the preservation of the ordering in the execution is _proved_ and the proof published to a _Data Availability (DA)_ layer, making it easy to identify the components that misbehave,
- **simplicity**: it is easy to understand and relatively easy to implement.
## The centralised sequencer architecture
In an optimistic or validity rollup, the _sequencer_ is in charge of preparing and executing batches of transactions. Most rollups rely on a _centralised_ sequencer, Figure 1 below, managed by the rollup operator, that performs two tasks: **extracting** and **processing** batches.
![sequencer-fig](https://hackmd.io/_uploads/Hk45S9O9p.jpg)
**Figure 1**: _A centralised sequencer architecture_.
As a result, the rollup operator can decide:
1. which transactions to **include** in a batch, and
2. the **order** of transactions in a batch.
A consequence of this architecture is that if the sequencer is malicious (or biased) the rollup is not _censorship resistant_. To mitigate this, some rollups like Arbitrum or Optimism provide [mechanisms](https://docs.arbitrum.io/sequencer#unhappyuncommon-case-sequencer-isnt-doing-its-job) for users to bypass the sequencer to ensure censorship resistance. However, by utilizing the bypass, a user misses out on low fees on the rollup. Another consequence of this design is that the sequencer _decides the order of transactions_ and can take advantage of MEV opportunities, extracting profits from users' transactions (e.g. sandwich attack).
Note also that there is no guarantee that the _execute batch_ module executes the transactions in the order of the created batch, so even if the _create batch_ module is trusted (no censorship) this does not imply that the sequencer is censorship resistant.
## Decentralised sequencer
To address those issues, a popular approach is to _decentralise_ the sequencer. The idea is to use a Byzantine Fault-Tolerant (BFT) _consensus protocol_, Figure 2, to agree on _i)_ the transactions to be processed in a batch, and _ii)_ the ordering of the transactions in the batch. There are several solid projects e.g., [Espresso](https://www.espressosys.com), [Astria](https://www.astria.org), that develop decentralised versions of the sequencer.
![dec-sequencer-fig](https://hackmd.io/_uploads/rkwKgkcqa.jpg)
**Figure 2**: _A decentralised sequencer architecture_.
The expectation is that the decentralised sequencer _fairly_ orders the transactions and proposes the ordered list to the L2 to execute them. In this context, a common and strong assumption is that the L2 is going to execute the transactions in the order given by the decentralised sequencer, thus will **not tamper with the ordering**.
The decentralised sequencer is claimed to be censorship resistant under another assumption: **enough honest participants** are present in the consensus protocol -- usually 2/3. In this case, a transaction can eventually make it into a batch, as it will be picked up by an honest party. Note that it is not guaranteed that a transaction will make it, as participants may (agree to) use specific ordering policies to order transactions.
The addition of so-called _order-fairness_ to consensus protocols was proposed in [Order-Fairness for Byzantine Consensus](https://eprint.iacr.org/2020/269). The [presentation by M. Kelkar](https://www.youtube.com/watch?v=19KW4V2CloQ) of this paper at CRYPTO is a great overview of the topic. In a nutshell, order-fairness is an agreement by a majority of participants that some transaction $t_1$ _happens before_ another transaction $t_2$.
All of the above approaches use a BFT consensus protocol and:
1. the round leader in the BFT protocol has absolute control over the choice of transactions and the order within a batch, which is sufficient to permit censorship and certain MEV strategies (e.g. process some specific transactions first; known as frontrunning).
2. a subset of participants may collude and implement strategies that can impact the result of certain transactions negatively (e.g. if they gain control over several successive blocks by chance, known as [Multi-block MEV](https://ease.org/most-governance-contracts-have-an-upcoming-vulnerability-we-should-all-pay-attention-to/)).
:::warning
We may also argue that any consensus mechanism to build a batch is necessarily _unfair_ and not resistant to MEV attacks, as the participants follow _protocol-specific fairness criteria_ that may not be reflected and considered fair in other protocols.
:::
It follows that decentralising the sequencer using a consensus protocol does not necessarily solve the problem of censorship, and certainly does not eliminate MEV attacks.
Note that _rotating a set of sequencers_ (a potential solution mentioned by Optimism) merely _rotates centralisation_ and is subject to the same censorship and MEV issues.
A _shared_ sequencer is a decentralised sequencer that can be used by multiple chains, and has the same weakness as a decentralised sequencer.
We refer to the Systematisation of Knowledge paper [SoK: Decentralized Sequencers for Rollups](https://arxiv.org/abs/2310.03616) by Shashank Motepalli, Luciano Freitas and Benjamin Livshits for a comprehensive analysis of the main concepts and desired properties of decentralised sequencers.
# Fair Sequencing
_Fair sequencing_ has already received some attention in the past. Some prominent Web3 actors like [ChainLink](https://chain.link) may be credited to have introduced the concept of a [fair sequencing service](https://chain.link/education-hub/maximal-extractable-value-mev). This approach has been refined into [PROF (protected order flow)](https://initc3org.medium.com/prof-fair-transaction-ordering-in-a-profit-seeking-world-b6dadd71f086) to integrate with profit-seeking MEV strategies, targeting L1 proposer-builder-separation (PBS) architecture.
:::info
:bulb: We revisit this approach here in the context of L2 sequencing. Our proposal differs in several aspects from PROF. While pursuing the same goal, preventing MEV attacks and providing a fair transaction execution environment, we do not rely on a BFT consensus protocol.
:::
The PROF proposal relies on a consensus protocol (decentralisation) and allows for customising the notion of fairness. It is unclear how it provides censorship resistance and the [PROF article](https://initc3org.medium.com/prof-fair-transaction-ordering-in-a-profit-seeking-world-b6dadd71f086) reads: "PROF supports a policy for ordering transactions that doesn’t involve _intentional_ extraction of MEV." where _intentional_ may need to be defined more precisely.
With this ambiguity around the implications of a decentralised sequencer, what is the true motivation for decentralising the sequencer? What problems does it solve? And does it solve the right problems?
It is common to refer to _fairness_ as unpredictable (and unbiased) and hence hard to manipulate. A fair coin is probably the simplest example of fairness: when flipped, it should yield heads or tails and both events should have 50% chance of occurrence.
## Fair sequencer architecture proposal
:::success
We argue that the problem to solve in sequencing (and batch-building) is not necessarily related to decentralisation. The problems to solve are _censorship resistance_ and _fairness_ which includes MEV attacks resistance.
We propose a solution that separates these issues and is based on _verifiable random functions_ (VRFs) and _zero-knowledge (ZK) proofs_ that has three essential properties and is _provably_ fair:
1. the probability of a transaction being censored is negligible (cryptoeconomic security),
2. the order of transactions in a batch is (pseudo-)randomly chosen, and the (pseudo-)randomness is _publicly verifiable_,
3. we do not assume that the L2 is going to execute the transactions in the proposed random order and, therefore, provide a mechanism to _verify_ that this is true.
:::
A consequence of (1) is that this solution is effectively censorship resistant. A consequence of (2) is that it eliminates MEV attacks, as the order of transactions is random and by definition _unpredictable_. In fact, building a batch is a provably fair random process: given a set of transactions, the distribution over orderings is uniform. (3) ensures that the executor cannot tamper with the order.
Overall, we have an **end-to-end secure process** to ensure and verify fairness.
In the rest of this article, we first introduce our architecture and its properties in a simple setting. We then discuss security and how to design a (Byzantine) fault-tolerant version of the architecture, and some possible extensions.
Finally, we address the problem of _spamming_ the mempool with transactions and propose some solutions.
## Transaction submission - censorship resistance
The lifecycle of a transaction starts with a user who submits a transaction to a (centralised) mempool (Figure 3). If the transaction successfully reaches the mempool, the user is notified with a _receipt_ that contains the batch number $k$ (akin to a block number) in which the transaction will be included. In effect, this schedules the transaction in a batch using a _first-come-first-served_ policy, but the _actual position of the transaction when the batch is executed_ is not finalised yet.
![submission-receipt](https://hackmd.io/_uploads/HypjdMyoT.jpg)
**Figure 3**: Submission phase – Pool Manager
The _pool manager_ builds the batches (starting a new batch when the previous one is full), schedules batches in the order they are constructed, and publishes the batches (e.g. a Merkle tree hash) to the DA layer (Figure 3). As a result, a user can _verify_ with a Merkle proof that the promise to be included in batch $k$ is fulfilled. Indeed, if a user sees batch number $k + 1$ and their transaction has not been included before, they can trigger a dispute (similar to a fraud proof) and the mempool manager may be slashed. This ensures censorship resistance (once a transaction is accepted by the mempool) and a bounded block-inclusion time.
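As an illustration of the inclusion check, here is a minimal Merkle tree sketch in Python. It is purely hypothetical (it duplicates the last node on odd levels; production implementations differ in leaf hashing and domain separation), but it shows how a user can verify the pool manager's promise against the published root:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over the leaf data."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path from leaf `index` up to the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], index % 2))   # (sibling hash, node-is-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(leaf, path, root) -> bool:
    """Check that `leaf` is committed under `root` via the sibling path."""
    node = h(leaf)
    for sibling, node_is_right in path:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root
```

A user holding a receipt for batch $k$ recomputes the path for their transaction and compares it against the root published to the DA layer; a mismatch (or a missing leaf) is the evidence used in a dispute.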
## Fair ordering
Given a batch $\mathbf{b}$ of size $n=|\mathbf{b}|$, extracted by the mempool manager, the (rollup) sequencer requests a _random permutation_ of the set of indices $\{1, ..., |\mathbf{b}| \}$, Figure 4. To do so it queries an oracle that provides a _verifiable random function (VRF)_ and returns the permutation $\sigma$ along with a succinct (zk-)proof $\beta$ that a specific pseudo-random function was used to generate $\sigma$. Both $\sigma$ and $\beta$ are published to the DA layer (Figure 4).
![ordering](https://hackmd.io/_uploads/BJHPsf1jp.jpg)
**Figure 4**: Random ordering phase – Fair Sequencer
The batch to execute is $\mathbf{b'}$. A (zk-)proof that $\mathbf{b'} = \sigma(\mathbf{b})$ is posted to the DA layer, along with the Merkle tree hashes $\#\mathbf{b}$ and $\#\mathbf{b'}$.
Because the permutation is provably random, the ordering is fair.
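As a sketch of how the permutation can be derived, and re-derived by any verifier, assume the oracle's output is a 32-byte random value (checking the accompanying proof $\beta$ against the oracle's public key is omitted here). Seeding a deterministic PRNG with the VRF output makes $\sigma$ recomputable by anyone who sees the published data:

```python
import hashlib
import random

def permutation_from_vrf(vrf_output: bytes, n: int):
    """Derive a permutation of {0, ..., n-1} deterministically from the
    VRF output, so any verifier can recompute it from published data."""
    seed = int.from_bytes(hashlib.sha256(vrf_output).digest(), "big")
    rng = random.Random(seed)      # deterministic PRNG seeded by the VRF value
    perm = list(range(n))
    rng.shuffle(perm)              # Fisher-Yates shuffle under the hood
    return perm

def apply_permutation(batch, perm):
    """Compute b' = sigma(b)."""
    return [batch[i] for i in perm]
```

A verifier who checks $\beta$ and then recomputes `permutation_from_vrf` can confirm that the published $\mathbf{b'}$ really is $\sigma(\mathbf{b})$.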
## Correctness of the execution
At the end of the pipeline, the _executor_ processes the batch $\mathbf{b'}$ and publishes the corresponding block to the DA layer or L1. The block contains the list of transactions (and their order), and this can be matched with the hash $\#\mathbf{b'}$ published before the execution. If the two hashes are different, the executor misbehaved and can be slashed.
![executor](https://hackmd.io/_uploads/B1AZMLkoa.jpg)
**Figure 5**: Executing phase – Executor
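The attributability check at this stage is a simple hash comparison. The sketch below uses a hypothetical flat commitment (a hash over the concatenated transaction hashes) in place of the Merkle tree hash $\#\mathbf{b'}$; the comparison logic is the same either way:

```python
import hashlib

def batch_commitment(txs) -> bytes:
    """Hypothetical commitment: hash of the concatenated transaction hashes.
    (The article publishes a Merkle tree hash; a flat hash keeps the sketch short.)"""
    acc = hashlib.sha256()
    for tx in txs:
        acc.update(hashlib.sha256(tx).digest())
    return acc.digest()

def executor_honest(executed_txs, published_commitment: bytes) -> bool:
    """True iff the block's transaction order matches the commitment #b'
    posted to the DA layer before execution."""
    return batch_commitment(executed_txs) == published_commitment
```

Any reordering by the executor changes the recomputed commitment, so the mismatch is publicly visible and slashable.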
## Summary
:::success
:bulb: The sequencing and processing of transactions is divided into three steps carried out by separate actors (separation of concerns):
- the **mempool manager** is in charge of building and extracting batches in the order they are constructed,
- the (fair) **sequencer** requests a random permutation and a proof, and computes the permuted batch,
- the **executor** processes the batch and builds the blocks.
The actors have distinct tasks and are separately accountable for each of them, which makes our design highly **modular**. Publishing the results of each task to the DA layer enables us to identify malicious actors and slash them if they misbehave, thus ensuring **attributability** (identifying who to blame in case of misbehaviour).
:::
Note also that _fair sequencing with dVRF_ as proposed above differs from _threshold encryption_ protocols, where transactions in the mempool are encrypted. In a threshold encryption scheme, users can collude with the sequencer by communicating the encryption of their transaction. In our setting, the choice of ordering is random and does not depend on the content of transactions or their encryption.
# Security, fault-tolerance, performance
## Security
In our design, cryptoeconomic security is ensured via a _staking_ mechanism. The operators (mempool manager, sequencer, executor) must stake some assets. If an actor is caught misbehaving, the staked assets are _slashed_ and additional social measures (temporary ban, etc) can be agreed upon.
As an example, re-staking protocols like [EigenLayer](https://www.eigenlayer.xyz) enable other protocols to leverage Ethereum’s security at low cost.
In our design we have left open the choice of the DA layer. The DA layer can be L1 itself, or for example [Celestia's DA layer](https://docs.celestia.org/learn/how-celestia-works/data-availability-layer) or EigenLayer's [EigenDA](https://docs.eigenlayer.xyz/eigenda/overview/).
## Decentralisation
In the simplest setting, we can have a single mempool manager, sequencer and executor. This implementation makes every module a single point of failure.
This can be addressed by a simple fault-tolerant mechanism: detecting when a component is not operating properly (or operates with delays) and replacing it with a backup component.
The mempool manager can be _decentralised_ by operating multiple nodes to which users can submit their transactions. Reliable broadcast protocols can be employed to guarantee message delivery to a distributed mempool, providing a high degree of security and even Byzantine fault tolerance (BFT). This increases the chances of a transaction getting into the mempool.
Both the sequencer and the executor have effectively no influence on the batch content and transaction order, so there is little motivation to decentralise these components. However, the sequencer takes a VRF as input, and the VRF itself can be decentralised, as discussed in the following.
## Verifiable Random Functions
Verifiable Random Functions (VRFs) can be provided by oracles. Currently [Chainlink](https://docs.chain.link/vrf) and [Supra](https://supraoracles.com/vrf-product/) offer VRF as a service, and either can be used in our proposed design. As per the documentation, the [Chainlink VRF service](https://docs.chain.link/vrf) seems to be centralised and a single point of failure, which may be an issue.
However, decentralised Byzantine fault-tolerant VRFs are well understood and are provided by e.g. [drand](https://drand.love/docs/overview/#how-drand-works) or Supra [dVRF](https://supraoracles.com/vrf-product/). Employing such solutions significantly increases the reliability of this service. Another feature of Supra [dVRF](https://supraoracles.com/vrf-product/) is that the output (random value) can be made _private_, communicated to and known only by the recipient. We can leverage this feature by querying the dVRF oracle in _advance_, as soon as a batch is full in the mempool, thereby reducing the delay between the extraction and the computation of the permutation.
Note that the decentralisation of the VRF means that the secret key used to generate random numbers is _shared_ among a set of participants, which increases the reliability and availability of the service. It is different from decentralising the sequencer.
## Compatibility with MEV
The objective of _fair sequencing_ is to protect users against front- and back-running attacks. However, if need be, our approach can accommodate some (hopefully "good", e.g. arbitrage) MEV strategies. This is just a matter of _granularity_: instead of submitting unique transactions, we can allow users (or block builders) to submit _bundles_ (ordered sequences of transactions) and treat a bundle as a single transaction in the permutations.
More generally, it is possible to make sure that the randomly ordered list of transactions is a total order extension of some _independent_ partial orders (e.g. timestamps on different nodes).
Ideally, users would have the choice to submit their transactions in a bundle via an intermediary (e.g. a DeFi dApp) or to submit directly and privately to the mempool. Note that because different bundles are still randomly ordered, users who opt in for direct submission are (probabilistically) protected from front- and back-running. Moreover, independent bundles are shuffled in an unpredictable manner, making [multi-block MEV attacks](https://ease.org/most-governance-contracts-have-an-upcoming-vulnerability-we-should-all-pay-attention-to/) (securing consecutive blocks) very unlikely to succeed.
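Treating a bundle as a single unit in the shuffle can be sketched as follows (illustrative only; the seed would be derived from the VRF output as before). Bundles keep their internal order while their positions relative to other units are randomised:

```python
import random

def shuffle_with_bundles(items, seed: int):
    """Shuffle a list whose elements are either single transactions or
    bundles (lists of transactions). Bundles are moved as atomic units,
    their internal order is preserved, and the result is flattened."""
    rng = random.Random(seed)
    units = [it if isinstance(it, list) else [it] for it in items]
    rng.shuffle(units)
    return [tx for bundle in units for tx in bundle]
```

Whatever the seed, a bundle's transactions stay adjacent and in order, which is what lets intermediaries express intra-bundle dependencies without gaining control over inter-bundle ordering.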
## Protection against front-running and spamming
The architecture we propose is fair in the sense that the ordering of transactions is random. However, malicious actors could try to spam the mempool with identical transactions to increase their chances of front-running some transactions.
To tackle the issue of spamming, _**priority fees**_ can be introduced. Firstly, to disincentivise excessive spamming and protect the mempool, a base fee is applied. Nevertheless, this leaves the concern that a user may still spam transactions to gain an improvement in their position and front-run a competing transaction; this may be the case if the expected gain is higher than the accumulated fees. Consequently, a base fee alone does not provide a remedy.
A more flexible approach is to allow for a user-adjustable priority fee whose size impacts the probabilistic position distribution in a fair manner. As a result, it becomes equally expensive to issue a single transaction with a high priority fee as to issue multiple transactions with low fees. Furthermore, similar to the [approach](https://www.youtube.com/watch?v=GHPFWMQOwCk) described by [Offchain Labs](https://docs.arbitrum.io), the priority fee should be considered per gas, rather than as an absolute value, to disincentivise the bundling of unrelated transactions.
With the above modifications, the risk of (or at least the motivation for) spam is reduced substantially. The approach also provides the means for users to select a suitable ratio of priority fee to expected return, to probabilistically protect the successful execution of their transaction, and consequently to make it prohibitively expensive to be front-run.
What this achieves, in essence, is a **provably-fair all-pay auction**. A significant advantage of all-pay auctions is that payment is internal to the system and, therefore, value can be captured and utilized within the protocol to, for example, subsidize protocol operation costs, rather than the value being lost to external factors, such as latency infrastructure. <!-- see [TimeBoost (Offchain Labs)](https://arxiv.org/abs/2306.02179)-->
The approach can be enhanced by sealed bidding and encryption of transactions to reduce latency races, see e.g. [TimeBoost (Offchain Labs)](https://arxiv.org/abs/2306.02179) or [Sharma et al](https://www.mdpi.com/2079-9292/10/19/2340). However, these mechanisms may be less necessary in our setting. Consider an arbitrage opportunity: in a first-price all-pay auction only the highest bid can win, so actors may issue their transaction as late as possible in order to place the highest bid, which ultimately incentivises investments in latency races. In contrast, in our setting the fees of the same actor for the same opportunity accumulate, increasing the likelihood of capturing those investments as priority fees instead.
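As a toy illustration of the fee mechanism, the sketch below assumes the top position is won with probability proportional to the invested fee (a simplifying assumption, not the article's exact mechanism; all names are ours). A Monte Carlo run then shows that splitting a fee of 10 across ten transactions yields essentially the same front-running chance as a single transaction carrying the full fee:

```python
import random

def top_slot_winner(fees, rng):
    """Sample the holder of the first position with probability proportional
    to fee -- a simple fee-weighted lottery used only for this illustration."""
    total = sum(f for _, f in fees)
    x = rng.random() * total
    for owner, f in fees:
        x -= f
        if x <= 0:
            return owner
    return fees[-1][0]

def estimate(fees, trials=200_000, seed=0):
    """Estimate the spammer's probability of capturing the top slot."""
    rng = random.Random(seed)
    wins = sum(top_slot_winner(fees, rng) == "spammer" for _ in range(trials))
    return wins / trials

# one tx with fee 10 vs ten txs with fee 1 each, against an honest fee mass of 90
single = estimate([("spammer", 10.0), ("honest", 90.0)])
split  = estimate([("spammer", 1.0)] * 10 + [("honest", 90.0)])
```

Both estimates come out near 0.10: under fee-linear weighting, spamming many cheap transactions buys no advantage over one well-funded transaction.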
<!--
We can provide quantitative arguments why this adjustment can lead to a more fair competition. One in which any user that submits a transaction for a given prize can win with probabilistic chances in, what could be in effect considered, such a randomised auction.
one who is competing can win, what becomes effectively a randomised auction.
doing so can be made prohibitively expensive through our approach.
-->
## Conclusion
We have proposed a design for a sequencer that provides censorship resistance (cryptoeconomic security) and fairness (provably random ordering). Our proposal is light-weight in the sense that it does not rely on a consensus layer. It is modular, and the three main modules (pool manager, sequencer and executor) can be further refined to integrate fault-tolerance mechanisms.
It is worth noting that we achieve censorship resistance and fairness with a combination of a first-come-first-served (FCFS) policy and random ordering. FCFS allows us to fill the batches, and each batch is randomly ordered. This prevents transactions from being delayed indefinitely or by an unbounded (finite but arbitrarily large) amount of time.
What is not covered in this article is the details of the crypto-economics strategy and the rewards and incentives for the participants (mempool manager, sequencer, executor).
---
---
# Appendix
## Censorship resilience
Decentralising using a BFT algorithm provides a way of reaching a consensus. It does not protect against censorship. Censorship resistance is a function of the _diversity_ of the participants in the consensus protocol.
We argue that censorship resilience can be handled by a different approach than decentralising the sequencer.
The general approach taken is as follows.
1. Provide guarantees about the addition of a given transaction to the mempool of the sequencer, or alternatively the knowledge that it has not been added.
2. Provide guarantees about the eventual processing of a transaction.
The first point can be addressed in several ways.
#### Receipts
We can require the sequencer to provide a timestamped signed receipt of the transaction, which would make the sequencer accountable for timely processing.
#### Broadcast primitives
The sequencer may be separated from the user by a network of other nodes, for example to protect the sequencer from spam through a group of load-balancing nodes. In a network setting, broadcast protocols become relevant: they provide guarantees that a transaction is submitted and received by all nodes or none (including self), even in the presence of faults (or Byzantine actors).
Broadcast mechanisms can provide partial ordering without imposing atomic broadcast. Examples are Reliable, FIFO, or Causal Broadcast, see this [lecture](https://www.youtube.com/watch?v=A8oamrHf_cQ&list=PLeKd45zvjcDFUEv_ohr_HdUFe97RItdiB&index=12). The properties that Reliable Broadcast provides (and that we are interested in) are:
- **Consistency / Agreement**: honest parties have a common output.
- **Validity**: the output is indeed the sender's message.
- **Totality**: if any honest party produces an output, all honest parties eventually produce it.
<!--
- **Validity** : The output is indeed the senders message (if honest).
- **Termination / Liveness**: Honest party obtains output before time `T`,
-->
Several relevant broadcast protocols have been proposed that provide guarantees on delivery in the presence of Byzantine actors, see [Das et al](https://eprint.iacr.org/2022/052). This opens up the opportunity for a trustless network of nodes transmitting to the sequencer. In a (partially) synchronous setting this can provide guarantees on when transactions should be received by the pool manager or sequencer, and does not necessarily require trusted receipt issuance.
While this introduces decentralisation, the protocols utilised can be substantially simpler and more performant than the BFT consensus engines required for decentralising the sequencer.
## Attack vectors
<!--At present blocks are separated into tiers to address certain concerns, such as high cost for reverted transactions. The order as well as the addition of a transaction to a tier is tier specific.
Users submit transaction via a smart contract, which checks if the transaction would be successful and reverts them at a substantially lower cost, compared to when they would be issued directly.
This requires _executing_ or at least checking some properties of the transaction.-->
### Spam attack and mitigation
We assume the following scenario: in order to front-run a transaction, an adversary must obtain a position in the block that is higher than the transaction she wants to front-run. Since the order is random, the adversary cannot guarantee the allocated slot in the block, but she can increase the probability of success by issuing multiple, effectively identical transactions.
This raises the following concerns.
(1.) An honest user who issues a single transaction can easily be front-run, while potentially having to bear the full cost of their transaction. This contrasts with models where users are refunded part of their fees.
(2.) An adversary is incentivised to spam transactions, which lowers the available bandwidth, degrades the network performance and, in the worst case, can bring down the sequencer node.
(3.) Depending on the policy, bundling independent transactions could increase the probability of obtaining a higher position.
(4.) Users could be motivated to submit their txs as late as possible, towards the closing of a batch, to gather additional information about competing txs.
However:
In regards to (1.), in a first-price auction a well-informed adversary may be certain to win (e.g. thanks to better information on the bid distribution). In contrast, in our model the outcome is still down to chance: given two bidders with almost identical bids, in a first-price auction only the highest bidder can win, whereas in our fair model the chances remain almost equal.
<!--Similarity can be drawn to a all-pay auction.-->
In regards to (2.), the incentive for spamming is reduced by a) employing an all-pay policy, which makes spamming costly, and b) allowing modifiable priority fees that are accounted for in the randomisation. More particularly, choosing the probability of being selected to be linear in the invested fee results in a probabilistically fair ordering. This removes the incentive to spam identical transactions, as a single tx with a high fee has the same probability of being selected as the same fee distributed across many txs.
In regards to (3.), one can apply the priority fee per gas, see e.g. [here](https://www.youtube.com/watch?v=GHPFWMQOwCk). This counters the bundling of unrelated txs, as an individual tx has the same probability of obtaining a certain position as a bundle combining several independent txs.
In regards to (4.), one may address this by sealed-bid auctions.
<!--
## AG notes
Suggestion: address the common critisms with random ordering
- encouragement of spam to try brute force the "malicious" transaction into the "right" slot. This can perhaps be done by fees.
- this approach may result in a larger amount of reverted transactions
- (dis)allow `bundles` of transactions which are popular in eth PBS land
-->
<!-- this is theorized in this talk by flashbots about all transaction ordering policies: https://www.youtube.com/watch?v=l_gkFntA1cA worth a watch
also interesting is this thread from way back about Geth's random ordering policy in miner.go: https://github.com/ethereum/go-ethereum/issues/21350 -->
### Probabilistic analysis
#### Effect of spamming transactions to frontrun
Assume a user $A$ wins a game if any of her txs gets a position higher than a tx issued by opponent $B$. Furthermore, assume $B$ issues only one tx.
Ignoring the size of transactions (batches), the influence of fees, gas, etc., $A$ wins with probability
$$
p(N,k) = \frac{1}{N}\sum_{m=1}^{N}\left(1-\prod_{j=1}^{k} f(N,j,m)\right)
$$
$$
f(N, j, m) =
\begin{cases}
\dfrac{m - j}{N - j} & \text{if } m > j \\
0 & \text{otherwise}
\end{cases}
$$
where $N$ is the number of txs in the block, $k$ the number of adverse txs, and $m$ the index of the victim tx.
#### Expected Gains
The expected return on such a game is
$$
E = (M-k)p - k(1-p) = Mp-k
$$
where $M$ is the value that can be won relative to the cost of a tx.
Naturally $A$ attempts to maximize $E$, i.e.
$$\frac{dE}{dk} = M\frac{dp}{dk} - 1 = 0$$
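The formulas above translate directly into code. Rather than solving $dE/dk = 0$ analytically, the sketch below brute-forces the $k$ that maximises $E$ (function names are ours). A useful sanity check: with one adverse tx against one victim tx, $p(N,1) = 1/2$, as expected for a uniformly random order:

```python
def win_probability(N: int, k: int) -> float:
    """p(N, k): probability that at least one of the adversary's k txs
    lands above the victim's, averaged over the victim's (uniformly
    random) index m in a block of N transactions."""
    total = 0.0
    for m in range(1, N + 1):
        prod = 1.0
        for j in range(1, k + 1):
            prod *= (m - j) / (N - j) if m > j else 0.0
        total += 1.0 - prod
    return total / N

def optimal_spam(N: int, M: float, k_max=None):
    """Brute-force the k maximising the expected gain E(k) = M * p(N, k) - k."""
    k_max = k_max if k_max is not None else N - 1
    best = max(range(0, k_max + 1),
               key=lambda k: M * win_probability(N, k) - k)
    return best, M * win_probability(N, best) - best
```

Since $E(0) = 0$, the optimum is never negative: the adversary simply abstains when spamming does not pay.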
#### Example figure
The figure below illustrates the winning chance for $A$, as well as the optimal number of txs to issue (in red), with respect to $M$.
The figure can be reproduced by visiting the code [here](https://github.com/apenzk/sims_misc/blob/main/white_rand_black_balls/main.py).
![plot_average__N=1000](https://hackmd.io/_uploads/rk0RStrcT.png)
<!--
![plot_average__N=1000](https://hackmd.io/_uploads/HJN4vdRtp.png)
-->
#### Modifications to the game
As seen above, it may be advantageous for a user or adversary to spam transactions, which consumes valuable networking resources.
One option to disincentivise such behaviour is to introduce:
- firstly, a minimum base fee to get a slot in a block;
- secondly, the option for users to increase that fee arbitrarily for a given tx;
- thirdly, a position distribution influenced by that fee.
A simple model is the following. Let $s_i = Hash(r \| Hash(tx_i)) / \sigma$ be the score, where $r$ is the randomness from the VRF and $\sigma$ a scaling factor which accounts for the largest expected fee. $\sigma$ is introduced to keep the score value in the same bit range as the hash value, but this may not be necessary.
Let $S_i = s_i F_i$ be the adjusted score after taking into account the fee $F_i$ of the tx. Positions within the block are then assigned from highest to lowest adjusted score.
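A minimal sketch of this scoring scheme, assuming SHA-256 for $Hash$ and $\sigma = 2^{256}$ so that $s_i \in [0,1)$ (function names are illustrative):

```python
import hashlib

def score(r: bytes, tx: bytes, sigma: int) -> float:
    """s_i = Hash(r || Hash(tx_i)) / sigma: a raw pseudo-random score
    derived from the VRF randomness r."""
    digest = hashlib.sha256(r + hashlib.sha256(tx).digest()).digest()
    return int.from_bytes(digest, "big") / sigma

def order_by_adjusted_score(txs_with_fees, r: bytes):
    """Assign positions from highest to lowest adjusted score S_i = s_i * F_i,
    so a higher fee shifts a tx's expected position upward without
    determining it."""
    sigma = 2 ** 256          # keeps s_i in [0, 1)
    return sorted(txs_with_fees,
                  key=lambda tf: score(r, tf[0], sigma) * tf[1],
                  reverse=True)
```

Because $r$ is unknown until the VRF reveals it, no one can precompute a favourable score; once $r$ is published, anyone can recompute the ordering and check it.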
This has the following impacts:
- Firstly, there is no advantage to spamming txs. In fact, it may be cheaper to increase the fee rather than issue more txs.
- Secondly, $B$ can protect her tx by increasing the fee. As the figure above indicates, this makes the situation significantly worse for a competing user $A$ who wants to front-run.
- Thirdly, fees could be directed to a pre-defined address, reducing or disabling complex MEV schemes and providing the operator with additional control over the redistribution of funds.
- Fourthly, it is conceivable that participants reliant on front-running develop a more intricate model, which would have to take the expected return $E$ into account.