# On MEV Tradeoffs and Solutions
(i) coordinator being selfish and thus intentionally designing a bad mechanism to artificially jack up the tax on people → MEV is a tradeoff between efficiency and privacy (permissionless public info) → eliminating selfishness via Monarchy (always wins) → **Moloch examples refer more to the PoA = 0 scenarios where you gain advantage at a common value (a race to the bottom)**
- ordering protocols (decentralized builders)
- ok, I do agree a benevolent, active, and willing-to-move god can solve all these problems by censoring any selfish smart contracts
- if better equilibrium achieved: then distribute reward, else: do nothing
(ii) coordinator’s tax (centralized) eroding the incentive for people to even use some coordination mechanism → decentralization
- free rider problem
- how to distribute the utility fairly
(iii) coordinator’s tax (centralized) eroding the quality of the coordination device by making it less credible
(iv) coordinator has limited bandwidth (bounded by computation resources)
(v) assuming we solve all of the above, we still cannot decide on a good social welfare function for everyone, i.e., what outcome should the coordinator choose for us? a utilitarian one? one that favors the weak? (yeah, an unfair way to divide free money is fine if we only have one community, but considering x-domain MEV there is no way for this to work, it has to be divided fairly) → OSS → 51% tyranny of the rich → segregation of community → refined community for the sovereign individual → but x-domain MEV corrupts all of this → sovereign individual → Condorcet cycle → Themis/fair ordering → one way to express preferences
(vi) assuming we solve all of the above, that is still just one coordinator and its community working perfectly, but we have multiple communities/coalitions and coordinators, and they have different mechanisms, so can they be stable in the presence of each other? → x-domain MEV → human values (any notion of fairness) are non-efficient and thus susceptible to x-domain MEV (but going full utilitarian might end us in a paperclip-maximizer scenario through multipolar traps)
Solution
(i) is solvable imo, e.g., Monarchy/philosopher king solves it
(ii) is solvable by vertical integration of everything
(iii) is solvable by democratizing MEV revenue
(iv) is unsolvable unless we live in an edge-case timeline
(v) is unsolvable unless we live in an edge-case timeline
(vi) is unsolvable unless we live in an edge-case timeline
plus a nice correspondence with the impossibility triangle of crypto:
(ii) is decentralization
(iii) is security
(iv) is scalability
hayek: smart and benevolent is not enough, for the planner doesn't have the knowledge that exists in people's minds (thus it requires perfect info to be public to the planner) → https://www.econlib.org/library/Essays/hykKnw.html
### **Coordination layer definition**
A coordination layer is a way to decide on the ordering of transactions. It is called a coordination layer because it relieves users from the Moloch’s curse, i.e., it achieves consensus on the ordering and on which transactions to send in a cooperative way, such that the Price of Anarchy is eliminated to the greatest extent.
See below for details on why we framed coordination in this way:
[Relieving the Moloch's curse](https://www.notion.so/Relieving-the-Moloch-s-curse-00cc5e7d4d42456fadf99dd1bf68af29)
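To make the Price of Anarchy concrete, here is a minimal Python sketch (all payoffs and action names hypothetical) that finds the pure Nash equilibria of a prisoner's-dilemma-style game and computes PoA as optimal welfare over worst-equilibrium welfare:

```python
from itertools import product

# Hypothetical 2-player game: PAYOFFS[(a, b)] = (u_a, u_b).
# "cooperate"/"defect" stand in for sending txns cooperatively vs. selfishly.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),
}
ACTIONS = ["cooperate", "defect"]

def is_nash(a, b):
    """A profile is a pure Nash equilibrium if no player gains by deviating alone."""
    ua, ub = PAYOFFS[(a, b)]
    no_dev_a = all(PAYOFFS[(a2, b)][0] <= ua for a2 in ACTIONS)
    no_dev_b = all(PAYOFFS[(a, b2)][1] <= ub for b2 in ACTIONS)
    return no_dev_a and no_dev_b

welfare = lambda p: sum(PAYOFFS[p])
equilibria = [p for p in product(ACTIONS, ACTIONS) if is_nash(*p)]
opt = max(welfare(p) for p in product(ACTIONS, ACTIONS))
worst_eq = min(welfare(p) for p in equilibria)
poa = opt / worst_eq  # mutual defection is the only equilibrium: 6 / 2 = 3.0
```

The coordination layer's job, in this framing, is to move agents from the defect-defect equilibrium toward the welfare-optimal profile, shrinking PoA toward 1.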
### **Coordination layer model**
1. takes as input a pool of user txns
2. converts the user txns into txns with attached preference functions
1. preference functions can be described as a 3d graph as shown below (or as a 2d heatmap with the heat being MEV), with the preference function upgradable from time to time

preference function example
3. the coordination layer orders the txns in a way such that the final ordering maximally approximates the *best ordering*
1. the *best ordering* is defined by some kind of social choice function that is common knowledge
2. in case of a tie (Condorcet paradox, as guaranteed by Arrow’s impossibility theorem), the coordination layer defers the choice to another tie-breaker coordination layer, and so on. Because Condorcet paradoxes occur relatively rarely when the number of agents is large, we should (probabilistically) not see them happen.
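The tie-breaking step above can be sketched with a pairwise-majority (Condorcet) check over candidate orderings. A minimal sketch in Python (agents and candidate orderings are hypothetical), returning `None` when a Condorcet cycle forces deferral to a tie-breaker layer:

```python
def condorcet_winner(candidates, rankings):
    """Return the Condorcet winner among candidate orderings, or None on a cycle.
    rankings: one preference list per agent, most preferred first."""
    def beats(x, y):
        # x beats y if a strict majority of agents rank x above y
        wins = sum(1 for r in rankings if r.index(x) < r.index(y))
        return wins * 2 > len(rankings)
    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None  # Condorcet paradox: defer to a tie-breaker coordination layer

# Three agents over three hypothetical candidate block orderings A, B, C.
cands = ["A", "B", "C"]
w1 = condorcet_winner(cands, [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]])  # "A"
w2 = condorcet_winner(cands, [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]])  # None (cycle)
```

The second profile is the classic A > B > C > A cycle, the case where the model defers to the next tie-breaker.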
### Properties
why the coordination layer is different from traditional mechanism design (though they seem like the same thing), due to various properties of crypto.
1. communication constraints are heavy (esp. x-domain), async latency (less so in L2); plus, in reality people cannot discover their own utility functions: a utility function is a very high-dimensional if not infinite-dimensional object, it cannot possibly be communicated, and your utility depends on other people’s utility as well (which is a view you do not have)
1. other players might not be monitoring the commitment
2. the coordinator might not be monitoring the communication
3. surrendering control to a mediator who has power (both parties trust the mediator but don’t trust each other) makes the action space much larger and the threats/promises much more credible
4. anti-mechanisms eroding the incentive compatibility of VCG? ⇒ have a mechanism that cheats on behalf of the users (keeps the incentive compatibility, but loses on the objective function/utility)
2. but the commitment is extremely good: because of decentralization you are guaranteed to have a compulsory contract that the other party can know of through communication. Thus you can make very good/credible threats
1. i.e., binding agreements are possible before the start of the game (or at any time), thus it is a cooperative game and people can form coalitions much more easily (but ofc in reality you have to consider sybils, etc.) →
2. privacy: darkDAO, so mechanism has to be extra robust to prevent anti-mechanism from bribing members → e.g., damage the cohesiveness of the searcher coalition (the coalition value is no longer superadditive), or that the core of the coalitional game is empty in that none of the payoff profiles are feasible → e.g., three player majority game with more than 67% reward
3. utility is easily transferable (through smart contracts inside coalitions)
1. sometimes the core is nonempty, thus there is a continuum of solutions, thus we need a better way of transferring utility “fairly,” which relates to Harsanyi dividends and the Shapley value
4. commitment doesn't work without communication, and communication is not solved by blockchain
1. well, partially solved by that once the commitment is seen, it is immediately credible
5. imperfect information, irrationality (no mathematical model), collusion (no model, hard; companies have cross-shareholders, vertical integration is prevalent and hard to reason about, esp. in blockchain where you don’t know the other party)
- traditional mechanism design assumes the person implementing the mechanism is trusted (credible mechanism design)
- solving collusion within the mechanism is hard, many impossibility results (or you can just destroy the communication device via privacy, like darkDAO)
- in economics assume legal channel, but we have cryptography (which solves many impossibility results)
6. MEV not only comes from conflicts, it also comes from cooperation (you can include my txn if this state is somehow good for other people, and then comes another person who likes this state) ⇒ overlap of preferences (perfect coordination game)
1. this is the no-PoA scenario, where the coordinator’s job is to make coordination happen by coordinating across time (putting two txns which should’ve been in different blocks into the same block) or across space (putting txns which should’ve been in different state executions together) ⇒ this is also the private scenario, as you don’t know who the counterparty is, so you cannot threaten that well? (what if you have ToM)
2. also in private mempool scenario the coordinator cannot possibly filter deadlocks
3. also arises when the communication of preferences to the coordinator incurs externalities (in the case of wasted blockspace or latency-based services)
4. in the second level Moloch case, the communication of preferences to the coordinator is perfect, but there are still imperfections in that the Moloch’s curse exists (if we allow for coordination
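The Shapley-value remark in point 3 above can be illustrated with a small sketch (player names and the value function are hypothetical). The game below is the three-player majority game from point 2 — any coalition of two or more searchers extracts the full surplus — whose core is empty, yet whose Shapley value splits the surplus evenly:

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: each player's marginal contribution to the coalition,
    averaged over all possible join orders."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical 3-searcher bundle game: any coalition of >= 2 extracts 1 unit of MEV.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
payoffs = shapley(["s1", "s2", "s3"], v)  # 1/3 each by symmetry
```

Only the second player to join any order contributes marginally, and each player is second in a third of the orders, hence the even 1/3 split even though no core allocation exists.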
### **Coordination layer problems**
1. there are impossibility results for approximating computationally hard problems in an incentive-compatible way, so we cannot assume that the preference function attached to the txn is “true” in the sense that it actually depicts the user’s preference. Instead, the best we can do is “approximate” the true preferences
1. The best way to approximate the preferences is by running an auction
2. many users are unsophisticated, and thus we need to discover their preferences using some OFA structure instead of having one single centralized builder (even a smart & benevolent one), because a centralized builder has less tacit knowledge/information/ability in specific MEV activities and thus introduces inefficiency. To quote Hayek: *“efficiently coordinated planning itself requires the knowledge that only the price system, backed by the judgments of particular circumstances carried out by individuals, can supply.”*
3. anything that is not a direct auction will
1. probably result in an all-pay auction (as in the case of most latency games) which incurs winner’s curse and extra inefficiency (basically means the coordinator is actively malicious in designing mechanisms that intentionally increases its own utility)
2. probably result in an auction that is not combinatorial, meaning that the mechanism/coordinator considers less dimensionality of the preferences (less refined), thus it makes the allocation less efficient
3. not be efficient in aggregating & approximating the true preferences of users (because it is hard to gauge the utility spent across multiple games, especially when the players’ costs in those games are hidden from the coordinator), thus leading the builder to build less efficient blocks (because it has less information).
4. indirect auctions make the game less transparent, thus raising the barrier to entry and further favoring sophisticated users over ordinary retail users. This would make the extraction of MEV more centralized and therefore the profit more centralized, which both enables the Moloch’s curse and creates a more predatory and hostile environment (worse utility, liveness, and censorship resistance) for the vast majority of users. Moreover, it would be hard to devise and iterate on mechanisms to redistribute the utility because the game is opaque.
5. many indirect auctions end up being latency-based, and in any latency-based game, the sequencers/proposers/validators have inherent huge advantages. This implies that they are more likely to collude with sophisticated users, which creates centralization in sequencers/proposers/validators and harms geographical decentralization, network security, censorship resistance, and liveness. → a draw game for sequencer incentives, akin to Monarchy, by making the validator always win so there’s no incentive to be malicious or turn the mechanism in her favor
2. in the setting of crypto where the coordinator is passive (a decentralized general computing platform), the Moloch’s curse will exist as long as the coordinator tax (MEV = coordinator tax = Moloch Extractable Value) is extracted and distributed in a centralized way.
1. in Moloch
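Since several of the problems above turn on how well a direct auction approximates true preferences, a minimal sketch of a sealed-bid second-price auction (searcher names and values are hypothetical) shows why truthful bidding is a dominant strategy in the direct case:

```python
def second_price(bids):
    """Sealed-bid second-price auction: the highest bidder wins and pays the
    second-highest bid, so one's own bid never sets one's own price."""
    ranked = sorted(bids.items(), key=lambda kv: -kv[1])
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Hypothetical values for a top-of-block slot; everyone bids truthfully.
values = {"searcher_a": 10.0, "searcher_b": 7.0, "searcher_c": 4.0}
winner, price = second_price(values)
# searcher_a wins and pays 7.0, for a utility of 10 - 7 = 3; shading its bid
# to anything above 7 changes nothing, and shading below 7 forfeits the win.
```

This is the baseline that indirect, all-pay, or latency-based mechanisms degrade from: once payment depends on your own bid (or your own latency spend), truthfulness breaks and the inefficiencies listed above appear.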
### Coordination layer choices
1. OOA/worldquant builder
2. mass simulation is like a best-response function + honest coordinator → providing the ingredients while the builder does the searching
3. can run multiple non-conflicting auctions in parallel, like time curves → if the parallelization approximation works well
4. if an appchain is purely for transfers, then there is no MEV and we can do fair ordering; an appchain is sharded execution, same as parallelization
### Mental Model for Formulation of the Layer
- so the preference space of MEV is actually a lattice, and then you search in this lattice
- in reality, the preference is gonna be passed in the form of a specification (program/smart contract), and then you search on it
- so this makes sense if we have symbolic preferences that can be easily plugged into solvers that represent an abstract domain (think galois connections of concrete domain and abstract domain)
- F needs to be monotonically increasing (usually done by using \cup) and L must have no infinite ascending chains; if it does, solve with a widening operator
- fixpoints of functions; use Galois connections and symbolic abstraction to synthesize abstract transformers
- you can choose a builder with some extra properties (which might add or reduce complexity; the property must be something about others' txns/global state, and you can actually require people to pay fees on this, like a parallelization market) they support **on top of the social choice function enshrined by the domain**; **in a low-latency env (low acceptable worst-case latency) the advantage of simple properties is enlarged**
- assuming everything else same, builders with more sophisticated users win (because utility is more well-communicated)
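The widening remark above can be made concrete in a toy interval abstract domain. A sketch in Python (the transformer is a hypothetical model of a loop `i = i + 1` starting from `i = 0`): Kleene iteration with widening reaches a fixpoint in finitely many steps, even though the interval lattice has infinite ascending chains:

```python
# Interval abstract domain sketch: join (\cup) makes the transfer function
# monotone; widening jumps unstable bounds to +/-inf, cutting infinite chains.
INF = float("inf")

def join(a, b):
    """Least upper bound of two intervals in the lattice."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Widening: any bound that is still growing goes straight to +/-inf."""
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def transfer(x):
    """Abstract transformer for `i = 0; loop: i = i + 1` on intervals."""
    inc = (x[0] + 1, x[1] + 1)
    return join((0, 0), inc)  # entry state joined with the loop body's effect

# Kleene iteration with widening: terminates at a post-fixpoint.
x = (0, 0)
while True:
    nxt = widen(x, transfer(x))
    if nxt == x:
        break
    x = nxt
# x == (0, inf): the loop counter is soundly abstracted as [0, +inf)
```

Without `widen`, the iterate would climb (0, 1), (0, 2), ... forever; widening trades precision for guaranteed termination, which is the standard move when L has infinite ascending chains.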
### Mental Model for Problems
- liquidation twap is more interesting & important
- zero-sum is bad
- non-replay frontrunning is not really an issue, and should be solved by the application layer, not the coordination layer
- composability & privacy ⇒ the coordination layer is better than the application layer at recovering composability after breaking atomicity, because of batch auctions
- problem: best fb block is minimizing user txn
- batch auctions = more latency, but the latency is negligible to humans and the batching generates great welfare for users at negligible cost
- need to shield users from frontrunning builders + give users a way to discover frontrunning so the model of rationality holds up (rationality assumes permissioned builders)
- pre-trade privacy as a priority?
- micro-sized attacks, 10% slot censor, single slot collusion
### Coordination Difficulties

### Open Problems
gauge how much welfare can be generated
application offloading to coordination
research engineer tasks (parallelization ratio to mev, coordination surplus, ) ⇒ figure out the curve for the non-commutative txns
will orderflow decide who the winning builders are? Does anything else really matter? ⇒ convexity of advantages
---
__coordinator being malicious__: since $c$ is a rational agent, it is able to perform actions that change the equilibrium in its favor leveraging the advantageous position it has under $CT_c$ (e.g., it has more information than individual agents). For example, $c$ can vertically integrate and "steal bundles." We can solve this by:
- adding privacy: which hides information from $c$ so that its action space is constrained
- deploying $c$ as a rollup: validity proofs (e.g., $c$ needs to send a SNARK proving the state transition is the output of a pre-defined $M$) or fraud proofs (e.g., optimistic social sequencing)
- intra-temporal decentralization of $c$: through decentralization of the execution of $M$ within a time-slot (e.g., using parametrized Themis or any distributed systems preference aggregation protocol that votes on preferences over transaction ordering), we've removed some power from each individual entity $c$.
- directly restricting the power of $c$: we can require the execution of $M$ to be done on a computation platform where the programming language has restricted expressivity, so $c$ has a smaller action space.
__coordination cost__: there exist many inefficiencies in using $M$ in the case of an ill-designed $M$ or the non-existence of $M$. Some cases include:
- uncoordinated coordination: gestalt coordinator, it has the problem of ...
- auction by other means: this introduces the problem of (i) uncoordinated coordination (ii) hard-to-redistribute coordination profits, and (iii) bigger room for $c$ to be malicious.
- transparency and democratization of $M$: agents need to be able to access $M$
- intricate $M$: if the mechanism puts a high bar on correctly using it, then many agents will miscommunicate and be preyed on
argument against FCFS ....
__endless coordination because of Moloch's curse__: agents need to coordinate to use the coordination mechanism of $M$. Suppose that there exists some problem with $M$ and we want to solve it; we can devise another mechanism $M'$ for agents to coordinate on solving the problems we have with $M$. But since $M'$ is itself another coordination mechanism, it will possibly exhibit the same problem, and $M'$ cannot self-coordinate to make the problem go away. Thus, we devise a new mechanism $M''$, and so on. You can slay the Moloch, but the curse lives on.
**the incompleteness theorem of MEV**: any mechanism that is strong enough to resolve the MEV of another mechanism will also exhibit MEV, and it cannot resolve its own MEV. In fact, we can write this as something akin to Tarski's undefinability theorem or Gödel's second incompleteness theorem.
**Incentive Compatibility**:
- the payment should not depend on the agent's own type ⇒ an easy route to IC
- VCG is prone to collusion (maybe IC and collusion resistance cannot be achieved at the same time)
- strong budget balance is impossible in addition to the other properties
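The collusion point can be seen already in the single-item case, where the VCG payment reduces to the second-highest bid: the payment depends only on others' reports (which gives IC against unilateral deviation), yet two colluding bidders can jointly lower the payment. A toy sketch with hypothetical bids:

```python
def second_price_payment(bids):
    """Winner's payment in a sealed-bid second-price (single-item VCG) auction:
    the second-highest bid, which never depends on the winner's own report."""
    ordered = sorted(bids.values(), reverse=True)
    return ordered[1] if len(ordered) > 1 else 0.0

# Truthful bids with hypothetical values 10, 8, 3: a wins and pays 8.
honest = {"a": 10.0, "b": 8.0, "c": 3.0}
# a and b collude: b drops its bid to 0; a still wins but now pays only 3.
# Any side payment from a to b under 5 leaves both strictly better off.
collusive = {"a": 10.0, "b": 0.0, "c": 3.0}
p_honest = second_price_payment(honest)      # 8.0
p_collusive = second_price_payment(collusive)  # 3.0
```

No single bidder gains by misreporting alone, but the joint deviation is profitable, which is exactly the IC-vs-collusion-resistance tension noted above.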