# Notes on Shared Sequencing (Based)

1. Synchronous composability
2. Decentralized preconfs
3. Based sequencing

### Shared sequencing (JD, mev.market)

- synchronous inclusion
- synchronous execution
- synchronous composability

### Guarantees the shared sequencer can give on different data structures

- each rollup can be its own silo or execute in parallel
- the shared sequencer can choose to do some fancy arbitrage
- the shared sequencer exposes to the market the additional value that synchronicity can provide

### Super-transaction

- multiple constituent sub-transactions
- asset transfers happening in real time
- the final yellow call depends on the green call (some amount of locking)
- virtual locking the shared sequencer can provide (pay for gas on both rollups)
- Proving latency
  - synchronous composition at the top of the block
  - synchronous composability at the end of the slot (~100 ms)
  - needs fast real-time proving
  - SNARK proving ASICs (Accseal, Cysic, Fabric)
- Liquidity provision
  - simulate the idea of synchronous composability with liquidity providers
  - liquidity constraints

## Decentralized preconfs

- how do we do preconfs in a decentralized setting?
- users determine who will be the sequencer in the next slot
- every slot there is a different sequencer
- how do you do slashing? (a centralized sequencer leverages social reputation)
  - need financial collateral
  - safety and liveness faults
    - safety: the tx doesn't execute how it was agreed
    - liveness: the preconfirmed tx isn't included (sequencer offline)
- builders are tasked to build blocks that respect the preconfirmations (a fancy inclusion list)
  - relays can enforce that the constraints are respected
  - need the builder to be collateralized to the same extent the proposer is
- market-based vs. policy-based (FCFS) ordering
  - harder to have policy-based

## Based sequencing

- Can we compile an existing shared sequencer to give preconfirmations?
- Ethereum provides multiple modules: settlement, DA, sequencing
- Motivations for Ethereum sequencing: L1 security, credible neutrality, composability
- A subset of L1 proposers opt in to become sequencers
  - they come forward with financial collateral
  - everyone else is an includer
- To provide preconfs as a service, the user communicates with the next sequencer
  - a request by the user and a promise given by the preconfirmer; includers are given the right to include and settle
- Best-in-class user experience: next-slot settlement
- Simply modify mev-boost to give preconfirmations
  - need lots of collateral and full nodes
  - pricing preconfirmation tips (high uptime and low latency)
- Execution tickets are a hard fork; in the meantime we can introduce the relay as a gateway for preconfirmations
  - the proposer delegates sequencing rights to the gateway
  - collateralized in case they cause a safety fault
- Using L1 proposers means maximum security and maximum credible neutrality
- A subset of problems related to shared sequencing
  - MEV sharing: can we identify where the MEV came from?

## Ethereum Sequencing (CAKE)

- The two bottom layers of the cake are settlement and solving; we can dramatically reduce complexity with shared settlement and shared solving
  - the standard approach is to just use Ethereum as the shared settlement layer and shared solver
  - complexity blows up with multiple settlement layers you have to abstract over
- Say we have two settlement layers: the basic thing we want is a bridge. Any L1 can suffer a 51% attack, update through governance, or have micro re-orgs (social governance can always change the rules of an L1)
  - we need a governance token to do maintenance and update the light clients on both sides. Now we have three social communities. Now assume we have k bridges: k+2 social communities and k+2 tokens
- Fragmentation of economic security
  - economic security is the amount of economic resources that consensus participants put forward to secure the blockchain; in total dollars, roughly 31M ETH * the price of ETH
  - important because the amount of economic security is highly correlated with the dollar cost to attack the chain (censorship and liveness attacks)
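The bridge and economic-security arithmetic above can be written down directly. A minimal sketch: the 31M staked ETH figure comes from the notes, while the ETH price and the helper names (`social_communities`, `economic_security_usd`) are placeholders for illustration.

```python
# Worked version of the arithmetic above. The staked-ETH figure (31M) is
# from the notes; the ETH price is a placeholder, not market data.

def social_communities(k_bridges: int) -> int:
    """k bridges between two settlement layers: each bridge needs its own
    governance token to maintain light clients on both sides, so we end up
    with the two base communities plus one per bridge: k + 2 communities
    (and k + 2 tokens)."""
    return k_bridges + 2

STAKED_ETH = 31_000_000
ETH_PRICE_USD = 3_000  # placeholder price

# Economic security: the dollar amount an attacker must overcome,
# correlated with the cost of censorship and liveness attacks.
economic_security_usd = STAKED_ETH * ETH_PRICE_USD

print(social_communities(5))          # 7 communities, 7 tokens
print(f"${economic_security_usd:,}")  # $93,000,000,000
```

The point of the k+2 count is that every additional bridge fragments governance further, which is the fragmentation cost the notes weigh against a single shared settlement layer.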
- The base case is that we see one settlement layer gobble up everything; Ethereum begins to gobble everything up
- Solana on Ethereum: SOL is the governance token; they want to be able to do emergency maintenance forks
- The other big problem is around shared solving. What are you solving over? You have multiple providers of blockspace; if you have one producer or supplier of blockspace, it becomes easier to get composability
  - there are multiple proposers now (Arbitrum, Optimism); let's simplify and have one single counterparty
- One of the big advantages is that you can get synchronous composition: a world where you get super-transactions made up of smaller transactions
- Endgame: you have one master shared sequencer that can craft super-transactions touching multiple execution zones in one bundle
- Shower thought: how do you deal with reversion? In an asynchronous world with k domains, as soon as one of the k domains messes up you have to revert the other k-1. Reversion is an expensive process: you need to do DA and SNARK proving even when the tx reverts. With one provider you get ultra-cheap simulation: try the tx and revert within one data center.
- Cost: to prove computation you have roughly 1000x overhead. If the CPU consumes 1 joule, the prover will consume ~1000 joules; a similar thing holds for DA. Consuming 1 byte of info on my own server vs. broadcasting it all-to-all is much cheaper. If many txs will be reverting, you are better off with this simulation stage, which filters out the reversions.
- You can think of block building as a highly sophisticated data and computation compression service
  - Claim: from an efficiency standpoint it is better to have builders do pruning on one node rather than gossiping to the whole world
- Once you have synchrony you can have shared liquidity and complex applications
  - each application you interact with today is a tiny lego; expect that as we have more money legos, people will want to start building superstructures, and the brittleness of synchrony will compound as the superstructure grows
- Thesis: which shared sequencer should we use?
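The revert-cost argument above can be made concrete with a toy cost model. The ~1000x proving overhead is the figure from the talk; the DA cost, revert rate, and the helper names (`async_cost`, `simulated_cost`) are assumptions for illustration, not measurements.

```python
# Toy model of the revert-cost argument: in an asynchronous world every tx,
# reverting or not, pays execution + proving + DA on each domain it touches;
# with one provider, a cheap simulation stage filters reverts before the
# expensive steps. All cost constants are illustrative stand-ins.

PROVING_OVERHEAD = 1000   # prover joules per CPU joule (order of magnitude)
CPU_COST = 1.0            # cost units to execute one tx once
DA_COST = 5.0             # cost units to post one tx's data (assumed)

def async_cost(k_domains: int, p_revert: float, n_txs: int) -> float:
    """Asynchronous composition: reverted txs cost as much as successful
    ones, since DA and SNARK proving happen either way."""
    per_tx = k_domains * (CPU_COST * (1 + PROVING_OVERHEAD) + DA_COST)
    return n_txs * per_tx  # p_revert doesn't reduce cost here

def simulated_cost(k_domains: int, p_revert: float, n_txs: int) -> float:
    """One provider: simulate everything cheaply in one data center first;
    only the surviving txs pay proving + DA."""
    sim = n_txs * CPU_COST
    survivors = n_txs * (1 - p_revert)
    settle = survivors * k_domains * (CPU_COST * (1 + PROVING_OVERHEAD) + DA_COST)
    return sim + settle
```

Under these numbers, a 30% revert rate makes the simulation stage save roughly 30% of total cost, while with zero reverts the extra simulation pass is pure overhead, which matches the "if many txs will be reverting" qualifier in the notes.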
- Ethereum: no new security assumption, and the most credibly neutral (e.g. ENS wants an L2 and doesn't want to socially piss off rollups); you need to have the root of trust somewhere
  - ENS said they should probably launch as a based rollup
  - Aztec recently started considering based sequencing; a wave that has some amount of momentum
- In order to get rollups to adopt this we need to fix the major problem, which is preconfirmations

### Decentralized preconfs in general

- the user communicates with the sequencer of the current slot ("here is my tx, please confirm it") and gets a promise from the sequencer, similar to a centralized sequencer
- what we want is trustless stake collateral: we can just use financial collateral with slashing, different from the centralized reputation we see today with Optimism, Arbitrum, etc.
- Slashing
  - Safety faults: made a preconf for something and sequenced something else
  - Liveness faults: made a preconf and then happened to be offline
- From the POV of PBS and mev-boost we need a slight change: the current flow is uni-directional (builder -> relay -> proposer); we need a back channel where users establish constraints on builders, a bi-directional flow of information; relays can help enforce this
- Centralized: fixed sequencer at every slot; decentralized: rotating
- Slashing: reputational vs. financial
- MEV: policy-based vs. market-based
- What happens if the preconf is not correct?
  - the entity giving the preconf is the sequencer, who has monopoly power; the preconfer is taking low risk, only liveness risk, in the few seconds between creating the preconf and the block
- Liveness / safety: could there also be a re-org which is not your fault?
  - the preconf gadget is two things: a pair of a finality gadget with consensus on who the next sequencers will be
  - preconfs are always conditional on some sort of finalized state; only give preconfs on things that are under my control
  - as a user, you don't know what the tip of the chain is; you get a preconf in both cases, and can even buy insurance against this rare event
### Based sequencing

- In order to provide preconfs you need collateral, so you need L1 proposers to opt into preconfs with collateral. The beacon chain has a 32-slot lookahead; some subset of proposers will opt in with collateral, and only the subset that opts in are the sequencers. The decentralized sequencing mechanism is made out of a subset of L1 proposers deemed to be the sequencers.
- Variable, weird block times, which is fine
- Give some work to the non-preconfirmers (includers): once a user agrees a preconf with the next sequencer, the preconf can be shared publicly and settled immediately by an includer. Take promises, then settle them immediately in the next slot.
- Is there potential for griefing? E.g. in slot n+30 an includer includes something that invalidates a promise.
  - the shared sequencer is a subset of L1 proposers; rollups that use the shared sequencer will not allow an earlier entity to change their state. Includers can act as an inclusion list: force-included transactions are sequenced by slot n+1.
- Couldn't some of the transactions be ordered by includers on the base chain? Couldn't that also be revived to touch state on L1?
- What this sequencer gives you is synchronous composability between L2s; L1-to-L2 is a bit more subtle: synchronous composability is only guaranteed in the green slots (preconf slots). Expect that if this catches on, all slots will be green; presumably everyone rational will be running MEV-Boost++.
- The shared sequencer posts a blob; are includers posting blobs, taking the same messages, calculating the root and posting that, without defining the sequence? Watching the sequencer's output, taking it and putting it on chain? They also create an inclusion list.
- Compared to PBS terminology, the sequencer is like the proposer and the includer is a dumb node. The includer is what is supposed to be the beacon includer of execution tickets.
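Picking the next preconfirming sequencer from the lookahead described above can be sketched in a few lines. The 32-slot lookahead is from the notes; the minimum bond figure and the `next_sequencer` helper are assumptions for illustration.

```python
# Sketch of selecting the next sequencer from the beacon chain's 32-slot
# proposer lookahead: only proposers who opted in with enough collateral
# act as sequencers; everyone else is an includer. The bond threshold and
# all names here are illustrative.

MIN_COLLATERAL = 1000  # bond required to opt in (assumed figure)

def next_sequencer(lookahead, opted_in, current_slot):
    """lookahead: list of (slot, proposer) pairs for upcoming slots.
    opted_in: proposer -> posted collateral.
    Returns the first strictly-future opted-in proposer, or None if no
    one in the lookahead opted in (no green slots ahead)."""
    for slot, proposer in lookahead:
        if slot <= current_slot:
            continue
        if opted_in.get(proposer, 0) >= MIN_COLLATERAL:
            return slot, proposer
    return None

lookahead = [(100 + i, f"proposer_{i}") for i in range(32)]
opted_in = {"proposer_5": 1500, "proposer_12": 2000}
print(next_sequencer(lookahead, opted_in, 100))  # -> (105, 'proposer_5')
```

Slots between now and the returned slot are the non-green slots: their proposers act only as includers, settling promises already made by the upcoming sequencer.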
- It does feel like there are griefing vectors here; also, from a practical perspective, there is doubt over whether includers should post them or sell them, and how you compensate includers
  - includers still get tx fees; it opens up the design space
- Are includers cheaper than the sequencer? We want faster settlement, so allow the non-sequencing L1 proposers to do settlement
- What are the L1 gas costs vs. Optimism? One benefit is you can have optimal gas costs, for two reasons: you can now do data compression over multiple rollups, and the more data you have the better the compression (e.g. compressing Optimism and Arbitrum simultaneously). Blobs are chunky, 125 KB; if you are a rollup you may consume 1.5 blobs. What the shared sequencer can do is take the data for all rollups, compress it, and optimally pack it.
- Optimism today pushes all its data into blobs every ~4 minutes, doing the sequencing and proving the ordering on chain; in this setup do you do that every block?
  - there will be some amount of overhead; it can still happen every two or three slots, whenever it's economically meaningful to settle
  - if Optimism never has enough data, this setup has the advantage that you can push blobs every slot; this is very friendly for the long tail, which doesn't need to wait to fill a whole blob and can have settlement every slot
  - eventually there will be full usage of blob capacity, consumed as fast as it is produced; this is the most optimal way forward, since waiting 100 slots won't make sense when you are consuming all of the data all of the time without wastage
- How does it impact customization of sequencing fees? There is a bit of a design space, even if you are not in a shared sequencer, to restrict it in some way: "I only want my blocks to settle in even blocks or odd blocks." You can try and have your own policy; Arbitrum thinks you can have decentralized sequencing and policy-based ordering.
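The blob-packing benefit above is easy to quantify. A minimal sketch: the 125 KB blob size is from the notes, the batch sizes are made up, and this models only packing efficiency, not the additional gains from compressing the rollups' data jointly.

```python
# Sketch of the cross-rollup blob-packing argument: instead of each rollup
# padding out its own final partial blob, the shared sequencer concatenates
# everyone's batch data and packs it into as few 125 KB blobs as possible.

BLOB_SIZE = 125 * 1024  # ~125 KB per blob

def blobs_needed_separately(batch_sizes):
    """Each rollup posts its own blobs; each partial blob is wasted space.
    -(-a // b) is ceiling division for positive ints."""
    return sum(-(-size // BLOB_SIZE) for size in batch_sizes)

def blobs_needed_shared(batch_sizes):
    """Shared sequencer packs all rollups' data together."""
    return -(-sum(batch_sizes) // BLOB_SIZE)

# e.g. four rollups, each with ~1.5 blobs of data per settlement period
batches = [int(1.5 * BLOB_SIZE)] * 4
print(blobs_needed_separately(batches), blobs_needed_shared(batches))  # 8 6
```

The saving is exactly the half-blob each rollup would otherwise waste, which is why the notes call this especially friendly for long-tail rollups that rarely fill a blob on their own.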
Is there an incentive for rollups to join a shared sequencer?

- Execution fees (base fees at layers one and two) vs. MEV
  - on L1 the ratio is about 80/20 between MEV (~800 ETH) and execution fees
  - expect MEV to go down by an order of magnitude: things like sandwiches will go away with encrypted mempools, and expect CEX-DEX arbitrage to go away thanks to next-generation DEXes, where the arb opportunity is auctioned off and rebated back to LPs, so MEV always stays at the user level
  - expect the ratio of income to become 99% execution fees and 1% MEV
- Joining a shared sequencer increases execution fees: say someone wants to buy a large amount of a token; the optimal way to do this is to route through different execution environments
  - if you have asynchrony you can't do this: with 1inch, Matcha, you have 1inch copies, not one 1inch which works for everything; if I want to buy some amount of a token I am going to pay execution fees there
  - from the perspective of a rollup it is beneficial to join the sequencing zone
- By default, if I join a shared sequencer the MEV goes to the shared sequencer; the possible loss of MEV is weighed against what you gain in execution fees
  - try to identify where the MEV came from and rebate it back to the rollup that generated it. They believe they have a mechanism such that if I generate 100 ETH per day, I get at least 100 ETH back. If you agree there is negligible loss of MEV, or no MEV at all, then you agree that joining a shared sequencer increases fees.
- What is best served by the shared sequencer vs. a solver?
  - we want the shared sequencer to be maximally simple and robust, something like a relay: a big fat pipe and high uptime
  - we want all of the algorithmic capital and order flow to be segregated away from the shared sequencer
  - one way to accelerate this is encrypted mempools: from the POV of the sequencer it is just receiving white noise, and the most natural thing to do is settle FCFS; as soon as you have more info, you have the ability to be sophisticated
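The routing argument above (a large buy fills better when split across execution environments) can be shown with two toy constant-product pools. The pool reserves and order size are made up; this only illustrates why atomic cross-rollup routing is worth paying execution fees for.

```python
# Toy illustration of the execution-fee argument: splitting a large buy
# across constant-product pools on two rollups gets more tokens out than
# dumping it all into one pool. With asynchrony you cannot do this split
# atomically. All reserves and sizes are made up.

def cpmm_out(dx, x, y):
    """Constant-product swap (fee-free): tokens out for dx tokens in,
    against reserves (x, y). Invariant: (x + dx) * (y - out) == x * y."""
    return y * dx / (x + dx)

X1, Y1 = 1_000_000, 1_000_000   # pool on rollup A
X2, Y2 = 1_000_000, 1_000_000   # pool on rollup B
ORDER = 100_000

single = cpmm_out(ORDER, X1, Y1)                                  # one pool
split = cpmm_out(ORDER / 2, X1, Y1) + cpmm_out(ORDER / 2, X2, Y2)  # routed
print(f"one pool: {single:.0f}, routed across both: {split:.0f}")
```

The routed fill is strictly larger because price impact is convex in trade size; the surplus is what a user would rationally hand back as execution fees, which is the rollups' incentive to join the sequencing zone.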
- What we want at a minimum is an encrypted mempool
  - private mempool: hoping we don't need to use trust; if we use trust, we are limiting ourselves to trusted operators. How do we bridge the trust gap? SGX, threshold encryption, delay encryption (pure cryptography)
- Could you talk about the similarity between pricing preconfs and providing guarantees of execution inside of an OFA?
  - There is a similarity. Let's assume I'm the sequencer for a fairly long slot, like 12 s. I receive a preconf request and want to respond within 100 ms. You could be tempted to think we've just created 100 ms block times. We have reduced the latency, but from an MEV standpoint the slot time is still 12 s. As the sequencer I have the incentive to just sit on this transaction, wait until the very end, and make an informed decision. The MEV slot time doesn't change, it is still long, so in order to price tips properly we need to be able to estimate the total expectation of MEV if I were to wait until the end of the slot.
  - It is very important that preconf tips are priced correctly. Between the user and the proposer we have a helper called a gateway. The gateway is trusted to price the tips properly. If tips are priced too low, there is less money; if the user is charged too much, they won't be happy either, won't be willing to pay for preconfs, and will get angry at the gateway. The gateway needs to price this thing very precisely.
  - In OFAs there is a similar problem: you have a window of time with flow coming in, solvers or searchers that have to respond, and settlement happening later. I'm not an OFA expert, but there is some uncertainty about the future: even if I win the OFA auction, I am not the block builder, I am the searcher, so I need to price future uncertainty, MEV gain and MEV loss. It would be really cool if someone solved this problem.
  - Multiple actors in the OFA vs. a single signal in the first case; in both cases there is uncertainty about future flow, about who the winning block builder will be, and about who the winning solver will be.
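The tip-pricing problem above reduces to estimating the MEV a sequencer forgoes by committing early instead of waiting out the slot. A minimal sketch under a strong assumption not in the notes: MEV opportunities arrive memorylessly at a constant rate with a constant mean value, so the expected forgone MEV is just the product of rate, mean prize, and remaining time. The rate and prize figures are made up.

```python
# Sketch of a gateway's tip floor: under a memoryless arrival model
# (an assumption, not from the talk), the expected MEV from waiting out
# the remaining slot time is rate * mean_prize * remaining_ms. Both
# parameters below are made-up toy figures in toy ETH units.

SLOT_MS = 12_000  # 12 s slot from the notes

def expected_mev_if_waiting(remaining_ms, rate_per_ms=0.001, mean_prize=0.05):
    """Expected MEV forgone by committing now rather than sitting on the
    tx until the end of the slot."""
    return remaining_ms * rate_per_ms * mean_prize

# The minimum tip a rational gateway should quote shrinks over the slot:
tip_floor_early = expected_mev_if_waiting(SLOT_MS - 100)     # preconf at 100 ms
tip_floor_late = expected_mev_if_waiting(SLOT_MS - 11_000)   # preconf at 11 s
```

This captures the talk's point that 100 ms response latency does not create 100 ms MEV slot times: a preconf given early in the slot must be priced against nearly the full 12 s of optionality, so early preconfs carry higher tip floors.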
- Here the competition is lacking and there will be a monopoly; here it's a two-party auction and you get correct pricing. On the other side you have the user, the proposer, and the searchers; this three-way auction we don't know how to solve yet.

---

## Execution tickets and MEV distribution in shared / based sequencing (Espresso, mev.market)

Advantages:

- sharding computation across applications
- heterogeneity

Drawbacks:

- interop between applications
- can't have a single tx that updates multiple rollups' state at once

A sequencer can run an auction:

- PBS (build the most profitable block)
- **Atomic transactions between L1 and rollups - this is the value proposition**

A few things about the auction:

- who has the right to update all rollups or a subset of rollups?
- proposers of rollups can offer atomicity, fulfilling user intents with preconfirmations
- proposers of the auction can be flexible
- super block: all settled to the L1

Shared vs. isolated auctions:

- how do you share the revenue? The sequencer gets to extract MEV; how do you allocate it across rollups?
- is there a stable mechanism in which rollups can participate?
- shill bidding prevents efficient allocation of MEV redistribution (straw man): proposers can bid on arbitrary rollups (shill bidders, if they bid too high, risk getting excluded)

Shill bidding functions as a reserve price:

- reserve price = minimum price a seller is willing to accept from a buyer
- every rollup posts a reserve price instead of a shill bid
- a rollup is only included in a bundle by bidders who pay the reserve price
- if no bidder is willing to include the rollup in a bundle for the reserve price, the rollup proposes its own block
- the reserve price can be dynamically adjusted

Problem 1: this is a combinatorial problem.

- Solution: find a good allocation (one bundle + individual rollups); allow users to submit better allocations; run the auction far ahead of the actual block to enable this process

Problem 2: if the auction is run far ahead of time, the bidders don't know the value of the block; they need to bid on the expected value of the block. The expected value of the block probably isn't going to change, which means the same bidders always win.

- Solution: run a lottery instead of an auction

Shared sequencer lottery:

- adapt the solution to the lottery setting. One problem is adjusting lottery ticket prices to approximate a clearing price
- consensus view of the lottery
- proposer/attestor separation: censorship resistance?
- need to support multiple simultaneous proposers

Multiple proposers at once: people bid on individual rollups or bundles, and you have multiple proposers.

---

## Proposer/builder election mechanisms (Connor)

Maximize user welfare:

- centralized L2 elections today
- defining the proposer/builder election mechanism
- decentralized L2 elections of tomorrow
- some solutions

Most L2s are all using trusted sequencers.

- Who are the candidates: trusted vs. permissionless vs. based
- Why elect early vs. just-in-time
- Where to elect: in protocol vs. out of protocol

Who are the election candidates?

- trusted: elected by protocol governance
- permissionless: anyone can seek election
- based: L2 proposers are decided by the L1 proposer mechanism

Early vs. just-in-time election:

- preconfirmations
- reward smoothing

IP vs. OP:

- OP (out of protocol): assumes x is building the block but delegates to y
- IP (in protocol): the public key of who will be producing the block is known

Decentralizing the proposer role:

- should an elected L2 proposer outsource block building?
- can we remove the need for out-of-protocol election mechanisms without trusted third parties?

For outsourcing:

- proposer-enforced rules can provide guarantees equivalent to the proposers doing it themselves
- higher proposer revenue
- the builder market adapts to meet demand
- (meme) keep small node validators

Against outsourcing:

- increased complexity
- introduces up-front costs
- increased up-front costs for users
- (meme) L2 DAOs: outsourced builders = MEV

The complexity paradox of block building.

Removing out-of-protocol TTP elections:

- trusted out-of-protocol elections incentivize out-of-protocol integrations
- in-protocol solutions probably need early auctions
- trusted execution environments might offer a workaround

Solutions:

- Execution tickets to propose the whole block (ahead-of-time auction for full-block proposer rights)
- Execution tickets to propose updates for specific smart contracts/rollups/accounts/parts of the block (local fee markets)
- Enshrined PBS: honest-majority committees run a permissionless auction
- Don't forget your TEE...
- Alternative PBS for based rollups
- Value-capturing based rollups: the L1 inherits the based rollup's property rights

## Shared sequencing economics (Christoph, mev.market)

- the rollup-centric roadmap has led to the creation of about 50 rollups; this is good for scaling but fragments usage
- arbitrageurs should make markets more efficient across domains; this was one of the possible endgames
- whether cross-domain MEV can be extracted depends on the ease of coordinating across domains; shared sequencers should facilitate this
- a shared sequencer guarantees atomic execution of a bundle of txs in different domains
- what does that mean?
- for cross-domain trading and MEV extraction:
  - market efficiency
  - overall revenue for the sequencer
  - revenue on each sequenced domain

Trading as a contest:

- a cross-domain arb is a trading contest; the sequencer orchestrates the rules of that contest
- designing a transaction ordering policy means designing the rules of that contest
- if you do FCFS, you incentivize latency competition
- if you do PGAs, you incentivize bid cancellation
- if you do shared sequencing, you reduce noise competition

A minimal model:

- two searchers compete to make a profitable cross-domain arb trade
- the trade has a value v > 0 when executed
- winning requires placing two txs, one on each domain, before the competitor
- searchers can influence the outcome of the competition by investing in a signal s at a cost C(s)
- there are several interpretations of the signal depending on sequencer design:
  - FCFS: they invest in latency
  - in a PBS world, they invest in a bid
  - in some proposals made by Arbitrum, they might do both
- the contest is noisy: investing more makes winning more likely, but there are still random factors
- to realize the full value of the arb, the bidder needs to win on all domains
- shared sequencing: one signal needed; searcher i wins with probability F(s_i - s_j), where F is the CDF of delta := epsilon_j - epsilon_i
- separate sequencing: two signals needed, with two imperfectly correlated noise terms

Equilibrium investment (assuming some structure on the cost of investing).

What is predicted by the model?

- Proposition: there is a threshold on v such that, above the threshold, in equilibrium both bidders enter the competition for sure and make the same investment
  - increasing in the value v
  - decreasing in the variance
- the threshold on v is lower under shared sequencing

What is predicted by this model?
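The win probabilities in the minimal model above can be checked by simulation. A sketch under assumptions the notes leave open: the noise is taken as Gaussian, and under separate sequencing the two domains' noise terms are drawn independently (zero correlation, one end of the "imperfectly correlated" range). `p_win_shared` and `p_win_separate` are hypothetical helper names.

```python
import random

# Monte Carlo check of the contest model: under shared sequencing searcher i
# beats j with one noisy comparison, P(win) = F(s_i - s_j) for F the CDF of
# eps_j - eps_i; under separate sequencing they must win both domains, each
# with its own noise draw. Gaussian noise and independence are assumptions.

random.seed(0)

def p_win_shared(s_i, s_j, sigma=1.0, trials=20_000):
    """One signal, one noisy comparison decides both legs of the arb."""
    wins = sum(s_i + random.gauss(0, sigma) > s_j + random.gauss(0, sigma)
               for _ in range(trials))
    return wins / trials

def p_win_separate(s_i, s_j, sigma=1.0, trials=20_000):
    """Two signals: must beat the competitor on both domains, with noise
    drawn independently per domain (the zero-correlation case)."""
    wins = 0
    for _ in range(trials):
        d1 = s_i + random.gauss(0, sigma) > s_j + random.gauss(0, sigma)
        d2 = s_i + random.gauss(0, sigma) > s_j + random.gauss(0, sigma)
        wins += d1 and d2
    return wins / trials

# With equal investment, shared sequencing wins ~50% of contests but
# separate sequencing only ~25%: the noise bites once per domain.
```

This is the "reduce noise competition" claim in miniature: for the same investment gap, separate sequencing multiplies the chance of losing a leg, which is what lowers the entry threshold on v under shared sequencing.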
- what about the equilibrium investment conditional on both bidders competing? This is more subtle
  - they charge users twice under separate sequencing
  - the marginal return on investment is higher under shared sequencing
- which regime gives more equilibrium investment depends on:
  - the variance of the noise
  - the sequencing rule

What is predicted by this model? Examples:

- constant cost elasticity: shared sequencing will generate more revenue; more latency investment under shared sequencing
- FCFS with Time Boost (Arbitrum proposal): it depends on v and sigma (the parameterization of the boost formula)
- PBS-type market structure: the signal is expenditure on fees, infrastructure, and payments to intermediaries in the MEV supply chain; absolutely no idea what the cost structure of this is
- regardless, the bottom line: shared sequencing has an ambiguous effect on sequencer revenue
- the revenue effect of shared sequencing depends on the sequencing rule: mechanisms matter!

Further considerations:

- I have looked at a very particular set of players; what's the effect on other stakeholders?
- I have considered overall revenue across all domains, but there are also distributional concerns: some domains may gain more from integration than others. **A revenue-sharing scheme is needed.**
- what is the size of unrealized cross-domain MEV?

Conclusion:

- mechanisms matter for who gains and who loses when we integrate domains
- given all of this, some serious persuasion might be needed to achieve domain integration and usage of shared sequencers