<!-- Start of AUCTION_SPEC.md -->

# Block Proposer Call Market Auction Specification

> This is a specification for a block proposer call market auction contract. It is a work in
> progress. It offers an alternative implementation of, and specification for, the MEV-Boost
> Flashbots Auction.

## Table of Contents

- [Block Proposer Call Market Auction Specification](#block-proposer-call-market-auction-specification)
  - [Table of Contents](#table-of-contents)
  - [Overview](#overview)
  - [Contract Properties](#contract-properties)
  - [Contract Functions](#contract-functions)
  - [Contract Events](#contract-events)
  - [Contract Storage](#contract-storage)
  - [Contract Invariants](#contract-invariants)
  - [Contract Tests](#contract-tests)
  - [Contract Property Tests](#contract-property-tests)

<!-- End of AUCTION_SPEC.md -->

<!-- Start of GLOSSARY.md -->

# Glossary

## Overview

Glossary of terms used, formally defined. These definitions take precedence over any other
definition. Terms are defined with _Header 4_. Sections are defined with _Header 3_.

### Blocks

#### Above

The `Above` Block, also called $\alpha$, is the 'MEV Boost' Block body. This is the part of
blockspace that comes first in the block and contains the transactions with top priority.

#### Below

The `Below` Block, also called $\beta$, is everything after $\alpha$, excluding our L2 rollup
and the Relay-inserted transaction at the end of the Block.

### Call Market

#### Call Epochs

#### Call Slots

### Call Tokens

#### Option Slot Priority

<!-- End of GLOSSARY.md -->

<!-- Start of accounting.md -->
<!-- ACCOUNTING -->

# Accounting

We need to track the flows of remuneration for validators and builders across the different
L1/L2 and alpha/beta block parts, and include their actual payment transactions.

## Notices

> [!NOTE]
> Proposal One is the current accounting regime.

> [!NOTE]
> We do NOT take on external validators that are not part of our system (i.e. validators must be
> added to the registry by us before they can connect to the relay).
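The flow tracking described in the introduction above (remuneration across L1/L2 crossed with alpha/beta) can be sketched as a minimal ledger. This is purely illustrative: the `Flow` and `Ledger` names and structure are hypothetical, not part of the accounting design itself.

```go
package main

import "fmt"

// Flow is one remuneration flow: a chain ("L1"/"L2"), a block part
// ("alpha"/"beta"), a payee (e.g. "validator", "builder"), and an amount.
// Hypothetical types for illustration only.
type Flow struct {
	Chain string
	Part  string
	Payee string
	Wei   uint64
}

// Ledger accumulates per-payee totals per chain.
type Ledger struct {
	totals map[string]uint64 // key: payee + "@" + chain
}

func NewLedger() *Ledger { return &Ledger{totals: map[string]uint64{}} }

// Record adds one flow to the running totals.
func (l *Ledger) Record(f Flow) {
	l.totals[f.Payee+"@"+f.Chain] += f.Wei
}

// Total returns what a payee is owed on a given chain.
func (l *Ledger) Total(payee, chain string) uint64 {
	return l.totals[payee+"@"+chain]
}

func main() {
	l := NewLedger()
	l.Record(Flow{Chain: "L1", Part: "alpha", Payee: "validator", Wei: 100})
	l.Record(Flow{Chain: "L2", Part: "beta", Payee: "validator", Wei: 40})
	fmt.Println(l.Total("validator", "L1"), l.Total("validator", "L2")) // 100 40
}
```

The point of the sketch is only that the four flow buckets (L1/L2 × alpha/beta) can be tracked independently before any payout rule (Proposal 1, 2, or 3 below) is applied on top.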
## Validators

For the validators, the main challenge is determining how much they should be paid and how the
exact transfer scheme is organized.

### Proposal 1: Internal accounting

- Auction revenue from alpha, $\pi_{\alpha}$, goes to an L1 address different from the actual
  validator's, owned by us.
- Auction revenue from beta, $\pi_{\beta}$, goes to an L2 address, owned by us.
- From these addresses we organize a payout scheme according to our rules. That is, the
  validator periodically receives a payment that matches some target $\pi^{t}$ we define. This
  could be based on the last 30-day average payoff plus a small $\delta$. We have to be careful
  there, obviously.
- Advantage:
  * We can fine-tune the payouts we give and skim our own payoff very effectively.
- Disadvantage:
  * We need to change the mev-boost structure to replace the validator address (we have to do
    this regardless because of the builder remuneration discussed below).
  * We are building in a trust assumption, because our payout scheme might fail or we might
    renege on it. _This is a significant change and could amount to a no-go._

### Proposal 2: Fee

The alpha payout is organized as it is currently under MEV-Boost: the validator is paid through
a transaction included in the block. Hence, the validator automatically gets the alpha winner's
bid as soon as the block is minted. On the beta auction we impose a fee, $f$. The revenue
$\pi_{\beta} \cdot (1 - f)$ also goes to the validator (on an L2 address). We keep the fee.

- Advantage:
  * Relatively straightforward to set up.
- Disadvantage:
  * Assuming we set $f$ beforehand, there is the danger of paying out too little or too much.
    Over time, estimates of $f$ should become better though.

### Proposal 3: Residual claim

As for Proposal 1, we define a remuneration target per block, $\pi^{t}$, based on some external
measurement, denoted $\overline{\pi}$, plus an additional small $\delta$. That is,

$$
\pi^{t} = \overline{\pi} + \delta
$$

- The measurement could be, for instance, a 30-day moving average of external block rewards.

> [!WARNING]
> The dependency on the measurement is key.

When a block is built by us, we include a transaction that contains the transfer of $\pi^{t}$ to
the validator. This replaces the normal mev-boost transaction based on the immediate auction
payout.

> [!NOTE]
> This makes the payment to validators independent of the current block. Effectively, if the
> validator fulfills their duty, they will receive a guaranteed reward.

We will face the residual, $\pi^{mev.io}$. That is,

$$
\pi^{mev.io} = \pi_{\alpha} + \pi_{\beta} - \pi^{t}
$$

- Note that for a specific block $\pi^{mev.io}$ might be negative. This means we have to provide
  a buffer for losses. But this buffer can be calculated relatively easily.
- Periodically, we will skim the buffer. This goes to our own treasury.
- In this setup, we can offer proposers an above-average return while at the same time
  extracting all the residual above it.
- Also note that we could distinguish transfers for external validators and internal validators:
  external validators receive their remuneration on the L1; internal validators receive it on
  the L2.
- Advantage:
  - We give validators what they need but not more - everything else goes into our pocket.
  - No complicated accounting needed for L1/L2.
- Disadvantage:
  - The treasury introduces a liquidity risk on our side. And it requires careful calibration of
    the measurement period.

## Builders

In contrast to validators, for builders the accounting details are more complicated. The reason
is that we now have several builders building the block, which means value from the transactions
has to be attributed differently.

### Required changes in current block structure

Currently, builders will put themselves as the fee recipient for a given block.
That part cannot stay intact: if we were to keep it, the alpha builder would receive fees from
the whole block - including the beta part they have not built. This means this aspect of the
basic mev-boost/relay infrastructure needs to change.

The aggregation of fees in a block is handled through the ETH client. This is not a piece of
infrastructure we can (or want to) change easily. So, the only way to handle this is to replace
the fee recipient with another address and then handle the accounting and settlement on our end.
See [Bid Adjustment](#bid-adjustment).

> [!NOTE]
> This requires us to handle different fee accounts for each participating builder.

### Internal Operations

We put an address we own as the fee recipient for any block we mint. That is, the transaction
fees all go to us. We will then have to redistribute these tx fees among all the involved
builders.

### Alpha builder

- We need to sum the fees of the respective transactions they own.
- We will include an L1 transaction in the block that settles the fees.
- **This calculation is latency-sensitive.** By this we mean that:
  * The payment tx must be issued before the block is minted;
  * We need to make sure there's enough space in the block to include this tx.

### Beta builders

- For each beta builder we need to sum the fees for the respective txs.
- After the block is minted, the beta builders will receive their fees on their L2 address.
- This calculation is not latency-sensitive, as it gets credited on our L2 and as such can be
  postponed by a few seconds if needed.

## Bid adjustment

> Accessed 12/22/2023
> <https://gist.github.com/blombern/c2550a5245d8c2996b688d2db5fd160b/e16d5ef906267ecf21b22e784ed435387e5988d1>

Bid adjustment is a new experimental feature for the ultra sound relay. The idea is that we try
to adjust bids, ideally to `secondBestBid + 1 WEI`, capturing the delta (if any) from our
latency advantage compared to the next best relay. Our goal is to make operating and developing
(e.g. geo distribution, proper cancellations) the relay sustainable by having an incentive tied
to its performance.

To achieve this, we need block submissions to include some additional data. As an incentive for
builders to integrate, we offer a percentage of the bid delta as a "kickback". During November,
100% of the delta will be kicked back to builders.

### Technical implementation

Enabled by `?adjustments=1` and by including an `adjustment_data` object in the normal block
submission:

```json
{
  "message": { ... },
  "execution_payload": { ... },
  "signature": "0xa2def54237bfeb1d9269365e853b5469f68b7f4ad51ca7877e406ca94bc8a94bba54c14024b2f9ed37d8690bb9fac52600b7ff52b96b843cd8529e9ecc2497a0ecd5db8372e2049156e0fa9334d5c1b0ef642f192675b586ecbe6fc381178f88",
  "adjustment_data": {
    "state_root": "0x74f74d15dcb00ba194901136f2019dd6be2d4c88c822786df90561a550193899",
    "transactions_root": "0x74f74d15dcb00ba194901136f2019dd6be2d4c88c822786df90561a550193899",
    "receipts_root": "0x74f74d15dcb00ba194901136f2019dd6be2d4c88c822786df90561a550193899",
    "builder_address": "0xb64a30399f7f6b0c154c2e7af0a3ec7b0a5b131a",
    "builder_proof": ["0xabc", "0x123", ...],
    "fee_recipient_address": "0xb64a30399f7f6b0c154c2e7af0a3ec7b0a5b131a",
    "fee_recipient_proof": ["0xabc", "0x123", ...],
    "fee_payer_address": "0xb64a30399f7f6b0c154c2e7af0a3ec7b0a5b131a",
    "fee_payer_proof": ["0xabc", "0x123", ...],
    "placeholder_transaction_proof": ["0xabc", "0x123", ...],
    "placeholder_receipt_proof": ["0xabc", "0x123", ...]
  }
}
```

- `builder_address` is the usual builder address that pays the proposer in the last transaction
  of the block. When we adjust a bid, this transaction is overwritten by a transaction from the
  collateral account `fee_payer_address`. If we don't adjust the bid, `builder_address` pays the
  proposer as per usual.
- `fee_payer_address` is an account which holds the ETH used by the relay to pay the fee
  recipient. Builders fund this account to use the feature. All adjusted bids are paid from
  this address.
- `fee_recipient_address` is the proposer's fee recipient.
- `placeholder_transaction_proof` is the merkle proof for the last transaction in the block,
  which will be overwritten with a payment from `fee_payer` to `fee_recipient` if we adjust the
  bid.
- `placeholder_receipt_proof` is the merkle proof for the receipt of the placeholder
  transaction. It's required for adjusting payments to contract addresses.

> [!NOTE]
> We rely on the `gas_limit` of the payout transaction being exact, i.e. 21000 for an EOA
> recipient and variable for contract recipients.

#### Computing the adjustment data

Example using `flashbots/builder` and `go-ethereum` libraries:

```go
type AdjustmentData struct {
	BuilderAddress          *common.Address  `json:"builder_address"`
	BuilderProof            *[]hexutil.Bytes `json:"builder_proof"`
	FeeRecipientAddress     *common.Address  `json:"fee_recipient_address"`
	FeeRecipientProof       *[]hexutil.Bytes `json:"fee_recipient_proof"`
	FeePayerAddress         *common.Address  `json:"fee_payer_address"`
	FeePayerProof           *[]hexutil.Bytes `json:"fee_payer_proof"`
	PlaceholderTxProof      *[]hexutil.Bytes `json:"placeholder_transaction_proof"`
	PlaceholderReceiptProof *[]hexutil.Bytes `json:"placeholder_receipt_proof"`
	StateRoot               *common.Hash     `json:"state_root"`
	TransactionsRoot        *common.Hash     `json:"transactions_root"`
	ReceiptsRoot            *common.Hash     `json:"receipts_root"`
}

func (w *worker) computeAdjustmentData(env *environment, validatorCoinbase *common.Address, feePayerAddr *common.Address) (*types.AdjustmentData, error) {
	// Account proofs
	builderProof, _ := env.state.GetProof(w.coinbase)
	hexBuilderProof := make([]hexutil.Bytes, len(builderProof))
	for i, v := range builderProof {
		hexBuilderProof[i] = hexutil.Bytes(v)
	}

	feeRecipientProof, _ := env.state.GetProof(*validatorCoinbase)
	hexFeeRecipientProof := make([]hexutil.Bytes, len(feeRecipientProof))
	for i, v := range feeRecipientProof {
		hexFeeRecipientProof[i] = hexutil.Bytes(v)
	}

	feePayerProof, _ := env.state.GetProof(*feePayerAddr)
	hexFeePayerProof := make([]hexutil.Bytes, len(feePayerProof))
	for i, v := range feePayerProof {
		hexFeePayerProof[i] = hexutil.Bytes(v)
	}

	// Placeholder tx proof
	placeholderTransactionIndex := len(env.txs) - 1
	var transactions types.Transactions = env.txs
	transactionTrie := populateTrie(transactions)
	transactionProofDb := rawdb.NewMemoryDatabase()
	transactionKey, _ := rlp.EncodeToBytes(uint(placeholderTransactionIndex))
	transactionTrie.Prove(transactionKey, 0, transactionProofDb)
	transactionIter := transactionProofDb.NewIterator(nil, nil)
	var placeholderTransactionProof []hexutil.Bytes
	for transactionIter.Next() {
		placeholderTransactionProof = append(placeholderTransactionProof, transactionIter.Value())
	}
	transactionIter.Release()

	// Placeholder tx receipt proof
	receiptKey, _ := rlp.EncodeToBytes(uint64(placeholderTransactionIndex))
	var receipts types.Receipts = env.receipts
	receiptTrie := populateTrie(receipts)
	receiptProofDb := rawdb.NewMemoryDatabase()
	receiptTrie.Prove(receiptKey, 0, receiptProofDb)
	receiptIter := receiptProofDb.NewIterator(nil, nil)
	var placeholderReceiptProof []hexutil.Bytes
	for receiptIter.Next() {
		placeholderReceiptProof = append(placeholderReceiptProof, receiptIter.Value())
	}
	receiptIter.Release()

	transactionsRoot := types.DeriveSha(transactions, trie.NewStackTrie(nil))
	receiptsRoot := types.DeriveSha(receipts, trie.NewStackTrie(nil))

	return &types.AdjustmentData{
		BuilderAddress:          &w.coinbase,
		BuilderProof:            &hexBuilderProof,
		FeeRecipientAddress:     validatorCoinbase,
		FeeRecipientProof:       &hexFeeRecipientProof,
		FeePayerAddress:         feePayerAddr,
		FeePayerProof:           &hexFeePayerProof,
		PlaceholderTxProof:      &placeholderTransactionProof,
		PlaceholderReceiptProof: &placeholderReceiptProof,
		StateRoot:               &env.header.Root,
		TransactionsRoot:        &transactionsRoot,
		ReceiptsRoot:            &receiptsRoot,
	}, nil
}

// Helpers

// encodeBufferPool holds temporary encoder buffers for DeriveSha and TX encoding.
var encodeBufferPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func encodeForDerive(list types.DerivableList, i int, buf *bytes.Buffer) []byte {
	buf.Reset()
	list.EncodeIndex(i, buf)
	// It's really unfortunate that we need to perform this copy.
	// StackTrie holds onto the values until Hash is called, so the values
	// written to it must not alias.
	return common.CopyBytes(buf.Bytes())
}

// Adapted from core/types/hashing.go
func populateTrie(txs types.DerivableList) *trie.Trie {
	db := rawdb.NewMemoryDatabase()
	triedb := trie.NewDatabase(db)
	t := trie.NewEmpty(triedb)
	valueBuf := encodeBufferPool.Get().(*bytes.Buffer)
	defer encodeBufferPool.Put(valueBuf)

	var indexBuf []byte
	for i := 1; i < txs.Len() && i <= 0x7f; i++ {
		indexBuf = rlp.AppendUint64(indexBuf[:0], uint64(i))
		value := encodeForDerive(txs, i, valueBuf)
		t.Update(indexBuf, value)
	}
	if txs.Len() > 0 {
		indexBuf = rlp.AppendUint64(indexBuf[:0], 0)
		value := encodeForDerive(txs, 0, valueBuf)
		t.Update(indexBuf, value)
	}
	for i := 0x80; i < txs.Len(); i++ {
		indexBuf = rlp.AppendUint64(indexBuf[:0], uint64(i))
		value := encodeForDerive(txs, i, valueBuf)
		t.Update(indexBuf, value)
	}
	return t
}
```

#### SSZ encoding

We strongly suggest SSZ (+ gzip) encoding all submissions for the best performance. The
`AdjustmentData` is expected as the last field in `SubmitBlockRequest` (see JSON example above).
Here's how to encode `AdjustmentData` as SSZ:

```
state_root: FixedVector<u8; 32>
transactions_root: FixedVector<u8; 32>
receipts_root: FixedVector<u8; 32>
builder_address: FixedVector<u8; 20>
builder_proof: VariableList<VariableList<u8>>
fee_recipient_address: FixedVector<u8; 20>
fee_recipient_proof: VariableList<VariableList<u8>>
fee_payer_address: FixedVector<u8; 20>
fee_payer_proof: VariableList<VariableList<u8>>
placeholder_transaction_proof: VariableList<VariableList<u8>>
placeholder_receipt_proof: VariableList<VariableList<u8>>
```

An example of the SSZ codec in Go: https://github.com/blombern/adjustable-bid-encoding

### Testing

We've successfully tested this feature using our own builder on Goerli, and we're now looking to
integrate with external builders. Testing can be done on either Goerli or Mainnet (with
optimistic relaying disabled).

### Performance considerations

The additional size added to an uncompressed JSON payload (worst case) is 22KB. We recommend SSZ
encoding payloads for significantly faster decoding. On the relay side, the adjustment
computation cost is negligible at ~300μs.

<!-- ACCOUNTING -->
<!-- End of accounting.md -->

<!-- Start of auctions.md -->

# Discussion Market Designs - WIP

Version 0.4.0. Discussion of main design tradeoffs, results, and open questions. Updated
continuously.

# Overview

## Updates over version 0.3.0

- Locked in format for first version of
- Refinements of the discriminatory auction.
- Contingency plans for discussions with potential bidders and their actual behavior.

## Updates over version 0.2.0

- Concrete design proposal for an augmented uniform auction

## Updates over version 0.1.0

- Highly remunerative blocks follow a different distribution of gas size: overall more skewed
  toward max capacity.
  - If we had hard-fixed capacity at 15MM, the average block return would have been 0.065 ETH.
    In contrast, if we allow for max block size, the return would have been 0.10 ETH (for the
    period from 15th of August 2023 to 31st of October 2023).
  - While the max capacity is important, we also see probability mass in value between target
    size and max size.
  - The consequence is that we need to make the block size more elastic in order to be able to
    capture such blocks. We have a favored proposal for this.
- Regarding the design of the primary auction, the choice is between something very simple with
  problematic properties and something very complex with nicer properties.
  - Key factors for simple vs. complex are whether demand for beta space is downward-sloping
    and whether values are correlated.
  - The latter can be limited through private order flow.
- While a permissioned set of bidders helps from an implementation and security perspective,
  this might have an effect on the rewards we can achieve in the primary market.
- We are looking into novel designs that would represent better tradeoffs between properties
  and complexity.

## Questions to clarify / TODOs

- Once we have converged on concrete auction format(s):
  - Simulate them; maybe run experiments beforehand
  - Red-team them with some experts we know

## TL;DR Main points

- The main tradeoff in the design of the beta market is the allocation of capacity between
  alpha and beta, as well as how the settlement works.
  - Limiting alpha possibly restricts our ability to reap lottery blocks.
  - Allowing alpha to be extended risks lowering the quality of beta options, which might make
    them unattractive. This depends on the type of transactions to be included and also on the
    nature of beta bidders.

# Market design discussion

## What is the good to be sold?

- At some time t-x we know that one of our validators gets to propose a block at time t.
- The mechanisms we build are for selling that block space.
- We want to split the block into two parts: (i) the top of the block (alpha) and (ii) the rest
  (beta).
- Alpha will be sold on the spot; beta will be sold ahead of the slot - it enables buyers to
  get inclusion rights with attached gas loads before t.

## Buyers

- Buyers have different preferences regarding inclusion in a block.
  - Order within the block matters: being at the top or being at the bottom.
  - Just getting a transaction in, maybe within a reasonable amount of time.
- TODO What is important to understand is whether there is an incentive to bid on the whole
  block.
  - This can come about due to
    1. coordination problems with other bidders for beta
    2. position in beta actually mattering
  - We should make sure we understand these preferences and can differentiate if applicable.
  - From rsync we have received some information on this: it seems pretty clear that the demand
    curve is quite flat except for the top of the block. They have their exclusive
    (self-produced) order flow though.
  - We can gather this from the bids that we receive.
  - This is important to keep in mind with respect to a secondary market. The nature of that
    market changes significantly if there is only one beta winner.
  - We can probably refine this more over time, but it is a reasonable starting point.
- Buyers' valuations are block-dependent and thereby vary over time.
  - Sometimes it is very valuable to be at the top of the block; sometimes it is not.
  - The same applies to just getting a tx in, though we expect less variability.
- Risk matters
  - While the top of the block is a winner-takes-all business, getting regular txs in is not.
  - For regular txs, the risk of not getting a tx in within a time frame is more important than
    it is for top-of-the-block buyers.
  - The main idea for buyers should be the possibility to hedge gas price spikes as well as
    timely inclusion guarantees.
- Comments
  - Beta should be particularly interesting for certain forms of orderflow.
  - It is pretty evident that we can build an intents structure on top.
  - We are eyeing institutional investors; for them, having a format well known from Wall
    Street might be a good idea.

## Tradeoffs regarding alpha and beta capacity

- Alpha is important as it provides the most remuneration.
  - In particular, there are tail lottery blocks which account for a large share of the overall
    remuneration.
- Builder valuations for alpha capacity and beta capacity vary over time. Therefore, the amount
  of alpha and beta we would optimally allocate varies over time as well. Obviously, we do not
  directly observe the demand.
- Looking at the data, we find the following:
  - Very skewed distribution: 99% of the validator rewards are within 1.5 ETH.
  - With high validator remuneration, gas requirements increase.
  - The max capacity is used more often when rewards are very high.
- The larger the size of alpha, the more txs will be covered within alpha, and the more likely
  it is that txs in beta will be reverted. In other words, the uncertainty with respect to the
  values of beta goes down with larger alpha block size. This depends on a couple of conditions
  that we go into in more detail below.
  - TODO The question is by how much? That depends on the kind of txs people will put into
    beta.
  - TODO Are there classes of txs that are very unlikely to revert?
- The main tradeoff is thus being able to reap lottery blocks (possibly large alpha) while
  maintaining a decent service for beta (restrict alpha rigorously).
- TODO What we do not yet understand: what makes lottery blocks lottery blocks?
  - The main question is whether both alpha and beta builders are affected in the same way -
    which would be good for us - or whether alpha builders profit disproportionately compared
    to beta builders - which would be bad for us.
  - In the former case, we could expand the capacity of both builders.
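The capacity tradeoff above hinges on where the alpha builder's marginal value for additional gas falls below that of beta buyers. A toy Go sketch of picking the alpha/beta boundary from two such curves, with entirely hypothetical inputs (we do not observe demand directly; in practice we could only approximate these curves from bids):

```go
package main

import "fmt"

// splitCapacity returns the alpha/beta boundary: the first gas level at
// which the alpha builder's marginal value falls below the beta buyers'.
// Both curves are hypothetical inputs, not observed data.
func splitCapacity(alphaMV, betaMV func(gas uint64) float64, maxGas, step uint64) uint64 {
	for g := uint64(0); g < maxGas; g += step {
		if alphaMV(g) < betaMV(g) {
			return g
		}
	}
	return maxGas // alpha dominates everywhere: no beta space
}

func main() {
	// Toy curves: alpha value decays with block depth (downward-sloping
	// demand assumption); beta buyers' demand is roughly flat.
	alpha := func(g uint64) float64 { return 10.0 - float64(g)/1_000_000 }
	beta := func(g uint64) float64 { return 4.0 }
	fmt.Println(splitCapacity(alpha, beta, 30_000_000, 100_000)) // 6100000
}
```

Note how the result depends entirely on the downward-sloping assumption: if the alpha curve never crosses the beta level (e.g. for a lottery block that shifts the alpha curve up), the split collapses to the whole block going to alpha, which is exactly the tension the settlement variants below try to resolve.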
## Basic market structure

The overall structure of the markets looks like this:

- We keep a version of mev-boost in place for alpha.
- For beta we are building a separate allocation mechanism that has two components:
  1. A primary call option market for gas, starting at time t-x, where x is the time ahead of
     the block
  2. A secondary market for options until time t
- We will focus on 1 first; 2 will be considered later.
  - Nevertheless, we need to consider the interaction between markets 1 and 2, and also
    possibly the alpha part.
- The auction will run as a contract on L2.
  - This is potentially very big because we stop the latency game. Bidding happens through
    builder-specific bidding contracts.
  - NOTE we are targeting to extend this structure to the alpha market as well (later).
  - This means that we are not only creating a concrete auction instance; ideally, the contract
    template we are creating can work for different formats as well, and also on-chain for
    alpha.
- The central challenge is to size alpha and beta.
- TODO We discussed the possibility of including an additional third spot market. This could
  give us the option of including a market maker who runs that part. But it would require a
  possibly complicated interaction with alpha. We leave that open for now and will come back to
  it.
- NOTE We might need to reserve space for internal tx inclusion. So everything should be seen
  as subject to this constraint.

## Beta

### Primary market

#### Nature of options to be sold

- The options refer to slices of the overall gas size of beta.
- We aim to homogenize them, i.e. all slices of gas are equal and interchangeable.
  - TODO This is an assumption. We need to check this with builders; maybe priority is also
    important for the beta part.
- The nature of the futures we are selling in the beta market depends on the capacity
  restriction we impose. Most importantly: do we guarantee enough capacity that the calls get
  included or not?
- The problem with guaranteeing a fixed capacity for beta is that this means restricting the
  capacity for alpha. The problem with this is that alpha might be more lucrative for us.
  - In particular, we know that lottery blocks come along that make up a large chunk of the
    overall remuneration we can achieve at all.
  - So, if we fix alpha, we might be missing a significant amount of value.

![Diagram](illustration-tradeoff-alpha-beta.png)

- The tradeoff can be seen in the picture above.
  - We make the assumption that top-of-the-block gas space is more valuable.
  - We also assume the marginal value of extra space is decreasing. This is a key assumption
    because it implies a downward-sloping demand for additional space. If this is violated, we
    are facing scale effects.
  - NOTE for alpha bidders this assumption will most likely be violated at the very top; so
    having, say, 5MM of space is much more valuable than just having 4MM. But the important
    part is that at some point the demand is downward-sloping, which means that it is just
    about getting additional transactions in.
  - A big question is whether the shape of the curve is the same for high-reward blocks.
  - The main assumption is that at some point, the marginal value for a builder owning the
    whole block (green) goes below the marginal value of a beta buyer (typically someone who
    just wants to be included, independently of the order).
  - If we had perfect information, we could fix the capacity constraint at exactly the
    intersection point. But this information is not available. We can only approximate it.
  - The question of lottery blocks can also be seen in the diagram: the question is whether for
    such a block both curves are shifted in the same way. If not, it would indicate the need to
    give alpha builders more space relative to beta builders.
- There is another aspect to consider. The larger the alpha part, the larger the possibility of
  including a tx in beta that will revert (due to state changes). This will degrade the value
  of the option to buy beta space. What is more, this might jeopardize the service as a whole.
  If builders get the impression that the service is not of sufficient quality, then they will
  stop using it. So there is also a reputation component - in particular at the beginning.
- Another thing to consider is the issue of the beta market and uncoordinated txs between the
  different builders there. If builders draw from the public mempool, then it might easily lead
  to the inclusion of the same txs. We might think about helping coordination; maybe with a
  leaderboard of sorts.

##### Possible settlement solutions

This describes how the settlement of alpha and beta capacity can be handled.

1. **Non-binding variant**: Capacity is not binding. Buying an option is just a risky thing; no
   guarantees get attached.
   - Cons: Bad because it undermines the whole purpose of developing stable guarantees;
     depends of course on the actual data (how often does it happen).
   - Pros: Easy, and guarantees full reaping of lottery blocks.
2. **Non-binding variant with buyback**: Buying an option but getting refunded if the inclusion
   does not happen.
   - Cons: Makes things more complicated (in particular relative to secondary trading); and
     does not really satisfy risk-averse preferences.
   - Pros: Reduces the spending risk of participants.
3. **Hard cap on alpha**: We fix alpha at a hard limit.
   - Cons: We might artificially limit our ability to reap lottery blocks. If the most
     lucrative blocks happen to be big, we might extract suboptimal value from them. That
     obviously depends on our understanding of the intra-block demand curve.
   - Pros: Makes a very strong beta market with guarantees; also, if a secondary market is
     going, this enables bidders to buy additional capacity on the secondary market. Note,
     however, that there is no guarantee on the connectedness of alpha space and futures.
     See 4.
4. **Hard cap on alpha with order guarantees**: As 3, but if a block winner buys additional
   futures, these futures will be ordered first.
   - Cons: As 3.
   - Pros: As 3, but makes buying futures a more attractive option. There is also a further
     strategic aspect to this: if beta blocks are not connected, then alpha bidders might just
     have an incentive to buy up all beta space in order to get the full block. As the auction
     is permissioned, the direct buying might be less of an issue. Still, the permissioned
     buyers might be speculating on the secondary market.
5. **Hard cap on alpha with reserve**: We limit alpha but offer the extra 15MM of blockspace
   above target size for the alpha block as a reserve. We give that space to the alpha part.
   Importantly, this requires a change in the auction format. Builders can either bid on the
   core part or bid on the extended space. Importantly, the bid on the extended part will
   include a minimum price. This price will be set according to the data on realized block
   rewards for validators. It will be high, so that builders only have an incentive to bid for
   it in extraordinary circumstances, namely with lottery blocks coming up. We can also
   consider a compensation mechanism for beta builders (systemic vs. individual insurance).
   - Cons: Deviates from the mev-boost mechanism; also more complex; still no full guarantee
     on reaping lottery blocks.
   - Pros: Might give us the best approximation to full reaping of lottery blocks; gives us
     additional information on the demand curve. That information might be very useful. If we
     see the extra demand regularly exceeding the primary market price for futures, we know
     that there is a structural problem.
6. **Elastic beta**: We limit alpha at 10MM but make beta space elastic. That is, beta can take
   up to 20MM. We require a minimum price level for beta space. We in principle guarantee
   elastic alpha because alpha builders can get beta space on the secondary market. This would
   be a radical way to reorganize the market.
- Cons: The reaping of full alpha depends on an effective working of the secondary market. If complementarities of early and later txs, or high value concentration among alpha bidders, then this requires them buy up all beta; might be difficult in practice. Also setting a reasonable reserve price is more challenging. Using data from past blocks not really adequate. The main disadvantage for us is that if lottery blocks form at the last moment, then most value goes to the auction bidders. This is problematic also as they are permissioned. It is a basically a permissioned reallocation scheme from alpha to beta builders. - Pros: Keeps mev-boost intact (except of limiting the gas size to 10MM). Gives us a good approximation of overall block space required. Trading on secondary market might also give us a good indicator of value of beta versus alpha. 7. **Elastic alpha**: We limit alpha but offer fixed tranches of extra blockspace above target. Note that this includes 5 as a special case. Builders can submit different bids for the different block sizes. Importantly, we require a minimum reserve price. The idea is, that if a builder wants to extend his bid for the package, the marginal value of the txs he includes must be higher than that reserve price. Or, he realizes complementarities between earlier txs and later txs. That reserve price should come from the beta market. In this way, we guarantee that value generated through beta is a blocker for alpha just eating into the value of beta. In exceptional cases, even if beta were high, alpha builders would have an incentive to buy into that space. We could include a compensation mechanism for beta builders. The higher the capacity requested the higher the compensation for beta builders (as a way to counter possible detoriation of service). - Cons: Deviates from mev-boost mechanism; also more complex; still no full guarantee on reaping lottery blocks. 
   - Pros: Might give us the best approximate reaping of lottery blocks with
     variable gas requirements; gives us additional information on the demand
     curve. That information might be very useful: if we see the extra demand
     regularly exceeding the primary market price for futures, we know that
     there is a structural problem. An additional advantage would be getting
     an indicator for the shape of the demand curve (downward-sloping).
8. **Combined elastic alpha and elastic beta**: Combination of 6 and 7. We
   limit alpha but offer fixed tranches of extra blockspace above target. We
   do the same for beta. However, we can also include steps inside the limit
   that we fix for beta. If that space does not sell, it is added to alpha.
   - Cons: We need to come up with a supply schedule for both alpha and beta.
   - Pros: Combines 6 and 7.

If we assume that the information regarding a lottery block and the demand for
large gas capacity comes in at the end of the block allocation phase, then
going with 7 or 8 seems the better choice. It is also important to note that
we can fine-tune the relative importance of alpha versus beta. If, for
instance, we see beta prices dominating alpha demand most of the time, then it
can make sense to sell more of the block through beta.

#### Bidder characteristics

- NOTE: we operate under the assumption that position does not matter; if it
  does, this might affect the design significantly.
- Bidders are possibly risk-averse. Standard revenue equivalence might go out
  of the window.
- Bidders are asymmetric, in particular if there is private orderflow.
- Bidders' valuations are not clear:
  - If they draw from a public mempool, or if there are global conditions
    affecting the value of block space, their valuations will be
    interdependent.
  - There is also the danger of a further coordination issue; this might favor
    a winner-takes-all solution.
  - Insofar as private order flow matters, there is less interdependence.
- Interdependence is relevant as it complicates the design:
  - Revenue equivalence breaks down.
  - A dynamic auction might be useful.
- Bidders' valuation of space is still not clear:
  - Are the options substitutes? Not fully, for sure: depending on the txs
    that a bidder wants to pack into the space, a package of space might be
    more valuable than individual pieces of space.
  - The key question is: what is the optimal gas space size for beta players?
    - Is it just about getting additional items in, so that a little bit of
      space is valuable and more space is more valuable?
    - Or do they need a fixed amount of space?
  - Due to the discrete nature of txs' gas requirements, this is definitely a
    packaging problem; this affects the downward-sloping nature of demand.
  - One result could be that it is effective to auction off the whole space to
    one bidder.
- Due to the permissioned nature, few bidders are expected; this affects
  bidding as well as market power on the secondary market.

#### Relation to the secondary market

- Traditionally we would assume that a well-designed auction does not require
  a secondary market: if the result of the auction is in the core, no change
  in the allocation makes sense.
- This is different here, as information comes in over time:
  - Base fee
  - Tx updates for alpha bidders
  - New bidders are active on the secondary market (note that the primary
    market is permissioned)
- The secondary market therefore is not just a reallocation of the primary
  auction but includes information updates.
- Note that this is in contrast to most work on auctions with resale.
- The secondary market changes the rationale for bidding in the primary
  auction.
- NOTE: it is unclear what allocation we will have from the primary market:
  - A winner-takes-all outcome is possible.
  - Also note, if the packaging of the space is an issue, then leaving this to
    buyers/sellers is problematic.
- NOTE: even if the primary market outcome is driven by bidders who have
  downward-sloping marginal values, this might not be true for alpha bidders.
- We might want to structure the secondary market so that concentrated selling
  back is feasible.
  - Luckily for us, we can do this in blockchain land.

#### Possible formats

1. **Uniform price sealed bid**
   - Cons:
     - Uniform price auctions often exhibit demand reduction and reduced
       revenue.
     - Because of resale, demand reduction might be even more pronounced.
     - Requires a full demand schedule; might be too complex.
   - Pros:
     - Simple.
     - Common in other setups (e.g. Treasury auctions).
     - Might be considered fairer.
2. **Augmented uniform price sealed bid**
   - Cons:
     - Uniform price auctions often exhibit demand reduction and reduced
       revenue.
     - Because of resale, demand reduction might be even more pronounced.
     - Novel design.
   - Pros:
     - Reduced risk of massive price reduction.
     - Might be considered fairer.
3. **Discriminatory**
   - Cons:
     - Interaction with the resale market is unclear.
     - Could be perceived as unfair.
     - The bidding strategy is more complex.
   - Pros:
     - Simple.
     - Demand reduction less pronounced (but note this is only understood in
       isolation).
4. **VCG**
   - Cons:
     - Complicated.
     - Problematic with the discreteness of txs.
     - With few asymmetric players, it might end up with low prices and low
       revenue.
     - Complex to understand, complex to send signals, and complex to compute.
     - Problematic w.r.t. possible collusion.
   - Pros:
     - If space is a substitute for bidders, efficient, with a result in the
       core.
5. **Ascending auction (uniform price)**
   - Cons:
     - Relatively complex for this setup; after all, it will be repeated over
       and over again.
     - Latency issues will possibly re-emerge.
     - The biggest issue lies in possible complementarities; these will defeat
       effective bidding.
     - Complex to compute.
   - Pros:
     - Insofar as bidding happens through bots, maybe not too problematic.
6. **Combinatorial Clock auction**
   - Cons:
     - Even more complex for this setup; after all, it will be repeated over
       and over again.
     - Latency issues will possibly re-emerge.
     - Complex to compute.
   - Pros:
     - Allows for complementarities.
     - Can probably approximate the efficient outcome best, independently of
       the specific assumptions on the goods.

1 is a special case of 2; we can therefore ignore it. 4 is probably not a good
choice: too complex, too unfamiliar, too susceptible to collusion. Moreover,
we are not interested in efficiency. It is thus dominated by 2 and 3. Also
note that there might be ways to augment the uniform price auction. We discuss
details of the augmented uniform price auction and the choice between 2 and 3
at length [here](static_standard_auctions.md). If we go for something more
complex, we should do it all the way, i.e. go for 6. The central question is
the nature of the demand function; if there are complementarities, a uniform
price auction might be a bad idea.

### First version of MEV Protocol

#### Alpha

We will keep the alpha part as is for now. We do not want to impose a new
format on bidders, as given our small size this might mean being ignored. The
only change is the restriction of gas to 25MM. (The rest goes to the beta
market.)

#### Beta

We will go with a discriminatory auction first. It is understood, though, that
we will adapt the auction format depending on (i) feedback by potential
bidders and (ii) actual behavior in the market. We will restrict the residual
claim to full bids only for now, to avoid too small token allocations that are
then practically of no use.
Also note that in the case of a secondary market, this issue might be less
important. The main reasons for this choice are:

- Simple and familiar format.
- In principle compatible with a secondary market.
- But first and foremost: we want to get feedback from bidders on how they
  bid.

#### Contingencies

0. Depending on the step bidding, we increase the token size.
1. If we observe bidding on whole blocks only (or close to it), we can
   simplify the whole setup dramatically.
2. If we observe upward-sloping demand curves, we might need to change the
   format completely.

### TODO Secondary market

- Market running after the primary allocation took place.
- Need to think about whether we should provide the marketplace ourselves.
  - In principle, it could be handled by external parties.
  - But the interlocking of the different markets (so that the timing of the
    spot market and the secondary market makes sense) is probably relevant and
    easier to orchestrate by us.
- The main purpose of this market is to account for information differences
  over time:
  - Builders might realize they cannot fill a block, or they might need more
    space.
  - The base fee is not known at the time of the primary market, so there
    should be updates regarding the base fee as before-strike-time blocks are
    minted. This also affects the initial pricing we offer.

### Auction Accounting

Discussion of accounting details [here](accounting.md).

# TODO Roadmap market implementations

**Totally preliminary**

What is the roadmap for rolling out the different components? Obviously, it
would be ideal to roll out all market components at once. But that might not
be a good idea from the engineering perspective. So, how can we proceed?

- V0.1.0 TESTNET
  - MEV boost as is
  - Beta auction in place
  - No secondary market
  - Alpha 25MM, beta 5MM
  - Alpha and beta fixed. No way to extend beyond the market.
- V0.2.0
  - Introduce secondary market
- V1.0.0 READY FOR USERS
  - Include connectedness guarantees for beta blocks bought by winners
- V1.5.0
  - Replace MEV boost with a variant that includes elastic demand beyond the
    15MM target

# Concerns to be addressed

What are possible vectors of attack that would impede the functioning of the
market?

- Speculative activities on the beta market to capture the whole block?
- Influencing fees in t by building in t-1?

<!-- End of auctions.md -->

<!-- Start of static_standard_auctions.md -->

# Static sealed bid auctions

## TL;DR

- No clear dominance of either the uniform or the discriminatory price
  auction.
- Key characteristics favoring the uniform format: (i) flat valuations and
  (ii) bidders without capacity constraints.
- As remarked before, the whole design rationale rests on the assumption of
  non-increasing demand curves. If this is not fulfilled, the designs
  discussed here are problematic.

## Overview

In contrast to the single-unit setting, the allocation mechanism of multi-unit
auctions is less straightforward. We consider here two mechanisms: 1) an
augmented uniform price auction and 2) a discriminatory price auction.

Let us first state what the issue is with the standard uniform price auction.
With fixed supply, i.e. supply being perfectly inelastic, there is a danger
that very low prices result. It is well understood, going back to
[@wilson1979auctions], that bidders have an incentive to shade their bids.
This results in low revenue and inefficiency. We are mostly concerned with
revenue here; efficiency is not that critical for us. Moreover, as there is a
secondary market, inefficiencies can in principle be corrected. Bid shading is
not only a challenge in uniform price auctions; it also affects the
discriminatory price auction. However, theoretically at least, bid shading can
be more severe in the uniform price auction.
In the uniform price auction, the tradeoff seems clear: if you shade your bid
for the marginal unit, you are not only reducing your price for this unit; if
you win, you have also reduced the overall price you pay. Now, the overall
effect and the danger of severe under-pricing are contingent on demand factors
and cannot easily be inferred from data.

It is also well understood that there is no clear way to tell whether the
discriminatory or the uniform price auction provides more revenue, neither
theoretically [@ausubel2014demand] nor empirically. Ausubel et al. identify
scenarios where the uniform and the discriminatory auction each dominate
w.r.t. efficiency and revenue. Only under very restrictive assumptions,
specifically independently distributed values and symmetric bidders, can a
clear dominance be identified: there, discriminatory auctions rank higher than
uniform auctions.

Below, we provide details on (i) an augmented uniform price auction with
features aimed at avoiding underpricing and (ii) a standard discriminatory
price auction. While general predictions as to which of the static formats
works best are not feasible, two criteria can help to make a choice:

1. Do bidders' demand curves slope downwards steeply?
2. Are bidders capacity constrained? That is, are they unable to bid for the
   whole available supply?

In both cases, a discriminatory auction format might be less risky for us
given what we know. Moreover, in the augmented uniform price auction we
consider below, capacity constraints also come into play when we consider the
allocation rule. Note that the features favoring one or the other format might
evolve with the alpha/beta setup. In particular, if we extend beta, factors 1
and 2 might also change. Lastly, a reminder: most of the theoretical results
we can lean on are developed without an interacting secondary market.

## Augmented uniform price auction

We are augmenting the standard uniform price auction format with two features:

1. We do not provide perfectly inelastic supply (i.e. a fixed amount we
   auction off) but an elastic supply schedule: if the price is very low, we
   only offer limited amounts of options.
2. We provide a different tie-breaking rule for excess demand. Due to the
   discrete nature of bids, it can happen that there is excess demand at the
   market-clearing price (where demand = supply). The typical rule applied in
   many auctions favors high marginal bids first. We will consider an
   alternative that introduces more pressure at the quantity at the margin.

We explain the rationale behind these two features below.

### 1) Elastic supply curve

- Fix the max capacity of beta, $q^{max}$. We assume this is given for the
  auction. Obviously, it can be a parameter that we will optimize over time.
- The supply curve is $S \colon P \to Q$, where $P$ is the set of allowed
  prices and $Q$ the set of available options. It gives for each price the
  amount of gas space we offer.
- The set of options will be determined by the tick size we provide. $Q$ will
  be further limited by the max capacity we offer. That is, typically,
  $Q=\{0,t_q,2t_q,\dots,q^{max}\}$ where $t_q$ is the tick size of one option,
  and it should hold that
  $$ t_q = q^{max}/k $$
  for some $k \in \mathbb{N}$.
- The idea for the shape of the supply function is to have an initial segment
  of the supply curve which is concave, and a second segment that then only
  provides a constant amount: the maximally available capacity. Below are
  pictures that will make this clearer.
- Concretely, one parameterized functional form is this:
  $$ S(p) = \begin{cases} s(p) & \text{for } p < p' \\ s(p') & \text{for } p \ge p' \end{cases} $$
  where
  $$ s(p) = ap^n $$
  and $a,n$ are constants. The idea, again, is that the function is concave
  and thereby monotonically increasing until price $p'$.
This price is calculated by setting the max quantity, $q^{max} = a(p')^n$, and
solving:

$$ p' \equiv (q^{max}/a)^{1/n} $$

NOTE: It goes without saying that the functional form above is up for change
if we want.

NOTE: It is theoretically well understood that an elastic supply curve can
reduce the danger of dramatic underpricing [@licalzi2005tilting]. In practice,
this is not used very often. One reason can be that the value of the good
might be lower for the auctioneer. In our case, however, this can be
different: we possibly have use for the space ourselves. So, as remarked
above, we might have a positive outside value and might not be willing to sell
at any price.

NOTE: In principle, we can also consider an ex-post adjustment as suggested by
[@back2001auctions]. That is, we reduce the supply after the submission of
demand schedules. Given the repeated nature of our auctions, I do not think
this is a good idea. There might also be implementation issues with such a
solution.

### Bidders

Each bidder can submit several (quantity, price) pairs. We thereby elicit his
(partial) demand function $d_i \colon P \to Q$. Note that, in practice, we
will limit the number of admissible bids.

### Aggregation of demand

A uniform price auction proceeds by aggregating demand and matching it to
supply. This allows us to determine a market-clearing price, if it exists. A
reason for non-existence can be increasing demand curves. Now, first, consider
the aggregate demand:

$$ D(p) \equiv \sum_j d_j(p) $$

Fix the highest price at which demand still (weakly) exceeds supply (if it
exists). That is,

$$ p^{*} = \max\{p \mid D(p) \ge S(p)\} $$

Note that in the case where there is excess demand at this price, i.e.
$D(p^{*}) > S(p^{*})$, we need an allocation rule that determines who gets
what. Note also that if demand schedules $d_j(p)$ were continuous, excess
demand would not be feasible.
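The supply schedule and the clearing computation above can be made concrete with a short sketch. The parameter values (`a`, `n`, `q_max`), the price grid, and the `(price, quantity)` bid representation are illustrative assumptions, not part of the specification:

```python
# Illustrative sketch of S(p) = a * p^n capped at q_max, aggregate demand
# D(p), and the clearing price p* = max{p : D(p) >= S(p)} on a price grid.
# All parameter values and the bid representation are assumptions.

def supply(p: float, a: float = 100.0, n: float = 0.5, q_max: float = 1000.0) -> float:
    """Concave up to p' = (q_max / a) ** (1 / n), constant afterwards."""
    return min(a * p ** n, q_max)

def aggregate_demand(bids: list[tuple[float, float]], p: float) -> float:
    """D(p): a (price, quantity) bid is active at any price at or below its stated price."""
    return sum(q for bid_p, q in bids if bid_p >= p)

def clearing_price(bids: list[tuple[float, float]], tick: float = 1.0, p_max: float = 50.0):
    """Scan the grid and keep the highest p where demand still covers supply.
    A positive `excess` signals that the allocation rule must ration the margin."""
    p_star, excess = None, 0.0
    p = tick
    while p <= p_max:
        d, s = aggregate_demand(bids, p), supply(p)
        if d >= s:
            p_star, excess = p, d - s
        p += tick
    return p_star, excess
```

For example, with bids `[(10, 400), (20, 300), (30, 200)]` the scan settles at $p^{*} = 20$, where aggregate demand of 500 still covers the supply of roughly 447, leaving excess demand of about 53 for the tie-breaking rule.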
Lastly, note that the fact that actual bids are discontinuous might reduce the
likelihood of a low-price outcome [@kastl2011discrete].

### 2) Allocation rule

In case of excess demand, how is the excess demand cleared? The standard rule
is "higher price, higher priority", sometimes referred to as pro-rata at the
margin:

$$ q_j = d_{j,>}(p^{*}) + \left(S(p^{*}) - D_{>}(p^{*})\right) \frac{d_{j}(p^{*}) - d_{j,>}(p^{*})}{D(p^{*}) - D_{>}(p^{*})} $$

where $d_{j,>}(p^{*})$ is the individual demand of $j$ at prices higher than
$p^{*}$ and $D_{>}(p^{*})$ is the aggregate demand at prices higher than
$p^{*}$. So, an agent first gets the demand he stated at higher prices. Then
he receives a share of the supply remaining at the market-clearing price,
proportional to his marginal demand.

There is an alternative allocation rule, though: give each bidder a share of
supply relative to his individual demand at the clearing price. That is,

$$ q_i = S(p^{*}) \, \frac{d_i(p^{*})}{D(p^{*})} $$

This means: the larger a bidder's demand relative to overall demand at that
price, the larger the share he will get. In contrast to the rule above, only
the demand at the clearing price matters; demand stated at higher prices is
irrelevant. What is nice about this: it creates stronger incentives to bid
closer to true valuations and reduces the tendency to end up in a low-price
equilibrium.

Note: I had a slightly different idea (a convex combination between high and
low marginal priority), but the above rule has been discussed in the
literature [@kremer2004divisible]; see also [@kremer2004underpricing]. It
makes sense to follow it, as we can then rely on some benchmark results.

## Discriminatory Price Auction

This auction format follows standard specifications for discrete
discriminatory auctions. We consider a fixed supply, $q^{max}$.

### Bidders and aggregation of demand

As under the uniform price auction, bidders supply discrete, individual demand
schedules. Note that we also assume a maximal number of bids per bidder.
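As a sanity check on the two rationing rules above, here is a minimal sketch. The inputs (per-bidder demand at $p^{*}$, demand stated strictly above it, and the supply $S(p^{*})$) are passed in directly; the list-based representation and the example numbers are assumptions for illustration:

```python
# Sketch of the two tie-breaking rules under excess demand at p*.
# d_at[j]: bidder j's (cumulative) demand at p*; d_hi[j]: demand stated
# strictly above p*; s_star: supply S(p*). Representation is illustrative.

def pro_rata_at_margin(d_at: list[float], d_hi: list[float], s_star: float) -> list[float]:
    """Higher price, higher priority: inframarginal demand is filled first;
    the residual supply is split in proportion to marginal demand."""
    D_at, D_hi = sum(d_at), sum(d_hi)
    residual = s_star - D_hi  # supply left for the marginal bids
    return [hi + residual * (at - hi) / (D_at - D_hi) for at, hi in zip(d_at, d_hi)]

def pro_rata_total(d_at: list[float], s_star: float) -> list[float]:
    """Alternative rule: split supply in proportion to total demand at p*;
    bids above p* confer no priority."""
    D_at = sum(d_at)
    return [s_star * at / D_at for at in d_at]
```

With two bidders demanding 300 and 200 at the clearing price, of which 100 and 0 were stated above it, and a supply of 400, the first rule allocates (250, 150) while the second allocates (240, 160): both exhaust supply, but only the second ignores inframarginal priority.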
Aggregation of demand works as for the uniform price auction.

### Clearing price and allocation rule

The clearing price can be set as above for the uniform auction. The key
difference to the uniform price auction is the payment. Instead of each bidder
paying a constant price for all units, each bidder pays according to the bids
he included at or above the market-clearing price, $p^{*}$. In the case where
$D(p^{*}) = S(p^{*})$, i.e. where supply equals demand, the individual
transfer $t_i$ is:

$$ t_i = \sum_{p_j \ge p^{*}} d_i(p_j) \, p_j $$

The units allocated to $i$ follow accordingly. In case of excess demand at the
market-clearing price, we can again apply the pro-rata-at-the-margin
tie-breaking from above. Hence:

$$ t_i = \sum_{p_j > p^{*}} d_i(p_j) \, p_j + p^{*} q_i^{*} $$

where $q_i^{*}$ is the pro-rata quantity as defined for the uniform price
auction. In principle, alternative tie-breaking rules can be considered; it is
not clear, though, what might be gained.

# TODO

- How do we parameterize the exact shape of the supply curve?
- Related: can we absorb gas space that does not get allocated through the
  primary auction? If so, this would also be a natural way to price the
  auction parameters.
- Can we adapt it over time, or would this be bad?
- Once we have converged: we should red-team the design with some external
  experts I know.
- NOTE: The formats we propose still require extensive simulation and
  analysis. THEY SHOULD BE TREATED AS PRELIMINARY AT THIS POINT.

<!-- End of static_standard_auctions.md -->