This document outlines orderflow sharing objectives, potential design goals, initial ideas, and research areas, providing a preliminary implementation to explore the design space and uncover unknowns.
The central aim of orderflow sharing is to boost the confirmation rate of transactions traversing private mempools (e.g., Flashbots' Protect and MEV-Share) by dissociating from any single party's orderflow. Broadly, two approaches exist to achieve this objective: sharing the orderflow externally with others, and aggregating other builders' orderflow internally. Feasibility, competition maximization, and SUAVE compatibility act as guiding principles in evaluating these strategies. Furthermore, this design is intended to be a living, iterative product that will evolve in the wild over the next six months.
Some possible design goals and a discussion:
Permissionless - Any builder within reason should be able to share orderflow.
Latency Aware - Builder games usually devolve into submitting blocks as late as possible; this flow should therefore account for such games by minimizing latency as much as possible, i.e., reducing the overhead of sharing and receiving orderflow.
Traceable - Orderflow should be able to be tracked throughout the process, including sharing, receiving, and submitting onchain.
Strong Identity - All actors in the process should have their actions attributable to their identity in the case of fraud.
Sybil Resistant - Builders should not be able to take advantage of the system by registering new accounts.
Orderflow decoupling - The totality of a builder's orderflow should not affect the inclusion rate of all of its orderflow; more concretely, a subset of orderflow should be submittable to this solution and improve its chances of inclusion.
Programmable preference expression - An actor submitting orderflow to this system should have fine-grained control over how their orderflow is treated, most likely expressed through a program or preset programs.
Achieving all the goals outlined above within the constraints of time feasibility is unlikely. As a result, we divide them into short-term and long-term categories. Short-term goals focus on permissionlessness, traceability, and orderflow decoupling. In the long term, we aim for strong identity and Sybil resistance. Programmable preference expression has been excluded from these classifications, as it may be attainable in the medium term. However, it should not delay product launch, since short-term goals already ensure feasibility and competition maximization. Additionally, the precise language for expressing execution interweaving remains uncertain. Importantly, all goals align with SUAVE compatibility, as they do not contradict the broader objectives of SUAVE.
There are two primary approaches to achieving our goals: either we can send our orderflow to other builders, or they can send their orderflow to us. There are a few unique aspects to the orderflow sharing problem which are not in our typical Ethereum distributed-systems design space. Firstly, orderflow sharing is opt-in and considered a "public good" at this point, so a decline in sharing does not result in liveness or safety failures for any chain. Secondly, we can impose arbitrary conditions on order sharing, providing a novel approach to addressing the issue of MEV exceeding equivocation penalties in our solution.
Let's entertain what an architecture for each would look like.
This approach falls into the first category of "sharing our orderflow with others". The key pieces this approach relies on are:
Unfortunately, from this description alone we can see that it presents significant complexity and is unlikely to be possible in the next 6 months. Nonetheless it is useful to think through for longer-term ideas! A rough artistic architecture rendition can be seen below, where orders originate with users, go through an orderchain, and end up on the main chain!
Note: there are a couple of inaccuracies in the above diagram. First, a batch state post to L1 that contains bundles for block X would most likely not be confirmed at block X - 1. Second, orderflow data flowing from the L2 through builder-boost would require a read from the builder.
An orderstream on the orderchain that users submit their orders to.
Builder registration on the orderchain by bonding X Eth to a pubkey, forming a registered set pk1, …, pkn.
Orderflow for a slot encrypted to a random pubkey from the registered set (e.g. pk1) and posted to the order sharing contract on the orderchain.
A "sim and don't include if more than X eth" constraint on shared orders.
Fraud proofs of the form "tx1, tx2, etc exist in the txn trie and tx1, tx2, etc do not exist in that order".
// orderchain smart contract
public view getOrders(blockNumber) bundle[]
public view getRegistrations() pubkeys[]
public register() bool
public submitFraudProof(executionPayloadHeader, fraudProof) bool
public postOrders(bundle[]) bool
// builder-boost
func publishOrders(bundle[]) bool
func getOrders(blockNumber) bundle[]
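To make the fraud condition above a bit more concrete, here is a minimal sketch of the ordering check a submitFraudProof call would ultimately have to demonstrate: every transaction of a shared bundle is present in the block, but not in the order the bundle specified. This is purely illustrative; the function name orderViolated and the flat hash-list representation are assumptions, and a real proof would replace the naive membership checks with Merkle proofs against the transactions trie committed to by the executionPayloadHeader.

```go
// Hypothetical sketch of the ordering condition a fraud proof would demonstrate:
// all bundle txs appear in the block, but the bundle's relative order is broken.
package main

import "fmt"

// orderViolated reports whether all bundleTxs appear in blockTxs
// but not in the relative order the bundle specified.
func orderViolated(bundleTxs, blockTxs []string) bool {
	pos := make(map[string]int, len(blockTxs)) // block tx hash -> position
	for i, h := range blockTxs {
		pos[h] = i
	}
	last, inOrder := -1, true
	for _, h := range bundleTxs {
		p, ok := pos[h]
		if !ok {
			return false // a bundle tx is missing entirely: a different failure mode
		}
		if p < last {
			inOrder = false // present, but earlier than a preceding bundle tx
		}
		last = p
	}
	return !inOrder
}

func main() {
	// The bundle wanted tx1 before tx2; the block included tx2 before tx1.
	fmt.Println(orderViolated([]string{"tx1", "tx2"}, []string{"tx3", "tx2", "tx1"})) // true
}
```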
I believe there is merit in the argument that, from a market competitiveness perspective, sharing orderflow is preferable to receiving partial blocks and appending our own orderflow. However, this approach is complex and serves more as a thought exercise in exploring possibilities. With the advent of SUAVE, the distinction between sharing and non-sharing parties may become meaningless anyway.
Why can only 1 builder decrypt the orderflow? Because it's currently not possible to attribute a builder to a block in a trustless manner (extra data is non-unique, etc). If we could solve builder attribution, we could share bundles with ALL builders; this is a potential research area where techniques like steganography may become useful. Additionally, with enshrined PBS it's possible that a builder's pubkey will be included in the beacon block, making this less of an issue.
This flow is Ethereum-specific and relies on a beacon chain construction, which may not be a significant issue but warrants consideration. Additionally, it's unclear if the "sim and don't include if more than X eth" constraint is meaningful. It may be possible to construct a series of n bundles, each paying out 1 Eth, that could be unbundled for > n * 1 Eth. Were this route to be pursued in the future, this would be an area of research/adversarial analysis to explore. Another consideration here is whether a rollup can handle the bundle throughput that builders see today. Anecdotally, some builders receive 1-10k+ bundles per slot; at the upper end (10k+ bundles over a 12-second slot, with a few transactions per bundle) that means the chain would need to process roughly 2k transactions per second.
One question that arises when considering the constraints this approach places on orderflow sharing is "how much orderflow meaningfully contributes to market competition?" For example, if only 10% of orderflow is shared, does this help raise the tide for new entrants to the market, or, because of bundle flow dynamics, do we need >90% for it to be meaningful? Following is a rough sketch of how we could think about the effects on market competitiveness, which is potentially even useful for product image strategy.
We do our simulation naively by making a few assumptions. First, we equate increased orderflow to a proportional increase in the chance of winning the block auction. Second, we assume that an increase in blocks won in nearby preceding blocks equates to a higher likelihood of winning the next nearby blocks. In this case we take the average over the last 4 hrs worth of blocks, run the simulation for 3 days worth of blocks, and use 40 builders with different win rate probabilities. The initial probabilities are 66.66% split amongst 3 high-class builders, 25% amongst 7 medium builders, and 8.34% amongst 30 low-class builders. A minimal sketch of this simulation appears after the scenario list below.
num_steps = 3 days worth of blocks
period = 4 hours worth of blocks
num_builders = 40
0% win boost - the baseline simulation of this basic environment.
10% win boost to all lower class builders - extreme redistribution of win rate.
0.1% win boost to 1/num_lower_class builders - small boost to a constrained set, based on our solution to the builder attribution problem.
10% win boost to 1/num_lower_class builders - large boost to a constrained set, based on our solution to the builder attribution problem.
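For concreteness, below is a minimal sketch of how such a naive simulation could be wired up, assuming 12-second slots, a simple proportional feedback rule for wins in the trailing period, and a winBoost knob applied only to the lower-class builders. The variable names and exact feedback formula are illustrative assumptions, not the actual code behind these runs.

```go
// Rough sketch of the naive win-rate simulation described above.
package main

import (
	"fmt"
	"math/rand"
)

const (
	numSteps    = 3 * 24 * 300 // ~3 days of blocks at 12s per slot
	period      = 4 * 300      // ~4 hours of blocks
	numBuilders = 40
	winBoost    = 0.10 // scenario knob: 0, 0.001, or 0.10
)

func main() {
	// Initial win probabilities: 66.66% across 3 high-class, 25% across 7 medium,
	// 8.34% across 30 low-class builders.
	probs := make([]float64, numBuilders)
	for i := range probs {
		switch {
		case i < 3:
			probs[i] = 0.6666 / 3
		case i < 10:
			probs[i] = 0.25 / 7
		default:
			probs[i] = 0.0834 / 30
		}
	}

	wins := make([]int, numBuilders)
	history := make([]int, 0, numSteps) // winner of each simulated block

	for step := 0; step < numSteps; step++ {
		// Trailing window of the last `period` blocks.
		window := history
		if len(window) > period {
			window = window[len(window)-period:]
		}
		recent := make([]int, numBuilders)
		for _, w := range window {
			recent[w]++
		}

		// Weight = base probability, boosted for lower-class builders (the
		// orderflow-sharing scenario) and nudged up by recent wins.
		weights := make([]float64, numBuilders)
		total := 0.0
		for i := range weights {
			w := probs[i]
			if i >= 10 {
				w += winBoost * probs[i]
			}
			if len(window) > 0 {
				w += probs[i] * float64(recent[i]) / float64(len(window))
			}
			weights[i] = w
			total += w
		}

		// Sample the block winner proportionally to the weights.
		r := rand.Float64() * total
		winner := numBuilders - 1
		for i, w := range weights {
			r -= w
			if r <= 0 {
				winner = i
				break
			}
		}
		wins[winner]++
		history = append(history, winner)
	}

	fmt.Println("blocks won per builder:", wins)
}
```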
This approach involves other builders sharing their orderflow with us, raising two concerns: 1) reduced bandwidth efficiency, and 2) increased trust in the Flashbots builder. The first concern is manageable by avoiding incentivizing multiple partial block combinations, especially in an EIP-4844 context. The second issue is more fundamental and won't be resolved here, but can be addressed as part of long-term SUAVE initiatives.
The above diagram shows our two main options for where the "block extender" should live; a tradeoff analysis is presented later on. In case the diagram is too abstract: the black cubes are regular eth txns, red are back-runs for mev-share, purple are bundles, and light green are partial blocks.
Much of the work on MEV-share bundle inclusion and validity conditions can be adapted for this API. The goals of the API should be maximal execution preference expression, as well as respecting resource constraints on hardware.
Append - orderflow added to the end of the block
Prepend - orderflow added to the beginning of the block
Interleave - superset of Append and Prepend, with the possibility of merging orderflow into the partial block (eventually via programs)
Append is the simplest approach, requiring initial simulation and the usual bundle merging within the remaining gas. Prepend needs partial block simulation, bundle merging on the pre-sim canonical state root, usual bundle merging, and then finally partial block re-insertion and simulation. A rough sketch of the Append path appears below.
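The following is a minimal sketch of the Append path under stated assumptions: the received partial block is simulated first as the block prefix, then our own bundles are merged into whatever gas remains. The types (Bundle, PartialBlock, BlockEnv) and helpers (simulate, appendExtend) are hypothetical stand-ins for the builder's internal APIs, not actual builder-boost code.

```go
// Hypothetical sketch of the Append strategy: simulate the partial block as the
// block prefix, then merge our own bundles into the remaining gas.
package main

import (
	"errors"
	"fmt"
)

type Bundle struct{ GasUsed uint64 }

type PartialBlock struct{ Bundles []Bundle }

// BlockEnv stands in for the builder's in-progress block/state environment.
type BlockEnv struct {
	GasLimit uint64
	GasUsed  uint64
	Included []Bundle
}

// simulate applies a bundle on top of the current environment and returns its gas use.
// A real builder would execute against EVM state and check validity conditions.
func simulate(env *BlockEnv, b Bundle) (uint64, error) {
	if env.GasUsed+b.GasUsed > env.GasLimit {
		return 0, errors.New("exceeds gas limit")
	}
	return b.GasUsed, nil
}

// appendExtend places the partial block at the front, then fills remaining gas
// with our own bundles (in whatever priority order the builder already uses).
func appendExtend(env *BlockEnv, pb PartialBlock, ours []Bundle) error {
	// 1. Initial simulation of the partial block as an atomic prefix.
	for _, b := range pb.Bundles {
		gas, err := simulate(env, b)
		if err != nil {
			return err // the whole partial block is invalid in this sketch
		}
		env.GasUsed += gas
		env.Included = append(env.Included, b)
	}
	// 2. Usual bundle merging within the remaining gas.
	for _, b := range ours {
		gas, err := simulate(env, b)
		if err != nil {
			continue // skip bundles that no longer fit or fail
		}
		env.GasUsed += gas
		env.Included = append(env.Included, b)
	}
	return nil
}

func main() {
	env := &BlockEnv{GasLimit: 30_000_000}
	pb := PartialBlock{Bundles: []Bundle{{GasUsed: 10_000_000}}}
	ours := []Bundle{{GasUsed: 15_000_000}, {GasUsed: 10_000_000}} // second won't fit
	if err := appendExtend(env, pb, ours); err != nil {
		fmt.Println("partial block invalid:", err)
		return
	}
	fmt.Println("gas used:", env.GasUsed, "bundles included:", len(env.Included))
}
```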
Interleave is the most complex and undefined, but also the most expressive and therefore the most SUAVE-like. Eventually this strategy could allow for arbitrary bundle insertion into the partial block. We could even imagine enabling a class of features exemplified by builders such as buildai, who scrape unrealized profit after bundles and redistribute it. Similarly to mev-share, you could imagine this component revealing partial blocks and some subset of their execution traces, with buildai-like actors specifying insertion.
But each of these methods needs to be more properly defined, researched, and adversarially analyzed. Even the simplest example, Append, has issues. If anyone can insert a transaction before a mev-share bundle, then they could effectively sandwich by also being the back-runner in mev-share. Additionally, if partial blocks don't specify a means to be decomposable, sending many combinations of partial blocks will be a dominant strategy. Besides some type of naive heuristic filtering, it's unclear to me at this time how you would avoid this scenario, and I would recommend it as a top priority in the infancy of this project.
Lastly, another surface to research around this API is interblock preferences. If half of the bundles in your partial block become invalid, how should this scenario be handled? It would be interesting to explore how a partial block could express execution preferences over multiple blocks, not just the single block the PoC handles.
Three potential locations for the "Block Extender" are: within the builder, matchmaker, or as a separate component. Let's discuss the pros and cons of each.
Pros:
Cons:
Pros:
Cons:
Placing the component outside the builder minimizes builder complexity and allows for future swappable execution layers. However, the latency impact and constraints of running within an "SGX SUAVE node" can have an architectural impact and need further research.
Pros:
Cons:
This last setup is the easiest to make a PoC of in the given time constraint, therefore it is our choice of architecture. But when bringing this to market, I believe "Inside Matchmaker" should be our most seriously considered option, or equivalently we should establish an upgrade path early on.
SendPartialBlock(ctx context.Context, args SendPartialBlockArgs)
type PBlock struct {
Inclusion PartialBlockInclusion
Body PartialBlockBody
Validity BundleValidity
hash atomic.Value // hash(bundle_1_hash, ..., tx1_hash)
}
type PartialBlockBody struct {
OrderedBundles []MevBundle
RemovableIndexes []int
}
type PartialBlockInclusion struct {
// defined very similarly to mev-share
}
type BundleValidity struct {
// defined very similarly to mev-share
}
RemovableIndexes front-runs the incentive to spam combinations of partial blocks, but it moves this heavy task onto the builder; a sketch of how the builder might consume it follows.
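As a hedged sketch of that tradeoff, the snippet below (built on the PartialBlockBody and MevBundle types above) shows one way a builder could consume RemovableIndexes: bundles at removable positions may be dropped when they fail simulation, while a failure anywhere else invalidates the whole partial block, so the sender no longer needs to submit every combination themselves. The pruneRemovable name, the simulateBundle callback, and the failure policy are illustrative assumptions, not PoC code.

```go
// Hypothetical pruning pass over a PartialBlockBody: bundles listed in
// RemovableIndexes may be dropped if they fail simulation; a failure anywhere
// else rejects the whole partial block. This is what shifts the combinatorial
// work from the sender to the builder.
func pruneRemovable(body PartialBlockBody, simulateBundle func(MevBundle) error) ([]MevBundle, error) {
	removable := make(map[int]bool, len(body.RemovableIndexes))
	for _, i := range body.RemovableIndexes {
		removable[i] = true
	}

	kept := make([]MevBundle, 0, len(body.OrderedBundles))
	for i, b := range body.OrderedBundles {
		if err := simulateBundle(b); err != nil {
			if removable[i] {
				// Sender marked this bundle as droppable; in a real builder,
				// dropping it would trigger re-simulation of the suffix.
				continue
			}
			return nil, err // non-removable bundle failed: reject the partial block
		}
		kept = append(kept, b)
	}
	return kept, nil
}
```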
This approach aligns much more closely with our guiding principles and design goals. Additionally, the potential of creating a new market for partial blocks is exciting and offers a solid research area to explore. Building out this approach is also relatively straightforward and looks extremely similar to many of Flashbots' current products, so it should feel familiar from inception to launch.
In this portion of the document we will go over the work that was done in the builder-boost repo. Based on the above discussions I built the PoC out inside of the builder, but a future version could be more portable and live inside the matchmaker.
The code developed for the PoC follows a very similar structure to the mev-share bundles API; this includes:
There are a couple shortcuts I took to focus on the end goal and progress the PoC:
After doing this rough sketch implementation, a few things made themselves evident as next steps.
Short - Medium Term
The two biggest surface areas here are:
Both of these seem fairly simple. On 1, I feel strongly that Append is enough to experiment safely in the market. On 2, making an argument appealing to increased market efficiency, as well as connecting it to concrete parts of the future SUAVE vision, is a good start, as is making the claim that Flashbots can be trusted with this product given its proven track record as a trusted actor.
Ultimately we can proceed with building on the simple Append approach as long as we preserve maximal optionality going forward. Below is a rough timeline of how I would approach building and researching based on this dive into the problem.
This gantt chart assumes starting next week, May 15th 2023, and doesn't necessarily represent only my contributions. On the API Design, Docs, and Infra portions especially, I imagine collaboration. As I'm not sure what level of resources I will have access to, this should be treated as a rough relative estimate, meaning the timings are most likely relative to each other but the durations may be off.
Some vague notes on where I see these types of solutions pushing (guiding) the builder market to.
With a bit of imagination, we could envision a world where a builder simply forwards all its bundleflow to flashbots/SUAVE - in fact a partial block as constructed above is essentially that! This development could be beneficial for several reasons. Firstly, operating a builder is a resource-intensive process, leading to silos of orderflow and inefficiencies that hinder the broader market from reaching its information-theoretic optimum. Additionally, the demanding requirements for running a builder make the market for execution less competitive. Secondly, managing reputation is a daunting challenge for an individual builder. A hierarchical structure of reputation may represent the future of the builder/MEV supply chain. If bundles with access lists can be trustlessly (e.g., zk proof for specific dApp execution) propagated from the "builders" or "orderflow providers" of the future, we can achieve rapid and scalable bundle combination in trusted environments such as SGX.
Additionally, SUAVE will need to consider how it fits into ePBS as well as MEV burn. One, potentially trivial, challenge I foresee is that SUAVE nodes will need to emit a series of block bids (headers) that the network will use to come to agreement on a "burn floor". Next, SUAVE will also need to be connected to a somewhat heavy gossipsub network for all PBS-supporting chains it services. And lastly, at Zuzalu there was some potential discussion of SUAVE being used as a tip oracle that would allow MEV burn to go from approximating 99.99% to 100%; this has very exciting implications for chain health, as well as its role in the Ethereum protocol.