---
tags: mev-boost-community-call
---
Recording: https://www.youtube.com/watch?v=-sNlISQF_ow
# Summary
- Work is in progress for 4844 readiness. There are a few pieces to coordinate to add new types that support blobs while keeping the current flow working.
- Next steps:
    - About two weeks away from participating in a testnet. When ready to test, relays should watch for latency spikes on API calls since more data will be passed around.
- Various improvements have been implemented in the MEV-Boost relay, most of them aimed at simulating as few bid submissions as possible.
- Optimistic relaying is proving to be a challenge on the operational end. Work on V2 is progressing, but data explorations hint at potentially serious issues with on-time payload delivery by builders.
- Next steps:
    - More data exploration and thinking through colocation incentives.
- Discussions are heating up around relay funding. There’s general agreement that relays provide important services, and the question now is which route to pursue for sustainable operation and R&D.
# MEV-Boost and Relay
### 4844
First up is the topic of 4844 readiness w.r.t. MEV-Boost. Most of the work here is changing API types so that they support blobs. There are a few pieces to coordinate so that both the new and the current flows are supported. With blobs, relays may see more data passed around per API call, so relays should monitor API call latencies; if there are spikes, it may simply be because there is a lot more data to send. Flashbots is about two weeks away from being ready to participate in the testnet. The ongoing work can be tracked at https://github.com/flashbots/mev-boost/tree/develop-deneb.
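For context, a rough sketch of what the blob-carrying types could look like on the relay side. Type and field names are illustrative approximations, not the exact builder-spec or mev-boost-relay definitions:
```go
package main

import (
	"encoding/json"
	"fmt"
)

// Sketch of a blob-carrying getPayload response under Deneb: the execution
// payload plus a blobs bundle (commitments, proofs, blobs). Names are
// illustrative only.
type Blob []byte // a real blob is 4096 field elements * 32 bytes = 128 KiB
type KZGCommitment [48]byte
type KZGProof [48]byte

type BlobsBundle struct {
	Commitments []KZGCommitment `json:"commitments"`
	Proofs      []KZGProof      `json:"proofs"`
	Blobs       []Blob          `json:"blobs"`
}

type ExecutionPayloadAndBlobsBundle struct {
	ExecutionPayload json.RawMessage `json:"execution_payload"` // pre-4844 payload fields, unchanged
	BlobsBundle      BlobsBundle     `json:"blobs_bundle"`
}

func main() {
	// A block carrying 6 blobs adds roughly 6 * 128 KiB of data to the
	// payload response, which is where the latency concern comes from.
	fmt.Printf("approx extra bytes for 6 blobs: %d\n", 6*4096*32)
}
```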
### Relay Performance
(Chris from Flashbots gives an update and overview). There have been many relay performance improvements over the past month, most of them aimed at simulating as few submissions as possible. Redis improvements in the critical path during block submission and storage brought resource usage down by up to 10%. The non-optimistic validation path is currently at about 200ms using a stock EC2 instance and the stock code.
At previous calls there have been discussions around cancellable vs. non-cancellable bid submissions. The optimizations are relevant here since, for non-cancellable bids, any bid below the top bid can be discarded. Chris mentions that about 2/3 of all submissions are now being skipped. The improvements will help reduce operating costs and effort for all relays.
Write-ups of the improvements:
- https://github.com/flashbots/mev-boost-relay/blob/main/docs/docs/20230602-recent-performance-improvements.md
- https://github.com/flashbots/mev-boost-relay/blob/main/docs/docs/20230605-more-redis-performance-improvements.md
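As a rough illustration of the skip logic described above (a sketch under simplified assumptions, not the actual mev-boost-relay code):
```go
package main

import (
	"fmt"
	"math/big"
)

// Sketch: skip simulating non-cancellable submissions that cannot become the
// top bid. Illustrative only; the real relay tracks top bids per slot /
// parent hash / proposer pubkey.
type submission struct {
	builder     string
	value       *big.Int
	cancellable bool
}

func shouldSimulate(sub submission, topBid *big.Int) bool {
	if sub.cancellable {
		// Cancellable bids can lower the builder's previous bid, so they
		// still need to be processed even if below the current top bid.
		return true
	}
	// Non-cancellable: only simulate if it would beat the current top bid.
	return sub.value.Cmp(topBid) > 0
}

func main() {
	topBid := big.NewInt(50_000_000) // current top bid in wei (toy number)
	subs := []submission{
		{"builder-a", big.NewInt(40_000_000), false}, // skipped
		{"builder-b", big.NewInt(60_000_000), false}, // simulated
		{"builder-c", big.NewInt(10_000_000), true},  // simulated (cancellable)
	}
	for _, s := range subs {
		fmt.Printf("%s simulate=%v\n", s.builder, shouldSimulate(s, topBid))
	}
}
```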
### DB Migration
Flashbots had to do a large DB migration recently and put together a guide, which can be found at https://github.com/flashbots/mev-boost-relay/pull/464. Chris mentioned that those looking to do a similar migration can reach out and Flashbots can support them.
# Optimistic Relay
### General update
(Justin gives updates). First, the “good news”: there are now 21 builders that have each deposited 1 ETH. Only one builder has asked for a refund, because they were shutting down as a builder. So far there have been no optimistically relayed blocks that were invalid and signed by the proposer, so no refunds have had to be given out. Progress is being made on optimistic relaying V2.
The team is additionally considering the possibility of increasing the collateral limit from 1 ETH up to something like 10 ETH. Some builders have asked for this, since a lot of MEV opportunity is in large-MEV blocks, which under the current limit do not benefit from the optimistic relaying service.
Now the “bad news”: right now the Ultrasound Relay is the only relay running with optimistic relaying turned on, despite the code being merged upstream. The team found that it has been more of an effort than anticipated, even after the initial builder onboarding. There is a lot of maintenance: every other day there is something to be done, some of it manual and some semi-automated. For example, whenever there is a simulation timeout the relay has to demote the builder. Re-orgs are also tricky to handle, since a submission can't be simulated once its parent block has changed. The Ultrasound team has built custom infrastructure to automate some of the processes required, for example re-promoting a builder after a timeout. If any operators want to do optimistic relaying, the team can give them a heads-up on what's involved. There is still maintenance, even though it's reduced.
The relay is also dealing with issues around “spammy builders”: builders whose inclusion rate is less than 1 per 100k submissions. This activity drives up costs through simulation resource consumption while providing the least value to the ecosystem. Ultrasound is not doing rate limiting right now, partly because it's easy to get around, but also because they want to keep an archive of every single submission as a dataset for future research. They used to have Cloudflare in front but got rid of it to optimize for optimistic relaying, since it adds an additional 10ms. They are considering rate limiting based on collateral.
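A minimal sketch of the inclusion-rate heuristic mentioned above (the 1-per-100k threshold comes from the discussion; everything else is illustrative, not Ultrasound's actual policy):
```go
package main

import "fmt"

// isSpammy flags builders whose inclusion rate is below 1 landed block per
// 100k submissions. Thresholds are illustrative.
func isSpammy(submissions, blocksLanded uint64) bool {
	if submissions < 100_000 {
		return false // not enough data to judge
	}
	return blocksLanded*100_000 < submissions // rate < 1 per 100k
}

func main() {
	fmt.Println(isSpammy(250_000, 1)) // true: 1 landed block in 250k submissions
	fmt.Println(isSpammy(250_000, 5)) // false: 5 landed blocks in 250k submissions
}
```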
### Optimistic Relaying V2 update
(Mike gives update). To recap what optimistic relaying V2 is at a high level, it’s basically header-only parsing of bids in order to mark a bid as eligible before the rest of the body is downloaded.
The code is implemented and currently at the stage of running experiments and collecting data. There is a pull request open against the MEV-Boost relay repo: https://github.com/flashbots/mev-boost-relay/pull/466. So far, bid processing is about two orders of magnitude faster than when decoding the entire payload. The team is working with the rsync builder to demo this.
There is added complexity and a new possible state, where a bid is received and marked as eligible but the payload may not be available. Therefore, there is also a new event that leads to a demotion: payload not received in time. A situation like this means the relay can't publish the payload because it does not have it. Another way to think about this is that the relay takes on more risk when marking a bid as eligible to win the auction, since the payload has to be delivered separately. Similarly, the builder takes on the risk of sending “header only” with the payload to follow.
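A minimal sketch of the header-first idea, assuming a made-up fixed-size prefix rather than the actual V2 wire format:
```go
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"strings"
)

// Read only a small fixed-size prefix of the submission (enough to recover
// slot and bid value), decide eligibility, and leave the rest of the body to
// be consumed afterwards. The 16-byte layout below (slot, then value, both
// little-endian uint64) is a made-up illustration, not the V2 wire format.
const headerSize = 16

func readBidHeader(r io.Reader) (slot, value uint64, err error) {
	var buf [headerSize]byte
	if _, err = io.ReadFull(r, buf[:]); err != nil {
		return 0, 0, err
	}
	slot = binary.LittleEndian.Uint64(buf[0:8])
	value = binary.LittleEndian.Uint64(buf[8:16])
	return slot, value, nil
}

func main() {
	// Fake submission: 16-byte "header" followed by a large opaque body.
	var hdr [headerSize]byte
	binary.LittleEndian.PutUint64(hdr[0:8], 7_000_123)   // slot
	binary.LittleEndian.PutUint64(hdr[8:16], 42_000_000) // bid value (toy units)
	body := strings.NewReader(string(hdr[:]) + strings.Repeat("x", 1<<20))

	slot, value, err := readBidHeader(body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("eligibility check can run now: slot=%d value=%d (body not yet decoded)\n", slot, value)
}
```
The demotion case described above corresponds to the header being accepted but the remainder of the body never arriving in time.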
Some additional ideas are being explored for V2:
- Going straight to builder-side publishing: avoiding the MEV-Boost relay intermediary altogether could help lower complexity and avoid some of the latency considerations with V2 header-only parsing
- Early cutoff for V2 submissions: since payload delivery would now be separate, the idea would be to cut off submissions some time prior to the deadline to mitigate attestation deadline issues
- Adding additional criteria for builders to meet to qualify for V2 optimistic relaying
On the topic of risks from V2, Chris raises the question of whether the added complexity is worth the change if decoding takes about 10ms on average. On the Flashbots end, they are seeing 10-50ms latency for reading + decoding payloads (from over 27k successful submissions in the last 30 minutes). The other side of the argument is that the long tail is long, and the savings from decoding optimization might become more meaningful with 4844.
There is also a concern that V2 encourages some colocation: specifically, a stronger incentive for the builder to care about latency, since they need to make sure that the payload is delivered on time.
In conclusion, more clarity is needed around what incentives this introduces, mostly around colocation. Perhaps the biggest issue is the builder-side risk around failing to deliver a payload.
# Miscellaneous
### Changing MEV-Boost to make the getPayload() call to all relays
One known risk with relays is that, for whatever reason, the relay cannot return the payload for the winning bid (the one the client signs over). If it can't get the winning bid's body, MEV-Boost can't do much, and the result is a missed slot. The idea is that during a given slot's auction, other relays which did not win the auction might still have the block. With this proposal the client would make the payload request to multiple relays, in essence broadening the “data availability” of the payload. Some concerns here include an open question of how to attribute the “win”, and thinking through the incentives of connecting to a single relay for getHeader() calls but multiple relays for getPayload() calls.
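A minimal sketch of the fan-out idea, assuming simplified relay URLs, timeout, and error handling (not mev-boost's actual implementation):
```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"net/http"
	"time"
)

// After the proposer signs the winning header, send the getPayload (blinded
// block) request to every connected relay rather than only the winning one,
// and take the first valid response.
func getPayloadFromAnyRelay(ctx context.Context, relays []string, signedBlindedBlock []byte) ([]byte, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	results := make(chan []byte, len(relays)) // buffered so late responders don't block
	for _, relay := range relays {
		go func(url string) {
			req, err := http.NewRequestWithContext(ctx, http.MethodPost,
				url+"/eth/v1/builder/blinded_blocks", bytes.NewReader(signedBlindedBlock))
			if err != nil {
				return
			}
			req.Header.Set("Content-Type", "application/json")
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return
			}
			defer resp.Body.Close()
			if resp.StatusCode != http.StatusOK {
				return
			}
			var buf bytes.Buffer
			if _, err := buf.ReadFrom(resp.Body); err == nil {
				results <- buf.Bytes()
			}
		}(relay)
	}

	select {
	case payload := <-results:
		return payload, nil // first relay to return the payload wins the race
	case <-ctx.Done():
		return nil, fmt.Errorf("no relay returned the payload in time")
	}
}

func main() {
	relays := []string{"https://relay-a.example", "https://relay-b.example"}
	_, err := getPayloadFromAnyRelay(context.Background(), relays, []byte(`{}`))
	fmt.Println("result:", err)
}
```
One thing a sketch like this does not answer is attribution: any relay could return the payload, but only the relay whose bid actually won should be credited with the delivery.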
### Allowing coinbase payments
Different relays may implement different per-block payment mechanisms. For example, the Flashbots relay requires a payment transaction at the end of the block; it is now starting to consider opening up to coinbase payments. Part of the reasoning is that transaction fees add up, and a payment transaction is only useful if you want to take a profit. There is probably also value in standardization. The Flashbots relay will start allowing both forms of payment.
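A minimal sketch of the two payment styles and the checks a relay might run; both checks are illustrative, not the relay's actual validation logic (which runs against simulated block state):
```go
package main

import (
	"fmt"
	"math/big"
)

// Style 1: an explicit payment transaction at the end of the block that
// transfers exactly the bid value to the proposer's fee recipient.
func validLastTxPayment(lastTxTo, feeRecipient string, lastTxValue, bidValue *big.Int) bool {
	return lastTxTo == feeRecipient && lastTxValue.Cmp(bidValue) == 0
}

// Style 2: coinbase payment, where the proposer's fee recipient is set as the
// block's coinbase. The check becomes: did the fee recipient's balance
// increase by at least the bid value over the course of the block?
func validCoinbasePayment(balanceBefore, balanceAfter, bidValue *big.Int) bool {
	delta := new(big.Int).Sub(balanceAfter, balanceBefore)
	return delta.Cmp(bidValue) >= 0
}

func main() {
	bid := big.NewInt(1_500_000_000_000_000) // 0.0015 ETH in wei
	fmt.Println(validLastTxPayment("0xFEE", "0xFEE", bid, bid))                              // true
	fmt.Println(validCoinbasePayment(big.NewInt(0), big.NewInt(2_000_000_000_000_000), bid)) // true
}
```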
Education / tooling may be important for validators / staking pools to show that they do indeed get paid. Operators are aware of cases of validators confused about block values and extra transactions. So far one fix has been showing the historical ETH balance as an illustration of continuous payments. It is also important to account for transfers in and out, and to discount any withdrawals, when calculating the start/end balances.
It may be good to collaborate on some sort of reference implementation that everyone could adopt.
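As one illustration of the reconciliation described above, a minimal sketch with toy numbers (not an actual tool; a real one would pull these figures from on-chain history):
```go
package main

import "fmt"

// earnedRewards: rewards over a period are the end balance minus the start
// balance, adding back withdrawals and transfers out and subtracting
// transfers in. Values are toy numbers in gwei.
func earnedRewards(startBalance, endBalance, withdrawals, transfersOut, transfersIn int64) int64 {
	return endBalance - startBalance + withdrawals + transfersOut - transfersIn
}

func main() {
	// Example: the balance looks almost flat, but only because most rewards
	// were withdrawn during the period.
	fmt.Println(earnedRewards(32_000_000_000, 32_010_000_000, 90_000_000, 0, 0)) // 100_000_000 gwei earned
}
```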
### ZK bids
Early explorations are underway into implementing POCs of ZK bids. These could serve to improve several aspects of the current bidding-strategy landscape, where builders compete on information access to outbid each other by small increments.
Could also potentially be used to build some cool things with second-price auctions.
# Relay Funding
It's a big topic and the discussion is only starting at the end of this call. There have been proposals discussed here and there, and the general agreement is that relays provide an important service which needs to be acknowledged. There is also agreement that it probably makes sense to seek public goods funding for these relays; the question is how to do this.
Matt from Blocknative shares thoughts on what seem like the two obvious paths: 1) grants, which feel better but are indeterminate, and 2) some sort of commission for relay operation, since currently validators receive all the benefit from relay services. Matt thinks that vertical integration is currently incentivized, and the idea is to counter this. In terms of whom to approach with grant requests, Alex doesn't think that the EF or Protocol Guild would be involved. Part of the challenge is that the design space is big. Flashbots is thinking through relay funding and may have a more concrete proposal on relay operation and R&D around EthCC.