---
tags: mev-boost-community-call
---
# mev-boost community call #3 recap
Recording: https://www.youtube.com/watch?v=U3ncYq60A2U
# Summary
- Things generally went well with the Capella hard fork. There was an issue where Prysm validators using MEV-Boost failed to propose blocks.
  - Next steps:
    - More end-to-end testing with hive for future upgrades
- A validator performed an unbundling attack abusing a missing validation in the MEV-Boost relay. This was a bug in the relay, but it points to the broader idea of unbundling.
  - Next steps:
    - More research on timing games / attestation races in the generic unbundling attack scenario
    - Continue and increase alerting
- Discussion with a favorable response to eliminating the block cancellation feature from the MEV-Boost relay.
  - Next steps:
    - Prototyping `getHeader()` streaming and making polling the default
    - More data collection around current rates of submissions that are block cancellations
# Capella
### Update from Terence
Things generally went well with Capella apart from an issue with Prysm where validators were unable to propose blocks. The issue was the Prysm client not populating the new `BlsToExecutionChanges` field in the blinded block, which caused invalid block hashes whenever there was a non-zero number of `BlsToExecutionChanges`.
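To illustrate the class of bug (a minimal sketch with hypothetical, simplified types, not Prysm's actual structs): when converting between the full and blinded block forms, every field has to be carried over, because dropping one changes the block's hash tree root and invalidates the signed header.
```go
package main

// Hypothetical types for illustration only; not Prysm's actual code.
type BLSToExecutionChange struct {
	ValidatorIndex     uint64
	FromBLSPubkey      [48]byte
	ToExecutionAddress [20]byte
}

type BeaconBlockBodyCapella struct {
	// ... other Capella fields elided ...
	BlsToExecutionChanges []BLSToExecutionChange
}

type BlindedBeaconBlockBodyCapella struct {
	// ... other Capella fields elided ...
	BlsToExecutionChanges []BLSToExecutionChange
}

// toBlinded converts a full block body into its blinded form. The bug class:
// forgetting to copy BlsToExecutionChanges changes the hash tree root, so the
// reconstructed block no longer matches what the proposer signed.
func toBlinded(full *BeaconBlockBodyCapella) *BlindedBeaconBlockBodyCapella {
	return &BlindedBeaconBlockBodyCapella{
		// ... copy the other fields ...
		BlsToExecutionChanges: full.BlsToExecutionChanges, // must not be omitted
	}
}
```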
Approximately 50 blocks were missed. What helped was relays identifying the Prysm User-Agent and blocking it, as well as the circuit breaker kicking in after 5 missed slots per epoch.
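In rough terms, the circuit breaker is a fallback from relay-built blocks to local block production once too many recent slots were missed. A minimal sketch of that logic, assuming the 5-missed-slots-per-epoch threshold mentioned above (the exact thresholds and behavior are client-specific):
```go
package main

// Illustrative threshold; actual values are client-specific configuration.
const maxMissedSlotsPerEpoch = 5

// missedSlotsInEpoch counts slots since the start of the epoch with no
// observed block. seenBlock maps slot -> whether a block was seen.
func missedSlotsInEpoch(epochStartSlot, currentSlot uint64, seenBlock map[uint64]bool) int {
	missed := 0
	for s := epochStartSlot; s < currentSlot; s++ {
		if !seenBlock[s] {
			missed++
		}
	}
	return missed
}

// shouldUseLocalBlock trips the circuit breaker: skip the relay and build the
// block locally when too many slots were recently missed.
func shouldUseLocalBlock(missedSlots int) bool {
	return missedSlots >= maxMissedSlotsPerEpoch
}
```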
The Prysm team published a report on the incident. Find it [here](https://www.notion.so/0ef5ccd795e54ae4894fa695f1a3e70b).
### Relay perspective from Flashbots / Ultrasound
Relays were observing the fork when they started to notice missed slots; specifically, invalid signatures on `getPayload()` calls, which they quickly narrowed down to blocks with bad signatures. As mentioned earlier, relays were able to identify Prysm by User-Agent and stop sending bids to those validators; this at least allowed for a fallback to local block production.
In the evening, relays still saw additional missed slots, but this time the cause was a configuration issue on proposers running a new Prysm release; missed slots then stabilized.
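As a sketch of the User-Agent stopgap described above (a hypothetical handler, not the actual mev-boost-relay code): the relay can decline to return a bid to affected clients, which forces a fallback to local block production.
```go
package main

import (
	"net/http"
	"strings"
)

// blockedUserAgents lists client identifiers the relay temporarily refuses to
// serve bids to. Illustrative; the real matching logic is operator configuration.
var blockedUserAgents = []string{"Prysm"}

func handleGetHeader(w http.ResponseWriter, r *http.Request) {
	ua := r.Header.Get("User-Agent")
	for _, blocked := range blockedUserAgents {
		if strings.Contains(ua, blocked) {
			// No bid returned (204 No Content), so the proposer falls back
			// to local block production.
			w.WriteHeader(http.StatusNoContent)
			return
		}
	}
	// ... normal path: look up and return the best bid for the requested slot ...
}
```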
### Testing and how to avoid in the future
A lot of relays and staking pools are rolling out additional monitoring to react quickly in similar situations. On the CL side, Lighthouse and Prysm are working on adding a User-Agent to the MEV-Boost request in the builder API, which would help scope an issue to a client for fast diagnosis.
Ultimately, this was a bug in Prysm. Alex is working with EF testing on end-to-end tests in hive. More testing, especially the end-to-end kind, is an area to focus on for the next hard fork and any other client release.
# Unbundling attack
### Summary
(Mike from Ultrasound Relay provides a high-level description). On April 2nd the relay team received an alert that a delivered payload didn’t match the block hash of the block that ended up in that slot. Later, the team was able to correlate this with an unbundling attack.
The idea behind the attack is to trick a relay into revealing the contents of a block (the cleartext transactions specifically) when it isn’t supposed to.
At the given slot, the proposer sent an invalid signed header to the relay. The relay verified the signature but not the actual header. Since the signature was valid, the relay released the contents of the block. The relay then tried to publish the invalid block, which failed. In the following seconds the proposer unwound the sandwich transactions (after baiting the bots), ran the cleartext transactions backwards, and built an actual valid block with their own transactions placed to take advantage of the first leg of the sandwich. Since this constructed block was valid, it was propagated and accepted, and so became the landed block.
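A sketch of the missing check, with simplified stand-in types (not the actual mev-boost-relay code): before revealing the payload, the relay should verify not only the proposer's signature but also that the signed header matches the bid it actually delivered for that slot.
```go
package main

import (
	"bytes"
	"errors"
)

// SignedBlindedHeader is a simplified stand-in for the signed blinded block
// the proposer sends back in getPayload().
type SignedBlindedHeader struct {
	Slot       uint64
	ParentHash []byte
	BlockHash  []byte
	Signature  []byte
}

// StoredBid is the bid the relay previously served via getHeader().
type StoredBid struct {
	Slot       uint64
	ParentHash []byte
	BlockHash  []byte
	Payload    []byte // execution payload to reveal on success
}

// getPayload releases the payload only if the signature is valid AND the
// signed header matches the delivered bid. The exploited bug was, roughly,
// performing the first check without the second.
func getPayload(h *SignedBlindedHeader, bid *StoredBid, verifySig func(*SignedBlindedHeader) bool) ([]byte, error) {
	if !verifySig(h) {
		return nil, errors.New("invalid proposer signature")
	}
	if h.Slot != bid.Slot ||
		!bytes.Equal(h.ParentHash, bid.ParentHash) ||
		!bytes.Equal(h.BlockHash, bid.BlockHash) {
		return nil, errors.New("signed header does not match delivered bid")
	}
	return bid.Payload, nil
}
```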
The team was able to get the attacker slashed because they double-signed the header, but compared to the profit from the exploit this was a very small penalty.
### Takeaways
In general, this highlights the adversarial nature of MEV. As a builder or searcher, the point is to be prepared for re-org-like things that happen on chain. “If something is possible, it will be triggered”.
Mike + Francesco published a blog post on the equivocation attack. Find it [here](https://ethresear.ch/t/equivocation-attacks-in-mev-boost-and-epbs/15338).
What made this attack possible was the fact that the header was invalid. The proposer didn’t have to race against the relay (more on this later). This “base case” vulnerability has been patched. A more general version of the attack is a timing race with the relay. The equivocation part is still the same, as the attacking proposer signs over two *valid* headers, but now has to win the attestation race for their block to become canonical. There is a proposed “Headlock” idea in the blog post.
Even though the attack happened on the Ultrasound Relay, it was a software bug in the MEV-Boost relay codebase, so the attacker could have triggered the vulnerability through any of the relays; they just happened to go through Ultrasound. On the Ultrasound side, custom alerting helped with notification via a Telegram bot. The alerting catches anomalies such as a proposer signing a header that doesn’t end up on chain.
### Questions around timing w.r.t. response modifications after the attack
The timing seems correct: 3 seconds was a bit aggressive, and with a 4s cutoff there is no reason to push it any later. A 1-second delay before returning to the proposer seems okay and gives a block more time to propagate before the proposer tries to submit a conflicting block. Here, there is a responsibility on relays to have good peering and redundant beacon nodes. There’s no guarantee that a block published after 4s gets reorged. There are subtle relationships here that create timing games around the attestation deadline. Mike and Georgios from Paradigm published a piece on this. Find it [here](https://www.paradigm.xyz/2023/04/mev-boost-ethereum-consensus).
Many changes added so much latency that they were rolled back. Some of the mitigations might raise the probability of an honest proposer missing a slot; perhaps even 1% is too much.
# Block cancellations
### Intro
(Agenda item brought forward by Chris from Flashbots). Block cancellations refer to the ability for block builders to replace earlier, higher-value submissions with later, lower-value submissions. Builders running certain stat-arb strategies want or need this feature. It adds a performance cost on relays, since every block submission needs to be validated instead of only higher-value ones. In general, though, there is no guarantee that a cancellation takes effect: a proposer can ask for the highest bid multiple times and pick the highest-value one. Since any caller of `getHeader()` could be a proposer (there are no signatures), the relay has no insight into exactly who is asking for the bid and must guarantee for every call that the payload is available.
Preliminary calculations estimate that removing cancellations would eliminate 99% of block validations, making relays more cost-effective. More data can be collected, since the bids leading up to the delivered bid are public, but initial investigations on the relay end estimate that 99% of submitted blocks do not increase the top bid’s value. The specific work to be done would be a change to the MEV-Boost relay software so that a builder can no longer cancel a bid by submitting a lower-value block.
### Looking forward to enshrined PBS
Additionally, it would be nice to move beyond block cancellations given the work on enshrined PBS (ePBS). Looking forward, it’s hard to see how to do cancellations with ePBS. In ePBS, information about bids is gossiped: how do you cancel something that has already been gossiped? Peers can simply choose to ignore the follow-up gossip messages that are supposed to be the cancellations.
### Relay performance
Without cancellations, the relay does not need to validate or store every single bid. The reason for storing everything today is the obligation on the relay end to deliver the correct bid via `getHeader()`, given that the winning bid may be cancelled at any time. For example, if `bid_1eth`, `bid_10eth`, and `bid_100eth` get submitted but then `bid_10eth` and `bid_100eth` get cancelled, the relay should start delivering `bid_1eth` on `getHeader()` calls, and therefore needs to store everything. Not needing to store bid contents in the hot path is important for performance.
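To make the bookkeeping difference concrete, here is a toy sketch (illustrative only, not the relay's actual data model): without cancellations the relay only has to track a running maximum, while with cancellations the latest bid per builder wins even if it is lower, so every submission must be stored and the best bid recomputed.
```go
package main

// Toy bid bookkeeping, illustrative only.
type Bid struct {
	Builder string
	Value   uint64 // simplified; value in wei
}

// topBidOnly: without cancellations, lower-value submissions can simply be
// dropped (and need not even be validated).
type topBidOnly struct {
	best *Bid
}

func (t *topBidOnly) Submit(b Bid) {
	if t.best == nil || b.Value > t.best.Value {
		t.best = &b
	}
}

// withCancellations: the latest bid from each builder replaces their earlier
// one, even if lower, so everything must be stored and the top bid recomputed.
type withCancellations struct {
	latestByBuilder map[string]Bid
}

func (c *withCancellations) Submit(b Bid) {
	if c.latestByBuilder == nil {
		c.latestByBuilder = make(map[string]Bid)
	}
	c.latestByBuilder[b.Builder] = b
}

func (c *withCancellations) Best() (best Bid, ok bool) {
	for _, b := range c.latestByBuilder {
		if !ok || b.Value > best.Value {
			best, ok = b, true
		}
	}
	return best, ok
}
```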
One idea for managing cancellations is to have a bespoke API for this, for example an endpoint that allows cancelling bids by ID after assigning each bid an ID. An issue with this is having to store all bid submissions: currently, in the hot path the relay only stores the latest bid from each builder; with this approach it would need to store everything. There is some evidence of significant performance issues arising from storing all those headers and payloads.
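A sketch of that cancel-by-ID idea (purely hypothetical; no such endpoint exists in the builder API today), which also shows why every submission would have to be retained while the slot's auction is live:
```go
package main

// Hypothetical cancel-by-ID flow, not part of the current builder API.
type bidStore struct {
	bidValueByID map[string]uint64 // every submission is retained
	cancelled    map[string]bool
}

func newBidStore() *bidStore {
	return &bidStore{
		bidValueByID: make(map[string]uint64),
		cancelled:    make(map[string]bool),
	}
}

// SubmitBlock records a bid under an ID assigned by the relay.
func (s *bidStore) SubmitBlock(id string, value uint64) {
	s.bidValueByID[id] = value
}

// CancelBid marks a previously submitted bid as withdrawn.
func (s *bidStore) CancelBid(id string) {
	s.cancelled[id] = true
}

// Best recomputes the highest non-cancelled bid, which is why nothing can be
// thrown away until the slot is over.
func (s *bidStore) Best() (bestID string, bestValue uint64, ok bool) {
	for id, v := range s.bidValueByID {
		if s.cancelled[id] {
			continue
		}
		if !ok || v > bestValue {
			bestID, bestValue, ok = id, v, true
		}
	}
	return bestID, bestValue, ok
}
```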
### Cancellation relay
Are there concerns around centralization if actors prefer a builder that offers cancellations? Currently the feature is opt-in on the Flashbots relay. When submitting to multiple relays, cancellations start to break down: MEV-Boost connects to multiple relays, and all of them need to cancel to guarantee the cancellation. Therefore, cancellations incentivize searchers / builders to submit bundles to only a single builder. It’s possible that a “cancellation block relay” could emerge, just as there is an “ethical block relay”. In the event that no relays provide the cancellation feature, a builder who depends on cancellations for their strategy can be expected to spin up their own relay. This won’t be preventable, so we have to assume it may happen.
### Validator perspective
From looking at the data, there have already been a few examples of bids that should have been cancelled but ended up winning the auction. This points to validators who might try to play a game by asking for a few bids and picking the highest one. This poses the question: from the validator side, how can builder cancellations be protected? Validators can request bids many times, and restrictions / rate limits are complicated here. For example, limiting by IP encourages sophisticated actors to spin up lots of clients connecting to the relay, and there would be a clear incentive to do so in order to request many bids.
On this topic, why not require signatures on `getHeader()`? This has been discussed before, and ultimately the decision was made with the goal of keeping information public. It is not fully clear what the implications would be if `getHeader()` calls had to be authenticated. One point here is that builders currently use the endpoint to decide how large a bid to set: since the auction runs in real time, `getHeader()` returns the current winning auction price. Data-wise, the estimate is almost half a million `getHeader()` calls each slot, with the majority not coming from validators. An additional concern is incentivizing proposers to be “close” to builders, since `getHeader()` would now be accessible only from a privileged position.
### Subscription / stream
One idea is to make `getHeader()` a subscription where anyone can subscribe to a stream of bids, with perhaps some ability to specify filters / specific conditions on which bids to receive. It is likely that there are MEV-Boost implementations out there that repeatedly call `getHeader()`, so maybe the actual way forward is making the default path just a stream of bids. There are some concerns around the stream getting spammed / DoS’ed, but given that the stream is purely read-only, the expectation is that scaling it out should be feasible. One concern is around having everyone receive stream messages at the same time down to the nanosecond, which can’t be guaranteed. This could incentivize some adverse behavior, as it did on Arbitrum with strategies to game the websocket connections, though it’s slightly different since there the incentive was transaction inclusion (write vs. read-only).
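A rough sketch of what such a read-only bid stream could look like (the endpoint and message shape are entirely hypothetical; nothing like this is specified in the builder API today), using server-sent events since read-only fan-out is comparatively easy to scale out:
```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// TopBidEvent is a hypothetical stream message: only the current top bid
// (or a delta to it) is pushed, never payload contents.
type TopBidEvent struct {
	Slot       uint64 `json:"slot"`
	ParentHash string `json:"parent_hash"`
	BlockHash  string `json:"block_hash"`
	Value      string `json:"value"` // wei, as a decimal string
}

// streamTopBids fans out top-bid updates to one subscriber as server-sent
// events; updates would be fed by the relay's bid pipeline.
func streamTopBids(w http.ResponseWriter, r *http.Request, updates <-chan TopBidEvent) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for {
		select {
		case <-r.Context().Done():
			return // subscriber disconnected
		case ev := <-updates:
			data, _ := json.Marshal(ev)
			fmt.Fprintf(w, "data: %s\n\n", data)
			flusher.Flush()
		}
	}
}
```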
While prototyping the stream, polling of `getHeader()` on the MEV-Boost proposer side seems like low-hanging fruit. Sophisticated actors might soon make (or may already have made) this change to MEV-Boost to extract more MEV through a more “optimal” strategy of repeated bid querying. Relays could rate limit `getHeader()`, but there is no practical way to do this without signatures.
As for concrete next steps, it makes sense to prototype streaming bids and to publish more data. Flashbots is already experimenting with public subscriptions for MEV-Share in prod today. Ultrasound is thinking about experimenting with a stream. The expectation is that only ~20 builders and one proposer would subscribe, and the stream would carry only top bids, or perhaps only deltas on bids. It remains to be seen how this affects the relay infrastructure providers.
### Concern around bonding
Right now builders have to post a bond if using optimistic relays (e.g. Ultrasound); what does this look like with ePBS? With ePBS, the gossip network would only forward bids that increase the top bid, discarding the ~99% of bids that don’t increase the value. As any market participant, you could connect to the p2p gossip stream and see this information. For anti-DoS, builders would be collateralized in ePBS, which makes it possible to know that bids are real and to enumerate builders (the expectation is perhaps only ~100 builders). Some constraints could also be put on the p2p layer, for example 1 bid per 10 ms per builder.
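A sketch of that p2p validation rule (assumptions: only bids that strictly raise the current top bid are forwarded, plus the 1-bid-per-10ms-per-builder limit mentioned above; the actual ePBS design is still open research):
```go
package main

import (
	"sync"
	"time"
)

// gossipBidFilter decides whether to forward a builder bid on the p2p layer.
type gossipBidFilter struct {
	mu            sync.Mutex
	topBidValue   uint64               // highest bid value seen so far for the slot
	lastBidTime   map[string]time.Time // builder ID -> time of last forwarded bid
	minBidSpacing time.Duration        // e.g. 10 * time.Millisecond
}

func newGossipBidFilter(spacing time.Duration) *gossipBidFilter {
	return &gossipBidFilter{
		lastBidTime:   make(map[string]time.Time),
		minBidSpacing: spacing,
	}
}

// shouldForward returns true only if the bid raises the top bid and the
// builder respects the rate limit; everything else is dropped, which is what
// allows ~99% of submissions to be discarded.
func (f *gossipBidFilter) shouldForward(builderID string, value uint64, now time.Time) bool {
	f.mu.Lock()
	defer f.mu.Unlock()

	if value <= f.topBidValue {
		return false // does not increase the top bid
	}
	if last, ok := f.lastBidTime[builderID]; ok && now.Sub(last) < f.minBidSpacing {
		return false // builder is sending bids too quickly
	}
	f.topBidValue = value
	f.lastBidTime[builderID] = now
	return true
}
```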
Justin is working on an [ethresear.ch](http://ethresear.ch) draft post on the design space. One of the things that can be done is capping the amount of collateral, at 32 ETH for example. Builders can still make higher-value bids, for example a 1000 ETH bid, as long as it’s valid, submitted on time, and pays the proposer the full amount (1000 ETH). If any condition is violated, the builder loses the 32 ETH. The question here is how much builders should be penalized for forcing an empty block. Usually, empty blocks are lose-lose, but there is a concern around builders being able to force an empty block to use for unbundling. What if the opportunity is much greater than 32 ETH? Generally, we really don’t want rich builders to have an advantage.
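A minimal sketch of the settlement rule as described above (this is a draft design under discussion, not a specification; the cap and conditions are just the ones mentioned on the call):
```go
package main

// Illustrative collateral settlement from the discussion; not a specification.
const collateralCapGwei uint64 = 32_000_000_000 // 32 ETH, expressed in gwei

// builderForfeit returns how much collateral the builder loses. A bid of any
// size is allowed, but breaking any condition (invalid block, late reveal,
// underpaying the proposer) only ever costs the capped collateral; that is
// exactly the concern when the unbundling opportunity is worth far more than
// 32 ETH.
func builderForfeit(blockValid, revealedOnTime, proposerPaidInFull bool) uint64 {
	if blockValid && revealedOnTime && proposerPaidInFull {
		return 0
	}
	return collateralCapGwei
}
```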
### Summary of points on cancellations
- Not a great foundation for the future
- If it doesn't make sense with enshrined PBS, why bother?
- Pretty big resource strain on relays today
- Some builders would want this and so would be incentivized to run their own relay