Ethereum is moving towards a rollup-centric roadmap, where blockchain activity like DeFi, NFTs etc. will mainly happen on L2 rollups like Arbitrum, Optimism etc.
Rollups essentially batch up txs and commit the state change produced by these txs to the L1 layer. Each type of rollup has a different way of validating the tx batch, e.g. in optimistic rollups, a set of rolled-up txs is considered final unless some party proves that the list of txs is invalid.
Rollups have had to deal with fairly high gas costs because the rollup data (which can be pretty large) was stored in the calldata of an Ethereum tx, which is expensive since calldata is accessible to the EVM and priced per byte.
To reduce the gas costs of rollups, the Ethereum core development teams have introduced blobs (EIP-4844). Blobs are essentially opaque bytes of data which are only relevant to rollups (or to users sending blob txs).
Blobs are committed onchain using the KZG polynomial commitment scheme. Basically, each blob is interpreted as a polynomial, and a KZG commitment to that polynomial is computed.
Computing KZG commitments will get expensive for home stakers going forward, which will make it harder for home stakers to propose blocks containing blob txs. ePBS addresses this by allowing home stakers to delegate block building to external builders.
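To make the commitment step concrete, here is a toy Python sketch of a polynomial commitment in the spirit of KZG. It is insecure and illustrative only: the real scheme works over BLS12-381 with a pairing-based opening check, and nobody knows the secret `TAU`; all constants and the blob encoding here are assumptions.

```python
# Toy polynomial commitment in the spirit of KZG, for illustration only.
# Real blobs use BLS12-381 and a pairing check; names here are hypothetical.

P = 2**127 - 1           # prime modulus for a toy multiplicative group (NOT secure)
G = 5                    # toy generator
TAU = 123456789          # the "toxic waste" secret; in reality nobody knows tau

# Trusted setup: powers of tau "in the exponent": g^(tau^i) mod P
SETUP = [pow(G, pow(TAU, i, P - 1), P) for i in range(8)]

def blob_to_coeffs(blob: bytes, n: int = 8) -> list[int]:
    """Interpret blob bytes as n polynomial coefficients (toy encoding)."""
    chunk = max(1, len(blob) // n)
    return [int.from_bytes(blob[i * chunk:(i + 1) * chunk] or b"\x00", "big") % (P - 1)
            for i in range(n)]

def commit(coeffs: list[int]) -> int:
    """Commitment = g^(p(tau)), computed from the setup without knowing tau."""
    c = 1
    for a_i, s_i in zip(coeffs, SETUP):
        c = (c * pow(s_i, a_i, P)) % P
    return c

blob = b"rollup batch data..."
coeffs = blob_to_coeffs(blob)
commitment = commit(coeffs)

# Sanity check: commitment equals g^(p(tau)) computed directly with the secret.
p_at_tau = sum(a * pow(TAU, i, P - 1) for i, a in enumerate(coeffs)) % (P - 1)
assert commitment == pow(G, p_at_tau, P)
```

The point is only that the committer never needs `TAU` itself, just the setup; the expensive part for home stakers is doing this group arithmetic per blob.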
One of the other problems is that in full danksharding, network bandwidth will be a huge factor.
Below is quoted from the proposal written by Potuz (https://ethereum-magicians.org/t/uncouple-blobs-from-the-execution-payload/13059)
This forces proposers to rely on external builders to prepare these blocks, which in turn imposes a threat to censorship resistance: a few external players (the builders) will have ultimate decision of what transactions are included or not on L1.
While it is true that a non-censoring proposer may always propose execution blocks without any blobs, this becomes a strong centralizing force in that non-censoring validators have to forfeit their earnings from blobs (which in a rollup-centric roadmap are expected to be a large part of the block) if they want to control their execution contents.
We propose decoupling blob building from execution tx building for home stakers. The intent is to ensure that smaller home stakers are not squeezed out of the builder market, and that the builder market is not dominated only by specialized builders, which would increase censorship risk in Ethereum.
Some Context
In Eth2.0, the consensus layer randomly selects (through RANDAO) a particular validator to propose a block. To determine which transactions go in the payload, the validator fetches an execution payload from the local execution engine using engine_getPayloadV3; the execution payload contains, among other fields, the full transaction list.
After receiving the payload from the local execution engine, we also request a "blinded" execution payload header from an MeV relayer we are connected to. A blinded execution payload header is essentially the same as the execution payload received from the local execution engine, except that the transaction list is hidden and replaced by a merkle root of the transaction list. The merkle root is an equivalent substitute for the transaction list for validity purposes. The validator client signs the blinded payload and sends it to the MeV relayer, which means it is essentially "committing" to the payload and will definitely use it.
After this comes the unblinding of the payload, which essentially means opening up the payload and including the txs in the block. The block is then gossiped to the network for other validators to include in their fork choice stores and attest to.
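The blind/unblind step above can be sketched as follows. This is a toy sketch: the hash and field names are illustrative (the spec uses SSZ `hash_tree_root`, not this simple merkle construction).

```python
# Sketch of blinding/unblinding an execution payload (hashes and field names
# are illustrative, not the exact spec types).
import hashlib
from dataclasses import dataclass

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list) -> bytes:
    """Simple binary merkle root over the tx list (toy, not SSZ hash_tree_root)."""
    layer = [h(l) for l in leaves] or [h(b"")]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

@dataclass
class ExecutionPayload:
    parent_hash: bytes
    transactions: list        # full tx list, visible to the proposer

@dataclass
class BlindedHeader:
    parent_hash: bytes
    transactions_root: bytes  # merkle root stands in for the tx list

def blind(p: ExecutionPayload) -> BlindedHeader:
    return BlindedHeader(p.parent_hash, merkle_root(p.transactions))

def unblind_ok(header: BlindedHeader, revealed_txs: list) -> bool:
    """On unblinding, the revealed tx list must match the committed root."""
    return merkle_root(revealed_txs) == header.transactions_root

txs = [b"tx1", b"tx2", b"tx3"]
payload = ExecutionPayload(parent_hash=h(b"parent"), transactions=txs)
header = blind(payload)
assert unblind_ok(header, txs)            # honest reveal verifies
assert not unblind_ok(header, [b"tx1"])   # tampered list is rejected
```

This is why the merkle root is "an equivalent substitute" for validity: any tampering with the revealed tx list changes the root and is rejected.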
TODO
Questions
Random Musings on the problem
These are some of the different ways in which we can think of the problem.
The core issue is tx selection and not block building: assuming the protocol specs hold good, having few builders creates the risk of tx censorship, not of protocol-level changes. Even if there are protocol-level changes, the vast number of proposers can reject the block.
Design Goals
We try to establish some scope for our design
Potential Design Solutions
We discuss below some solutions for how home stakers can cope with the problem of expensive KZG commitment computation in full danksharding.
Defer Blob validation and not Block building
We know that the computation of KZG commitments will get expensive with full danksharding.
Currently the KZG commitment computation seems to happen only when a blob tx is added to a tx pool in the execution layer. There is no KZG commitment computation or proof verification during block building or block validation. This means there may be no guarantee that the blob tx will be available in the pending pool for the proposer to build with.
Given that this expensive operation happens only when blob txs are added to the tx pool, we can think of the following ideas too:
This is a BAD idea for the obvious reasons
This is a bit similar to the idea of an MeV relayer, where we send an API call to it to request a blinded block. This API call could be faster, as the blob validator will have much more specialized infra to run these commitments. But there are also centralization issues.
Proposer builds execution txs, builder builds blob txs and the block is assembled in the MeV relay
In this solution, the proposer sends execution txs to the MeV relay along with the request for a blinded header. The MeV relay gets the blob txs from the builder, assembles a block from the proposer's execution txs and the builder's blob txs, and sends the blinded payload back to the proposer, who signs and commits to it. The proposer later unblinds the payload and builds the block with it.
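This round trip can be simulated as a toy message flow. All message and field names here are assumptions for illustration; the real flow involves signatures and the builder API, not a shared dict.

```python
# Toy simulation of the message flow: proposer sends exec txs to the relay,
# relay gets blob txs from a builder, assembles, and returns a blinded payload.
import hashlib

def digest(items):
    """Chain-hash a list of byte strings into one identifier (toy root)."""
    acc = b""
    for it in items:
        acc = hashlib.sha256(acc + it).digest()
    return acc

class Builder:
    def blob_txs(self):
        return [b"blob-tx-1", b"blob-tx-2"]

class Relay:
    def __init__(self, builder):
        self.builder = builder
        self.stored = {}

    def get_blinded_payload(self, exec_txs):
        blobs = self.builder.blob_txs()
        payload = {"exec_txs": exec_txs, "blob_txs": blobs}
        root = digest(exec_txs + blobs)
        self.stored[root] = payload       # kept until the proposer commits
        return root                        # "blinded": only the root is revealed

    def unblind(self, signed_root):
        return self.stored[signed_root]    # reveal after the proposer's signature

relay = Relay(Builder())
exec_txs = [b"swap", b"transfer"]
root = relay.get_blinded_payload(exec_txs)
signed_root = root                         # stand-in for the proposer's signature
payload = relay.unblind(signed_root)
assert payload["exec_txs"] == exec_txs
assert digest(payload["exec_txs"] + payload["blob_txs"]) == root
```

Note that in this sketch the relay sees both halves in the clear, which is one of the centralization/trust questions this design raises.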
There are some questions that come along with this:
Proposer node sends (not builds) preferred execution txs to the MeV relay
The problem of decoupling the blob payload from the execution payload can also be looked at as two individual execution clients building the payload for one block. A transaction list should be deterministic in its sequence, however it is sequenced: all builders must agree on one sequence. If multiple builders build parts of an execution payload, they might have different views of the execution payload sequence, which makes block building non-deterministic for these builders.
The correct way, IMO, would be to fetch txs from multiple builders; one validator then assembles the txs, commits to all of them in a particular sequence, stores them in the block, and sends it to the consensus layer to build the block and broadcast it to the network.
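The deterministic-assembly idea can be sketched as below. The ordering rule (dedupe by tx hash, sort by hash) is an assumption for illustration; the point is only that every party derives the same sequence, and hence the same committed root, regardless of the order in which builders' lists arrive.

```python
# Sketch of deterministic assembly of txs fetched from multiple builders.
import hashlib

def txid(tx: bytes) -> bytes:
    return hashlib.sha256(tx).digest()

def assemble(builder_tx_lists: list) -> list:
    """Merge tx lists from several builders into one canonical sequence."""
    seen = {}
    for txs in builder_tx_lists:
        for tx in txs:
            seen[txid(tx)] = tx             # dedupe by tx hash
    return [seen[k] for k in sorted(seen)]  # canonical order: sorted by hash

def sequence_root(txs: list) -> bytes:
    """Chain-hash the sequence into a single commitment (toy root)."""
    acc = b""
    for tx in txs:
        acc = hashlib.sha256(acc + tx).digest()
    return acc

builder_a = [b"tx1", b"tx2"]
builder_b = [b"tx2", b"tx3"]  # overlapping view of the mempool

# Regardless of arrival order, the assembled sequence and root are identical.
s1 = assemble([builder_a, builder_b])
s2 = assemble([builder_b, builder_a])
assert s1 == s2
assert sequence_root(s1) == sequence_root(s2)
```

A real sequencing rule would order by fee priority and nonce rather than hash, but any rule works as long as it is a pure function of the tx set.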
This architecture makes me think of a map-reduce architecture of sorts, where the mappers send their work to reducers.
Now that we have established that tx sequencing should be deterministic, let's move to the consensus layer.
In the consensus layer, we need a heuristic to determine whether the validator is capable of building all the txs (including blob txs) by itself, or whether it needs help from external builders to build a blob tx. We can have a validator in 4 possible modes:
The ideal heuristic for (2) and (3) should be based on system specifications, and should also allow the validator to specify what they want. (1) should be based on the validator's preference. These heuristics should only be determined based on good benchmarks.
Cases (1) and (3) are already implemented in existing consensus and execution layers today. We will speak more about (2)
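A mode-selection heuristic along these lines could look like the sketch below. The mode names and thresholds are assumptions (the note refers to 4 modes but does not enumerate them in this extract), and a real heuristic would be driven by proper benchmarks as noted above.

```python
# Hypothetical mode-selection heuristic; mode names and thresholds are
# assumptions, since the 4-mode list is not enumerated in this extract.
from enum import Enum

class BuildMode(Enum):
    LOCAL_FULL = 1     # validator builds execution + blob txs itself
    DEFER_BLOBS = 2    # validator builds execution txs, builder builds blob txs
    EXTERNAL_FULL = 3  # external builder builds the whole payload
    # a 4th mode exists in the note but is not enumerated here

def choose_mode(prefers_local: bool,
                kzg_ms_per_blob: float,
                budget_ms: float) -> BuildMode:
    """Pick a mode from validator preference plus a benchmark of KZG cost."""
    if not prefers_local:
        return BuildMode.EXTERNAL_FULL       # (1)/(3): validator preference
    if kzg_ms_per_blob <= budget_ms:
        return BuildMode.LOCAL_FULL          # fast enough to do everything locally
    return BuildMode.DEFER_BLOBS             # (2): local exec txs, blobs deferred

assert choose_mode(True, 5.0, 50.0) == BuildMode.LOCAL_FULL
assert choose_mode(True, 500.0, 50.0) == BuildMode.DEFER_BLOBS
assert choose_mode(False, 5.0, 50.0) == BuildMode.EXTERNAL_FULL
```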
In case (2), blob tx payload construction is deferred to an external builder and the other execution txs are proposed by the proposer. Since blobs and execution txs don't compete for gas, we don't have to worry about reserving space in the payload for blob txs. We just need to reserve enough gas for the builder to add the payout tx for the proposer.
To fit the current architecture, the following can be done to support it:
On the consensus side: builders today start building on the onPayloadAttributes SSE event from the consensus client. Similarly, the consensus layer can trigger an engine API call to fetch the execution txs (without committing them) to be proposed from the execution layer, and then send the execution tx list to the MeV relayer, which will forward the execution txs to the builder.
On the builder side: the builder starts building when the onPayloadAttributes SSE event is triggered. It fetches the validator data from the MeV relay, where we can add a validator preference indicating that the validator wants only blob txs built and will supply the remaining execution txs. The builder waits for the proposer's execution txs; once received, it commits the transactions to state, generates the tx sequence, state root, receipt root, etc., adds the payout tx for the proposer reward, and sends the block payload back to the MeV relayer.
Notes on this solution:
Stuff I need to research