Blockchain Research
We created a new transaction type, BLOB_TX_TYPE, via EIP-2718.
VersionedHash is used here instead of the KZG commitment itself for forward compatibility. A STARK-based commitment would look very different from a KZG commitment, but a VersionedHash is always 32 bytes, so swapping in a STARK (or another scheme) in the future is convenient: for example, there is no need to change the proof format exposed to the precompile, etc.
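As a rough illustration (following the EIP-4844 draft; the exact constant name is an assumption), a versioned hash is just a 1-byte version prefix plus a truncated SHA-256 of the commitment, so it stays 32 bytes regardless of the underlying commitment scheme:

```python
from hashlib import sha256

BLOB_COMMITMENT_VERSION_KZG = b"\x01"  # version byte for KZG-backed blobs (per the draft spec)

def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    # 1-byte version prefix + last 31 bytes of sha256(commitment) = 32 bytes total.
    # A future commitment scheme only needs a new version byte, not a new format.
    return BLOB_COMMITMENT_VERSION_KZG + sha256(kzg_commitment).digest()[1:]
```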
New opcode: DATA_HASH (byte value HASH_OPCODE_BYTE). It takes one stack argument, index, and returns tx.header.blob_versioned_hashes[index] if index < len(tx.header.blob_versioned_hashes), and otherwise zero. The opcode has a gas cost of HASH_OPCODE_GAS.
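A minimal sketch of the opcode's semantics (the stack/tx objects are hypothetical, not a client implementation):

```python
def op_data_hash(stack, tx):
    # Pop the index, push the corresponding versioned hash (or zero if out of range).
    index = stack.pop()
    hashes = tx.header.blob_versioned_hashes
    stack.push(hashes[index] if index < len(hashes) else 0)
    # Gas accounting: HASH_OPCODE_GAS is charged for this opcode.
```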
Note: the blob_verification_precompile from the original design was removed: https://github.com/ethereum/EIPs/commit/f45bd0c101944dc703bd8a80c6b064b47e1f7390
On the consensus-layer the blobs are now referenced, but not fully encoded, in the beacon block body. Instead of embedding the full contents in the body, the contents of the blobs are propagated separately, as a “sidecar”.
This “sidecar” design provides forward compatibility for further data increases by black-boxing is_data_available(): with full sharding, is_data_available() can be replaced by data-availability sampling (DAS), thus avoiding all blobs being downloaded by all beacon nodes on the network.
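Roughly what the black-boxed check looks like (paraphrasing the EIP-4844 consensus-spec draft; retrieve_blobs_sidecar and validate_blobs_sidecar are spec-level helpers and implementation dependent):

```python
def is_data_available(slot, beacon_block_root, blob_kzg_commitments) -> bool:
    # Implementation dependent: raises if the sidecar is not (yet) retrievable.
    sidecar = retrieve_blobs_sidecar(slot, beacon_block_root)
    # Check the sidecar's blobs against the commitments in the block body.
    validate_blobs_sidecar(slot, beacon_block_root, blob_kzg_commitments, sidecar)
    return True
```

Under full sharding, this body can be swapped for a DAS-based check without changing any of its callers.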
BeaconBlockBody
We extend the BeaconBlockBody struct:
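A sketch of the change (only the new field is shown; all pre-existing fields are unchanged, per the EIP-4844 consensus-spec draft):

```python
class BeaconBlockBody(Container):
    # ... all pre-existing fields unchanged ...
    blob_kzg_commitments: List[KZGCommitment, MAX_BLOBS_PER_BLOCK]  # new in EIP-4844
```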
Nodes broadcast this over the network instead of plain beacon blocks. When a node receives one, it first calls validate_blobs_and_kzg_commitments, and if this call passes it runs the usual processing on the beacon block. If valid, set block.body.blob_kzg_commitments = blob_kzg_commitments.
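The validation step is roughly the following sanity check (paraphrasing the consensus-spec draft; helper names may differ):

```python
def validate_blobs_and_kzg_commitments(execution_payload, blobs, blob_kzg_commitments):
    # The commitments must match the versioned hashes carried by the payload's blob transactions...
    assert verify_kzg_commitments_against_transactions(
        execution_payload.transactions, blob_kzg_commitments
    )
    # ...and must actually open to the blobs handed over by the execution engine.
    assert len(blob_kzg_commitments) == len(blobs)
    assert all(
        blob_to_kzg_commitment(blob) == commitment
        for blob, commitment in zip(blobs, blob_kzg_commitments)
    )
```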
SignedBeaconBlockAndBlobsSidecar
Set signed_beacon_block_and_blobs_sidecar.beacon_block = block, where block is obtained above.
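For reference, the container is roughly (per the early EIP-4844 consensus-spec draft):

```python
class SignedBeaconBlockAndBlobsSidecar(Container):
    beacon_block: SignedBeaconBlock
    blobs_sidecar: BlobsSidecar  # the blobs themselves plus the aggregated KZG proof
```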
This signed_beacon_block_and_blobs_sidecar is then published to the global beacon_block_and_blobs_sidecar topic.
After publishing, peers on the network may request the sidecar through sync requests, or a local user may be interested. The validator MUST hold on to sidecars for MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS epochs and serve them when capable, to ensure the data availability of these blobs throughout the network.
After MIN_EPOCHS_FOR_BLOBS_SIDECARS_REQUESTS epochs, nodes MAY prune the sidecars and/or stop serving them.
We also add a cross-validation requirement to check consistency between the BeaconBlock and its contained ExecutionPayload (this check can be done later than the verification of the beacon block and of the payload): verify that each versioned_hash in the payload's blob transactions matches the corresponding KZG commitment in the block body.
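That cross-check is roughly (paraphrasing the EIP-4844 draft; tx_peek_blob_versioned_hashes is a spec helper that reads the hashes out of the opaque transaction bytes):

```python
def verify_kzg_commitments_against_transactions(transactions, kzg_commitments) -> bool:
    all_versioned_hashes = []
    for tx in transactions:
        if tx[0] == BLOB_TX_TYPE:  # only blob transactions carry versioned hashes
            all_versioned_hashes += tx_peek_blob_versioned_hashes(tx)
    # Order and count must both match the commitments in the beacon block body.
    return all_versioned_hashes == [kzg_to_versioned_hash(c) for c in kzg_commitments]
```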
https://github.com/ethereum/execution-apis/pull/197
Engine_getBlobsBundleV1
This method retrieves the blobs and their respective KZG commitments corresponding to the versioned_hashes included in the blob transactions of the referenced execution payload.
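A rough sketch of the call shape as I read PR 197 (field names are assumptions and may not match the final spec):

```python
# JSON-RPC request: one parameter, the payloadId returned by engine_forkchoiceUpdated
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "engine_getBlobsBundleV1",
    "params": ["0x0000000021f32cc1"],  # hypothetical payloadId
}

# Expected shape of the result (assumption):
result = {
    "blockHash": "0x...",  # hash of the execution payload the bundle belongs to
    "kzgs": ["0x..."],     # one 48-byte KZG commitment per blob
    "blobs": ["0x..."],    # the blob data itself, in the same order as the commitments
}
```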
Engine_getPayloadV1
engine_getBlobsBundleV1 may be combined with engine_getPayloadV1 into an engine_getPayloadV2 in a later stage of EIP-4844. The separation of concerns aims to minimize changes during the testing phase of the EIP.
Engine_getPayloadV2
The blob data exists only in the network-wrapper representation of the transaction. From the execution layer's perspective, blob data is not persisted and is not accessible in the EVM; it is purely meant for data availability. In the EVM, the presence of the data can be proven via its VersionedHash.
Blob validity is guaranteed elsewhere in the architecture (on the consensus layer). This is important in the long run, because in the future these blobs will be broadcast on separate subnets (DAS & reconstruction) 📢.
Transactions are presented as TransactionType || TransactionNetworkPayload on the execution-layer network; the payload is an SSZ-encoded container:
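Roughly (per the EIP-4844 draft; constant names for the list limits may differ):

```python
class BlobTransactionNetworkWrapper(Container):
    tx: SignedBlobTransaction
    # KZGCommitment = Bytes48
    blob_kzgs: List[KZGCommitment, LIMIT_BLOBS_PER_TX]
    # Each blob is a vector of field elements (BLSFieldElement = uint256)
    blobs: List[Vector[BLSFieldElement, FIELD_ELEMENTS_PER_BLOB], LIMIT_BLOBS_PER_TX]
    # One aggregated proof covering all blobs in the transaction
    kzg_aggregated_proof: KZGProof
```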
We do network-level validation of BlobTransactionNetworkWrapper objects as follows:
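Paraphrasing the EIP's network validation (verify_aggregate_kzg_proof and kzg_to_versioned_hash are spec helpers):

```python
def validate_blob_transaction_wrapper(wrapper: BlobTransactionNetworkWrapper) -> None:
    versioned_hashes = wrapper.tx.message.blob_versioned_hashes
    commitments = wrapper.blob_kzgs
    blobs = wrapper.blobs
    # Everything must line up one-to-one
    assert len(versioned_hashes) == len(commitments) == len(blobs)
    # The commitments must open to the blobs (single aggregated KZG proof)
    assert verify_aggregate_kzg_proof(blobs, commitments, wrapper.kzg_aggregated_proof)
    # And the versioned hashes inside the transaction must match the commitments
    for versioned_hash, commitment in zip(versioned_hashes, commitments):
        assert versioned_hash == kzg_to_versioned_hash(commitment)
```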
L2 <> L1
Blob Lifecycle
In a ZK rollup we could pass the blob data as a private input and verify the KZG commitment directly inside the SNARK, doing the elliptic-curve linear combination (or pairing) in-circuit, but this is costly and very inefficient. Instead, we can use a proof-of-equivalence protocol to prove that the blob's KZG commitment and the rollup's own commitment point to the same data.
Choosing the point z as a hash of all the commitments ensures that there is no way to manipulate the data or the commitments after you learn z (this is standard Fiat-Shamir reasoning).
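A hypothetical sketch of how z could be derived (the function name and hash choice are assumptions, not from the EIP): both commitments go into the hash, so neither can be changed after z is known.

```python
import hashlib

# BLS12-381 scalar field modulus used by the KZG scheme
BLS_MODULUS = 52435875175126190479447740508185965837690552500527637822603658699938581184513

def derive_evaluation_point(kzg_commitment: bytes, rollup_commitment: bytes) -> int:
    # Fiat-Shamir: z is a hash of both commitments, reduced into the scalar field.
    digest = hashlib.sha256(kzg_commitment + rollup_commitment).digest()
    return int.from_bytes(digest, "big") % BLS_MODULUS
```

The prover then shows that both commitments evaluate to the same value y at z: the KZG side via the point evaluation precompile on L1, the rollup side inside its own proof system.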
Note that ZK rollups do not verify KZG directly; they verify through the point evaluation precompile. That way, if we later move to a quantum-resistant commitment scheme, it will not create new problems for ZK rollups.
Point evaluation precompile
The idea here is that we need a random point that the prover cannot choose in advance, and then check that the data committed to by the KZG blob commitment and by the ZK rollup's commitment evaluates to the same value at that point.
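Roughly what the precompile does (paraphrasing the EIP-4844 draft; the exact input layout may differ in later revisions, and kzg_to_versioned_hash / verify_kzg_proof are spec helpers):

```python
def point_evaluation_precompile(input: bytes) -> bytes:
    # Verify p(z) = y, given a commitment to the polynomial p and a KZG proof.
    versioned_hash = input[:32]
    z = input[32:64]
    y = input[64:96]
    commitment = input[96:144]   # 48-byte KZG commitment
    proof = input[144:192]       # 48-byte KZG proof
    # The commitment must match the versioned hash referenced by the caller
    assert kzg_to_versioned_hash(commitment) == versioned_hash
    # Check the KZG opening proof
    assert verify_kzg_proof(commitment, z, y, proof)
    return b""
```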
FYI:
Gas fees are how Ethereum prices its resources. At present we use a one-dimensional pricing method (only a base fee), but with the growth of history and the addition of data sharding, this becomes very inefficient.
The resources of Ethereum can be classified into:
For EIP-4844, we mainly consume two resources:
History growth is not a big problem: blobs are not stored permanently, only for a roughly one-month retention period. Pricing bandwidth will be the bigger issue.
EIP-4844 introduces a multi-dimensional EIP-1559 fee market, where there are two resources, gas and blobs, with separate floating gas prices and separate limits.
Just like this:
PR 5707: Fee Market Update:
https://github.com/ethereum/EIPs/commit/7e8d2629508c4d571f0124e4fc67a9ac13ee8b9a
Introduce data gas as a second type of gas, used to charge for blobs (1 byte = 1 data gas)
Data gas has its own EIP-1559-style dynamic pricing mechanism:
- MAX_DATA_GAS_PER_BLOCK, with a target of half of that
- max_fee_per_data_gas: uint256
- MIN_DATA_GASPRICE, so that one blob costs at least ~0.00001 ETH
- excess_data_gas instead of basefee

We use the excess_data_gas header field to store the persistent data needed to compute the data gas price. For now, only blobs are priced in data gas.
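A sketch of the pricing mechanism along the lines of the EIP-4844 fee-market update (constant values below are placeholders consistent with the "~0.00001 ETH per blob" floor above; exact names and values may differ):

```python
MIN_DATA_GASPRICE = 10**8            # placeholder: 131,072 data gas per blob * 1e8 wei ≈ 0.000013 ETH
DATA_GASPRICE_UPDATE_FRACTION = 2225652  # placeholder update fraction

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    # Integer approximation of factor * e**(numerator / denominator)
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def get_data_gasprice(excess_data_gas: int) -> int:
    # The data gas price floats exponentially with the running excess over the per-block target,
    # independently of the normal EIP-1559 basefee.
    return fake_exponential(MIN_DATA_GASPRICE, excess_data_gas, DATA_GASPRICE_UPDATE_FRACTION)
```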