The main downsides of the existing `Eth1Data` poll (the long delay imposed by the follow distance and the voting period, and the Merkle proof machinery required to import deposits) are dictated by the necessity of maintaining a bridge between two disjoint blockchains, which is no longer the case post-Merge. This opens up an opportunity to improve the deposit processing flow.
The EL is responsible for surfacing receipts of deposit transactions and including them in an execution block. In particular, the EL collects DepositEvents emitted by transactions calling the deposit contract deployed in the network and, after transaction execution, either adds them to the block structure (when building a block) or runs the necessary validations over the list of events. The computational cost of this addition should be marginal, as the EL already makes a pass over receipts to obtain `receipts_root`.
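As a rough illustration, event collection on the EL side could look like the sketch below, producing the `Deposit` container defined further down. The `receipt.logs` / `log.topics` accessors are assumptions about the EL's internal receipt model; the byte offsets follow the fixed ABI layout of the deposit contract's `DepositEvent(bytes,bytes,bytes,bytes,bytes)` log data.

```python
# Mainnet deposit contract address; becomes a network configuration parameter
DEPOSIT_CONTRACT_ADDRESS = bytes.fromhex('00000000219ab540356cbb839cbe05303d7705fa')
# keccak256('DepositEvent(bytes,bytes,bytes,bytes,bytes)')
DEPOSIT_EVENT_TOPIC = bytes.fromhex('649bbc62d0e31342afea4e5cd82d4049e7e1ee912fc0889aa790803be39038c5')

def parse_deposit(data: bytes) -> Deposit:
    # DepositEvent data is ABI-encoded as five dynamic `bytes` fields:
    # five offset words, then length-prefixed, zero-padded payloads
    return Deposit(
        pubkey=data[192:240],
        withdrawal_credentials=data[288:320],
        amount=int.from_bytes(data[352:360], 'little'),  # uint64, little-endian
        signature=data[416:512],
        index=int.from_bytes(data[544:552], 'little'),   # uint64, little-endian
    )

def collect_deposits(receipts) -> list:
    # Pass over receipts which the EL already does to compute receipts_root
    deposits = []
    for receipt in receipts:
        for log in receipt.logs:
            if log.address == DEPOSIT_CONTRACT_ADDRESS and log.topics[0] == DEPOSIT_EVENT_TOPIC:
                deposits.append(parse_deposit(log.data))
    return deposits
```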
The execution block header is equipped with a `deposits_root` field (a root of a list of deposit operations, akin to `withdrawals_root` in EIP-4895). The execution block body is equipped with a new `deposits` list containing elements of the following type:
```python
class Deposit(Container):
    pubkey: Bytes48
    withdrawal_credentials: Bytes32
    amount: uint64
    signature: Bytes96
    index: uint64
```
When validating a block, the EL runs two additional checks over `deposits_root`:

* the `deposits_root` value in the block header must be equal to the root of the deposits list obtained from the block execution
* `deposits_root` must be consistent with the `deposits` list of the corresponding block body

The address of the deposit contract becomes a network configuration parameter on the EL side.
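A minimal sketch of these two checks, assuming an SSZ `hash_tree_root` commitment over the deposits list (mirroring `withdrawals_root` in EIP-4895); `ExecutionBlock` and `execute_block` are illustrative names, and `collect_deposits` is the sketch above:

```python
def validate_deposits(block: ExecutionBlock) -> None:
    # Check 1: the header commitment must match the deposits surfaced
    # by executing the block's transactions
    receipts = execute_block(block).receipts  # hypothetical EL helper
    assert block.header.deposits_root == hash_tree_root(collect_deposits(receipts))

    # Check 2: the header commitment must be consistent with the deposits
    # list carried in the block body
    assert block.header.deposits_root == hash_tree_root(block.body.deposits)
```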
The following structures, methods and fields become deprecated:

* `Eth1Data`
* `Deposit`
* `process_eth1_data`
* `process_eth1_data_reset`
* `process_deposit`
* `BeaconBlockBody.deposits`
* `BeaconState.eth1_deposit_index`
#### `ExecutionPayload`
```python
class DepositReceipt(Container):
    pubkey: BLSPubkey
    withdrawal_credentials: Bytes32
    amount: Gwei
    signature: BLSSignature
    index: uint64

class ExecutionPayload(bellatrix.ExecutionPayload):
    deposits: List[DepositReceipt, MAX_TRANSACTIONS_PER_PAYLOAD]
```
#### `BeaconState`
```python
class BeaconState(bellatrix.BeaconState):
    pending_deposits: List[DepositReceipt, MAX_PENDING_DEPOSITS]
```
#### `process_pending_deposit`
```python
def get_validator_from_deposit(deposit: DepositReceipt) -> Validator:
    # Modified function body
    ...

def process_pending_deposit(state: BeaconState, deposit: DepositReceipt) -> None:
    pubkey = deposit.pubkey
    amount = deposit.amount
    validator_pubkeys = [v.pubkey for v in state.validators]
    if pubkey not in validator_pubkeys:
        # Verify the deposit signature (proof of possession) which is not checked by the deposit contract
        deposit_message = DepositMessage(
            pubkey=deposit.pubkey,
            withdrawal_credentials=deposit.withdrawal_credentials,
            amount=deposit.amount,
        )
        domain = compute_domain(DOMAIN_DEPOSIT)  # Fork-agnostic domain since deposits are valid across forks
        signing_root = compute_signing_root(deposit_message, domain)
        if not bls.Verify(pubkey, signing_root, deposit.signature):
            return
        # Add validator and balance entries
        state.validators.append(get_validator_from_deposit(deposit))
        state.balances.append(amount)
    else:
        # Increase balance by deposit amount
        index = ValidatorIndex(validator_pubkeys.index(pubkey))
        increase_balance(state, index, amount)
```
#### `process_block`
```python
def process_deposit_receipt(state: BeaconState, deposit: DepositReceipt) -> None:
    state.pending_deposits.append(deposit)

def process_operations(state: BeaconState, body: BeaconBlockBody) -> None:
    # Verify that outstanding deposits are processed up to the maximum number of deposits
    assert len(body.deposits) == min(MAX_DEPOSITS, state.eth1_data.deposit_count - state.eth1_deposit_index)

    def for_ops(operations: Sequence[Any], fn: Callable[[BeaconState, Any], None]) -> None:
        for operation in operations:
            fn(state, operation)

    for_ops(body.proposer_slashings, process_proposer_slashing)
    for_ops(body.attester_slashings, process_attester_slashing)
    for_ops(body.attestations, process_attestation)
    for_ops(body.deposits, process_deposit)  # Used by transition logic
    for_ops(body.execution_payload.deposits, process_deposit_receipt)  # New in Deposits
    for_ops(body.voluntary_exits, process_voluntary_exit)

def process_pending_deposits(state: BeaconState) -> None:
    # We may not cap deposit processing at all,
    # it's done for compatibility with the existing solution.
    # Note: this scheme doesn't require merkle proof validation
    # which reduces computation complexity
    for deposit in state.pending_deposits[:MAX_DEPOSITS]:
        process_pending_deposit(state, deposit)
    state.pending_deposits = state.pending_deposits[MAX_DEPOSITS:]

def process_block(state: BeaconState, block: BeaconBlock) -> None:
    process_block_header(state, block)
    process_randao(state, block.body)
    process_eth1_data(state, block.body)  # Used by transition logic
    process_pending_deposits(state)  # New in Deposits
    process_operations(state, block.body)
```
During the transition from the old `Eth1Data` poll mechanism to the new one, clients will have to run both machineries in parallel until the `Eth1Data` poll period overlaps with a span of blocks containing the new `deposits` list.
#### `process_eth1_data`
```python
def process_eth1_data(state: BeaconState, body: BeaconBlockBody) -> None:
    # Stop voting on Eth1Data
    block = get_beacon_block_by_execution_block_hash(state.eth1_data.block_hash)
    if compute_epoch_at_slot(block.slot) >= DEPOSITS_FORK_EPOCH:
        return

    state.eth1_data_votes.append(body.eth1_data)
    if state.eth1_data_votes.count(body.eth1_data) * 2 > EPOCHS_PER_ETH1_VOTING_PERIOD * SLOTS_PER_EPOCH:
        state.eth1_data = body.eth1_data
```
#### `process_pending_deposits`
```python
def process_pending_deposits(state: BeaconState) -> None:
    # Wait for the last Eth1Data poll to happen
    block = get_beacon_block_by_execution_block_hash(state.eth1_data.block_hash)
    if compute_epoch_at_slot(block.slot) < DEPOSITS_FORK_EPOCH:
        return

    # Wait for the old deposit queue to drain
    if state.eth1_deposit_index < state.eth1_data.deposit_count:
        return

    # Filter the overlapped deposit span out of the queue
    state.pending_deposits = [d for d in state.pending_deposits if d.index >= state.eth1_deposit_index]

    # We may not cap deposit processing at all,
    # it's done for compatibility with the existing solution.
    # Note: this scheme doesn't require merkle proof validation
    # which reduces computation complexity
    for deposit in state.pending_deposits[:MAX_DEPOSITS]:
        process_pending_deposit(state, deposit)
    state.pending_deposits = state.pending_deposits[MAX_DEPOSITS:]
```
Given that a Merkle proof is no longer required to process a deposit, data complexity per deposit is reduced by roughly 1 KB.

With the cost of a deposit transaction roughly equal to 60k gas and a gas limit of 30m gas, a block may have at most 500 deposit operations. This is 192 * 500 = ~94 KB per block (~47 KB at the target block size) vs 1240 * 16 = ~19 KB per block today.
Alternatively, the deposit queue may be moved from the CL state to the EL state to rate-limit the number of deposits per block.
About 410k deposits have been made to date, so the overall deposit data would take ~75 MB.
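A quick sanity check of the figures above (192 bytes per in-protocol deposit vs ~1240 bytes per `Deposit` with its Merkle proof today):

```python
DEPOSIT_SIZE = 48 + 32 + 8 + 96 + 8            # pubkey + credentials + amount + signature + index = 192 bytes
MAX_DEPOSITS_PER_BLOCK = 30_000_000 // 60_000  # 500 deposits at the 30m gas limit

print(DEPOSIT_SIZE * MAX_DEPOSITS_PER_BLOCK)       # 96_000 bytes, ~94 KB per full block
print(DEPOSIT_SIZE * MAX_DEPOSITS_PER_BLOCK // 2)  # 48_000 bytes, ~47 KB at the target block size
print(1240 * 16)                                   # 19_840 bytes, ~19 KB per block today
print(DEPOSIT_SIZE * 410_000 // 2**20)             # 75, i.e. ~75 MB of historical deposit data
```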
A pubkey cache is used by all CL clients to optimise the deposit processing flow and in other places. Recent entries of the pubkey cache will have to be invalidated upon re-org, as a validator's (index, pubkey) pair becomes fork dependent.
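As an illustration only (not any client's actual design), splitting the cache into a finalized part and an unfinalized part makes the invalidation cheap: entries backed by a finalized state can never be re-orged out, while the rest is simply dropped.

```python
class PubkeyCache:
    """Illustrative pubkey -> validator index cache with re-org support."""

    def __init__(self):
        self.finalized: Dict[BLSPubkey, ValidatorIndex] = {}
        self.unfinalized: Dict[BLSPubkey, ValidatorIndex] = {}

    def get(self, pubkey: BLSPubkey) -> Optional[ValidatorIndex]:
        index = self.finalized.get(pubkey)
        return index if index is not None else self.unfinalized.get(pubkey)

    def add(self, pubkey: BLSPubkey, index: ValidatorIndex) -> None:
        # New validators enter via not-yet-finalized blocks
        self.unfinalized[pubkey] = index

    def on_finalized(self, state: BeaconState) -> None:
        # Promote entries proven by the finalized state
        for index, validator in enumerate(state.validators):
            self.finalized[validator.pubkey] = ValidatorIndex(index)
        self.unfinalized.clear()

    def on_reorg(self) -> None:
        # (index, pubkey) pairs added on the abandoned branch may not exist,
        # or may map to different indices, on the new canonical branch
        self.unfinalized.clear()
```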
#### `Eth1Data` and voting

Another path was explored to reuse the `eth1data` machinery as much as possible. In this path, the block's `eth1data` vote would be passed to the EL via the engine API as part of the EL validations. A goal of this path is also to reduce the voting period and follow distance.

In this investigation we realized that although this would work in theory, in practice it would not allow us to reduce the follow distance from its current value. This is because CL clients maintain a deposit cache for block production, and this cache today assumes no re-orgs at its depth. If we reduced the follow distance significantly (e.g. to 1 slot), it would put the deposit cache in the zone of re-orgs and require significant reengineering of an already error-prone component of CL clients.