# Implement EIP-7917 - Deterministic proposer lookahead

Mentors - *Justin Drake* and *Lin Oshitani*

Interested Permissioned Fellow - [Harsh Pratap Singh](https://harsh-ps-2003.bearblog.dev/)

[EIP-7917](https://eips.ethereum.org/EIPS/eip-7917) is a mechanism to pre-calculate and store a deterministic proposer lookahead in the beacon state at the start of every epoch, eliminating the non-determinism in proposer selection that can arise from effective balance changes within an epoch.

### Background and Motivation

Ethereum uses RANDAO, a built-in randomness source, to fairly assign validators to propose blocks and perform other duties on the Beacon Chain. Even though RANDAO seeds are known in advance, sudden changes in validator effective balances can shift who becomes proposer, even at the last moment. That unpredictability causes problems.

EIP-7917 addresses a limitation of current Ethereum consensus implementations: the beacon proposer schedule for epoch N+1 cannot be fully predicted from the beacon state during epoch N. Unlike RANDAO seeds, which have a deterministic lookahead of at least `MIN_SEED_LOOKAHEAD == 1` epochs, effective balance changes within an epoch can alter proposer schedules unpredictably. This is particularly problematic for based preconfirmation protocols that rely on stable, predictable proposer schedules. The proposal fixes this design oversight while enabling on-chain access to proposer schedules via beacon roots and Merkle proofs.

[Grandine's architecture](https://youtu.be/1VlCMIZ5HlQ?si=lPohV8anCH090gSM), with its focus on high performance through parallelization and memory optimization, provides an ideal foundation for implementing this enhancement efficiently. The [client's use of structural sharing to avoid full state copies](https://docs.grandine.io/storage.html) will be particularly beneficial for managing the additional state required by the proposer lookahead.
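To see where the non-determinism comes from, consider a simplified sketch of the spec's proposer-selection loop. The real algorithm derives candidate order and random bytes from SHA-256 over the RANDAO seed; the `random_byte` function and candidate selection below are toy stand-ins, so all names and details here are illustrative, not Grandine's or the spec's actual code:

```rust
/// Maximum effective balance in Gwei (32 ETH), as in the consensus specs.
const MAX_EFFECTIVE_BALANCE: u64 = 32_000_000_000;

/// Toy stand-in for the spec's hash-derived random byte -- NOT the real
/// SHA-256 construction, just something deterministic for illustration.
fn random_byte(seed: u64, i: u64) -> u64 {
    seed.wrapping_add(i).wrapping_mul(131) % 256
}

/// Simplified acceptance loop: a candidate wins with probability proportional
/// to its effective balance, so a balance change between lookahead computation
/// and the actual slot can change the winner -- the non-determinism that
/// EIP-7917 removes by snapshotting the schedule at the epoch boundary.
fn compute_proposer_index(indices: &[u64], balances: &[u64], seed: u64) -> u64 {
    let total = indices.len() as u64;
    let mut i = 0;
    loop {
        // Stand-in for compute_shuffled_index(i % total, total, seed)
        let candidate = indices[(seed.wrapping_add(i) % total) as usize];
        if balances[candidate as usize] * 255 >= MAX_EFFECTIVE_BALANCE * random_byte(seed, i) {
            return candidate;
        }
        i += 1;
    }
}

fn main() {
    let indices = vec![0, 1, 2, 3];
    let full = vec![MAX_EFFECTIVE_BALANCE; 4];
    // With all balances at the maximum, the first shuffled candidate always wins.
    let before = compute_proposer_index(&indices, &full, 7);
    // Drop that validator's effective balance and the schedule can change.
    let mut reduced = full.clone();
    reduced[before as usize] = 0;
    let after = compute_proposer_index(&indices, &reduced, 7);
    println!("before: {before}, after: {after}");
}
```

The point of the sketch is only that the outcome depends on `balances`, which can change mid-epoch even though `seed` is fixed one epoch in advance.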
### How it works

At the start of each epoch, Ethereum creates a list called `proposer_lookahead` showing which validators will propose blocks for the current and upcoming epochs. When a new epoch starts, the list is updated by removing the old epoch's proposers and appending the new ones. This list is stored on-chain, so smart contracts can easily see who proposes next, making confirmations faster and more reliable. Simple!

### Implementation

The codebase handles different consensus forks (bellatrix, capella, deneb, electra, and so on) through Cargo features and separate modules for each fork's data structures and logic. The `Preset` trait is used to manage fork-specific parameters. We will be targeting the electra fork for this implementation. `Preset` already owns the const `MIN_SEED_LOOKAHEAD` and a type-level `SlotsPerEpoch`. All arithmetic on epochs/slots is done with those associated consts; they are accessible in const contexts (typenum).

#### BeaconState Structure Modifications

Every fork keeps its own `BeaconState`; all of them derive SSZ and are thin POD structs that embed only fixed/variable-size SSZ types. A unifying trait `types::traits::BeaconState` exposes only field accessors – adding one more field means adding one more accessor there; nothing else in the rest of the code touches raw fields directly. Structs are wrapped in `Hc<T>` to cache roots and are managed through `Arc`, so an extra cache-friendly field does not change any invariants.

For [the BeaconState in the Electra fork](https://github.com/grandinetech/grandine/blob/f81a61c478a970061f2617f3c804ba5bbc84add0/types/src/deneb/beacon_state.rs#L27), we need to add the `proposer_lookahead` field to the `BeaconState` struct as a `List<ValidatorIndex, P::ProposerLookaheadSize>`. The size will be defined as a new associated type in the `Preset` trait.

```rust
#[derive(Debug, Clone, PartialEq, Eq, Hash, Encode, Decode)]
pub struct BeaconState<P: Preset> {
    // Existing fields...
    pub proposer_lookahead: List<ValidatorIndex, P::ProposerLookaheadSize>,
}
```

For that, we need to add `ProposerLookaheadSize` to the [Preset trait](https://github.com/grandinetech/grandine/blob/f81a61c478a970061f2617f3c804ba5bbc84add0/types/src/preset.rs#L56) to generify the size of the `proposer_lookahead` list. For mainnet presets, this will be `(MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH`.

The `proposer_lookahead` field stores validator indices for the full visible lookahead period, spanning from the current epoch through the next `MIN_SEED_LOOKAHEAD` epochs. This layout means `proposer_lookahead[0]` holds the proposer for the first slot of the current epoch, while `proposer_lookahead[SLOTS_PER_EPOCH + 4]` holds the proposer for the fifth slot of the next epoch.

#### Proposer Index Calculation Optimization

For pre-Electra forks, [compute_proposer_index_pre_electra](https://github.com/grandinetech/grandine/blob/f81a61c478a970061f2617f3c804ba5bbc84add0/helper_functions/src/misc.rs#L160) implements the VRF-style loop and is always reached via `get_beacon_proposer_index`. Electra reuses exactly the same call chain – there is no fork-specific override yet – so swapping the body of `get_beacon_proposer_index` behind a fork gate is enough to replace the calculation. [get_beacon_proposer_index](https://github.com/grandinetech/grandine/blob/f81a61c478a970061f2617f3c804ba5bbc84add0/helper_functions/src/accessors.rs#L457) computes the proposer for the current slot on demand by calling [compute_proposer_index](https://github.com/grandinetech/grandine/blob/f81a61c478a970061f2617f3c804ba5bbc84add0/helper_functions/src/misc.rs#L252), which shuffles the active validator indices based on a seed derived from the epoch. This is what we need to change to use the pre-computed lookahead instead of computing proposer indices on demand.
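As an aside, the `Preset` arithmetic for the list size can be sketched with associated consts. Grandine's real `Preset` uses typenum type-level integers for SSZ list bounds, so the actual change adds an associated type; the const-based version below is only meant to show the `(MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH` calculation, and all names are illustrative:

```rust
/// Hedged sketch of the Preset extension. Grandine's real Preset uses a
/// typenum associated type for SSZ list bounds; plain associated consts are
/// used here so the arithmetic is easy to see.
trait Preset {
    const MIN_SEED_LOOKAHEAD: u64;
    const SLOTS_PER_EPOCH: u64;
    /// New: total length of the `proposer_lookahead` list, covering the
    /// current epoch plus MIN_SEED_LOOKAHEAD future epochs.
    const PROPOSER_LOOKAHEAD_SIZE: u64 =
        (Self::MIN_SEED_LOOKAHEAD + 1) * Self::SLOTS_PER_EPOCH;
}

struct Mainnet;

impl Preset for Mainnet {
    const MIN_SEED_LOOKAHEAD: u64 = 1;
    const SLOTS_PER_EPOCH: u64 = 32;
}

fn main() {
    // Mainnet: (1 + 1) * 32 = 64 slots of visible lookahead.
    println!("{}", Mainnet::PROPOSER_LOOKAHEAD_SIZE);
}
```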
The EIP-7917 implementation would replace this with an O(1) lookup:

```rust
pub fn get_beacon_proposer_index<P: Preset>(state: &BeaconState<P>) -> ValidatorIndex {
    // The lookahead begins at the current epoch, so the slot's offset
    // within the epoch indexes directly into the list.
    let index = (state.slot % P::SlotsPerEpoch::U64) as usize;
    state.proposer_lookahead[index]
}
```

This modification eliminates the computational overhead of real-time proposer calculation while ensuring deterministic results.

#### Epoch Boundary Processing Enhancement

The epoch processing pipeline requires a new `process_proposer_lookahead` function integrated into the existing epoch transition logic. This function will be responsible for shifting the `proposer_lookahead` list and appending the new proposers for the next lookahead epoch.

```rust
fn process_proposer_lookahead<P: Preset>(state: &mut BeaconState<P>) -> Result<(), Error> {
    // Shift out the proposers of the epoch that just ended
    state.proposer_lookahead.drain(0..P::SlotsPerEpoch::USIZE);

    // Calculate new proposer indices for the furthest lookahead epoch
    let target_epoch = get_current_epoch(state) + P::MIN_SEED_LOOKAHEAD;
    let last_epoch_proposers = compute_proposer_indices(state, target_epoch)?;

    // Append the new proposer indices
    state.proposer_lookahead.extend(last_epoch_proposers);
    Ok(())
}
```

Or we can be a bit clever if profiling shows this is a bottleneck:

```rust
fn process_proposer_lookahead<P: Preset>(state: &mut impl BeaconState<P>) -> Result<()> {
    // Ring-buffer trick: keep a head pointer instead of shifting memory.
    let head = (state.slot() / P::SlotsPerEpoch::U64 % (P::MIN_SEED_LOOKAHEAD + 1)) as usize;
    let start = head * P::SlotsPerEpoch::USIZE;

    // 1. Generate proposers for epoch `current_epoch + P::MIN_SEED_LOOKAHEAD`
    //    before taking the mutable borrow of the lookahead list.
    let target_epoch = accessors::get_current_epoch(state) + P::MIN_SEED_LOOKAHEAD;
    let proposers = compute_proposer_indices::<P>(state, target_epoch)?;

    // 2. Overwrite the slice in place
    let lookahead = state.proposer_lookahead_mut();
    for (dst, src) in lookahead[start..start + P::SlotsPerEpoch::USIZE]
        .iter_mut()
        .zip(proposers)
    {
        *dst = src;
    }
    Ok(())
}
```

This keeps updates O(n) per epoch and reads O(1) per slot, while avoiding the reallocation caused by `Vec::drain`.

`process_proposer_lookahead` is called from within the `process_epoch` function when the electra fork is active. Then we can modify `get_beacon_proposer_index` to read the proposer index directly from `state.proposer_lookahead` when the electra fork is active, using `state.slot` to index into the lookahead list correctly. Slot processing for every fork calls [process_epoch](https://github.com/grandinetech/grandine/blob/f81a61c478a970061f2617f3c804ba5bbc84add0/transition_functions/src/combined.rs#L439) once per epoch boundary; from there everything is routed to the respective forks. So adding a single `unphased::process_proposer_lookahead(state)` call inside `process_epoch` covers every fork automatically.

### SSZ Serialization Integration

No DB schema change is needed – `BeaconState` is stored SSZ-encoded as a blob, and adding a field automatically changes the root and the branch proof. Old states are never loaded post-fork, so backwards compatibility is not required. Grandine's SSZ derives use field order, not field count; adding a trailing field is binary-stable in the sense that pre-fork states are rejected by the fork-phase logic anyway. All `BeaconState` SSZ containers already live behind fork gating, so we only touch the post-fork structs (`electra::BeaconState` in practice).

### Fork Activation

The first block after fork activation requires special handling to populate the complete `proposer_lookahead` field.
This involves calculating all lookaheads for epochs up to `MIN_SEED_LOOKAHEAD` ahead:

```rust
fn initialise_lookahead<P: Preset>(state: &mut impl BeaconState<P>) -> Result<(), Error> {
    // Validate state before initialization
    if !state.proposer_lookahead().is_empty() {
        return Err(Error::LookaheadAlreadyInitialized);
    }

    let current_epoch = accessors::get_current_epoch(state);
    let mut all_proposers = Vec::with_capacity(
        (P::MIN_SEED_LOOKAHEAD + 1) as usize * P::SlotsPerEpoch::USIZE,
    );

    for offset in 0..=P::MIN_SEED_LOOKAHEAD {
        let epoch = current_epoch + offset;
        let proposers = compute_proposer_indices::<P>(state, epoch)
            .map_err(|e| Error::ProposerCalculationFailed(epoch, e))?;
        all_proposers.extend(proposers);
    }

    state.proposer_lookahead_mut().extend(all_proposers);
    Ok(())
}
```

The implementation must maintain compatibility with pre-EIP-7917 beacon states during the transition period. Grandine's existing fork management infrastructure should be extended to handle the proposer lookahead field conditionally, based on the active fork version. We can't simply assume clean state transitions; the implementation must also handle potential race conditions and partial activation scenarios.

### Client Integration Benefits

* Beacon API Enhancements - Grandine's Beacon Node API implementation may require updates to expose proposer lookahead information to external consumers. This enables based preconfirmation protocols to access proposer schedules via simple API calls rather than complex state calculations. New API endpoints could include

```
GET /eth/v1/beacon/proposer_lookahead/{epoch}
GET /eth/v1/beacon/states/{state_id}/proposer_lookahead
GET /eth/v1/validator/proposer_duties_extended/{epoch}
```

and enhanced state responses including proposer lookahead data.

* Validator Client Integration - The built-in validator client component can benefit from the deterministic proposer schedules for improved block production timing and preparation. This integration supports Grandine's target users, particularly staking pools that run multiple validators per machine.

### Performance Considerations

* Memory Overhead - The `proposer_lookahead` field will be accessed frequently during block processing. It adds approximately `(MIN_SEED_LOOKAHEAD + 1) * SLOTS_PER_EPOCH * sizeof(ValidatorIndex)` bytes to each beacon state. With mainnet values, this represents 64 validator indices (2 epochs × 32 slots), requiring approximately 512 bytes of additional storage per state. Given Grandine's lightweight profile of ~2.5GB memory usage, this overhead is negligible.
* Computational Load Distribution - EIP-7917 shifts computational load from per-slot proposer calculation to per-epoch batch processing, which aligns well with Grandine's parallelization capabilities: the `compute_proposer_indices` function can potentially be parallelized across multiple CPU cores when calculating proposer schedules for entire epochs. [Grandine's CPU utilization patterns show the client effectively uses around 80% of CPU while syncing](https://arxiv.org/pdf/2311.05252), indicating capacity for additional epoch boundary processing.
* Structural Sharing Benefits - Since the proposer lookahead changes predictably at epoch boundaries, shared references can be maintained efficiently across state transitions.
* We can also consider a more cache-friendly data structure for frequent lookups.

Still, for peace of mind, I can implement comprehensive benchmarking during development:

* Memory usage profiling under various validator set sizes, realistically >1M validators
* CPU utilization measurements during epoch boundary processing
* Cache efficiency analysis with the additional lookahead field

I hope Tokio Tracing for debugging and performance analysis gets implemented, which would ease the observability work for me.
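The memory estimate above can be sanity-checked with a quick calculation, assuming `ValidatorIndex` is a `u64` as in the consensus specs:

```rust
/// Back-of-the-envelope size of the proposer_lookahead field per state.
fn lookahead_overhead_bytes(min_seed_lookahead: u64, slots_per_epoch: u64) -> u64 {
    // Each ValidatorIndex is a u64, i.e. 8 bytes.
    let index_size = std::mem::size_of::<u64>() as u64;
    (min_seed_lookahead + 1) * slots_per_epoch * index_size
}

fn main() {
    // Mainnet: 2 epochs * 32 slots * 8 bytes = 512 bytes per beacon state.
    println!("{}", lookahead_overhead_bytes(1, 32));
}
```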