# Light client roadmap for Electra
## Light client history
With Altair, the foundation for light clients was laid by introducing the concept of a sync committee. Roughly once per day, a subset of 512 validators is sampled; on each slot, this committee attests to the latest block. This allows light clients to follow the head of the chain without the ability to process full blocks, and without having to download the public keys of all other validators. The sync process is explained here:
- https://www.youtube.com/watch?v=85MeiMA4dD8 (video)
- https://github.com/ethereum/consensus-specs/blob/dev/specs/altair/light-client/sync-protocol.md (specs)
With Capella, the light client sync protocol was extended to also allow tracking the execution block corresponding to the latest consensus block. This enabled use cases for accelerating sync: when the beacon node is behind (e.g., starting from an older checkpoint, or after a prolonged power outage), it can use a light client to quickly obtain the latest execution block hash from the network and forward it to the execution node. The execution node can then immediately sync to the latest state, instead of having to apply all blocks one by one. Erigon also supports using an embedded light client to sync without any connected beacon node.
- https://github.com/ethereum/consensus-specs/blob/dev/specs/capella/light-client/sync-protocol.md (specs)
However, trust-minimized access to the latest execution block has much higher untapped potential.
- https://www.youtube.com/watch?v=ZHNrAXf3RDE (video)
## Light client data backfill
Beacon nodes can only compute light client data efficiently while the corresponding blocks are being imported. The computation requires access to the parent block's post-state, which is expensive (seconds) to restore at a later time, or may be entirely unavailable (e.g., when the block predates the node's checkpoint state). For light client data availability to improve, it is vital to develop a reliable backfill protocol. The currently available sync protocol is not viable for this, as it only supports forward sync: when used to query past data, an attacker could present a fake history that eventually leads to canonical data but is non-canonical in the past.
The first step towards backfill is to ensure that only canonical data gets collected. Orphaned blocks should not be included in light client data, even if they had high sync committee support at some point, because there is no simple way to prove their relation to the canonical chain. A proposal for this initial step exists and comes with a Python reference implementation that allows testing client implementations in a standardized way, paving the way for beacon nodes other than Lodestar and Nimbus to also adopt light client data serving.
- https://github.com/ethereum/consensus-specs/pull/3553
Once only canonical data is made available, proofs can be created confirming that a particular `sync_aggregate` was included in the `signature_slot`'s `SignedBeaconBlock` at `message.body.sync_aggregate`, and that this block is part of a later `BeaconState`'s `historical_summaries.block_roots`.
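For reference, verifying such inclusion proofs boils down to the standard Merkle branch check that consensus clients already implement; the helper below mirrors `is_valid_merkle_branch` from the phase0 specs (with simplified types):

```python
from hashlib import sha256

def hash(data: bytes) -> bytes:
    return sha256(data).digest()

def is_valid_merkle_branch(leaf: bytes, branch: list[bytes],
                           depth: int, index: int, root: bytes) -> bool:
    # Walk from the leaf up to the root, hashing in the sibling from the
    # branch on the correct side at each level.
    value = leaf
    for i in range(depth):
        if index // (2**i) % 2:
            value = hash(branch[i] + value)
        else:
            value = hash(value + branch[i])
    return value == root
```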
However, that still allows some ambiguity: each `SyncCommitteePeriod` spans 8192 slots, and different peers may provide different canonical blocks. While this is not a safety problem per se, caching infrastructure (e.g., the Portal network) would greatly benefit from a single deterministic block representing each historical period.
The `is_better_update` function provides such a selector, but including a proof that a particular block actually is the best update for its period involves quite a bit of data. Notably, all sync aggregates for all blocks must be sent, together with corresponding inclusion proofs. And, a proof for the transition when the first block within the period got finalized must also be included.
To simplify this, the `BeaconState` could be minimally extended to incrementally track the best sync aggregate within the current period, and also to attest to the previous period's best sync aggregate. The backfill protocol would then define a libp2p / beacon-API endpoint that serves, for past periods:
1. A `LightClientUpdate` from requested `period` + 1 that proves that the entirety of `period` is finalized.
2. `BeaconState.historical_summaries[period].block_summary_root` at (1)'s `attested_header.beacon.state_root` + Merkle proof.
3. For each epoch's slot 0 block within requested `period`, the corresponding `LightClientHeader` + Merkle multi-proof for the block's inclusion into (2)'s `block_summary_root`.
4. For each of the entries from (3) with `beacon.slot` within `period`, the `current_sync_committee_branch` + Merkle proof for constructing `LightClientBootstrap`.
5. If (4) is not empty, the requested `period`'s `current_sync_committee`.
6. The best `LightClientUpdate` from `period`, if one exists, + Merkle proof that its `sync_aggregate` + `signature_slot` is selected as the canonical best one in (1)'s `attested_header.beacon.state_root`.
Only the proof in (6) depends on `BeaconState` tracking the best light client data. This modification would enshrine the logic of a subset of `is_better_update`, but does not require adding any `LightClientXyz` data structures to the `BeaconState`.
Such a modification is minimal and adds only a small step to the state transition functions for blocks and epochs. It could be added in Electra without interfering with other new features.
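A minimal sketch of what this incremental tracking could look like, assuming hypothetical field names; the actual design and the exact enshrined subset of `is_better_update` would be defined by the spec change:

```python
# Hypothetical sketch: during block processing, keep the best sync
# aggregate seen within the current period. All names are illustrative,
# not from an accepted spec.
from dataclasses import dataclass

@dataclass
class SyncAggregate:
    sync_committee_bits: int  # bitvector packed into an int for brevity
    sync_committee_signature: bytes

@dataclass
class BestSyncAggregateTracker:
    best_sync_aggregate: SyncAggregate
    best_signature_slot: int

def track_best_sync_aggregate(tracker: BestSyncAggregateTracker,
                              aggregate: SyncAggregate,
                              signature_slot: int) -> None:
    # Enshrined subset of `is_better_update`: higher sync committee
    # participation wins.
    new_bits = bin(aggregate.sync_committee_bits).count("1")
    old_bits = bin(tracker.best_sync_aggregate.sync_committee_bits).count("1")
    if new_bits > old_bits:
        tracker.best_sync_aggregate = aggregate
        tracker.best_signature_slot = signature_slot
```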
Furthermore, a libp2p endpoint could be added to obtain a recent `LightClient(Optimistic|Finality)Update` that was missed, together with an ancestry proof relative to the latest known update. Such an endpoint could be added between forks and does not require extending `BeaconState`.
## Checkpoint sync security
Currently, users are required to look up a recent finalized block root from a trusted explorer to initialize their beacon nodes. Because that may be perceived as complicated, many users simply fall back to cloning whatever `/finalized` state a trusted server happens to serve. This is convenient, but not secure.
Once a backfill protocol for light client data exists, historical light client data can be assumed to be reliably available. This allows initializing a light client from a deeply finalized block root that is 1-2 years old. Research suggests that the weak subjectivity period for light client data can be much longer than for the full consensus protocol.
- https://github.com/metacraft-labs/DendrETH/tree/main/docs/long-range-syncing
Extending the network definition files with a deeply finalized trusted checkpoint block root would enable a much more secure default behaviour.
- https://github.com/ethereum/consensus-specs/blob/dev/configs/mainnet.yaml
Namely, the following entries would have to be added to `config.yaml`:
1. `GENESIS_TIME` and `GENESIS_VALIDATORS_ROOT`, to allow computing the fork digests necessary for signature validation and to allow initialization of a beacon clock.
2. `TRUSTED_BLOCK_ROOT`, the root of a non-controversial historical epoch boundary block that could be baked into consensus clients as a default starting point for initializing a light client.
The `TRUSTED_BLOCK_ROOT` would have to be periodically updated, e.g., once per hard fork (or, whenever there was no sync committee majority for an entire period), and could point to a block 1-2 hardforks ago. The new config keys could be optional (or be initialized with stub values), to keep supporting pre-genesis network definitions and networks that do not wish to opt in to this security feature.
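As a sketch, the additional entries might look as follows (the genesis values shown are mainnet's; the `TRUSTED_BLOCK_ROOT` is a placeholder):

```yaml
# Hypothetical config.yaml additions (key names as proposed above)
GENESIS_TIME: 1606824023
GENESIS_VALIDATORS_ROOT: 0x4b363db94e286120d76eb905340fdd4e54bfe9f06bf33ff6cf5ad27f511bfe95
TRUSTED_BLOCK_ROOT: 0x...  # deeply finalized epoch boundary block root
```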
The checkpoint beacon state would still be downloaded from an external beacon API, but that API would no longer have to be trusted to provide canonical content, as is the case today when users blindly download the `/finalized` state.
Nimbus already supports trust-minimized checkpoint syncing through `--trusted-block-root` and `--external-beacon-api-url` launch arguments.
- https://nimbus.guide/start-syncing.html#checkpoint-sync
The new config fields could be added at any time, irrespective of the fork schedule, but they only work reliably once a light client data backfill protocol is implemented. As the backfill protocol requires a change to `BeaconState`, the earliest timing would be Electra.
## Beacon state snap sync
Once a decentralized mechanism to obtain a recent finalized block header by default, without user configuration, is adopted, it becomes possible to start removing any dependency on an external beacon API.
A snap sync mechanism could be developed for obtaining the `BeaconState` via libp2p. The state would be deterministically split into chunks that can be downloaded in parallel from multiple peers and verified using Merkle proofs, as sketched below.
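A rough sketch of what chunk verification could look like, assuming a hypothetical `StateChunk` message where each chunk carries a fixed-depth subtree of state leaves plus a Merkle branch to the state root (no such protocol is specified yet):

```python
from dataclasses import dataclass
from hashlib import sha256

CHUNK_DEPTH = 8  # illustrative: each chunk covers 2**8 = 256 leaves

def hash_pair(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()

def subtree_root(leaves: list[bytes]) -> bytes:
    # Merkleize a full subtree of 2**CHUNK_DEPTH leaves.
    nodes = leaves
    while len(nodes) > 1:
        nodes = [hash_pair(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

@dataclass
class StateChunk:
    index: int            # chunk position within the state's leaf space
    leaves: list[bytes]   # 2**CHUNK_DEPTH leaves, 32 bytes each
    branch: list[bytes]   # Merkle branch from the subtree root to the state root

def verify_chunk(chunk: StateChunk, state_root: bytes, total_depth: int) -> bool:
    # Chunks verify independently, so they can be fetched from many peers
    # in parallel and validated as they arrive.
    value = subtree_root(chunk.leaves)
    for i in range(total_depth - CHUNK_DEPTH):
        if (chunk.index >> i) & 1:
            value = hash_pair(chunk.branch[i], value)
        else:
            value = hash_pair(value, chunk.branch[i])
    return value == state_root
```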
Once that is in place, the default sync experience (when users don't configure anything) can be replaced: from the current, non-secure genesis sync to a process where:
1. Light client is initialized from `TRUSTED_BLOCK_ROOT` (`config.yaml`)
2. Light client syncs to current period
3. Canonical checkpoint state for snap sync is determined (maybe the best canonical `LightClientUpdate` from previous period?)
4. Checkpoint state is synced using snap sync
5. Node is transitioned into a full node, same as if it had been checkpoint synced
6. Forward sync to apply all future blocks, backward sync to backfill past blocks / blobs / light client data
7. During sync, keep using light client to keep execution node synced while beacon node is still behind
8. Sync deposit logs from execution node once they become available. No longer necessary with https://eips.ethereum.org/EIPS/eip-6110 (Electra)
## Slashings for sync committee messages
While libp2p gossip validation rules make it difficult for non-canonical light client data to spread, there is no slashing mechanism that punishes dishonest sync committee members who sign malicious histories.
Slashings would add more security, but cannot provide absolute security. The maximum applicable penalty is 512 (sync committee size) × 32 ETH (full validator balance) = 16,384 ETH, about 50 million USD at the time of writing. Furthermore, validators who have already exited and commit slashable offences cannot be penalized.
Nevertheless, slashings may help the protocol if they can be designed in a simple-to-verify way.
- https://github.com/ethereum/consensus-specs/issues/3321
The research is not in a state ready for inclusion into Electra.
## Forward compatibility for SSZ data structures
Applications building on top of Ethereum typically only need access to tiny parts of information from the blockchain. For example, a decentralized staking pool may want to check whether a particular validator has been slashed, by extracting that bit from the `BeaconState`. Such data needs to be combined with a corresponding Merkle proof.
Right now, those Merkle proofs are not necessarily stable across forks. Adding new fields to an SSZ `Container` leads to re-assignment of generalized indices whenever the total number of fields reaches a new power of 2, and removal of fields leads to re-assignment of the indices of all subsequent fields. Applications need to be wary of such changes and must be continuously maintained even when none of their functionality changes.
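To illustrate: an SSZ `Container` with `N` fields is merkleized over `next_pow_of_two(N)` leaves, so field `i` has generalized index `next_pow_of_two(N) + i`, and crossing a power of 2 shifts every index:

```python
def next_pow_of_two(n: int) -> int:
    return 1 if n <= 1 else 1 << (n - 1).bit_length()

def field_gindex(num_fields: int, field_index: int) -> int:
    # Generalized index of a top-level Container field.
    return next_pow_of_two(num_fields) + field_index

# With 4 fields, field 0 has gindex 4; adding a 5th field moves it to 8,
# invalidating every previously issued Merkle proof.
assert field_gindex(4, 0) == 4
assert field_gindex(5, 0) == 8
```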
EIP-7495 proposes a mechanism to make generalized indices forward compatible. It also enables the introduction of optional values and allows reordering of fields within `Variant` without affecting serialization or merkleization.
- https://eips.ethereum.org/EIPS/eip-7495
A reference implementation for Remerkleable (used in Ethereum consensus-specs) is available.
- https://github.com/ethereum/EIPs/tree/master/assets/eip-7495
Adopting `StableContainer` requires a one-time re-indexing of the underlying `Container` at the time of introduction. From then on, applications consuming Merkle proofs no longer need to update their proof verifiers to remain compatible with future forks. That includes difficult-to-update verifiers based on zero-knowledge circuits or embedded in hardware wallets.
For Electra, `BeaconState`, `BeaconBlockBody`, and `ExecutionPayload` could be candidates to be wrapped. These structures frequently change and are also the ones containing the most useful data.
The optional mechanism could also be used to prune keys for fully exited validators from the beacon state. Such a change could be applied at a later fork, if deemed too risky for Electra.
The current Verkle design proposes the use of optionals in the beacon state. EIP-7495 may be a more future-proof alternative to the current proposal based on EIP-6475. Alternatively, removing the need for optionals in Verkle would avoid either proposal.
## Provable JSON-RPC execution API
Applications consuming the Ethereum JSON-RPC are required to rely on a trusted server, because many response items do not bundle a correctness proof.
These are the items of interest:
1. Execution block header
2. Execution state
3. Transactions
4. Receipts
5. Withdrawals
6. Deposits (with EIP-6110, Electra)
For (1), consistency of `eth_getBlockByHash` can be verified by re-computing the execution block hash and cross-checking it against the `block_hash` in `LightClientHeader.execution`. Note that at this time, the fields `transactions_root`, `receipts_root` and `withdrawals_root` use different hashing mechanisms in the consensus `ExecutionPayloadHeader` (SSZ Merkleization) and the execution block header (Merkle-Patricia Trie). Further note that `eth_getBlockByHash` also includes all transaction hashes, as well as all withdrawal objects.
For (2), popular JSON-RPC providers offer `eth_getProof`, but this is not standardized on https://ethereum.org/en/developers/docs/apis/json-rpc. Verkle will replace it, so there is a path forward to allow responses to come with proofs. Such proofs should be standardized and made forward compatible to avoid a continuous maintenance burden for applications.
For (3), there is no commitment to a transaction ID anywhere on the blockchain; it is a concept that exists purely in JSON-RPC. So, to verify that a particular transaction ID is included, the entire transaction must be downloaded, along with all other transactions in the block, as otherwise the MPT cannot be recreated to validate its root against `transactions_root`. Essentially, this means `eth_getBlockByHash` with `true` as the second argument.
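For illustration, recomputing the execution `transactions_root` requires rebuilding the trie over every transaction in the block (a sketch assuming the `rlp` and py-trie `trie` packages):

```python
import rlp
from trie import HexaryTrie

def compute_transactions_root(raw_transactions: list[bytes]) -> bytes:
    # The execution transactions trie is keyed by the RLP-encoded
    # transaction index, with the raw transaction bytes as values.
    t = HexaryTrie(db={})
    for index, raw_tx in enumerate(raw_transactions):
        t.set(rlp.encode(index), raw_tx)
    return t.root_hash
```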
For (4), the situation is similar to (3), except that a series of `eth_getTransactionReceipt` calls must be used because there is no batch API. And, when there are reorgs, the responses may refer to a different instance of the transaction that is part of an orphaned block, so retries are needed.
For (5) and (6), these are small objects and come as part of `eth_getBlockByHash`.
While it is possible to evolve JSON-RPC somewhat to include more compact proofs, the general clunkiness cannot be solved incrementally. Items such as a transaction's `from` address, or an individual receipt's actual gas usage, are also not available in on-chain data as of Deneb, and thus can only be computed, not directly proven.
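For example, the `from` address can only be recovered from the signature, which requires the entire signed transaction, e.g., with `eth_account`:

```python
from eth_account import Account

def recover_sender(raw_signed_tx: bytes) -> str:
    # The sender is not stored on-chain; it can only be recovered from the
    # transaction signature over the full signed transaction.
    return Account.recover_transaction(raw_signed_tx)
```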
As (2) will have a breaking change with Verkle, that opportunity could also be used to modernize (3) through (6), and to fix the inconsistent roots in (1). Applications that rely on proofs will have to be updated anyway (due to Verkle).
A series of EIPs describe a way to make JSON-RPC provable:
- https://eips.ethereum.org/EIPS/eip-6493 (`SignedTransaction` / `PooledTransaction` / `Receipt`)
- https://eips.ethereum.org/EIPS/eip-6404 (`transactions_root`)
- https://eips.ethereum.org/EIPS/eip-6466 (`receipts_root`)
- https://eips.ethereum.org/EIPS/eip-6465 (`withdrawals_root`)
The EIPs are currently under review and could be included in Electra. A demonstration of these EIPs is available on the web:
- https://eth-light.xyz
Wallet software can continue to use existing transaction types (they would still be converted to SSZ during payload building), or can choose to use the SSZ format natively when signing transactions.
The existing transaction ID (JSON-RPC/devp2p artifact) remains. A new transaction hash gets added that can be used with proofs. For SSZ transactions, the transaction hash is equal to the transaction ID. For RLP transactions, the hash is different. JSON-RPC responses related to SSZ transactions are backward-compatible and can be processed by existing infrastructure.
Applications that have a trusted JSON-RPC server available can continue to trust that server's responses without validating the extra proof fields. Only applications that validate proofs today need to be updated to consume the new format. Proofs build on top of the `StableContainer` proposal from EIP-7495 and are therefore forward compatible, but they could also be built on a basis other than EIP-7495.
## Decentralized JSON-RPC execution API
Once the JSON-RPC responses contain correctness proofs, it becomes possible for the JSON-RPC API to be offered in a decentralized manner. Providers can no longer lie to validating clients!
This improves privacy, because validating clients are no longer subject to the privacy policies of centralized JSON-RPC providers, which may link wallet information across multiple requests with the requesting IP address. Combined with libp2p privacy efforts such as Dandelion, and with spreading requests across different peers, further advances can be made.
Decentralized JSON-RPC is just an idea for now, and won't be ready for Electra. Any developments would not have to be tied to a fork either, as it does not affect consensus rules. However, they depend on the availability of JSON-RPC execution API correctness proofs.
## SSZ query language
To generalize an API in the style of `eth_getProof` for SSZ, a query language is necessary that allows:
- Requesting any sub-tree of an SSZ object (notably, only fields that the client knows about, to stay forward compatible)
- Requesting summaries (`hash_tree_root`) instead of fully expanded subtrees, on a per-field basis
- Filtering, e.g., for a transaction with a certain root
- Simple back-references, e.g., a receipt at the same index in its tree as the matched transaction in the other tree
- Specifying where the proof should be anchored
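Purely as an illustration of these requirements, a query might look roughly like this (every key name here is hypothetical; no such language is specified):

```python
# Hypothetical query shape, mapping to the requirements above.
query = {
    "anchor": "BeaconBlockBody.execution_payload",  # where the proof is rooted
    "select": {
        "transactions": {
            "filter": {"hash_tree_root": "0x..."},  # filtering by root
            "summarize": False,                      # fully expand this subtree
        },
        "receipts": {
            "at_index_of": "transactions",           # back-reference to the match
            "summarize": True,                       # hash_tree_root only
        },
    },
}
```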
Such an API could be made available for both consensus and execution REST APIs.
When using forward compatible data types, e.g., based on EIP-7495, the request / response types can be statically synthesized from a given request. Notably, EIP-7495 style implementations of "union" style types would allow single-roundtrip responses, without first having to query the runtime kind before the tree shape (and, thus, the response type) can be computed. Another SSZ extension may be necessary to indicate which parts of the tree have been summarized, e.g., to support filtering responses.
Nimbus currently supports very simple queries as part of the verifying web3signer to ensure that Graffiti and fee recipient are correct when blocks are signed:
- https://nimbus.guide/web3signer.html#verifying-web3signer
However, a full language specification does not exist at this time; this won't be ready for Electra and also does not need to be synchronized with any given fork. The idea should be taken into consideration before creating all sorts of specialized proving endpoints, though.