# How Light Clients Track Ethereum's Consensus Layer

Light clients offer a minimal, efficient way to stay in sync with the chain, without storing all the data or processing every block. In this article, we'll break down how light clients track Ethereum's consensus layer, following the Helios design, and how verified data is passed on to the execution layer for verification.

## What is a light client?

### The Problem

Users typically access Ethereum through centralized providers like Infura, Alchemy, or Tenderly. These companies run high-performance nodes on cloud servers so that others can easily access chain data. When a wallet queries its token balances or checks whether a pending transaction has been included in a block, it almost always does so through one of these centralized providers. The trouble with the existing system is that users must trust the providers, with no way to verify the correctness of their responses.

![image](https://hackmd.io/_uploads/BkuQr9rJex.png)

(This image is from 'Selene', a project I worked on with my university friends. It was inspired by Helios and written in Go for client diversity. We had to stop it due to some major problems at the execution level, while the consensus part worked fine. The code snippets in this article are in Go and come from that project!)

So a light client provides fully trustless access to a blockchain without running a full node! Woohoo! Interesting, isn't it?

![image](https://hackmd.io/_uploads/HJF-LcByxg.png)

Data flow:

- The Consensus Layer provides sync information and verified block headers to the RPC Server.
- The RPC Server passes block information and verified headers to the Execution Layer.
- The Execution Layer validates Merkle proofs against the state root for the requested data.
## The Challenge of Light Clients in a PoS World

In Proof-of-Stake (PoS) Ethereum, light clients face a fundamental challenge: how can a node with minimal resources authenticate the blockchain state without downloading and processing the entire validator set (currently over 500,000 validators and growing)?

The answer is elegant: **sync committees**.

## What is a Sync Committee?

A sync committee is a randomly sampled subset of 512 validators selected from the active validator set for a period of 256 epochs (approximately 27 hours). During this period, validators in the sync committee are expected to sign each slot's beacon block header. These signatures can be aggregated using the BLS signature scheme, allowing light clients to verify blocks with minimal computational overhead.

```
// From the Ethereum specs
class SyncCommittee(Container):
    pubkeys: Vector[BLSPubkey, SYNC_COMMITTEE_SIZE]  // SYNC_COMMITTEE_SIZE = 512
    aggregate_pubkey: BLSPubkey                      // Pre-computed aggregate for optimization
```

The sync committee concept was introduced as the "flagship feature" of the Altair hard fork, which went live on October 27, 2021.

### Why Not Use Block Proposers or Attesters?

You might wonder: why create a new committee structure when the protocol already has block proposers and attesters? The answer lies in the computational requirements:

- **Block proposers**: Determining the proposer for a slot requires access to the full validator registry and running the `get_beacon_proposer_index` function, which light clients cannot do efficiently.
- **Attesters**: Similarly, computing attestation committees requires access to the full validator registry.

Sync committees, by contrast, have two key advantages:

1. **Infrequent updates**: The committee changes only once every ~27 hours, reducing the verification burden.
2. **Direct state access**: The committee is stored directly in the beacon state, allowing verification via a simple Merkle branch.

For a light client, this is a game-changer.
A light client can verify a sync committee with a Merkle proof from a known block header and then use that committee's public keys to authenticate signatures on newer blocks.

## The Sync Committee Selection Process

The random selection of validators for sync committees is critical to Ethereum's security model. Selection uses the same RANDAO mechanism that determines block proposers and attesters, derived from validator-provided entropy in previous epochs. Selection probability is proportional to effective balance (do read up on actual balance versus effective balance; it's an interesting topic in its own right), meaning validators with 32 ETH have the highest chance of selection (effective balance is capped at 32 ETH).

### Reward Structure and Economics

Being selected for sync committee duty comes with significantly higher rewards than standard attestations. A validator on sync committee duty can earn approximately 3x the rewards of standard attestations for the ~27-hour period. This compensates for:

1. The more frequent signing work (every slot versus once per epoch)
2. The criticality of the sync committee to light client functionality

However, these higher rewards come with corresponding penalties. If a validator is offline during sync committee duty, it suffers penalties proportional to the missed rewards. This creates strong economic incentives for validators to maintain high uptime while serving on sync committees.

## BLS Signatures: The Cryptographic Foundation

At the heart of the sync committee mechanism is the Boneh-Lynn-Shacham (BLS) signature scheme on the BLS12-381 elliptic curve. This scheme has a magical property: **signature aggregation**. With BLS, the consensus layer can:

1. Collect individual signatures (σ₁, σ₂, ..., σₙ) from sync committee members
2. Aggregate them into a single 96-byte signature (σ_agg)
3. Create an aggregate public key (pk_agg) from the public keys of participating validators
4. Verify the aggregate signature against the aggregate public key

The verification equation is identical to that used for a single signature and key:

```
e(g₁, σ_agg) = e(pk_agg, H(m))
```

Where:

- e: The pairing function on the BLS12-381 curve
- g₁: A generator point of the curve
- H(m): The hash of the message being signed, mapped to a curve point
- pk_agg: The aggregate public key of participating validators

### Sync Aggregates Structure

The Beacon chain encodes sync committee signatures in `SyncAggregate` objects:

```
class SyncAggregate(Container):
    sync_committee_bits: Bitvector[SYNC_COMMITTEE_SIZE]  // Which validators signed
    sync_committee_signature: BLSSignature               // Aggregated signature
```

The `sync_committee_bits` field is a bit vector in which each bit corresponds to a validator in the sync committee. As you may have guessed, a set bit (1) indicates that the validator participated in signing, while an unset bit (0) indicates it did not.

## Connecting Light Clients to the Network: The RPC Interface

Light clients need a way to retrieve sync committee data from the network. This is accomplished through specialized endpoints in the Beacon API:

```
// Get the current sync committee for a state
GET /eth/v1/beacon/states/{state_id}/sync_committees

// Submit sync committee signatures
POST /eth/v1/beacon/pool/sync_committees
```

Alongside these endpoints, the consensus client's RPC service provides the light client with three essential data types:

1. **Sync Committee Updates**: Information about changes in sync committee membership
2. **Finality Updates**: Data about newly finalized blocks
3. **Optimistic Updates**: Information about the latest head of the chain

```go
func (in *Inner) sync(checkpoint [32]byte) error {
	// Reset store and checkpoint
	in.Store = LightClientStore{}
	in.lastCheckpoint = nil

	// Perform bootstrap with the given checkpoint
	in.bootstrap(checkpoint)
	// ...
```

Focus on the code here: before fetching any updates, we perform a bootstrap.
First, if we have a checkpoint saved in our configuration, we fetch a bootstrap for it via an RPC call and then verify it:

```go
isValid := in.is_valid_checkpoint(bootstrap.Header.Slot)
if !isValid {
	if in.Config.StrictCheckpointAge {
		panic("checkpoint too old, consider using a more recent checkpoint")
	} else {
		log.Printf("checkpoint too old, consider using a more recent checkpoint")
	}
}
verify_bootstrap(checkpoint, bootstrap)
apply_bootstrap(&in.Store, bootstrap)
```

Two main checks take place here. The first is checkpoint freshness, which is easily verified via:

```
(currentSlot - blockHashSlot) < MaxCheckpointAge / 12
```

To verify the bootstrap itself, two checks are required:

- Current committee validity: the sync committee (a complex structure) is serialized and hashed using Ethereum's SSZ (Simple Serialize) to get a root hash; if the computed Merkle path ends at `Header.state_root`, the proof is valid.
- The tree root hash of the header is calculated and compared with the checkpoint to determine whether they are equal:

```go
HeaderValid := bytes.Equal(headerHash[:], checkpoint[:])
```

Selene fetches these updates periodically using the Nimbus consensus RPC (you understood that we are using the checkpoint for weak subjectivity, right?):

```go
// Request historical updates
updates, err = client.RPC.GetUpdates(currentPeriod, MAX_REQUEST_LIGHT_CLIENT_UPDATES)

// Process each update
for _, update := range updates {
	if err := client.verify_update(&update); err != nil {
		return err
	}
	client.apply_update(&update)
}

// Get latest finality update
finalityUpdate, err := client.RPC.GetFinalityUpdate()
// ...verify and apply...

// Get latest optimistic update
optimisticUpdate, err := client.RPC.GetOptimisticUpdate()
// ...verify and apply...
```

## Verification Process: From Trust to Cryptographic Certainty

The verification process is the heart of the light client's security model.
It transforms data received from potentially untrusted RPC endpoints into cryptographically verified certainty.

### 1. Participation Check

First, the light client checks that sufficient validators participated in signing:

```go
bits := getBits(update.SyncAggregate.SyncCommitteeBits[:])
if bits == 0 {
	return ErrInsufficientParticipation
}
```

Sync committees are expected to have high participation, with, say, 90% of the validators contributing. To verify the aggregate signature, we need to aggregate the public keys of all the contributors. With participation that high there is a shortcut: instead of adding up the ~461 participating keys, we can subtract the ~51 non-participants from the pre-computed `aggregate_pubkey`, which involves far fewer elliptic curve operations.

### 2. Temporal Validity Check

Next, the light client verifies that the timestamps make chronological sense:

```go
validTime := expectedCurrentSlot >= update.SignatureSlot &&
	update.SignatureSlot > update.AttestedHeader.Slot &&
	update.AttestedHeader.Slot >= updateFinalizedSlot
if !validTime {
	return ErrInvalidTimestamp
}
```

This ensures:

- The signature slot is in the past relative to the current slot
- The signature slot comes after the attested header slot
- The attested header is at or after the finalized header slot

These chronological checks prevent time-based attacks and ensure the light client processes events in a coherent order.

### 3. Period Validity Check

The light client must verify that the update belongs to either the current sync committee period or the next one:

```go
storePeriod := utils.CalcSyncPeriod(store.FinalizedHeader.Slot)
updateSigPeriod := utils.CalcSyncPeriod(update.SignatureSlot)

var validPeriod bool
if store.NextSyncCommittee != nil {
	validPeriod = updateSigPeriod == storePeriod || updateSigPeriod == storePeriod+1
} else {
	validPeriod = updateSigPeriod == storePeriod
}
if !validPeriod {
	return ErrInvalidPeriod
}
```

This check prevents the light client from processing updates that claim to be from sync committees far in the future or past.

### 4. Relevance Check

To avoid wasting resources on outdated information, the light client verifies that the update provides new data:

```go
updateAttestedPeriod := utils.CalcSyncPeriod(update.AttestedHeader.Slot)
updateHasNextCommittee := store.NextSyncCommittee == nil &&
	update.NextSyncCommittee != nil &&
	updateAttestedPeriod == storePeriod

if update.AttestedHeader.Slot <= store.FinalizedHeader.Slot && !updateHasNextCommittee {
	return ErrNotRelevant
}
```

This check allows updates that either advance the finalized header or provide information about the next sync committee that the client doesn't yet have.

### 5. Finality Proof Verification

For finality updates, the light client must verify that the finalized header is correctly included in the state:

```go
if update.FinalizedHeader != (consensus_core.Header{}) && update.FinalityBranch != nil {
	if !isFinalityProofValid(&update.AttestedHeader, &update.FinalizedHeader, update.FinalityBranch) {
		return ErrInvalidFinalityProof
	}
}
```

This validation uses an SSZ tree root hash calculation similar to the one used for the current sync committee, ensuring the finalized block is genuinely part of the state represented by the attested header:

```go
return utils.IsProofValid(attestedHeader, toGethHeader(finalizedHeader).Hash(), finalityBranch, 6, 105)
```

### 6. Next Committee Proof Verification

When a new sync committee is introduced, the light client verifies it was correctly derived:

```go
if update.NextSyncCommittee != nil && update.NextSyncCommitteeBranch != nil {
	if !isNextCommitteeProofValid(&update.AttestedHeader, update.NextSyncCommittee, *update.NextSyncCommitteeBranch) {
		return ErrInvalidNextSyncCommitteeProof
	}
}
```

This ensures a secure transition between committees, preventing an attacker from injecting a malicious committee. Again, proof validity is checked via an SSZ tree root hash.

### 7. Signature Verification: The Cryptographic Heart of the System

The most critical verification step is validating the aggregated BLS signature from the sync committee:

```go
// Select the appropriate sync committee for this signature period
var syncCommittee *consensus_core.SyncCommittee
if updateSigPeriod == storePeriod {
	syncCommittee = &client.Store.CurrentSyncCommittee
} else {
	syncCommittee = client.Store.NextSyncCommittee
}

// Extract the public keys of all validators who signed
participatingKeys, err := utils.GetParticipatingKeys(
	syncCommittee,
	[64]byte(update.SyncAggregate.SyncCommitteeBits),
)
if err != nil {
	return fmt.Errorf("failed to get participating keys: %w", err)
}

// Compute the fork version and its SSZ root (ForkDataRoot)
forkVersion := utils.CalculateForkVersion(&forks, update.SignatureSlot)
forkDataRoot := utils.ComputeForkDataRoot(forkVersion, genesisValidatorsRoot)

// Verify the aggregated BLS signature over the attested header
if !verifySyncCommitteeSignature(
	participatingKeys,
	&update.AttestedHeader,
	&update.SyncAggregate,
	forkDataRoot,
) {
	return ErrInvalidSignature
}
```

### How the participating public keys are extracted

```go
// GetParticipatingKeys retrieves the participating public keys from the
// committee based on the bitfield, represented as a byte array. A sync
// committee has at most 512 validators, so the bitfield is 64*8 = 512 bits.
func GetParticipatingKeys(committee *consensus_core.SyncCommittee, bitfield [64]byte) ([]consensus_core.BLSPubKey, error) {
	var pks []consensus_core.BLSPubKey
	numBits := len(bitfield) * 8 // Total number of bits

	if len(committee.Pubkeys) > numBits {
		return nil, fmt.Errorf("bitfield is too short for the number of public keys")
	}

	// Loop over each of the 512 bits; wherever a validator's bit is set (1),
	// fetch and append its public key.
	for i := 0; i < len(bitfield); i++ {
		byteVal := bitfield[i]
		for bit := 0; bit < 8; bit++ {
			// Each of the 64 bytes holds 8 bits, so this inner loop visits
			// every bit of the byte. For example, with byteVal 11111010 the
			// & operator yields a non-zero result only at the positions where
			// both operands have a 1, telling us exactly which validators signed.
			if (byteVal & (1 << bit)) != 0 {
				index := i*8 + bit
				if index >= len(committee.Pubkeys) {
					break
				}
				pks = append(pks, committee.Pubkeys[index])
			}
		}
	}
	return pks, nil
}
```

### The detailed signature verification

```go
// Collect public keys from participating validators
collectedPks := make([]*bls.Pubkey, 0, len(pks))
for i := range pks {
	var pksinBytes [48]byte = [48]byte(pks[i])
	dkey := new(bls.Pubkey)
	if err := dkey.Deserialize(&pksinBytes); err != nil {
		return false
	}
	collectedPks = append(collectedPks, dkey)
}

// Compute the signing root from the attested header and fork data
signingRoot := ComputeCommitteeSignRoot(toGethHeader(attestedHeader), forkDataRoot)

// Deserialize the aggregate signature
var sig bls.Signature
signatureForUnmarshalling := [96]byte(signature.SyncCommitteeSignature)
if err := sig.Deserialize(&signatureForUnmarshalling); err != nil {
	return false
}

// Verify the signature against the collected public keys
return utils.FastAggregateVerify(collectedPks, signingRoot[:], &sig)
```

This verification is the mathematical proof that a supermajority of the sync committee has attested to this block header, providing cryptographic certainty without requiring the light client to trust the RPC endpoint.
### Applying the Update

After verifying an update we apply it, and the checkpoint is saved like this (now you understand why the bootstrap header was verified against the checkpoint!):

```go
if store.FinalizedHeader.Slot%32 == 0 {
	checkpoint := toGethHeader(&store.FinalizedHeader).Hash()
	checkpointBytes := checkpoint.Bytes()
	return &checkpointBytes
}
```

## From Consensus to Execution: Accessing Block Data

After verifying consensus data, the light client must process execution payloads to make the data useful for applications:

```go
func (client *Inner) send_blocks() error {
	// Get slot from the optimistic header
	slot := client.Store.OptimisticHeader.Slot
	payload, err := client.get_execution_payload(&slot)
	if err != nil {
		return err
	}

	// Get finalized slot from the finalized header
	finalizedSlot := client.Store.FinalizedHeader.Slot
	finalizedPayload, err := client.get_execution_payload(&finalizedSlot)
	if err != nil {
		return err
	}

	// Send payload converted to block over the BlockSend channel
	go func() {
		block, err := PayloadToBlock(payload)
		if err != nil {
			log.Printf("Error converting payload to block: %v", err)
			return
		}
		client.blockSend <- block
	}()

	// Send finalized block
	go func() {
		block, err := PayloadToBlock(finalizedPayload)
		if err != nil {
			log.Printf("Error converting finalized payload to block: %v", err)
			return
		}
		client.finalizedBlockSend <- block
	}()

	// Send checkpoint information
	go func() {
		client.checkpointSend <- client.lastCheckpoint
	}()

	return nil
}
```

The light client fetches execution payloads and verifies their authenticity:

```go
func (client *Inner) get_execution_payload(slot *uint64) (*consensus_core.ExecutionPayload, error) {
	// Fetch the block for the given slot
	block, err := client.RPC.GetBlock(*slot)
	if err != nil {
		return nil, err
	}

	// Convert the block to a Geth block to access its hash
	Gethblock, err := beacon.BlockFromJSON("capella", block.Hash)
	if err != nil {
		return nil, err
	}
	blockHash := Gethblock.Root()

	latestSlot := client.Store.OptimisticHeader.Slot
	finalizedSlot := client.Store.FinalizedHeader.Slot

	// Determine which verified header to compare against
	var verifiedBlockHash geth.Hash
	if *slot == latestSlot {
		verifiedBlockHash = toGethHeader(&client.Store.OptimisticHeader).Hash()
	} else if *slot == finalizedSlot {
		verifiedBlockHash = toGethHeader(&client.Store.FinalizedHeader).Hash()
	} else {
		return nil, ErrPayloadNotFound
	}

	// Verify the block hash matches the expected hash
	if !bytes.Equal(verifiedBlockHash[:], blockHash.Bytes()) {
		return nil, fmt.Errorf("%w: expected %v but got %v", ErrInvalidHeaderHash, verifiedBlockHash, blockHash)
	}

	// Extract and return the execution payload
	payload := block.Body.ExecutionPayload
	return &payload, nil
}
```

This verification step ensures that the execution data (transactions, state roots, etc.) matches the cryptographically verified consensus header, maintaining the security guarantees throughout the processing pipeline.

You can check out the complete code at https://github.com/BlocSoc-iitr/selene/tree/dev/consensus

Citations:

[1] https://a16zcrypto.com/posts/article/building-helios-ethereum-light-client/
[2] https://eth2book.info/
[3] https://github.com/a16z/helios
[4] https://github.com/BlocSoc-iitr/selene/tree/dev/consensus
[5] https://ethereum.stackexchange.com/