
Notes on Lighthouse Architecture

Below are my notes from investigating the Lighthouse code base, with a focus on understanding the CL client and EIP-4844 functionality.

Dependencies

Tokio

Tokio is an event-driven, non-blocking I/O platform for writing asynchronous I/O backed applications. It is used in Lighthouse for communication between asynchronous tasks.
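The core pattern is independent tasks exchanging messages over channels. Below is a minimal sketch of that pattern using the standard library's `std::sync::mpsc` and threads; Lighthouse itself uses Tokio's async tasks and channels, and the `Event` type here is a made-up placeholder, not a Lighthouse type:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type; Lighthouse sends much richer events between tasks.
#[derive(Debug, PartialEq)]
enum Event {
    NewSlot(u64),
    Shutdown,
}

// The receiving "task" loops over the channel until a Shutdown event arrives.
fn run_consumer(rx: mpsc::Receiver<Event>) -> Vec<Event> {
    let mut seen = Vec::new();
    for event in rx {
        if event == Event::Shutdown {
            break;
        }
        seen.push(event);
    }
    seen
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || run_consumer(rx));
    tx.send(Event::NewSlot(1)).unwrap();
    tx.send(Event::NewSlot(2)).unwrap();
    tx.send(Event::Shutdown).unwrap();
    let seen = handle.join().unwrap();
    assert_eq!(seen, vec![Event::NewSlot(1), Event::NewSlot(2)]);
    println!("consumer saw {} events", seen.len());
}
```

The same producer/consumer shape appears repeatedly below (Router → Processor → BeaconProcessor), just with Tokio's async equivalents.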

Beacon Node

NOTE: This diagram is not complete. My goal right now is to understand how the Lighthouse consensus layer works at a high level, so I have intentionally skipped some functionality (metrics, validator duties, etc.), error paths and details. Any feedback is welcome!

[diagram unavailable]

Main

Entry point of the Lighthouse program. Parses CLI args and network configs, and passes all parameters to the beacon_node constructor when the beacon_node subcommand is used.
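The dispatch shape is roughly the following sketch. Lighthouse's real `main` uses the `clap` crate and builds a full config before handing off; the `dispatch` function and its strings here are purely illustrative:

```rust
// Hypothetical sketch of subcommand dispatch; not Lighthouse's actual API.
fn dispatch(subcommand: &str) -> &'static str {
    match subcommand {
        "beacon_node" => "starting beacon node",
        "validator_client" => "starting validator client",
        _ => "unknown subcommand",
    }
}

fn main() {
    // Skip argv[0] and take the first real argument as the subcommand.
    let sub = std::env::args().nth(1).unwrap_or_else(|| "beacon_node".into());
    println!("{}", dispatch(&sub));
}
```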

ProductionBeaconNode constructor

  • build the beacon chain
    • create the data path, db path and freezer db path in the file system
    • calculate the current slot (code)
    • run the fork choice rule to determine the head (code)
    • get the head_block and head_state from the store (code)
    • build all caches, including committee_caches, pubkey_cache, exit_cache
    • store the beacon chain in the database
  • start the networking stack (code)
  • start HTTP API & metrics server
  • send a head update to execution engine (code)
    • update execution engine forkchoice
    • spawn a routine that tracks the status of the execution engines
    • spawn a routine that removes expired proposer preparations.
    • spawn a routine that polls the exchange_transition_configuration endpoint.
  • spawn a routine which ensures the EL is provided advance notice of any block producers. (code)
  • spawn a routine which checks the validity of any optimistically imported transition blocks - for merge transition (code)
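The startup steps above can be sketched as an ordered sequence. The function names below are hypothetical placeholders standing in for the real builder methods, used only to make the ordering explicit:

```rust
// Hypothetical stubs standing in for the startup steps listed above.
fn build_beacon_chain(log: &mut Vec<&'static str>) { log.push("build_beacon_chain"); }
fn start_network(log: &mut Vec<&'static str>) { log.push("start_network"); }
fn start_http_api(log: &mut Vec<&'static str>) { log.push("start_http_api"); }
fn update_execution_engine_forkchoice(log: &mut Vec<&'static str>) {
    log.push("update_execution_engine_forkchoice");
}

// Order mirrors the constructor notes: chain, networking, HTTP, EL forkchoice.
fn start_beacon_node() -> Vec<&'static str> {
    let mut log = Vec::new();
    build_beacon_chain(&mut log);
    start_network(&mut log);
    start_http_api(&mut log);
    update_execution_engine_forkchoice(&mut log);
    log
}

fn main() {
    for step in start_beacon_node() {
        println!("{}", step);
    }
}
```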

Router

  • handle all incoming messages from the network service.
  • handle RPC-related functionality
  • handle gossip (pubsub) messages and pass them down to the Processor. code
  • Processor
    • processes validated and decoded messages from the network
    • spawns a BeaconProcessor "manager" task which checks the receiver end of the channel
    • e.g. when a BeaconBlockAndBlobsSidecars message is received, it sends WorkEvent<GossipBlockAndBlobsSidecar> to an mpsc channel using an mpsc::Sender. This is then handled by the BeaconProcessor below.
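The hand-off from Processor to BeaconProcessor can be sketched as follows. The `WorkEvent` variant name comes from the notes above, but its field and the `on_gossip_message` function are simplified inventions (Lighthouse's real event carries the full block and blob sidecars, over a Tokio channel):

```rust
use std::sync::mpsc;

// Simplified stand-in for the real work event type.
#[derive(Debug, PartialEq)]
enum WorkEvent {
    GossipBlockAndBlobsSidecar { slot: u64 },
}

// The Processor validates/decodes a network message, then forwards a WorkEvent.
fn on_gossip_message(sender: &mpsc::Sender<WorkEvent>, slot: u64) {
    sender
        .send(WorkEvent::GossipBlockAndBlobsSidecar { slot })
        .expect("beacon processor receiver dropped");
}

fn main() {
    let (tx, rx) = mpsc::channel();
    on_gossip_message(&tx, 42);
    // The BeaconProcessor "manager" task reads from the receiver end.
    let event = rx.recv().unwrap();
    assert_eq!(event, WorkEvent::GossipBlockAndBlobsSidecar { slot: 42 });
}
```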

BeaconProcessor

Handles events produced by beacon_processor_send via an mpsc channel.

  • spawn_worker: called by the "manager" task to spawn a blocking worker thread to process some Work.
  • e.g. the GossipBlockAndBlobsSidecar Work type is handled here
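The manager/worker split can be sketched like this: the manager drains the channel and hands each `Work` item to a blocking worker thread. The `process` body and return values are hypothetical; the real handler verifies and imports the block:

```rust
use std::sync::mpsc;
use std::thread;

#[derive(Debug)]
enum Work {
    GossipBlockAndBlobsSidecar(u64),
}

// Hypothetical worker body standing in for the real block/blob handling.
fn process(work: Work) -> String {
    match work {
        Work::GossipBlockAndBlobsSidecar(slot) => format!("processed slot {}", slot),
    }
}

// The manager loop: drain the channel, spawning one blocking worker per item.
// (Joined immediately here for simplicity; the real manager tracks a worker pool.)
fn manager(rx: mpsc::Receiver<Work>) -> Vec<String> {
    let mut results = Vec::new();
    for work in rx {
        let handle = thread::spawn(move || process(work));
        results.push(handle.join().unwrap());
    }
    results
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(Work::GossipBlockAndBlobsSidecar(7)).unwrap();
    drop(tx); // close the channel so the manager loop terminates
    let results = manager(rx);
    assert_eq!(results, vec!["processed slot 7".to_string()]);
}
```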

Block Processing

per_block_processing logic is another interesting area to look into as part of the EIP-4844 implementation, as it now processes blob KZG commitments. code
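The shape of the blob-commitment check is a consistency comparison: the commitments carried with the blobs must match the `blob_kzg_commitments` list in the block body. A toy sketch, assuming 4-byte placeholder commitments (real KZG commitments are 48-byte BLS points, and the real check also verifies the KZG proofs):

```rust
// Hypothetical short commitments; real ones are 48-byte BLS KZG commitments.
type KzgCommitment = [u8; 4];

// Sketch of the EIP-4844 consistency check: the sidecar's commitments must
// match, in order, the block body's blob_kzg_commitments.
fn commitments_match(
    block_commitments: &[KzgCommitment],
    sidecar_commitments: &[KzgCommitment],
) -> bool {
    block_commitments == sidecar_commitments
}

fn main() {
    let block = [[1, 2, 3, 4], [5, 6, 7, 8]];
    let sidecar_ok = [[1, 2, 3, 4], [5, 6, 7, 8]];
    let sidecar_bad = [[9, 9, 9, 9], [5, 6, 7, 8]];
    assert!(commitments_match(&block, &sidecar_ok));
    assert!(!commitments_match(&block, &sidecar_bad));
    println!("commitment checks passed");
}
```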

Usage

per_block_processing is called in various places:

  1. beacon_chain/src/beacon_chain.rs: unable to find reference
  2. beacon_chain/src/block_verification.rs: used when:
    • on startup
    • handling Work::ChainSegment
    • handling Work::RpcBlock
    • handling Work::GossipBlock
    • handling Work::DelayedImportBlock
  3. beacon_chain/src/fork_revert.rs
  4. store/src/reconstruct.rs
  5. consensus/state_processing/src/block_replayer.rs