
EPF Dev Update #13

A lot of progress has been made in client development and especially EIP-4844 in the past week as client teams had an on-site interop event. Some important updates on EIP-4844 include:

  • Devnet-4 went well and helped find a few bugs in clients
  • Discussions around moving blob/block sync from coupling to decoupling (See proposal by Jacek here)
  • More details on recent EIP-4844 related updates can be found in the notes from the last EIP-4844 Implementers' Call.

The decision to couple or decouple blobs & blocks is likely to impact the builder API changes quite substantially, and given that a final decision hasn't been made yet (pending the results of the 4844 network simulation tests), I'm temporarily shifting my focus to other non-builder related work.

Summary for week 14 (2023/1/30-2023/2/6)

  • mev-rs: I had a conversation with @ralexstokes regarding adding 4844 support. He is in the process of updating mev-rs for Capella and will figure out an approach to versioning response data. Once this is ready and the spec matures, I plan to continue the work on adding Deneb support to mev-rs.
  • lighthouse: I picked up my previous work on the Lighthouse backfill sync issue:
    • I have created a draft PR and am looking forward to getting feedback from the Lighthouse team.
    • I've compared the WIP branch (rate-limiting backfill processing to 1 batch per slot) against the latest stable version (no rate-limiting), and it does seem to reduce the CPU usage substantially (~20% CPU). I've published the test results on this page: Monitoring Lighthouse Backfill Processing CPU Usage
    • I plan to gradually increase the number of batches to 3 (6s,7s,10s after slot start), and monitor the CPU usage
    • I published a page on Monitoring CPU & Memory Using Glances & Grafana
    • I discovered the lighthouse-metrics repo, which contains lots of useful dashboards and interesting metrics for Lighthouse - worth a look for anyone interested in beacon chain metrics
  • ethereum-consensus PR to add Capella types and presets to the Rust library was merged
  • builder-spec PR to add Capella support - addressed more review comments and on track to merge soon

Lighthouse rate-limiting backfill sync

I created a diagram in my last update to illustrate the proposed rate-limiting approach, and I thought it would be interesting to share what the tests look like in code (PR here), with some additional annotations:

/// Ensure that backfill batches get rate-limited and that processing is scheduled at the specified intervals.
#[tokio::test]
async fn test_backfill_sync_processing() {
    let mut rig = TestRig::new(SMALL_CHAIN).await;

    // send backfill work to `BeaconProcessor`
    for _ in 0..3 {
        rig.enqueue_backfill_batch();
    }

    // assert only the first batch is processed (`CHAIN_SEGMENT` event)
    rig.assert_event_journal(&[CHAIN_SEGMENT, WORKER_FREED, NOTHING_TO_DO])
        .await;

    let slot_duration = rig.chain.slot_clock.slot_duration().as_secs();

    // The 2nd & 3rd batches should arrive at the beacon processor after the scheduled intervals (1 batch per slot)
    tokio::time::sleep(Duration::from_secs(slot_duration)).await;
    rig.assert_event_journal(&[CHAIN_SEGMENT, WORKER_FREED, NOTHING_TO_DO])
        .await;

    tokio::time::sleep(Duration::from_secs(slot_duration)).await;
    rig.assert_event_journal(&[CHAIN_SEGMENT, WORKER_FREED, NOTHING_TO_DO])
        .await;
}

/// Ensure that backfill batches get processed as fast as they can when rate-limiting is disabled.
#[tokio::test]
async fn test_backfill_sync_processing_rate_limiting_disabled() {
    // disable rate-limiting
    let chain_config = ChainConfig {
        disable_backfill_rate_limiting: true,
        ..Default::default()
    };
    let mut rig = TestRig::new_with_chain_config(SMALL_CHAIN, chain_config).await;

    // send backfill work to `BeaconProcessor`
    for _ in 0..3 {
        rig.enqueue_backfill_batch();
    }

    // ensure all batches are processed immediately
    rig.assert_event_journal_contains_ordered(&[CHAIN_SEGMENT, CHAIN_SEGMENT, CHAIN_SEGMENT])
        .await;
}