---
tags: epf,ethereum
---
# EPF Dev Update #13
A lot of progress has been made in client development, and especially on EIP-4844, in the past week as client teams held an on-site interop event. Some important EIP-4844 updates:
- Devnet-4 went well and helped find a few bugs in clients
- Discussions around moving blob/block sync from coupled to decoupled (see Jacek's proposal [here](https://t.co/HVX4Oau568))
- More details on recent EIP-4844 updates can be found in the latest [EIP-4844 Implementers' Call Notes](https://docs.google.com/document/d/15EatedrJanNxBZGPVASvwq9xgbTs5UxjsDfjpM6ppSY/edit?usp=sharing).

The decision to couple or decouple blobs & blocks is likely to impact the [builder API changes](https://hackmd.io/@jimmygchen/B1dLR74Io) quite substantially. Given that a final decision hasn't been made yet (it's waiting on the results of the 4844 network simulation tests), I'm temporarily shifting my focus to other, non-builder-related work.
## Project links
- [Project: EIP-4844 Consensus Client](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/4844-consensus-client.md)
- [Design: Builder API for Blobs](https://hackmd.io/@jimmygchen/B1dLR74Io)
## Summary for week 14 (2023/1/30-2023/2/6)
- **[`mev-rs`](https://github.com/ralexstokes/mev-rs)**: I had a conversation with **@ralexstokes** about adding 4844 support. He is in the process of updating `mev-rs` for `Capella` and will figure out an approach to versioning response data (a rough sketch of one possible shape is included after this list). Once that's ready and the spec matures, I plan to continue the work on adding `Deneb` support to `mev-rs`.
- **[`lighthouse`](https://github.com/sigp/lighthouse)**: I picked up my previous work on the Lighthouse [backfill sync issue](https://hackmd.io/ixGLGIbvTsa-VWcxMXcc2A?view#Lighthouse-rate-limit-historical-block-backfill):
- I have created a draft [PR](https://github.com/sigp/lighthouse/pull/3936) and am looking forward to getting feedback from the Lighthouse team.
- I've compared the WIP branch (rate-limiting backfill processing to 1 batch per slot) against the latest `stable` version (no rate-limiting), and it does seem to reduce the CPU usage substantially (~20% CPU). I've published the test results on this page: [Monitoring Lighthouse Backfill Processing CPU Usage](https://hackmd.io/@jimmygchen/SJuVpJL3j)
- I plan to gradually increase the number of batches processed per slot to 3 (at 6s, 7s, and 10s after the slot starts) and monitor the CPU usage.
- I published a page on [Monitoring CPU & Memory Using Glances & Grafana](https://hackmd.io/@jimmygchen/rkeO8e82s)
- I discovered the [lighthouse-metrics](https://github.com/sigp/lighthouse-metrics) repo, which contains lots of useful dashboards and interesting metrics for Lighthouse - worth a look for anyone interested in beacon chain metrics
- [**`ethereum-consensus`**](https://github.com/ralexstokes/ethereum-consensus/pull/168) PR to add `Capella` types and presets to the Rust library was merged :ship:
- [**`builder-specs`**](https://github.com/ethereum/builder-specs/pull/60) PR to add `Capella` support: addressed more review comments and it's on track to merge soon :tm:.
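
Since versioning response data came up in the `mev-rs` item above, here is a rough sketch of one shape versioned builder responses could take, assuming a serde-tagged enum with one variant per fork. This is only an illustration of the idea, not `mev-rs`'s actual API; all type and field names below are hypothetical:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical per-fork payloads standing in for the real bid data.
#[derive(Serialize, Deserialize)]
struct CapellaBid {
    value: u64, // plus execution payload header, withdrawals root, etc.
}

#[derive(Serialize, Deserialize)]
struct DenebBid {
    value: u64, // plus blob KZG commitments in Deneb
}

// One variant per fork; serde wraps the JSON as `{"version": ..., "data": ...}`,
// so callers can branch on the fork without needing separate endpoints.
#[derive(Serialize, Deserialize)]
#[serde(tag = "version", content = "data", rename_all = "lowercase")]
enum VersionedSignedBuilderBid {
    Capella(CapellaBid),
    Deneb(DenebBid),
}
```

With this shape, adding `Deneb` support is mostly a matter of adding a variant (and its payload type) rather than changing every call site.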
## Lighthouse rate-limiting backfill sync
I created a diagram in [my last update](https://hackmd.io/ixGLGIbvTsa-VWcxMXcc2A?view#Lighthouse-rate-limit-historical-block-backfill) to illustrate the proposed rate-limiting approach, and I thought it would be interesting to share what the tests look like in code ([PR here](https://github.com/sigp/lighthouse/pull/3936)), with some additional annotations:
```rust
/// Ensure that backfill batches get rate-limited and processing is scheduled at specified intervals.
#[tokio::test]
async fn test_backfill_sync_processing() {
    let mut rig = TestRig::new(SMALL_CHAIN).await;
    // Send backfill work to the `BeaconProcessor`.
    for _ in 0..3 {
        rig.enqueue_backfill_batch();
    }
    // Assert that only the first batch is processed (`CHAIN_SEGMENT` event).
    rig.assert_event_journal(&[CHAIN_SEGMENT, WORKER_FREED, NOTHING_TO_DO])
        .await;

    let slot_duration = rig.chain.slot_clock.slot_duration().as_secs();

    // The 2nd & 3rd batches should arrive at the beacon processor after the
    // scheduled intervals (1 batch per slot).
    tokio::time::sleep(Duration::from_secs(slot_duration)).await;
    rig.assert_event_journal(&[CHAIN_SEGMENT, WORKER_FREED, NOTHING_TO_DO])
        .await;
    tokio::time::sleep(Duration::from_secs(slot_duration)).await;
    rig.assert_event_journal(&[CHAIN_SEGMENT, WORKER_FREED, NOTHING_TO_DO])
        .await;
}

/// Ensure that backfill batches get processed as fast as possible when rate-limiting is disabled.
#[tokio::test]
async fn test_backfill_sync_processing_rate_limiting_disabled() {
    // Disable rate-limiting.
    let chain_config = ChainConfig {
        disable_backfill_rate_limiting: true,
        ..Default::default()
    };
    let mut rig = TestRig::new_with_chain_config(SMALL_CHAIN, chain_config).await;
    // Send backfill work to the `BeaconProcessor`.
    for _ in 0..3 {
        rig.enqueue_backfill_batch();
    }
    // Ensure all batches are processed immediately.
    rig.assert_event_journal_contains_ordered(&[CHAIN_SEGMENT, CHAIN_SEGMENT, CHAIN_SEGMENT])
        .await;
}
```
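
For context on what these tests exercise, here is a minimal sketch of one way per-slot rate-limiting could be structured, assuming a tokio task that buffers incoming batches and releases at most one per slot tick. This is not Lighthouse's actual `BeaconProcessor` implementation; the names below are illustrative only:

```rust
use std::collections::VecDeque;
use std::time::Duration;

use tokio::sync::mpsc;

/// Illustrative stand-in for a queued backfill batch.
struct BackfillBatch;

/// Buffer incoming backfill batches and release at most one per slot.
/// `process` stands in for handing a batch to the worker pool.
async fn rate_limited_backfill(
    mut rx: mpsc::Receiver<BackfillBatch>,
    slot_duration: Duration,
    process: impl Fn(BackfillBatch),
) {
    let mut queue = VecDeque::new();
    // Note: the first interval tick fires immediately, so the first
    // queued batch is processed right away, then one per slot after that.
    let mut ticker = tokio::time::interval(slot_duration);
    loop {
        tokio::select! {
            // New batches are buffered instead of being processed immediately.
            Some(batch) = rx.recv() => queue.push_back(batch),
            // Once per slot, pop a single batch off the queue and process it.
            _ = ticker.tick() => {
                if let Some(batch) = queue.pop_front() {
                    process(batch);
                }
            }
        }
    }
}
```

Under this kind of scheduling, the first test above amounts to asserting that exactly one `CHAIN_SEGMENT` event lands per slot, while the second asserts that with the limiter disabled all queued batches are processed back-to-back.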