---
tags: epf,ethereum
---
# EPF Dev Update #11
## Project links
- [Project: EIP-4844 Consensus Client](https://github.com/eth-protocol-fellows/cohort-three/blob/master/projects/4844-consensus-client.md)
- [Design: Builder API for Blobs](https://hackmd.io/@jimmygchen/B1dLR74Io)
## Summary for week 12 (2023/1/16 - 2023/1/23)
- I implemented the builder updates ([lighthouse#3808](https://github.com/sigp/lighthouse/pull/3808)) in Lighthouse last week, and started looking at adding tests this week
- While working on the tests, I realised the [`TestingBuilder`](https://github.com/sigp/lighthouse/blob/f04486dc7148ef6b26765c567dae289faa97cfe8/beacon_node/execution_layer/src/test_utils/mock_builder.rs#L61) utility uses types from the [`ethereum-consensus`](https://github.com/ralexstokes/ethereum-consensus) and [`mev-rs`](https://github.com/ralexstokes/mev-rs) libraries, and both need to be updated for Capella and Deneb, so I went down that :rabbit: :hole::
  - Add Capella types and presets [ethereum-consensus#168](https://github.com/ralexstokes/ethereum-consensus/pull/168)
  - Add Deneb (EIP-4844) types and presets [ethereum-consensus#170](https://github.com/ralexstokes/ethereum-consensus/pull/170)
  - Currently working on adding Capella & Deneb types to the [`mev-rs`](https://github.com/ralexstokes/mev-rs) library
- I started looking into [this backfill sync issue](https://github.com/sigp/lighthouse/issues/3212) in Lighthouse, which proposes rate-limiting backfill sync to address a resource usage issue. I've pushed my WIP branch [here](https://github.com/jimmygchen/lighthouse/pull/4) and will continue working on it. More details below.
- Continued to address feedback on my outstanding PRs; more details below.
## Lighthouse: rate limit historical block backfill
While waiting for my other PRs to be reviewed, I got curious and started looking into this historical backfill sync issue: [lighthouse#3212](https://github.com/sigp/lighthouse/issues/3212)
Backfill sync happens when a node is initially set up using [checkpoint sync](https://lighthouse-book.sigmaprime.io/checkpoint-sync.html), which is significantly faster than syncing from genesis because it starts from a recent finalized checkpoint. After the forward sync completes, the beacon node starts a "backfill sync" to download the blocks prior to the checkpoint.
Right now the backfill sync process is not rate-limited, and some users have reported nodes becoming overwhelmed during the sync. To address this, **@michaelsproul** proposed rate-limiting the backfill process. See the issue for more details.
**@paulhauner** was very kind to offer some help and provided an excellent writeup [here](https://github.com/sigp/lighthouse/issues/3212#issuecomment-1384674956), which explains the components involved and proposes a solution to the problem.
To help with my understanding, I created the diagrams below based on **@paulhauner**'s notes, comparing backfill batch processing with and without rate-limiting:
```mermaid
sequenceDiagram
participant event_rx
participant BeaconProcessor
participant backfill_queue
title: Existing / Default backfill batch processing
event_rx->>BeaconProcessor: new backfill batch work
alt if worker available
BeaconProcessor->>BeaconProcessor: process backfill batch immediately
else no available worker
BeaconProcessor->>backfill_queue: push to queue
end
loop next loop
alt if worker available
BeaconProcessor-->>backfill_queue: pop from queue
BeaconProcessor->>BeaconProcessor: process backfill batch
end
end
```
```mermaid
sequenceDiagram
participant event_rx
participant BeaconProcessor
participant backfill_queue as backfill_queue (existing)
participant backfill_scheduled_q as backfill_scheduled_q (new)
participant BackfillScheduler
title: Backfill batch processing with rate-limiting
event_rx->>BeaconProcessor: new backfill batch work
BeaconProcessor->>backfill_scheduled_q: push to a "scheduled" queue
loop At 6, 7 and 10 seconds after slot start
BackfillScheduler-->>backfill_scheduled_q: pop work from queue
BackfillScheduler->>event_rx: send scheduled backfill batch work
event_rx->>BeaconProcessor: receive scheduled backfill batch work
end
alt if worker available
BeaconProcessor->>BeaconProcessor: process backfill batch immediately
else no available worker
BeaconProcessor->>backfill_queue: push to queue
end
loop next loop
alt if worker available
BeaconProcessor-->>backfill_queue: pop from queue
BeaconProcessor->>BeaconProcessor: process backfill batch
end
end
```
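To make the scheduling idea concrete, here is a minimal, self-contained Rust sketch of the rate-limiting logic. The `BackfillScheduler` name and the 6/7/10-second release offsets come from the diagram above; everything else (`BackfillBatch`, `next_release_in`, the numbers in `main`) is hypothetical. This is purely illustrative and not the actual Lighthouse implementation; see the WIP branch below for that.
```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Hypothetical stand-in for a queued backfill batch of blocks.
#[derive(Debug)]
struct BackfillBatch {
    start_slot: u64,
}

/// Illustrative scheduler: incoming backfill batches are parked in a
/// "scheduled" queue and only released at fixed offsets into each slot,
/// so backfill work doesn't compete with block/attestation processing
/// at the start of a slot.
struct BackfillScheduler {
    scheduled: VecDeque<BackfillBatch>,
    genesis: Instant,
    slot_duration: Duration,
    /// Offsets into the slot at which one batch may be released.
    release_offsets: Vec<Duration>,
}

impl BackfillScheduler {
    /// Park an incoming batch instead of sending it straight to the
    /// `BeaconProcessor` work queue.
    fn push(&mut self, batch: BackfillBatch) {
        self.scheduled.push_back(batch);
    }

    /// Time elapsed since the start of the current slot.
    fn time_into_slot(&self, now: Instant) -> Duration {
        let since_genesis = now.duration_since(self.genesis);
        Duration::from_nanos(
            (since_genesis.as_nanos() % self.slot_duration.as_nanos()) as u64,
        )
    }

    /// How long to wait until the next release point (6s, 7s or 10s into
    /// the slot); if all of this slot's points have passed, wait for the
    /// first point of the next slot.
    fn next_release_in(&self, now: Instant) -> Duration {
        let into_slot = self.time_into_slot(now);
        self.release_offsets
            .iter()
            .filter(|offset| **offset > into_slot)
            .map(|offset| *offset - into_slot)
            .min()
            .unwrap_or(self.slot_duration - into_slot + self.release_offsets[0])
    }
}

fn main() {
    let mut scheduler = BackfillScheduler {
        scheduled: VecDeque::new(),
        genesis: Instant::now(),
        slot_duration: Duration::from_secs(12),
        release_offsets: vec![
            Duration::from_secs(6),
            Duration::from_secs(7),
            Duration::from_secs(10),
        ],
    };

    // Several batches arrive from the network sync at once...
    for i in 0..3 {
        scheduler.push(BackfillBatch { start_slot: i * 64 });
    }

    // ...but are only handed to the (here imaginary) BeaconProcessor
    // one at a time, at the scheduled points within each slot.
    while !scheduler.scheduled.is_empty() {
        std::thread::sleep(scheduler.next_release_in(Instant::now()));
        if let Some(batch) = scheduler.scheduled.pop_front() {
            println!("processing {batch:?}");
        }
    }
}
```
In the real implementation the release points would presumably be driven by the node's slot clock, and a released batch would be re-sent through the `BeaconProcessor`'s event channel, as in the second diagram above.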
I've created a draft implementation and will continue improving and testing it next week. The WIP branch can be found here for anyone interested: https://github.com/jimmygchen/lighthouse/pull/4
## Updates to outstanding PRs
- Builder updates for Blobs (EIP-4844) [lighthouse#3808](https://github.com/sigp/lighthouse/pull/3808):
  - initial implementation pushed last week; it still needs to be tested and is currently waiting on `mev-rs` type updates
- Add and update types for Capella [builder-specs#60](https://github.com/ethereum/builder-specs/pull/60)
  - addressed some review feedback and seems to be on track
- Add and update types for EIP-4844/Deneb [builder-specs#61](https://github.com/ethereum/builder-specs/pull/61)
  - the [discussion](https://github.com/ethereum/builder-specs/pull/61#discussion_r1064630311) on whether to bump the `submitBlindedBlock` endpoint to v2 continued in the R&D Discord [`4844-testing`](https://discord.com/channels/595666850260713488/1031999860997619843/1062880839119155241) channel
  - I created a diagram to illustrate the builder block proposal flow in my last update [here](https://hackmd.io/2fE1YDszTYeXBo_6zzuo1g?view#Lighthouse-Builder-updates-for-EIP-4844)
  - replaced all EIP-4844 references with `Deneb`, which is the new fork name decided during a call earlier this week
- Add `getBlobsSidecar` endpoint [beacon-APIs#286](https://github.com/ethereum/beacon-APIs/pull/286)
  - addressed some review feedback; there are still ongoing discussions on the endpoint path