Eitan Seri-Levi

@uncle-bill

Software Engineer at Sigma Prime

Joined on Oct 29, 2022

  • https://github.com/eserilev/il-boost Local Inclusion List Boost, aka LIL-Boost. "All my homies outsource block production to censorship resistant MEV marketplaces" - Big Sean (real). LIL-Boost is a sidecar that runs alongside an Ethereum node. It leverages Commit-Boost to identify constraints set on builder relays and verify the constraints' inclusion. What is Commit-Boost? https://github.com/Commit-Boost
  • Persisting Data Columns Currently the das branch does not act on columns received over gossip. On top of that, it persists the columns inconsistently. The inconsistency arises because put_kzg_verified_blobs marks the RpcBlock as Available once all blobs have been received. Any data columns received over gossip afterwards will be ignored and the peer will be banned. To solve for this situation: prevent put_kzg_verified_blobs from marking the block as available, then act on and persist columns received over gossip. Persisting columns received over gossip The data columns are persisted in the LRU cache, a key-value store where the key is the block root and the value is the pending_component.
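The caching scheme above can be sketched as follows. This is a simplified, hypothetical stand-in for the availability cache: keyed by block root, accumulating columns received over gossip, and only reporting the block as available once every required column is present. Names and shapes are illustrative, not Lighthouse's actual types.

```rust
use std::collections::HashMap;

type BlockRoot = [u8; 32];

#[derive(Default)]
struct PendingComponents {
    // column index -> column bytes
    columns: HashMap<u64, Vec<u8>>,
}

struct AvailabilityCache {
    // key = block root, value = the pending components for that block
    cache: HashMap<BlockRoot, PendingComponents>,
    columns_required: usize,
}

impl AvailabilityCache {
    fn new(columns_required: usize) -> Self {
        Self {
            cache: HashMap::new(),
            columns_required,
        }
    }

    /// Persist a column received over gossip instead of ignoring it.
    /// Returns true only once all required columns for the block are
    /// present, i.e. the block may now be marked Available.
    fn put_gossip_column(&mut self, root: BlockRoot, index: u64, column: Vec<u8>) -> bool {
        let pending = self.cache.entry(root).or_default();
        pending.columns.insert(index, column);
        pending.columns.len() >= self.columns_required
    }
}
```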
  • Assumptions The following notes assume that: column index custody requirements for a node will only change at epoch boundaries; a syncing node must fetch the columns it's required to custody from the past 4096 epochs (i.e. the data availability window); and a syncing node must validate columns via sampling as it syncs. Non-Finalized Sync There are some incompatibilities with the current sync infra. Currently there is an implicit assumption that at each "batch" (i.e. epoch), we will request both blobs and blocks from a single, randomly selected peer. However, with data columns this assumption is no longer valid.
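The data availability window assumption above reduces to a small calculation: a hypothetical helper (names are mine, not Lighthouse's) for the earliest epoch a syncing node must custody columns from, assuming a 4096-epoch window.

```rust
/// Number of epochs in the data availability window, per the
/// assumption above.
const DATA_AVAILABILITY_WINDOW_EPOCHS: u64 = 4096;

/// Earliest epoch from which a syncing node must fetch the columns it
/// is required to custody. Saturates at genesis (epoch 0).
fn da_window_start_epoch(current_epoch: u64) -> u64 {
    current_epoch.saturating_sub(DATA_AVAILABILITY_WINDOW_EPOCHS)
}
```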
  • PR: https://github.com/sigp/lighthouse/pull/4718 During the EPF, I put together some code that abstracted away some of the LevelDB-specific database functionality within the beacon node backend. The existing KeyValueStore trait was already doing some of the abstraction; however, the code always "assumed" the KV store was LevelDB. TLDR: I copied the slasher db architecture! I created a BeaconNodeBackend type that implements the KeyValueStore and ItemStore traits. I then replaced all references to LevelDB with the new BeaconNodeBackend type. Within the BeaconNodeBackend type, a cfg! macro was used to activate LevelDB/Redb-specific code paths based on config values. Recently I was able to add Redb as an alternative database implementation. Redb is an ACID embedded key-value store implemented in pure Rust. Data is stored in copy-on-write B-trees. See the design doc for more info. I've run both the LevelDB and Redb db implementations on mainnet and here are the results
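The backend abstraction described above can be sketched like this. One caveat: the real code selects the backend with a cfg! macro over feature flags; a runtime enum is used here so the sketch stays self-contained, and the in-memory stores and trait signatures are illustrative stand-ins, not Lighthouse's actual types.

```rust
use std::collections::HashMap;

trait KeyValueStore {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>);
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
}

// In-memory stand-ins for the real LevelDB and Redb stores.
#[derive(Default)]
struct LevelDbStore(HashMap<Vec<u8>, Vec<u8>>);
#[derive(Default)]
struct RedbStore(HashMap<Vec<u8>, Vec<u8>>);

impl KeyValueStore for LevelDbStore {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.0.insert(key, value);
    }
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

impl KeyValueStore for RedbStore {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.0.insert(key, value);
    }
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}

// Callers only ever see this type, so swapping databases does not
// touch the rest of the codebase.
enum BeaconNodeBackend {
    LevelDb(LevelDbStore),
    Redb(RedbStore),
}

impl KeyValueStore for BeaconNodeBackend {
    fn put(&mut self, key: Vec<u8>, value: Vec<u8>) {
        match self {
            BeaconNodeBackend::LevelDb(s) => s.put(key, value),
            BeaconNodeBackend::Redb(s) => s.put(key, value),
        }
    }
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        match self {
            BeaconNodeBackend::LevelDb(s) => s.get(key),
            BeaconNodeBackend::Redb(s) => s.get(key),
        }
    }
}
```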
  • Ethereum Protocol Fellowship Final Update Abstract As a participant in cohort four of the Ethereum Protocol Fellowship, my goal was to contribute to Lighthouse, an Ethereum consensus client written in Rust. Specifically, I focused on adding new database implementations to the slasher backend, modularizing the beacon node backend, adding block v3 endpoint functionality to the Beacon API, and providing a continuous stream of contributions to the Lighthouse codebase. I also decided to contribute to rust-libp2p by adding tracing + structured logging. The slasher is a piece of software that detects "slashable" events from validators and reports them to the protocol. The Lighthouse slasher has several existing database implementations with batch processing times of roughly one second. There was interest in adding additional db implementations, namely Redb and Sqlite. I was able to reach 5 second batch times with the redb implementation and 23 second batch times with the sqlite implementation. There's still some more optimization work that can be done to reduce batch times further. The beacon node is at the core of the Ethereum proof-of-stake protocol. It is responsible for running the beacon chain, and uses distributed consensus to agree on blocks both proposed and attested to by validators in the network. Beacon nodes communicate their processed blocks to their peers via a peer-to-peer network, and also manage the lifecycle of active validator clients. Lighthouse's beacon node backend is set up to use LevelDB, an on-disk key-value database store. Parts of the Lighthouse codebase assume the database is always LevelDB, which makes it difficult to add other database implementations. I spent some time modularizing the beacon node backend and abstracting over the current LevelDB implementation. The changes here should hopefully make it easier to switch between different database implementations within the beacon node.
The block v3 endpoint is the new block production endpoint introduced as part of the Deneb upgrade. The endpoint is used by validator clients to produce an unsigned block. With the old block v2 endpoints, the validator client had to specify whether it wanted to use the builder relay or the local execution client when requesting a new block. This added unnecessary complexity to the validator client and resulted in two separate endpoints: one for the builder relay, i.e. the blinded block endpoint, and one for the local execution client, i.e. the full payload endpoint. The new v3 endpoint consolidates the two v2 endpoints and removes the need for the validator to specify whether it wants to fetch blocks from the builder relay vs the local execution client.
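The consolidation above can be illustrated with a small sketch: a single v3-style entry point returns either a full (local) or blinded (builder) block and tells the caller which variant it got, instead of the validator client picking an endpoint up front. The types and decision inputs here are hypothetical simplifications, not the actual Beacon API types.

```rust
struct FullBlock;    // would carry the full execution payload
struct BlindedBlock; // would carry only the payload header

// One response type covering both cases; the caller inspects the
// variant rather than choosing an endpoint in advance.
enum ProduceBlockV3Response {
    Full(FullBlock),
    Blinded(BlindedBlock),
}

fn produce_block_v3(builder_available: bool, builder_bid_is_best: bool) -> ProduceBlockV3Response {
    if builder_available && builder_bid_is_best {
        ProduceBlockV3Response::Blinded(BlindedBlock)
    } else {
        ProduceBlockV3Response::Full(FullBlock)
    }
}
```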
  • The past three weeks have been dedicated to the following topics: implementing feedback from the block v3 endpoint review, adding the block v3 endpoint to the Lighthouse validator client, Sqlite slasher db improvements, and adding tracing examples in rust-libp2p. Feedback from Block v3 endpoint review We found some additional opportunities to refactor and delete some code. I had combined most of the v2 and v3 logic into one "flow"; however, the code branched out at certain places to handle the different endpoint versions. We found a way to combine these branches into one fully coherent flow, and were able to delete around 400+ lines of code in the process. The refactor here mostly involved playing with types and adding a conversion from a full payload to a blinded payload.
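A full-to-blinded conversion of the kind mentioned above might look roughly like this. The field names are hypothetical simplifications; in the real types the transaction list would be replaced by header-level data such as a transactions root, whereas here it is simply dropped.

```rust
struct FullPayload {
    block_hash: [u8; 32],
    transactions: Vec<Vec<u8>>,
}

struct BlindedPayload {
    // header-level data only; the transaction bodies are gone
    block_hash: [u8; 32],
}

// With this conversion in place, code that produces a FullPayload can
// feed directly into the blinded flow, letting both branches share one
// code path.
impl From<FullPayload> for BlindedPayload {
    fn from(full: FullPayload) -> Self {
        drop(full.transactions);
        BlindedPayload {
            block_hash: full.block_hash,
        }
    }
}
```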
  • Over the last few weeks I've started to reach the later stages of some of my big projects. Redb slasher db implementation Sqlite slasher db implementation Tracing in libp2p Block v3 endpoint Lighthouse v4.5.0 Redb Slasher db implementation PR: https://github.com/sigp/lighthouse/pull/4529
  • I'm really proud of the work I've done so far during this cohort. I find myself looking back to six months ago when I was just beginning to poke around the Lighthouse codebase. I was new to the Rust programming language and client development in general. Fast forward to now and I am finally feeling productive programming in Rust. I'm also much more comfortable with certain parts of the Lighthouse codebase. There's still a lot to learn, and it feels like I'm just barely scratching the surface, but I'm excited about the progress I've made so far. Block v3 endpoint progress rust-libp2p tracing updates modularizing the beacon node benchmarking slasher backend implementations Block V3 endpoint Over the past month I've worked on implementing the new block v3 endpoint. In an ideal scenario, implementing the v3 endpoint would have simply required some small tweaks to the existing full/blinded v2 endpoint logic. However, in the Lighthouse v2 flow, there was heavy usage of abstract function type parameters. To illustrate the issues I faced, take the following example:
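A hypothetical sketch of the kind of generic-heavy signature this refers to (the trait, types, and function here are illustrative stand-ins, not the note's original example): when block production is generic over the payload type, each call site must pin one concrete payload up front, which makes a single endpoint that can return either kind awkward to write.

```rust
trait AbstractExecPayload {
    fn kind() -> &'static str;
}

struct FullPayload;
struct BlindedPayload;

impl AbstractExecPayload for FullPayload {
    fn kind() -> &'static str {
        "full"
    }
}
impl AbstractExecPayload for BlindedPayload {
    fn kind() -> &'static str {
        "blinded"
    }
}

// v2-style flow: the payload type is fixed by the type parameter, so
// one call can only ever produce one kind of block.
fn produce_block<Payload: AbstractExecPayload>() -> &'static str {
    Payload::kind()
}
```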
  • Most of weeks 5 and 6 has been dedicated to the following issues: the block v3 endpoint, handling genesis files in Lighthouse, the Sqlite impl in the slasher backend, and beginning to modularize the beacon node backend. Block v3 endpoint PR: https://github.com/sigp/lighthouse/pull/4629
  • Block V3 Endpoint The Deneb spec has introduced a new block production endpoint for validators. Spec: https://github.com/ethereum/beacon-APIs/pull/339 Previously there were separate endpoints for returning blinded vs unblinded blocks. This new endpoint combines the two into one single endpoint. Per Michael Sproul at Lighthouse, here is some additional information about what this endpoint needs to do. There are a bunch of conditions already implemented that determine when to use a builder block vs a local block, including:
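The shape of that builder-vs-local decision could be sketched as below. The condition names and weighting are illustrative guesses at the kind of checks involved, not Lighthouse's exact rule set.

```rust
struct BlockProductionParams {
    builder_enabled: bool,
    chain_is_optimistic_or_syncing: bool,
    builder_bid_wei: u128,
    local_block_value_wei: u128,
    builder_boost_factor: u128, // percentage; 100 = neutral weighting
}

enum BlockSource {
    Builder,
    Local,
}

fn choose_block_source(p: &BlockProductionParams) -> BlockSource {
    // Fall back to the local execution client whenever the builder
    // path is disabled or the chain is not healthy.
    if !p.builder_enabled || p.chain_is_optimistic_or_syncing {
        return BlockSource::Local;
    }
    // Weight the builder bid, then compare it to the local block value.
    let weighted_bid = p.builder_bid_wei.saturating_mul(p.builder_boost_factor) / 100;
    if weighted_bid > p.local_block_value_wei {
        BlockSource::Builder
    } else {
        BlockSource::Local
    }
}
```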
  • Motivation The Ethereum Protocol Fellowship provides aspiring protocol developers like myself a pathway to gaining the experience needed to make meaningful contributions to the core Ethereum protocol. For the EPF I chose to contribute to Lighthouse, an Ethereum consensus client implementation written in Rust. I started as a Lighthouse open source contributor in April 2023. I began with simple tasks: making updates to documentation, updating log messages, etc. Each contribution I made gave me the confidence and comfort I needed to take on bigger, more complex issues. Now as an EPF participant I am tasked with working on a "large" project. However, based on what I've experienced so far, client developers don't just work on individual features. They frequently work on multiple tasks at the same time. Furthermore, client developers provide technical support to users and spend time investigating/fixing bugs. With this in mind, I propose a three-part project. These parts won't be completed in a specific order, but instead will be worked on in parallel. Part 1: The "Big" Project
  • SSZ block proposal flow in lighthouse Improve transport connection errors Lighthouse Slasher DB Looking at libp2p Learning networking SSZ block proposal flow in lighthouse According to benchmarks of encoding and decoding signed-blinded-beacon-block payloads, SSZ seems 40-50x faster than JSON
  • According to benchmarks of encoding and decoding signed-blinded-beacon-block payloads, SSZ seems 40-50x faster than JSON. Considering 20-40ms per coding on average, that's up to 200-300ms JSON latency (or more). Sending the data SSZ encoded could reliably shave 200-250ms off each getPayload roundtrip. Recently I've worked to support SSZ request bodies in Lighthouse for the POST beacon/blocks and POST beacon/blinded_blocks endpoints via the PRs below: Support SSZ request body for POST /beacon/blinded_blocks endpoints (v1 & v2)
  • Thursday July 13th Attended the first EPF 4 Office Hours. I really enjoyed Mario's presentation regarding the history of Ethereum! The following open issue seems like an interesting project to explore for the cohort. I began taking notes on redb. I also have an example implementation of a redb database in Rust. My goal was to mimic some of the structs and types used in Lighthouse while having a playground I can use to test changes quickly. That repo can be found here: https://github.com/eserilev/redb-impl Friday July 14th
  • PR: https://github.com/sigp/lighthouse/pull/4529 Currently the Lighthouse slasher backend supports two different database implementations: lmdb and mdbx. I am currently working on adding a redb implementation. Redb is a simple, portable, high-performance, ACID, embedded key-value store. Lmdb, Mdbx and Redb share many similarities. However, there are some differences with redb that are important to note, especially in the context of the existing slasher backend. Database Tables Lmdb and Mdbx do not have a concept of database tables. For these two implementations we end up creating 9 separate database instances within the slasher backend. However, Redb introduces the concept of database tables. Therefore, for this implementation, it's probably not necessary to create separate database instances. We can instead create one database instance, with 9 tables contained within that instance. We still need the redb implementation to support the existing OpenDatabases interface, so I've defined a Database struct as follows:
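The single-database / many-tables layout can be sketched like this. In real redb each table would be a TableDefinition inside one redb::Database; a nested map is used here to keep the sketch self-contained, and the table names are illustrative, not the slasher's actual schema.

```rust
use std::collections::HashMap;

const TABLE_COUNT: usize = 9;
// Illustrative table names standing in for the slasher's 9 stores.
const TABLE_NAMES: [&str; TABLE_COUNT] = [
    "indexed_attestations",
    "attesters",
    "min_targets",
    "max_targets",
    "current_epochs",
    "proposers",
    "metadata",
    "attester_keys",
    "blocks",
];

struct Database {
    // One instance holding every table, instead of 9 database instances.
    tables: HashMap<&'static str, HashMap<Vec<u8>, Vec<u8>>>,
}

impl Database {
    /// Open a single database containing all slasher tables, mirroring
    /// the OpenDatabases-style interface mentioned above.
    fn open() -> Self {
        let mut tables = HashMap::new();
        for name in TABLE_NAMES {
            tables.insert(name, HashMap::new());
        }
        Self { tables }
    }

    fn table(&mut self, name: &str) -> Option<&mut HashMap<Vec<u8>, Vec<u8>>> {
        self.tables.get_mut(name)
    }
}
```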
  • Tuesday June 20th, 2023 Today I did some light reading. Just starting out! Wednesday June 21st, 2023 A few weeks back I submitted a PR for implementing the expected_withdrawals HTTP API in Lighthouse. Issue https://github.com/sigp/lighthouse/issues/4029 PR
  • In the lmdb and mdbx implementations there is a concept of an environment. An environment supports multiple databases, all residing in the same shared-memory map. For redb, no such concept exists. I think we want to create an Environment struct for our redb implementation (see code snippets below). For now we created an Environment struct that's simply a wrapper around redb::Builder (the redb Builder allows us to create/open databases):

    ```rust
    pub struct Environment {
        builder: redb::Builder,
    }
    ```