Hello everyone! I'm Ankur, and I'm excited to participate in the 4th cohort of the fellowship as a permissionless participant! Since I have a full-time job, I plan to contribute on weekday mornings and primarily on the weekends.

I'm highly interested in working on the Lighthouse consensus client, and the project on optimising attestation aggregation looks very interesting to me. I've decided to work on this project for the fellowship.

I'll use this note to document my updates prior to the fellowship officially kicking off, over a period of 2 weeks.

The First Week

Learning Rust

The majority of this week was spent learning Rust from scratch. The Rust Book was my primary reading material, supplemented by videos from NoBoilerplate to get a general sense of the philosophy behind Rust.
I then solved a few challenges from the Advent of Code 2022 in Rust to get the hang of things; the solutions can be found here.
Coming from a Typescript background, I was very curious about the concurrency models supported by Rust. I was surprised to learn that pretty much every model is available, and event-loop-style concurrency is supported via community-maintained runtimes like Tokio. I reviewed the Async Book to get a general understanding of Rust's support for async/.await, and also spent some time playing around with message-passing concurrency via channels. In the coming weeks I'd like to try out shared-memory concurrency as well.
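
To try this out, I wrote a minimal message-passing sketch using the standard library's mpsc channels (no Tokio needed for a toy example like this):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // A channel gives us a Sender/Receiver pair; the Sender can be
    // cloned to support multiple producers.
    let (tx, rx) = mpsc::channel();

    let worker = thread::spawn(move || {
        for i in 0..3 {
            tx.send(i).expect("receiver should still be alive");
        }
        // `tx` is dropped here, which closes the channel.
    });

    // Iterating over the receiver blocks until the channel is closed.
    let received: Vec<i32> = rx.iter().collect();
    worker.join().unwrap();
    println!("{:?}", received); // [0, 1, 2]
}
```

What I liked is that ownership of `tx` moves into the worker thread, so the compiler itself guarantees the channel is closed when the thread finishes.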

Ethereum Beacon Chain

The next section of the week was spent learning how the Ethereum Beacon Chain works. I started by getting a basic understanding from ethereum.org's documentation and then reviewed the articles linked below. A major goal here was to understand what attestations are and how they are aggregated: when and why aggregation is necessary, and what constraints apply.

  1. https://ethos.dev/beacon-chain
  2. https://github.com/ethereum/annotated-spec/blob/master/phase0/beacon-chain.md#attestationdata
  3. https://eth2book.info/capella/part3/containers/dependencies/
  4. https://medium.com/@aditya.asgaonkar/bitwise-lmd-ghost-an-efficient-cbc-casper-fork-choice-rule-6db924e57d1f

Lighthouse

Finally, the remainder of the week was spent understanding how attestation aggregation is currently implemented in Lighthouse. The current implementation is based on a greedy approach to the maximum weighted coverage problem, and I was able to develop a high-level understanding of the algorithm. Some of the other articles I reviewed:

  1. https://lighthouse-blog.sigmaprime.io/attestation-packing.html
  2. https://lighthouse-blog.sigmaprime.io/optimising-attestation-packing.html (high level overview only)
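
To make sure I understood the greedy idea, I put together a toy sketch of my own (not Lighthouse's actual code): validator sets are modelled as u64 bitmasks, and each iteration picks the candidate aggregate that covers the most not-yet-covered validators.

```rust
// Toy greedy maximum-coverage: repeatedly pick the candidate that adds
// the most new coverage, up to `limit` picks. This is an illustration of
// the general technique, not Lighthouse's implementation.
fn greedy_pack(candidates: &[u64], limit: usize) -> (Vec<usize>, u64) {
    let mut covered: u64 = 0;
    let mut chosen: Vec<usize> = Vec::new();
    while chosen.len() < limit {
        let mut best: Option<(usize, u32)> = None;
        for (i, &set) in candidates.iter().enumerate() {
            if chosen.contains(&i) {
                continue;
            }
            // How many new validators would this candidate cover?
            let gain = (set & !covered).count_ones();
            if gain > 0 && best.map_or(true, |(_, g)| gain > g) {
                best = Some((i, gain));
            }
        }
        match best {
            Some((i, _)) => {
                covered |= candidates[i];
                chosen.push(i);
            }
            None => break, // no remaining candidate adds coverage
        }
    }
    (chosen, covered)
}

fn main() {
    // Three overlapping "aggregates" over 8 validators (one bit each).
    let candidates = [0b0000_0111, 0b0001_1110, 0b1110_0000];
    let (chosen, covered) = greedy_pack(&candidates, 2);
    println!("chosen: {:?}, covered: {:#010b}", chosen, covered);
    // chosen: [1, 2], covered: 0b11111110
}
```

The greedy heuristic is attractive because it's fast and carries the classic (1 - 1/e) approximation guarantee for maximum coverage, which is part of why improving on it requires the heavier machinery Satalia proposes.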

Plan for next week

  1. Spend more time on understanding the currently implemented algorithm and review the way the quality of attestation packing is evaluated.
  2. Understand the current implementation on a code-level.
  3. Understand Satalia's proposal for an algorithm for optimal attestation aggregation:
    1. https://lighthouse-blog.sigmaprime.io/optimising-attestation-packing.html
    2. https://lighthouse-blog.sigmaprime.io/docs/satalia-01-problem-definition.pdf
    3. https://lighthouse-blog.sigmaprime.io/docs/satalia-02-exact-approaches.pdf
    4. https://lighthouse-blog.sigmaprime.io/docs/satalia-03-results.pdf
  4. Review Satalia's implementation of the above algorithm. Thanks to @michaelsproul for sharing this link.

The Second Week

Reviewing Satalia's Papers

Satalia has provided 3 excellent documents, focusing on the following:

  1. Breaking down the problem statement and setting up the notation to express the problem statement and the candidate solution mathematically.
  2. Breaking down the problem into two subproblems: the aggregation stage and the packing stage. The aggregation stage refers to the problem of creating maximally aggregated "aggregates" \(A^*\), and the packing stage refers to taking aggregates from \(A^*\) and figuring out which ones to choose to achieve maximum coverage of the validator set.
    The paper explores a solution to the aggregation problem by representing the aggregates as a graph, where each edge encodes the "aggregation compatibility" between the nodes representing the aggregates. \(A^*\) is then computed using the Bron-Kerbosch algorithm, with some heuristics to achieve better performance.
    An MIP-based solution is provided for the packing problem, but I haven't explored it deeply.
  3. The third paper contains the results of implementing the above algorithms, compared to the current implementation.
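
My mental model of the compatibility relation: two aggregates attesting to the same AttestationData can be merged exactly when their participation bitfields are disjoint. Here's a tiny sketch of that idea (my own simplification: bitfields are reduced to u64, and the data is stood in for by a hash; the real implementation uses SSZ bitlists and also aggregates the BLS signatures):

```rust
// Toy model of the aggregation-stage compatibility test.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Aggregate {
    data_root: u64, // stand-in for the hash of the AttestationData
    bits: u64,      // which committee members participated
}

// Two aggregates can be merged iff they attest to the same data and
// their participation bitfields don't overlap.
fn compatible(a: &Aggregate, b: &Aggregate) -> bool {
    a.data_root == b.data_root && a.bits & b.bits == 0
}

// Merging unions the bitfields (signature aggregation omitted here).
fn merge(a: &Aggregate, b: &Aggregate) -> Option<Aggregate> {
    compatible(a, b).then(|| Aggregate {
        data_root: a.data_root,
        bits: a.bits | b.bits,
    })
}

fn main() {
    let a = Aggregate { data_root: 1, bits: 0b0011 };
    let b = Aggregate { data_root: 1, bits: 0b1100 };
    let c = Aggregate { data_root: 1, bits: 0b0110 };
    assert!(compatible(&a, &b));
    assert!(!compatible(&a, &c)); // overlapping bits: not mergeable
    println!("{:?}", merge(&a, &b)); // merged bits = 0b1111
}
```

Seen this way, a set of mutually compatible aggregates is a clique in the compatibility graph, which is why a maximal-clique enumeration algorithm like Bron-Kerbosch shows up in the aggregation stage.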

I have gone through the papers a few times, and I feel I have a working understanding of the proposed solution to the aggregation stage.
The articles from Sigma Prime's blog (1 and 2) were also very helpful, particularly the first one, which contains an analysis of the attestation packing efficiencies of various points at the time the article was published.

Learning Rust

Given that the project heavily utilizes graphs and associated algorithms, I figured it might be a good idea to implement a few basic graph algorithms in Rust myself. It turns out that representing graphs in Rust is not trivial: it requires careful thought to prevent reference cycles, and calls for smart pointers like Rc<T> for managing shared ownership and RefCell<T> for mutating nodes after they have been allocated.

I implemented a simple DFS in Rust here, utilizing a combination of Rc<T> and Weak<T> to prevent reference cycles. I put it up for review with my Twitter connections and received a lot of feedback on improvements, which I plan to play around with this week.
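
The core pattern looks roughly like this (a simplified sketch, not the exact code from my repo): children hold strong Rc references, the parent link is a Weak reference so parent-child cycles don't leak memory, and RefCell allows mutation after allocation.

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::rc::{Rc, Weak};

// A tree/graph node with shared ownership. Children are strong (Rc);
// the parent back-edge is Weak so the cycle parent <-> child can't
// keep both nodes alive forever.
struct Node {
    id: usize,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn new_node(id: usize) -> Rc<Node> {
    Rc::new(Node {
        id,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(Vec::new()),
    })
}

fn add_child(parent: &Rc<Node>, child: &Rc<Node>) {
    *child.parent.borrow_mut() = Rc::downgrade(parent);
    parent.children.borrow_mut().push(Rc::clone(child));
}

// Iterative DFS collecting node ids in visit order.
fn dfs(root: &Rc<Node>) -> Vec<usize> {
    let mut visited = HashSet::new();
    let mut order = Vec::new();
    let mut stack = vec![Rc::clone(root)];
    while let Some(node) = stack.pop() {
        if visited.insert(node.id) {
            order.push(node.id);
            // Push children in reverse so the first child is visited first.
            for child in node.children.borrow().iter().rev() {
                stack.push(Rc::clone(child));
            }
        }
    }
    order
}

fn main() {
    let root = new_node(0);
    let (a, b, c) = (new_node(1), new_node(2), new_node(3));
    add_child(&root, &a);
    add_child(&root, &b);
    add_child(&a, &c);
    println!("{:?}", dfs(&root)); // [0, 1, 3, 2]
}
```

For denser graphs, most of the feedback I received pointed toward index-based adjacency lists (a `Vec<Vec<usize>>`) instead of pointer-chasing, which sidesteps Rc/RefCell entirely.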

It was also suggested that I go through Learn Rust With Entirely Too Many Linked Lists. This book is really great for understanding pointers and memory management; I'm currently halfway through it.

Setting up Lighthouse

I was able to set up the Lighthouse codebase locally, run the test cases, and use the start_local_testnet.sh script to run a few instances of lighthouse and an instance of geth. I had a few hiccups with missing libraries, but I was able to find them on brew.

Plan for the next week

  1. Start looking into Lighthouse's code-base, identify sections that are relevant to the project, and build a general high-level understanding of the code-base.
  2. Continue with the book Learn Rust With Entirely Too Many Linked Lists.
  3. Review Satalia’s implementation of the above algorithm.
  4. It might be a good idea to go through the Ethereum Yellow Paper as well. I've tried and given up on numerous occasions before, but this time I've heard there's a really nice YouTube playlist which breaks it down, so I might try going through it.
  5. EPF officially kicks-off in 4 days 🤩

The (first 3 days of the) Third Week

While reviewing Satalia's reference implementation, I noticed that it used a few outdated feature gates, because of which the code wasn't compiling with the latest Rust nightly toolchain. I submitted a small fix to update the reference implementation. @michaelsproul suggested that we should migrate the code to stable Rust, as a crate inside Lighthouse. I agree with this, and it's something Geemo suggested as well.

I've been having a lot of fun learning Rust; the more I learn, the more I appreciate the language, though I'm still fighting the borrow checker from time to time. The Learn Rust With Entirely Too Many Linked Lists book is really well written and has been an awesome read so far.

I attended the EPF4 Kickoff Call today, and I'm really excited now that the fellowship has officially kicked off. Mario gave a really great overview of the history of Ethereum.

So this will be all for this document. Week 1 has officially started 🎊 so going forward I'll be documenting my updates on a weekly basis.

-> Next Update
