I started working on some issues on Reth, but I quickly realized that the project was more complex than I had anticipated. The code base is highly intricate, making it difficult to dive straight in and get started. Additionally, many other developers, fellows and non-fellows alike, were interested in contributing to the project. This made it highly competitive just to claim issues that would help me understand the code base and get familiar with the different crates (packages). Also, it seems that even if you are the first to respond to a "good first issue", the chances of being "sniped" by another developer are pretty high.
Another fellow, Alessandro, and I decided to contact the Reth team to ask for a research topic or something more substantial that would help us get started and feel more comfortable with the code base. We asked about more sustainable projects from their backlog. As the client is still in an alpha state, the devs felt more comfortable tackling those issues themselves, which is totally understandable.
While Reth is an amazing project with highly competent and passionate developers, I found it overwhelming as a newcomer to the world of core development. Since I am also still working for my current company, I need a slightly different setup to make this cohort a success, and struggling to get issues assigned would jeopardise my goals.
That brought me back to Lighthouse's project ideas. I really enjoyed exploring and learning more about that area of core development, and I have decided to work on several projects.
Lighthouse has long been missing a critical metric for when blocks are missed by local validators. See https://github.com/sigp/lighthouse/issues/3414
I have since started reading lots of resources related to the topic. They helped me understand the different cases enumerated by Paul Hauner that would label a block as missed by a validator from the Consensus Layer's point of view, here Lighthouse.
The issue also contains quite a few technical pointers from Paul, such as why some naive approaches cannot be used (e.g. calling a highly computationally expensive method would eat up too many resources).
I then started looking more into beacon_chain.rs and other files, exploring the different ways of getting data (from a cache, querying the state directly, …), and getting more familiar with the CL.
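To make that trade-off concrete, here is a minimal Rust sketch of the cache-first pattern I have in mind; `ProposerCache` and `compute_proposer_from_state` are hypothetical placeholders for the sketch, not Lighthouse's actual API:

```rust
use std::collections::HashMap;

type Slot = u64;
type ValidatorIndex = u64;

/// Hypothetical cache mapping a slot to its expected block proposer.
struct ProposerCache {
    entries: HashMap<Slot, ValidatorIndex>,
}

impl ProposerCache {
    /// Return the proposer for `slot`, hitting the cache first and only
    /// falling back to an expensive state read on a miss.
    fn proposer_at(&mut self, slot: Slot) -> ValidatorIndex {
        if let Some(&index) = self.entries.get(&slot) {
            return index; // cheap path: cache hit
        }
        // Expensive path: recompute from the beacon state (stubbed out here).
        let index = compute_proposer_from_state(slot);
        self.entries.insert(slot, index);
        index
    }
}

/// Stand-in for the costly state computation the issue warns about.
fn compute_proposer_from_state(slot: Slot) -> ValidatorIndex {
    slot % 16 // dummy value for the sketch
}

fn main() {
    let mut cache = ProposerCache { entries: HashMap::new() };
    println!("proposer at slot 42: {}", cache.proposer_at(42));
    println!("proposer at slot 42 (cached): {}", cache.proposer_at(42));
}
```

The point is simply to keep the expensive state read off the hot path, which is what Paul's warnings about costly methods boil down to.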
I think I have a better idea of how to start tackling this issue, and I will start writing some code in week 6.
I have also started digging into the different reasons why a block or an attestation can be missed, which sent me down the rabbit hole of relays and MEV-Boost, with some notable reading along the way:
All of that led me to read more about how much runtime performance matters, for instance how it affects missed attestations for Europe-based validators, and it helped me understand the CL as a whole.
As a result, I also decided to look into another project idea, once again proposed by Paul Hauner:
> Lighthouse has a scheduler called the [BeaconProcessor](https://github.com/sigp/lighthouse/tree/unstable/beacon_node/beacon_processor) which it uses for queuing and quality-of-service. It would be interesting to have pie-chart style metrics to show which work types are taking how much percentage of total running time. This may highlight areas for optimisation.
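As a rough sketch of what such a metric could look like, here is a minimal example using the plain `prometheus` crate rather than Lighthouse's own metrics helpers; the metric name `beacon_processor_work_seconds` and the `work_type` label values are my own assumptions, not the BeaconProcessor's actual names:

```rust
use prometheus::{register_histogram_vec, HistogramVec};

fn main() {
    // Hypothetical metric name and work-type label; the real BeaconProcessor
    // work types and metric names would differ.
    let work_time: HistogramVec = register_histogram_vec!(
        "beacon_processor_work_seconds",
        "Time spent processing each work type",
        &["work_type"]
    )
    .expect("metric registers in the default registry");

    // Time one unit of work; a dashboard can then render the per-label
    // totals as a pie chart of where running time goes.
    let timer = work_time.with_label_values(&["gossip_block"]).start_timer();
    std::thread::sleep(std::time::Duration::from_millis(5)); // pretend work
    timer.observe_duration();

    // Dump the gathered metrics so the sketch is self-checking.
    let metrics = prometheus::TextEncoder::new()
        .encode_to_string(&prometheus::gather())
        .expect("metrics encode to text");
    println!("{metrics}");
}
```

From there, a PromQL query such as `sum by (work_type) (beacon_processor_work_seconds_sum)` would give the per-type share of total running time, which maps directly onto the pie-chart view the idea describes.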
I would love to get on with improving the runtime of some of these tasks, here the ones launched by the BeaconProcessor.
Switching to Lighthouse feels like the right decision. I want my EPF project to add some useful metrics to Lighthouse / RPC, gain a better understanding of potential runtime performance issues, and improve them. I will finalize my project for this cohort this week.