# EPF Cohort 5 - Week 15

Date: 23/09/2024

### Updates

1. We swapped the `networkx` library out for `rustworkx`. The benchmarks are presented below ([credits](https://www.rustworkx.org/dev/benchmarks.html))
![image](https://hackmd.io/_uploads/Sy1Ube100.png)
2. Almost done with the sample gossip (distribution part). Will detail it in the next update.
3. I set up a non-validator node (geth + prysm).
    * I had initially started (fellowship phase 0) with geth + lighthouse and everything worked great, but I didn't have the funds to keep it running.
    * After receiving the fellowship stipend I decided to set the node up again. I first tried geth + nimbus, but nimbus has a lot of stale code in its entrypoint, due to which it fails to initialize libp2p.
    * So I then switched to geth + prysm. It works great, but some of the logs don't add up.

### Anecdote on Setting up a Node

I'm the human version of a fuzzer: I tend to use software in an UNINTENDED manner, so I hit a lot of edge cases. The same thing happened with Nimbus.

Nimbus splits its syncing logic in two: obtaining checkpoints, a.k.a. `trustedNodeSync`, and post-checkpoint syncing, a.k.a. the normal beacon logic flow. In other words, it moves the logic for obtaining checkpoints outside of the beacon chain logic. They have a good reason to do so: "keep the users aware of the trust involved" by forcing a UX interaction. Otherwise, server configurations end up in a docker-compose file or Kubernetes manifest, never to be changed again.

Originally the logic was a single unit; they later split it in two, but some stale logic remained. Despite the clear documentation, I wanted to enable all the options provided historically and hit the edge case where the node fails to download the state root from a remote endpoint.
I'm in the process of refactoring their entire entrypoint file, though I'm not sure they will accept it (they are pedantic about their logic, which is not necessarily a bad thing).

After switching to prysm, everything worked great. But just to experiment, I killed the EL and CL multiple times at different points in their lifecycle, and the CL client now continuously complains that the EL is not syncing. This problem persists even though the clients stay in sync by continuously verifying blocks. Weird. Will look into it.

### Next Steps

1. Test against random graphs now that the library is switched out.
2. Finish the sample gossip part.
3. Wrap up the simulator for one month of "Red Team", i.e. writing attack vectors.

### Personal Notes

I haven't been able to attend ACD calls for quite some time now. Though I feel more integrated into the Ethereum workflow, I want to get back to ACD calls.

Also, answering the question from the previous update: most people credit erasure codes with the capability of reconstruction, but that is slightly incorrect (or limiting). Their real use, IMO, is to force an adversary's hand at deleting (making unavailable) data. With erasure codes, the adversary cannot delete even the smallest fraction of the data without deleting at least half of it (3/4 in 2D). This inference is usually implied by the probability calculations, and digging deeper into them revealed it to me.
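The erasure-code point above can be made concrete with a toy sketch: in a systematic Reed-Solomon-style code, `k` data symbols become `n` shares, and any `k` of the `n` shares reconstruct everything, so an adversary must erase `n - k + 1` shares to destroy even one symbol. At rate 1/2 (`n = 2k`) that is more than half of all shares. The field choice, parameters, and helper names below are illustrative only, not any production code:

```python
# Toy systematic erasure code over a prime field via Lagrange interpolation.
P = 2**31 - 1  # prime modulus (hypothetical choice)

def lagrange_at(points, t):
    """Evaluate at t the unique polynomial through `points`, mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (t - xj) % P
                den = den * (xi - xj) % P
        # Divide by den using Fermat's little theorem (P is prime).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

k, n = 4, 8
data = [7, 21, 5, 99]            # k data symbols (values < P)
base = list(enumerate(data))     # data placed at x = 0..k-1
shares = [(x, lagrange_at(base, x)) for x in range(n)]  # extend to n shares

# Adversary erases any n - k = 4 shares; the survivors still reconstruct:
survivors = [shares[1], shares[4], shares[6], shares[7]]
recovered = [lagrange_at(survivors, x) for x in range(k)]
print(recovered)  # → [7, 21, 5, 99]
```

Because any `k` points uniquely determine a degree-`(k-1)` polynomial, erasing fewer than `n - k + 1 = 5` of the 8 shares here changes nothing: the data stays fully recoverable.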
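Separately, the first item under Next Steps (testing against random graphs with the newly swapped-in library) could start from a minimal `rustworkx` sketch like the one below; the graph size and edge probability are hypothetical placeholders, not the simulator's actual parameters:

```python
import rustworkx as rx

# Hypothetical Erdos-Renyi random graph: 1,000 nodes, edge probability 0.01.
graph = rx.undirected_gnp_random_graph(1_000, 0.01, seed=42)

# Basic structural queries the simulator could assert against:
print(graph.num_nodes())                    # 1000
components = rx.connected_components(graph)  # list of node-index sets
print(len(components))
```

`rustworkx` exposes a largely `networkx`-like API backed by Rust, which is where the benchmark gains linked in the Updates section come from.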