Code: https://github.com/meldsun0/samba

EPF5 - Week [18, 19, 20]

Over the past two weeks, our focus has been on implementing the bucket table and integrating it with the existing codebase. Although reusing the Discv5 bucket table would have been ideal, its interface does not expose the primary operations we need. After examining its implementation thoroughly, we concluded that the best path was to use its KBucket class as a reference and implement our own. The main limitation we ran into is that the implementation at https://github.com/Consensys/discovery does not allow the forced removal of a BucketEntry/NodeRecord from the KBucket. This is crucial after a failed Ping message: with the existing implementation we would send an unnecessary second ping even though we already know the node is inactive. To address this, we developed our own implementation, which adds addOrUpdate and remove operations and drops the onLivenessConfirmed and offerNewNode methods (a rough sketch of the intended interface is included after the week [16, 17] notes below). We have also added tests for the new NodeTable definition, which is our term for the KBucket implementation; more tests will follow to ensure it behaves as expected. I still have some doubts about a few implementation details that I would like to validate with someone from Consensys.

Additionally, we reused and adapted the LivenessChecker from the aforementioned repository, replacing our connection pool. This lets us use the same DHT structure for tracking active nodes, a task that was handled separately before week 18. I also spoke with a member of the Besu team who expressed interest in EIP-7639 and shared some design insights. Another concern is the stability of the existing UTP implementations in Java, which we need for the Offer and FindContent messages.

For week 20 I am planning to focus on some of these tasks:

* Improve tests.
* Understand how we can integrate what we have with Hive.
* Advance the Offer and FindContent messages.
* Get a better idea of everything around UTP.
* Try to reach someone working on Discv5.

EPF5 - Week [16, 17]

During week 16 we worked on implementing the Ping and Find Nodes operations. Each of these operations needs to send the corresponding message and handle its response (a sketch of this send/response/failure flow is included after these notes). This is partially done, although testing and some DHT issues are still pending. Apart from that, the client also needs to handle incoming Ping and FindNode messages, and this is what I will focus on this week. We have also done some refactoring to include SSZ decoding for each of the messages; this is done.

For this week I plan to finish these two operations (Ping and Find Nodes). For each we should:

* Send the corresponding message and handle its response.
* Receive the corresponding message and apply the corresponding business logic.
* Test all of the above.

I am also planning to add some GitHub Actions to our repo. There is also a dashboard to keep track of our progress on the repo.
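As a reference for the weeks [18, 19, 20] work above, here is a minimal sketch of a bucket table that exposes explicit addOrUpdate and remove operations, so that a node which fails a PING can be dropped without a second liveness check. The class and method names, the bucket capacity K, and the 256-bit node-id assumption are illustrative; this is not the actual Samba implementation.

```java
import java.math.BigInteger;
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.Optional;

/**
 * Illustrative k-bucket table with explicit addOrUpdate/remove operations.
 * Node ids are fixed-length byte arrays; buckets are indexed by the Kademlia
 * log2(XOR distance) to the home node. A sketch only, not the Samba code.
 */
public class NodeTable {

  private static final int ID_BITS = 256; // assuming 32-byte node ids
  private static final int K = 16;        // illustrative bucket capacity

  private final byte[] homeNodeId;
  private final Deque<byte[]>[] buckets;

  @SuppressWarnings("unchecked")
  public NodeTable(byte[] homeNodeId) {
    this.homeNodeId = homeNodeId;
    this.buckets = new Deque[ID_BITS];
    for (int i = 0; i < ID_BITS; i++) {
      buckets[i] = new ArrayDeque<>();
    }
  }

  /** Inserts the node, or moves it to the most-recently-seen position if already present. */
  public synchronized boolean addOrUpdate(byte[] nodeId) {
    Deque<byte[]> bucket = bucketFor(nodeId);
    bucket.removeIf(id -> Arrays.equals(id, nodeId)); // drop the stale entry before re-adding
    if (bucket.size() >= K) {
      return false; // bucket full: the caller decides whether to ping/evict the tail entry
    }
    bucket.addFirst(nodeId);
    return true;
  }

  /** Forcibly removes a node, e.g. after a failed PING, without a second liveness check. */
  public synchronized boolean remove(byte[] nodeId) {
    return bucketFor(nodeId).removeIf(id -> Arrays.equals(id, nodeId));
  }

  /** Least-recently-seen entry of the bucket this node belongs to (eviction candidate). */
  public synchronized Optional<byte[]> leastRecentlySeen(byte[] nodeId) {
    return Optional.ofNullable(bucketFor(nodeId).peekLast());
  }

  private Deque<byte[]> bucketFor(byte[] nodeId) {
    byte[] xor = new byte[homeNodeId.length];
    for (int i = 0; i < xor.length; i++) {
      xor[i] = (byte) (homeNodeId[i] ^ nodeId[i]);
    }
    int logDistance = new BigInteger(1, xor).bitLength(); // 0..ID_BITS
    return buckets[Math.max(logDistance - 1, 0)];
  }
}
```

The design point is simply that removal is an explicit call driven by the caller (for example the liveness checker), rather than something the table decides on its own.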
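Similarly, for the Ping operation described in the week [16, 17] notes, the sketch below shows one way the send/response/failure paths could be wired together. The PingTransport and RoutingTable interfaces, the method names, and the 5-second timeout are assumptions made for illustration, not the actual Samba types.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

/** Hypothetical transport that sends a Portal PING (e.g. over a discv5 TALKREQ) and completes with the PONG bytes. */
interface PingTransport {
  CompletableFuture<byte[]> sendPing(byte[] nodeId);
}

/** Hypothetical view of the routing table, mirroring the addOrUpdate/remove sketch above. */
interface RoutingTable {
  void addOrUpdate(byte[] nodeId);
  void remove(byte[] nodeId);
}

public class PingOperation {

  private final PingTransport transport;
  private final RoutingTable routingTable;

  public PingOperation(PingTransport transport, RoutingTable routingTable) {
    this.transport = transport;
    this.routingTable = routingTable;
  }

  /**
   * Sends a PING and feeds the outcome into the routing table: a PONG refreshes
   * the entry, while a timeout or error removes it directly, avoiding a redundant
   * second ping of a node we already know is unresponsive.
   */
  public CompletableFuture<Boolean> ping(byte[] nodeId) {
    return transport
        .sendPing(nodeId)
        .orTimeout(5, TimeUnit.SECONDS) // illustrative timeout
        .handle(
            (pong, error) -> {
              if (error == null) {
                routingTable.addOrUpdate(nodeId); // liveness confirmed
                return true;
              }
              routingTable.remove(nodeId); // forced removal on failure
              return false;
            });
  }
}
```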
EPF5 - Week [13, 14, 15]

In these last three weeks we focused on developing key features and the basic architecture of the Portal Node in order to get the first connection with another peer working. We finally accomplished that, although we still have to solve some SSZ problems. The basic structure we have coded is the starting point for adding more pieces to this solution. Today, when we run the Portal Node, a REST endpoint is bootstrapped and, using Discv5, we can connect to bootnodes and keep in sync by sending a PING message and getting the response through a Discv5 TALK message. We have also created the abstractions for a connection pool and the routing table, although their implementations are left for the next iterations.

EPF5 - Week [12]

Continue developing the core services for the Portal Node, using the code from Teku and Besu as references. Our goal for this week is to complete the essential components needed for the Node to operate with basic functionality. This includes managing asynchronous operations, developing the History Network API, implementing various handlers, and establishing connections with all relevant peers. Specifically, we need to focus on:

* Asynchronous operations: ensure that all async operations are handled efficiently to maintain smooth performance and responsiveness.
* API development: provide the necessary endpoints (History Network) and functionality.
* Handler implementation: develop and integrate the different handlers to process the various types of requests and events effectively.
* Connection management: establish and manage connections with each peer to ensure proper communication and data exchange.

By the end of the week, we aim to have this foundational structure in place and tested. Following this, we will integrate it with Portal-specific components to tailor it to our needs. We also plan to review and use the libraries listed below for the Samba project:

* 'org.hyperledger.besu.internal:metrics-core:24.7.0'
* 'org.hyperledger.besu.internal:core:24.7.0'
* 'org.hyperledger.besu.internal:config:24.7.0'
* 'org.hyperledger.besu:plugin-api:24.7.0'
* 'tech.pegasys.teku.internal:infrastructure-events:24.8.0'
* 'tech.pegasys.teku.internal:async:24.8.0'
* 'tech.pegasys.teku.internal:serviceutils:24.8.0'

We will also be using:

* 'io.vertx:vertx-core:4.5.9'
* 'io.vertx:vertx-web:4.5.9'

(A small endpoint sketch using the Vert.x libraries is included after the week [9] notes below.)

EPF5 - Week [11]

* Code the initial design in order to have a Node up and running with the discovery service it needs, using discv5. Once we have this, we can start investing our time in each of the operations:
    * Joining the network
    * Finding Nodes
    * Neighborhood Gossip
    * Storing content
    * Finding Content
* Connect with mentors. There are a lot of moving parts, and getting a better idea from someone who has already used certain libraries would be really helpful.
* Merge Derek's code into the same repo.
* Sync call with Derek next Friday.

EPF5 - Week [10]

* Understand better how Netty works and how it could be used for the Portal client. I also learnt how Teku and Besu use it.
* Code a PoC for the discovery process using Discv5.
* Found that Besu uses Discv4 but Teku uses Discv5.
* Coordinated a possible roadmap with Derek.

EPF5 - Week [9]

* During this week I spent considerable time understanding and coding some parts of the domain model and the DHT.
* I also connected with another EPF fellow and joined the Portal calls on Monday to get a better idea of new developments and to learn from other teams building clients in other technologies.
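Coming back to the week [12] API goal above, the snippet below uses the listed Vert.x dependencies to expose a single HTTP endpoint, just to show the intended shape of the REST layer. The route path, port, and response body are placeholders and do not describe the actual Samba API surface.

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

public class HistoryNetworkApi {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    Router router = Router.router(vertx);

    // Placeholder endpoint: in the real service this would query the History Network layer.
    router.get("/portal/history/routing-table").handler(ctx ->
        ctx.response()
            .putHeader("content-type", "application/json")
            .end(new JsonObject().put("buckets", 256).put("nodes", 0).encode()));

    // Port 8545 is only an example value.
    vertx.createHttpServer().requestHandler(router).listen(8545);
  }
}
```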
EPF5 - Weeks [0, 8]

* Topics and materials I have studied:
    * Discv5 protocol:
        * https://github.com/ethereum/devp2p/blob/master/discv5/discv5-wire.md
        * https://github.com/ethereum/devp2p/blob/master/discv5/discv5-rationale.md
        * https://github.com/ethereum/devp2p/blob/master/discv5/discv5-theory.md
    * UTP:
        * https://www.bittorrent.org/beps/bep_0029.html
        * https://github.com/arvidn/libtorrent/blob/9aada93c982a2f0f76b129857bec3f885a37c437/src/utp_stream.cpp#L3199
        * https://github.com/bittorrent/libutp
        * https://www.rasterbar.com/products/libtorrent/utp.html
        * Low Extra Delay Background Transport (LEDBAT)
    * Kademlia:
        * https://pdos.csail.mit.edu/~petar/papers/maymounkov-kademlia-lncs.pdf
        * https://www.ethportal.net/concepts/protocols/kademlia
    * Portal Wire Protocol:
        * https://github.com/ethereum/portal-network-specs/blob/master/portal-wire-protocol.md
    * Checking other Portal clients:
        * https://github.com/optimism-java/shisui/tree/portal
    * Checking Besu code around p2p:
        * https://github.com/hyperledger/besu/tree/main/ethereum/p2p/src/main/java/org/hyperledger/besu/ethereum/p2p/discovery/internal
* Coding:
    * A multi-module Gradle project was created.
    * Code: https://github.com/meldsun0/samba
        * Demo using the discv5 library and connecting with peers.
        * WIP:
            * Modeling the main entities:
                * Packet types
                * DHT
    * UTP:
        * Work on a UTP library:
            * https://github.com/meldsun0/nethermind/tree/portal/utp-refactor
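Since UTP keeps coming up (the BEP 29 material above and the library work linked in the last item), here is a minimal encoder for the 20-byte uTP packet header defined in BEP 29. The field layout and packet-type constants follow the BEP; the class itself is illustrative and is not code from the linked repository.

```java
import java.nio.ByteBuffer;

/** Minimal encoder for the 20-byte uTP packet header described in BEP 29 (illustrative only). */
public class UtpHeader {

  public static final int VERSION = 1;
  // Packet types from BEP 29.
  public static final int ST_DATA = 0, ST_FIN = 1, ST_STATE = 2, ST_RESET = 3, ST_SYN = 4;

  public static byte[] encode(
      int type,
      int connectionId,
      long timestampMicros,
      long timestampDiffMicros,
      long windowSize,
      int seqNr,
      int ackNr) {
    ByteBuffer buf = ByteBuffer.allocate(20); // big-endian by default
    buf.put((byte) ((type << 4) | VERSION)); // type in the high nibble, version in the low nibble
    buf.put((byte) 0);                       // extension: 0 = none
    buf.putShort((short) connectionId);
    buf.putInt((int) timestampMicros);       // microsecond timestamps wrap at 32 bits
    buf.putInt((int) timestampDiffMicros);
    buf.putInt((int) windowSize);
    buf.putShort((short) seqNr);
    buf.putShort((short) ackNr);
    return buf.array();
  }

  public static void main(String[] args) {
    byte[] syn = encode(ST_SYN, 12345, System.nanoTime() / 1_000, 0, 1_048_576, 1, 0);
    System.out.println("uTP SYN header length = " + syn.length); // 20
  }
}
```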