
EPF - Eighth Update

My last update covered the implementation of the Req/Res Protocol module for the Consensus Layer peer-to-peer setup project. Some changes were still pending for it to be fully functional, so that is the main topic of this update.

Req/Res Protocol

The work for this feature took place in this pull request:

I worked on implementing the required responses for each request. This is a code snippet for the match arm of a new RPC event:

BehaviourEvent::Rpc(rpc_message) => {
    match rpc_message.event {
        Ok(received) => match received {
            rpc::RPCReceived::Request(substream, inbound_req) => {
                ...
            },
            rpc::RPCReceived::Response(_, _) => (),
        },
        Err(_) => (),
    }
}

I only match on the event of the RPC message, since the rest of the data is not relevant for processing the message. The following is the RPCReceived::Request match arm:

rpc::RPCReceived::Request(substream, inbound_req) => {
    match inbound_req {
        InboundRequest::Status(status) => {
            swarm.behaviour_mut().rpc.send_response(
                rpc_message.peer_id,
                (rpc_message.conn_id, substream),
                RPCCodedResponse::Success(RPCResponse::Status(status)),
            );
        },
        InboundRequest::Ping(_) => {
            swarm.behaviour_mut().rpc.send_response(
                rpc_message.peer_id,
                (rpc_message.conn_id, substream),
                RPCCodedResponse::Success(RPCResponse::Pong(rpc::methods::Ping { data: 0 })),
            );
        },
        InboundRequest::MetaData => {
            swarm.behaviour_mut().rpc.send_response(
                rpc_message.peer_id,
                (rpc_message.conn_id, substream),
                RPCCodedResponse::Success(RPCResponse::MetaData(MetaData {
                    seq_number: 0,
                    attnets: EnrAttestationBitfield::default(),
                    syncnets: EnrSyncCommitteeBitfield::default(),
                })),
            );
        },
        _ => {
            swarm.behaviour_mut().rpc.send_response(
                rpc_message.peer_id,
                (rpc_message.conn_id, substream),
                RPCCodedResponse::Error,
            );
        },
    }
}

For each request I use the send_response method of the RPC and build the specific RPCResponse for it.
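Since every arm repeats the same (peer_id, conn_id, substream) plumbing, one possible refactor is to build the RPCCodedResponse first and send it from a single place. The following is only a sketch of that idea, reusing the types from the snippet above; the build_response helper is hypothetical and does not exist in the codebase:

// Hypothetical helper: map an inbound request to the response we want to send back.
fn build_response(inbound_req: InboundRequest) -> RPCCodedResponse {
    match inbound_req {
        // Echo the peer's status back for now.
        InboundRequest::Status(status) => RPCCodedResponse::Success(RPCResponse::Status(status)),
        // Answer pings with a placeholder sequence number.
        InboundRequest::Ping(_) => {
            RPCCodedResponse::Success(RPCResponse::Pong(rpc::methods::Ping { data: 0 }))
        }
        // Default (empty) metadata until real values are tracked.
        InboundRequest::MetaData => RPCCodedResponse::Success(RPCResponse::MetaData(MetaData {
            seq_number: 0,
            attnets: EnrAttestationBitfield::default(),
            syncnets: EnrSyncCommitteeBitfield::default(),
        })),
        // Everything else is rejected for now.
        _ => RPCCodedResponse::Error,
    }
}

// The request arm above would then collapse to a single send_response call:
let response = build_response(inbound_req);
swarm
    .behaviour_mut()
    .rpc
    .send_response(rpc_message.peer_id, (rpc_message.conn_id, substream), response);

This keeps the request-to-response mapping testable on its own, separate from the swarm wiring.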

Debugging

I spent quite some time making small fixes during the debugging stage, but eventually I got it working. This is the output of a local lighthouse node:

Feb 06 18:22:35.807 DEBG Connection established                  connection: Listener, peer_id: 16Uiu2HAmQAF6yFrbm7u1q9XJUsqupMYDgs6XZcuBavBrG6RBo7h5, service: libp2p
Feb 06 18:22:35.808 DEBG Obtained peer's metadata                new_seq_no: 0, peer_id: 16Uiu2HAmQAF6yFrbm7u1q9XJUsqupMYDgs6XZcuBavBrG6RBo7h5, service: libp2p

In the lighthouse client, the first step after connecting to a new peer is to request the metadata. In the above snippet, we can see that this metadata has been received successfully! 🎉🎊🎉🎊

Eventually the node disconnects you with the reason “Too many peers”, which clearly shows the need for a peer manager.

Feb 06 18:23:00.295 DEBG Peer Manager disconnecting peer         reason: Too many peers, peer_id: 16Uiu2HAmQAF6yFrbm7u1q9XJUsqupMYDgs6XZcuBavBrG6RBo7h5, service: libp2p

However, in a real-world scenario I still get disconnected before the protocol is used, so I need to start looking into:

  • Networking debugging tools
  • Different client compatibilities

Helios CL P2P Possible Contributions

I laid out a document with possible contributions given the current state of this project.

This is for anyone interested in contributing to this new feature.

Rust Ethereum Consensus Specs

While exploring the helios codebase, I noticed this issue:

I’m aware that I need to refactor some parts of the code to be compatible with current or future helios crates and standards, so I decided to check out Alex Stokes’ Ethereum Consensus repository.

I noticed that the P2P interface types were only implemented for the phase0 specs, so I reached out to Alex and asked how this could be implemented. These are the links for the issue and pull request:
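For context on why phase0-only types fall short for this work: starting with Altair, the p2p spec extends MetaData with a syncnets bitfield, which the metadata response shown earlier already relies on. Below is a rough sketch of the two containers as the consensus specs describe them; the bitfield type names are illustrative placeholders, not the actual ethereum-consensus definitions:

// Sketch of the spec-level MetaData containers (field names follow the consensus specs;
// the bitfield types are placeholders, not the real ethereum-consensus types).
struct MetaDataPhase0 {
    seq_number: u64,
    attnets: AttestationSubnetBitfield, // Bitvector[ATTESTATION_SUBNET_COUNT]
}

struct MetaDataAltair {
    seq_number: u64,
    attnets: AttestationSubnetBitfield,    // Bitvector[ATTESTATION_SUBNET_COUNT]
    syncnets: SyncCommitteeSubnetBitfield, // Altair addition: Bitvector[SYNC_COMMITTEE_SUBNET_COUNT]
}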

Next Steps

My first priority will be to test locally against other CL clients to see if any other errors come up. I also need to implement the peer manager to keep a stable number of peers.
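As a rough starting point for the peer manager, a minimal version could track connected peers from swarm events and disconnect once a target count is exceeded, much like the “Too many peers” behaviour seen in the lighthouse log above. This is only a sketch; TARGET_PEERS and the surrounding event loop are assumptions, not existing code:

use std::collections::HashSet;

use libp2p::swarm::SwarmEvent;
use libp2p::PeerId;

const TARGET_PEERS: usize = 16; // assumed target; real clients make this configurable

let mut connected: HashSet<PeerId> = HashSet::new();

// Inside the existing swarm event loop:
match event {
    SwarmEvent::ConnectionEstablished { peer_id, .. } => {
        connected.insert(peer_id);
        if connected.len() > TARGET_PEERS {
            // Close the surplus connection; a real peer manager would use peer
            // scoring to decide which peer to drop instead of the newest one.
            let _ = swarm.disconnect_peer_id(peer_id);
        }
    }
    SwarmEvent::ConnectionClosed { peer_id, .. } => {
        connected.remove(&peer_id);
    }
    _ => {}
}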