This document is a WIP. I expect to add more info as I learn more about trin.
In this document, I try to better document how trin is built, focusing on the history network.
Let's start by learning what happens at startup. This is an outline of the functions called:
We start with the `main` function. This is called when the program is executed, for example via `cargo run`. `main` reads the config of the program (the flags included in the launch command) and calls `run_trin`.
`run_trin` does a lot of things, but we'll focus on how it starts the sub-protocols. Each sub-protocol has an associated function of the form `initialize_history_network`. This function starts some services (see the image above) but also returns four values. Those values are used as follows:
| Return variable name | Type | Usage |
|---|---|---|
| `history_handler` | `HistoryHandler` | spawns a thread where `handle_client_queries` is executed |
| `history_network_task` | `HistoryNetworkTask` | spawns a thread with the task |
| `history_event_tx` | `HistoryEventTx` | one of the arguments used to spawn the main portal events handler by calling `PortalnetEvents::new` |
| `history_jsonrpc_tx` | `HistoryJsonRpcTx` | one of the arguments used to launch the JSON-RPC server by calling `launch_jsonrpc_server` |
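
To make the table concrete, here's a rough sketch of how `run_trin` might consume these four values. All the types and the function `use_history_values` are simplified stand-ins I made up for illustration; the real trin signatures differ, and only the spawn/hand-off pattern is the point.

```rust
use tokio::sync::mpsc::UnboundedSender;

// Simplified stand-ins for the real trin types (assumptions for illustration only).
struct HistoryHandler;
impl HistoryHandler {
    async fn handle_client_queries(self) { /* answer JSON-RPC requests for this network */ }
}
struct TalkRequest;
struct HistoryJsonRpcRequest;

async fn use_history_values(
    history_handler: HistoryHandler,
    history_network_task: impl std::future::Future<Output = ()> + Send + 'static,
    history_event_tx: UnboundedSender<TalkRequest>,
    history_jsonrpc_tx: UnboundedSender<HistoryJsonRpcRequest>,
) {
    // history_handler: spawn a task running handle_client_queries
    tokio::spawn(history_handler.handle_client_queries());
    // history_network_task: spawn the network task itself
    tokio::spawn(history_network_task);
    // history_event_tx: handed to the main portal events handler (PortalnetEvents::new)
    // history_jsonrpc_tx: handed to the JSON-RPC server (launch_jsonrpc_server)
    let _ = (history_event_tx, history_jsonrpc_tx);
}
```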

## main

This is the entry point to Trin. It's the function executed when you run `cargo run`.

- calls `run_trin` and handles errors
- listens for the `Ctrl+c` shutdown signal
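
Here's a minimal sketch of that shape (assumed, not trin's exact code): parse the launch flags, run `run_trin`, and exit on error or on `Ctrl+c`. `parse_config`, `TrinConfig` and this version of `run_trin` are made-up stand-ins so the snippet is self-contained.

```rust
use std::error::Error;

// Hypothetical stand-ins so the sketch compiles on its own.
struct TrinConfig;
fn parse_config() -> TrinConfig { TrinConfig }
async fn run_trin(_config: TrinConfig) -> Result<(), Box<dyn Error>> { Ok(()) }

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    // read the config of the program (the flags included in the launch command)
    let config = parse_config();
    tokio::select! {
        // run the client and handle errors
        result = run_trin(config) => result?,
        // listen for the Ctrl+c shutdown signal
        _ = tokio::signal::ctrl_c() => println!("shutting down"),
    }
    Ok(())
}
```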

## run_trin

- builds the `PortalnetConfig`
- builds the `Discovery` ↗️ by calling `Discovery::new`
- gets `talk_req_rx` by calling `Discovery::start`
- wraps the `Discovery` in an `Arc`
- builds `utp_talk_reqs_tx` and `utp_talk_reqs_rx` by starting a `mpsc::unbounded_channel()`
- builds the `Discv5UdpSocket` by calling `Discv5UdpSocket::new`
- builds the `UtpSocket` by calling `UtpSocket::with_socket(discv5_utp_socket)`
- wraps the `UtpSocket` in an `Arc`
- builds the `PortalStorageConfig` by calling `PortalStorageConfig::new`
- builds the `MasterAccumulator` ↗️ by calling `MasterAccumulator::try_from_file`
- builds the `HeaderOracle` by calling `HeaderOracle::new`
- wraps the `HeaderOracle` in a `RwLock` and that in an `Arc`
- calls `initialize_history_network` ↗️
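
A condensed, hypothetical sketch of that wiring follows. All types here are stand-ins I invented; only the Arc/RwLock/channel patterns mirror the steps above.

```rust
use std::sync::Arc;
use tokio::sync::{mpsc, RwLock};

// Stand-ins for the real trin types; the real constructors take real arguments.
struct Discovery;
struct UtpSocket;
struct HeaderOracle;
struct TalkRequest;

async fn run_trin_sketch() {
    // build Discovery, start it (yielding talk_req_rx), then share it behind an Arc
    let discovery = Arc::new(Discovery);

    // unbounded channel that feeds uTP talk requests into the uTP socket
    let (utp_talk_reqs_tx, utp_talk_reqs_rx) = mpsc::unbounded_channel::<TalkRequest>();

    // the uTP socket is built on top of the discv5 socket and also shared behind an Arc
    let utp_socket = Arc::new(UtpSocket);

    // the header oracle is shared as Arc<RwLock<...>> because several
    // sub-protocols read from it and write to it
    let header_oracle = Arc::new(RwLock::new(HeaderOracle));

    // each sub-protocol is then started with its initialize_* function,
    // e.g. initialize_history_network(...), receiving clones of the shared values
    let _ = (discovery, utp_talk_reqs_tx, utp_talk_reqs_rx, utp_socket, header_oracle);
}
```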

## initialize_history_network

- builds `history_jsonrpc_tx` and `history_jsonrpc_rx` by calling `mpsc::unbounded_channel::<HistoryJsonRpcRequest>()`
- assigns `history_jsonrpc_tx` to its corresponding field in the `header_oracle`
- builds `history_event_tx` and `history_event_rx` by calling `mpsc::unbounded_channel::<TalkRequest>()`
- builds the `HistoryNetwork` ↗️ with `HistoryNetwork::new`
- wraps the `HistoryNetwork` in an `Arc`
- builds the `HistoryRequestHandler`
- builds `history_network_task` by calling `spawn_history_network` ↗️
- calls `spawn_history_heartbeat`
- returns `history_handler`, `history_network_task`, `history_event_tx` and `history_jsonrpc_tx`
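
Here's a hypothetical sketch of that shape. The concrete argument and return types are simplified stand-ins, not trin's real signatures; it just shows the two channels being created, the network being shared behind an `Arc`, and the four values being returned.

```rust
use std::sync::Arc;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;

// Simplified stand-ins; the real trin types and signatures differ.
struct TalkRequest;
struct HistoryJsonRpcRequest;
struct HistoryNetwork;
struct HistoryRequestHandler {
    history_rx: mpsc::UnboundedReceiver<HistoryJsonRpcRequest>,
}

async fn initialize_history_network_sketch() -> (
    HistoryRequestHandler,
    JoinHandle<()>,
    mpsc::UnboundedSender<TalkRequest>,
    mpsc::UnboundedSender<HistoryJsonRpcRequest>,
) {
    // channel the JSON-RPC server uses to reach this sub-protocol
    // (the tx side is also stored in the header oracle's corresponding field)
    let (history_jsonrpc_tx, history_jsonrpc_rx) =
        mpsc::unbounded_channel::<HistoryJsonRpcRequest>();

    // channel that delivers TalkRequest events to this sub-protocol
    let (history_event_tx, history_event_rx) = mpsc::unbounded_channel::<TalkRequest>();

    // build the network and share it behind an Arc
    let history_network = Arc::new(HistoryNetwork);

    // handler that answers client (JSON-RPC) queries
    let history_handler = HistoryRequestHandler { history_rx: history_jsonrpc_rx };

    // task that processes incoming events (stand-in for spawn_history_network)
    let history_network_task = tokio::spawn(async move {
        let _ = (history_network, history_event_rx);
    });

    (history_handler, history_network_task, history_event_tx, history_jsonrpc_tx)
}
```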

## HistoryNetwork::new

- builds the `PortalStorage` with `PortalStorage::new`
- wraps it in a `PLRwLock` and that in an `Arc`
- builds the `ChainHistoryValidator` and wraps it in an `Arc`
- builds the `OverlayProtocol` by calling `OverlayProtocol::new`
- builds the `HistoryNetwork`; its only field is the `OverlayProtocol` just built, wrapped in an `Arc`
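
A tiny sketch of the nesting this produces, with made-up types. I assume `PLRwLock` is a parking_lot-style `RwLock`; here I substitute `std::sync::RwLock` so the snippet stays dependency-free.

```rust
use std::sync::{Arc, RwLock};

// Stand-ins for illustration only.
struct PortalStorage;
struct ChainHistoryValidator;
struct OverlayProtocol {
    storage: Arc<RwLock<PortalStorage>>,
    validator: Arc<ChainHistoryValidator>,
}
struct HistoryNetwork {
    overlay: Arc<OverlayProtocol>,
}

fn history_network_new_sketch() -> HistoryNetwork {
    // storage is shared behind Arc<RwLock<...>> so several components can reach it
    let storage = Arc::new(RwLock::new(PortalStorage));
    // validator used to check incoming history content, also shared behind an Arc
    let validator = Arc::new(ChainHistoryValidator);
    // the overlay protocol owns the shared handles
    let overlay = OverlayProtocol { storage, validator };
    // HistoryNetwork's only field is the overlay, wrapped in an Arc
    HistoryNetwork { overlay: Arc::new(overlay) }
}
```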

## spawn_history_network

- builds the `HistoryEvents`
- calls `HistoryEvents::start`
- listens for the `Ctrl+c` shutdown signal
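
A possible shape for this, with made-up types (`HistoryNetwork`, `HistoryEvents` fields, the function name): spawn a task that runs the event loop until the shutdown signal arrives.

```rust
use std::sync::Arc;
use tokio::sync::mpsc::UnboundedReceiver;
use tokio::task::JoinHandle;

// Stand-ins for illustration only.
struct TalkRequest;
struct HistoryNetwork;
struct HistoryEvents {
    network: Arc<HistoryNetwork>,
    event_rx: UnboundedReceiver<TalkRequest>,
}
impl HistoryEvents {
    async fn start(self) { /* event loop, see HistoryEvents::start below */ }
}

fn spawn_history_network_sketch(
    network: Arc<HistoryNetwork>,
    event_rx: UnboundedReceiver<TalkRequest>,
) -> JoinHandle<()> {
    tokio::spawn(async move {
        // build the HistoryEvents value and run its event loop...
        let events = HistoryEvents { network, event_rx };
        tokio::select! {
            _ = events.start() => {},
            // ...until the Ctrl+c shutdown signal arrives
            _ = tokio::signal::ctrl_c() => {},
        }
    })
}
```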

## OverlayProtocol::new

- builds the `kbuckets`
- gets `command_tx` by calling `OverlayService::spawn` ↗️
- builds the `OverlayProtocol` with the values passed as arguments and the values obtained in the function
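
A tiny sketch of the resulting struct shape, with made-up types. In the description above, `command_tx` is obtained inside `new` by calling `OverlayService::spawn` (see that section below); here it is passed in to keep the sketch small.

```rust
use std::sync::{Arc, RwLock};
use tokio::sync::mpsc::UnboundedSender;

// Stand-ins for illustration only; the real types are much richer.
struct KBuckets;
struct OverlayCommand;
struct OverlayProtocol {
    kbuckets: Arc<RwLock<KBuckets>>,
    command_tx: UnboundedSender<OverlayCommand>,
}

impl OverlayProtocol {
    fn new_sketch(command_tx: UnboundedSender<OverlayCommand>) -> Self {
        // the routing table (kbuckets) is shared, hence Arc<RwLock<...>>
        let kbuckets = Arc::new(RwLock::new(KBuckets));
        // command_tx is how the overlay protocol sends commands to the service loop
        OverlayProtocol { kbuckets, command_tx }
    }
}
```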

## HistoryEvents::start

Loops over incoming events (`TalkRequest`). When an event is found, it calls `HistoryEvents::handle_history_talk_request`. It also handles errors.
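
A minimal sketch of that loop, assuming the events arrive over an unbounded mpsc receiver (the field and method names here are made up):

```rust
use tokio::sync::mpsc::UnboundedReceiver;

// Stand-ins for illustration only.
struct TalkRequest;
struct HistoryEvents {
    event_rx: UnboundedReceiver<TalkRequest>,
}

impl HistoryEvents {
    // Receive TalkRequest events and dispatch each one.
    async fn start_sketch(mut self) {
        while let Some(talk_request) = self.event_rx.recv().await {
            self.handle_history_talk_request_sketch(talk_request).await;
        }
        // recv() returning None means all senders were dropped; real code would log this
    }

    async fn handle_history_talk_request_sketch(&self, _request: TalkRequest) {
        // forward to OverlayProtocol::process_one_request and handle any error
    }
}
```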

## HistoryEvents::handle_history_talk_request

Calls `OverlayProtocol::process_one_request` and handles errors.

## OverlayService::spawn

- builds `command_tx` and `command_rx` by calling `mpsc::unbounded_channel`
- builds `response_tx` and `response_rx` by calling `mpsc::unbounded_channel`
- builds the `OverlayService`
- calls `OverlayService::initialize_routing_table`
- calls `OverlayService::start`
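
Here's a sketch of that spawn pattern with made-up types: build the channel(s), run the service loop on its own task, and hand the command sender back to the caller. The `response_tx`/`response_rx` channel is omitted to keep it short.

```rust
use tokio::sync::mpsc::{self, UnboundedSender};

// Stand-ins for illustration; the real OverlayService carries much more state.
struct OverlayCommand;
struct OverlayService {
    command_rx: mpsc::UnboundedReceiver<OverlayCommand>,
}

impl OverlayService {
    fn spawn_sketch() -> UnboundedSender<OverlayCommand> {
        let (command_tx, command_rx) = mpsc::unbounded_channel();
        let mut service = OverlayService { command_rx };
        tokio::spawn(async move {
            // initialize_routing_table would run here, then the main loop (see start below)
            service.start_sketch().await;
        });
        command_tx
    }

    async fn start_sketch(&mut self) {
        // simplified main loop: just drain incoming commands
        while let Some(_command) = self.command_rx.recv().await {}
    }
}
```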

## OverlayService::start

From the doc comment on this function:

/// The main loop for the overlay service. The loop selects over different possible tasks to
/// perform.
///
/// Process request: Process an incoming or outgoing request through the overlay.
///
/// Process response: Process a response to an outgoing request from the local node. Try to
/// match this response to an active request, and send the response or error over the
/// associated response channel. Update node state based on result of response.
///
/// Ping queue: Ping a node in the routing table to perform a liveness check and to refresh
/// information relevant to the overlay network.
///
/// Bucket maintenance: Maintain the routing table (more info documented above function).
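
To make the four branches concrete, here is a minimal sketch of what such a select loop can look like. Everything in it (types, intervals, method names) is made up; only the select-over-four-tasks structure follows the doc comment above.

```rust
use tokio::sync::mpsc::UnboundedReceiver;
use tokio::time::{interval, Duration};

// Stand-ins for illustration only.
struct OverlayCommand;
struct OverlayResponse;
struct OverlayService {
    command_rx: UnboundedReceiver<OverlayCommand>,
    response_rx: UnboundedReceiver<OverlayResponse>,
}

impl OverlayService {
    // Each loop iteration handles whichever of the four tasks is ready first.
    async fn start_sketch(&mut self) {
        let mut ping_timer = interval(Duration::from_secs(30));
        let mut bucket_timer = interval(Duration::from_secs(60));
        loop {
            tokio::select! {
                // Process request: an incoming or outgoing request through the overlay
                Some(_command) = self.command_rx.recv() => { /* process the request */ }
                // Process response: match a response to its active request and report back
                Some(_response) = self.response_rx.recv() => { /* process the response */ }
                // Ping queue: liveness-check a node from the routing table
                _ = ping_timer.tick() => { /* ping a node */ }
                // Bucket maintenance: keep the routing table healthy
                _ = bucket_timer.tick() => { /* maintain the buckets */ }
            }
        }
    }
}
```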