
[Draft] Notes on Trin Architecture

This document is a WIP. I expect to add more info as I learn more about trin.

In this document, I try to better document how trin is built, focusing on the history network.

Let's start by learning what happens at startup. This is an outline of the functions called:

We start with the main function, which is executed when the program runs (e.g. via cargo run). main reads the program's config (the flags included in the launch command) and calls run_trin.

run_trin does a lot of things, but we'll focus on how it starts the sub-protocols. Each sub-protocol has an associated function of the form initialize_history_network. This function starts some services (see the image above) but also returns four values, which are used as follows (a sketch of this wiring follows the table):

| Return variable name | Type | Usage |
| --- | --- | --- |
| history_handler | HistoryHandler | spawns a thread where handle_client_queries is executed |
| history_network_task | HistoryNetworkTask | spawns a thread with the task |
| history_event_tx | HistoryEventTx | one of the arguments used to spawn the main portal events handler by calling PortalnetEvents::new |
| history_jsonrpc_tx | HistoryJsonRpcTx | one of the arguments used to launch the JSON-RPC server by calling launch_jsonrpc_server |
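
To make the wiring concrete, here is a minimal sketch of how run_trin might consume those four values. This is illustrative only: the stub types and signatures below are assumptions, not trin's actual code.

```rust
use tokio::sync::mpsc::UnboundedSender;

// Stub stand-ins for trin's real types (assumptions for illustration).
struct TalkRequest;
struct HistoryJsonRpcRequest;
struct HistoryHandler;
impl HistoryHandler {
    async fn handle_client_queries(self) { /* serve client queries */ }
}

// Call from within a tokio runtime.
fn wire_history_network(
    history_handler: HistoryHandler,
    history_network_task: impl std::future::Future<Output = ()> + Send + 'static,
    history_event_tx: UnboundedSender<TalkRequest>,
    history_jsonrpc_tx: UnboundedSender<HistoryJsonRpcRequest>,
) {
    // 1. history_handler: spawn a task that serves client queries.
    tokio::spawn(history_handler.handle_client_queries());
    // 2. history_network_task: spawn the network task itself.
    tokio::spawn(history_network_task);
    // 3. history_event_tx: would be passed to PortalnetEvents::new(...).
    let _ = history_event_tx;
    // 4. history_jsonrpc_tx: would be passed to launch_jsonrpc_server(...).
    let _ = history_jsonrpc_tx;
}
```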

Detailed Description

main

This is the entry point to Trin. It's the function executed when you run cargo run.

  • start tracing logger
  • parse config from cli
  • call run_trin and handle errors
  • listen for Ctrl+c shutdown signal
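
A hedged sketch of that shape, with TrinConfig, run_trin, and the handle type stubbed out so it stands alone (the real signatures differ):

```rust
// Stubs so the sketch is self-contained; trin's real types differ.
struct TrinConfig;
impl TrinConfig {
    fn from_cli() -> Self {
        TrinConfig // in trin this parses the CLI flags
    }
}
struct RpcHandle;
async fn run_trin(_config: TrinConfig) -> Result<RpcHandle, Box<dyn std::error::Error>> {
    Ok(RpcHandle)
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    tracing_subscriber::fmt::init();           // start tracing logger
    let config = TrinConfig::from_cli();       // parse config from CLI
    let _rpc_handle = run_trin(config).await?; // launch all services
    tokio::signal::ctrl_c().await?;            // wait for shutdown signal
    Ok(())
}
```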

run_trin

  • define trin version
  • log info about trin version and config being used
  • setup temp trin data directory if in ephemeral mode
  • configure node data dir based on the provided private key
  • build instance of PortalnetConfig ↗️
  • initialize base discovery protocol ↗️
    • build instance of Discovery by calling Discovery::new
    • get talk_req_rx by calling Discovery::start
    • wrap the discovery instance in an Arc
  • initialize prometheus metrics
  • initialize and spawn uTP socket ↗️
    • get utp_talk_reqs_tx and utp_talk_reqs_rx by creating an mpsc::unbounded_channel()
    • build instance of Discv5UdpSocket by calling Discv5UdpSocket::new
    • build instance of UtpSocket by calling UtpSocket::with_socket(discv5_utp_socket)
    • wrap the UtpSocket in an Arc
  • build instance of PortalStorageConfig by calling PortalStorageConfig::new ↗️
  • initialize validation oracle ↗️
    • build MasterAccumulator by calling MasterAccumulator::try_from_file
    • build HeaderOracle by calling HeaderOracle::new
    • wrap the HeaderOracle in an RwLock and that in an Arc (see the sharing sketch after this list)
  • initialize portal sub-protocols
    • initialize state sub-network service and event handlers, if selected
    • initialize trin-beacon sub-network service and event handlers, if selected
    • initialize chain history sub-network service and event handlers, if selected, by calling initialize_history_network ↗️
  • launch JSON-RPC server ↗️
  • spawn handler threads ↗️
  • spawn main portal events handler ↗️
  • spawn network threads ↗️
  • return RPC handle
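
Several of these steps share one pattern: build a value once, then wrap it for cross-task sharing. A small sketch of the three wrappings mentioned above (the empty structs are stand-ins for trin's real types):

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

// Stand-ins for trin's real types.
struct Discovery;
struct UtpSocket;
struct HeaderOracle;

fn share_core_state(discovery: Discovery, utp_socket: UtpSocket, oracle: HeaderOracle) {
    // Handles that are only read after startup: a plain Arc suffices.
    let discovery = Arc::new(discovery);
    let utp_socket = Arc::new(utp_socket);
    // The header oracle is mutated later (each sub-network registers its
    // jsonrpc_tx on it), so it also needs an RwLock inside the Arc.
    let header_oracle = Arc::new(RwLock::new(oracle));
    let _ = (discovery, utp_socket, header_oracle);
}
```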

initialize_history_network

  • get history_jsonrpc_tx and history_jsonrpc_rx by calling mpsc::unbounded_channel::<HistoryJsonRpcRequest>()
  • assign history_jsonrpc_tx to its corresponding field in the header_oracle
  • get history_event_tx and history_event_rx by calling mpsc::unbounded_channel::<TalkRequest>()
  • build instance of HistoryNetwork ↗️
  • build instance of HistoryRequestHandler
  • get history_network_task by calling spawn_history_network↗️
  • call spawn_history_heartbeat
  • return history_handler, history_network_task, history_event_tx and history_jsonrpc_tx
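
A condensed sketch of that flow, using simplified stand-in types (the real function also builds the network, the handler, and the spawned tasks):

```rust
use tokio::sync::mpsc;

// Stand-in types (assumptions for illustration).
struct HistoryJsonRpcRequest;
struct TalkRequest;
struct HistoryHandler {
    history_rx: mpsc::UnboundedReceiver<HistoryJsonRpcRequest>,
}

fn initialize_history_network_sketch() -> (
    HistoryHandler,
    mpsc::UnboundedSender<TalkRequest>,
    mpsc::UnboundedSender<HistoryJsonRpcRequest>,
) {
    // One channel per concern: JSON-RPC requests and TALKREQ events.
    let (history_jsonrpc_tx, history_jsonrpc_rx) =
        mpsc::unbounded_channel::<HistoryJsonRpcRequest>();
    let (history_event_tx, _history_event_rx) =
        mpsc::unbounded_channel::<TalkRequest>();
    // Receivers stay with the handler/network; senders are returned so
    // run_trin can hand them to PortalnetEvents and the JSON-RPC server.
    let history_handler = HistoryHandler { history_rx: history_jsonrpc_rx };
    (history_handler, history_event_tx, history_jsonrpc_tx)
}
```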

HistoryNetwork::new

  • parse config
  • build instance of PortalStorage
    • call PortalStorage::new
    • wrap in PLRwLock and that in Arc
  • build instance of ChainHistoryValidator and wrap in Arc
  • build instance of OverlayProtocol by calling OverlayProtocol::new
  • return a new instance of HistoryNetwork; its only field is the OverlayProtocol just built, wrapped in an Arc
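
The shape of the constructor, sketched with placeholder types. PLRwLock is read here as an alias for parking_lot::RwLock; that import, like the empty structs, is an assumption for illustration:

```rust
use std::sync::Arc;
use parking_lot::RwLock as PLRwLock;

// Stand-ins for trin's real types.
struct PortalStorage;
struct ChainHistoryValidator;
struct OverlayProtocol;

// As described above, HistoryNetwork's only field is the overlay.
struct HistoryNetwork {
    overlay: Arc<OverlayProtocol>,
}

fn history_network_new() -> HistoryNetwork {
    // Storage is shared behind a parking_lot RwLock inside an Arc...
    let _storage = Arc::new(PLRwLock::new(PortalStorage));
    // ...the validator is shared behind a plain Arc...
    let _validator = Arc::new(ChainHistoryValidator);
    // ...and the overlay built from them becomes the single field.
    HistoryNetwork { overlay: Arc::new(OverlayProtocol) }
}
```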

spawn_history_network

  • parse bootnode config and log info about them
  • spawn a thread that handles history events ↗️
    • build instance of HistoryEvents
    • spawn history event handler by calling HistoryEvents::start
    • make sure we establish a session with the boot node
    • listen for Ctrl+c shutdown signal
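
A sketch of the spawned thread's shape, assuming a tokio task that multiplexes event handling with the shutdown signal (names are stand-ins, not trin's actual code):

```rust
use tokio::sync::mpsc;

struct TalkRequest; // stand-in

fn spawn_history_network_sketch(
    mut history_event_rx: mpsc::UnboundedReceiver<TalkRequest>,
) -> tokio::task::JoinHandle<()> {
    tokio::spawn(async move {
        loop {
            tokio::select! {
                Some(_talk_request) = history_event_rx.recv() => {
                    // HistoryEvents would dispatch the TALKREQ here.
                }
                _ = tokio::signal::ctrl_c() => break, // shutdown signal
            }
        }
    })
}
```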

OverlayProtocol::new

  • get kbuckets ↗️
  • initialize metrics
  • get command_tx by calling OverlayService::spawn
  • return a new instance of OverlayProtocol, built from the values passed as arguments plus those obtained in this function
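
Schematically, the constructor assembles the protocol from its arguments plus the command sender it gets back from OverlayService::spawn. A sketch with stand-in fields (the real struct is generic and has many more fields):

```rust
use std::sync::Arc;
use tokio::sync::mpsc::UnboundedSender;

// Stand-ins for trin's real types.
struct Discovery;
struct KBuckets;
struct OverlayCommand;

struct OverlayProtocol {
    discovery: Arc<Discovery>,
    kbuckets: KBuckets,
    // Sender half of the channel into the background OverlayService.
    command_tx: UnboundedSender<OverlayCommand>,
}

impl OverlayProtocol {
    fn new(
        discovery: Arc<Discovery>,
        kbuckets: KBuckets,
        command_tx: UnboundedSender<OverlayCommand>,
    ) -> Self {
        Self { discovery, kbuckets, command_tx }
    }
}
```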

HistoryEvents::start

HistoryEvents::handle_history_talk_request

OverlayService::spawn

  • get command_tx and command_rx by calling mpsc::unbounded_channel
  • get peers to ping ↗️
  • get response_tx and response_rx by calling mpsc::unbounded_channel
  • spawn a service thread ↗️
    • build instance of OverlayService
    • log
    • run OverlayService::initialize_routing_table
    • run OverlayService::start
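
A sketch of this spawn shape; note the same channel pattern as in initialize_history_network above: the service owns the receiver, callers keep only the sender (stand-in types again):

```rust
use tokio::sync::mpsc;

struct OverlayCommand; // stand-in

struct OverlayService {
    command_rx: mpsc::UnboundedReceiver<OverlayCommand>,
}

impl OverlayService {
    fn initialize_routing_table(&mut self) { /* insert bootnodes */ }

    async fn start(&mut self) {
        // Main loop; see the next section.
        while let Some(_command) = self.command_rx.recv().await {}
    }

    fn spawn() -> mpsc::UnboundedSender<OverlayCommand> {
        let (command_tx, command_rx) = mpsc::unbounded_channel();
        tokio::spawn(async move {
            let mut service = OverlayService { command_rx };
            tracing::info!("Starting overlay service"); // log
            service.initialize_routing_table();
            service.start().await;
        });
        // Only the sender escapes; the receiver lives inside the task.
        command_tx
    }
}
```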

OverlayService::start

```rust
/// The main loop for the overlay service. The loop selects over different possible tasks to
/// perform.
///
/// Process request: Process an incoming or outgoing request through the overlay.
///
/// Process response: Process a response to an outgoing request from the local node. Try to
/// match this response to an active request, and send the response or error over the
/// associated response channel. Update node state based on result of response.
///
/// Ping queue: Ping a node in the routing table to perform a liveness check and to refresh
/// information relevant to the overlay network.
///
/// Bucket maintenance: Maintain the routing table (more info documented above function).
```
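
The doc comment translates naturally into a tokio::select! loop. A hedged sketch of that structure (channel names and interval durations are illustrative, not trin's actual values):

```rust
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

struct OverlayCommand;  // stand-in
struct OverlayResponse; // stand-in

async fn overlay_service_start(
    mut command_rx: mpsc::UnboundedReceiver<OverlayCommand>,
    mut response_rx: mpsc::UnboundedReceiver<OverlayResponse>,
) {
    let mut ping_interval = interval(Duration::from_secs(30));
    let mut bucket_interval = interval(Duration::from_secs(60));
    loop {
        tokio::select! {
            Some(_command) = command_rx.recv() => {
                // Process request: an incoming or outgoing overlay request.
            }
            Some(_response) = response_rx.recv() => {
                // Process response: match it to an active request, send the
                // result over its response channel, update node state.
            }
            _ = ping_interval.tick() => {
                // Ping queue: liveness-check a node in the routing table.
            }
            _ = bucket_interval.tick() => {
                // Bucket maintenance: keep the routing table healthy.
            }
        }
    }
}
```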