# Introduction

This document describes the Tagion core network. The Tagion core network includes:

* A Byzantine consensus algorithm.
* A distributed database (DART) with consensus and a description of the node validators.
* The basic node selection.

## Network Architecture

The Tagion network architecture consists of a collection of computer nodes that can send messages to each other (a computer node is simply called a *node* in this document). Each node can send a message to any other node in the network. The nodes in the network have different roles and change them over their lifetime according to the Tagion consensus rules. The node roles are:

* ***Active Node*** - A node in this role category is responsible for validation and consensus.
* ***Standby Node*** - A node waiting to be selected as an active node.
* ***Swap Node*** - A node selected to become an active node, waiting to be swapped with an ***Active Node***.
* ***Prospect Node*** - A node in this category is waiting to become a 'real' node in the network.

All the nodes communicate via a protocol called libp2p [4]. The roles of the node actors are selected randomly according to the node consensus selection rules in appendix I.

![](https://hackmd.io/_uploads/r1x2gkBeo.png)

The ***Active Node*** validates transactions and uses the Hashgraph algorithm to reach consensus. The Hashgraph algorithm produces a consensus-ordered list of transactions (defined as the ***transaction list***), so all Active Nodes produce the same order of transactions. The transaction list is executed in the consensus order described in appendix C.

## Network Security

The Hashgraph algorithm is asynchronous Byzantine fault tolerant: if more than 2/3 of the actors are not evil (i.e. follow the protocol), the network will reach consensus. If more than 1/3 but less than 2/3 of the actors are evil, the network cannot reach consensus. If more than 2/3 are evil, the network has been taken over, meaning new protocols can be enforced by the majority, potentially destroying data integrity.

The node swapping described in section 1.1 is ***UDR*** random. Statistical calculations for different numbers of Active and Available Nodes are given below; the probability of a coordinated attack taking over the network can be estimated as described in appendix C. Attack probabilities for different scenarios have been estimated in the calculations shown in table 1. The numbers in the column "Halt" represent the probability that the network halts or slows down, and the numbers in the column "Take-over" represent the probability that the evil nodes can coordinate an attack. The numbers show that the network becomes more robust against attacks as the total number of nodes grows relative to the number of active and evil nodes.
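The numbers in table 1 come from sampling without replacement: with $E$ evil nodes among $M$ total, the chance that at least $k$ of the $N$ randomly selected active nodes are evil is a hypergeometric tail. Below is a minimal sketch of that calculation in D, assuming uniform random selection and the thresholds "more than 1/3" and "more than 2/3"; the function names are illustrative rather than taken from the Tagion codebase, and the exact rounding conventions of appendix C may differ.

```d
// Hypergeometric tail: probability that at least kMin of the N randomly
// selected active nodes are evil, given E evil nodes among M total.
import std.algorithm : min;
import std.math : exp;
import std.mathspecial : logGamma;
import std.stdio : writefln;

real logBinom(long n, long k)  // log of the binomial coefficient C(n, k)
{
    return logGamma(n + 1.0L) - logGamma(k + 1.0L) - logGamma(n - k + 1.0L);
}

real tailProb(long M, long E, long N, long kMin)
{
    real p = 0.0L;
    foreach (k; kMin .. min(E, N) + 1)
        p += exp(logBinom(E, k) + logBinom(M - E, N - k) - logBinom(M, N));
    return p;
}

void main()
{
    enum N = 31, E = 31, M = 101;  // first row of table 1
    // A halt needs more than N/3 evil active nodes, a take-over more than 2N/3.
    writefln("halt p = %.2g  take-over p = %.2g",
             tailProb(M, E, N, N / 3 + 1), tailProb(M, E, N, 2 * N / 3 + 1));
}
```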
| Active Nodes $N$ | Evil Nodes $E$ | Total Nodes $M$ | Halt $p_{1/3}$ | Take-over $p_{2/3}$ |
| ---------------- | -------------- | --------------- | -------------------- | --------------------- |
| 31 | 31 | 101 | $0.23$ | $1.7 \cdot 10^{-7}$ |
| 31 | 31 | 301 | $4.5 \cdot 10^{-5}$ | $1.2 \cdot 10^{-17}$ |
| 31 | 31 | 501 | $2.8 \cdot 10^{-7}$ | $2.75 \cdot 10^{-17}$ |
| 37 | 31 | 101 | $0.2$ | $1.4 \cdot 10^{-9}$ |
| 43 | 31 | 101 | $0.21$ | $1.4 \cdot 10^{-12}$ |
| 43 | 31 | 301 | $1.1 \cdot 10^{-6}$ | $3.5 \cdot 10^{-34}$ |
| 31 | 43 | 501 | $1.2 \cdot 10^{-5}$ | $4.5 \cdot 10^{-18}$ |
| 31 | 61 | 501 | $3.9 \cdot 10^{-4}$ | $3.8 \cdot 10^{-14}$ |
| 31 | 61 | 1001 | $5.4 \cdot 10^{-7}$ | $2.1 \cdot 10^{-20}$ |
| 31 | 61 | 10001 | $1.2 \cdot 10^{-17}$ | $2.6 \cdot 10^{-41}$ |

Table 1: Probability of an attack versus the number of nodes

## Node Architecture

The node core program is implemented in the programming language D with some C and Go libraries for crypto, network, and virtual-engine functions. It is structured as shown in table 2 below.

| HiRPC (HiBON) data format for communication | |
| ------------------------------------------- | ----------- |
| NODE | |
| User API - TLS 1.2 | P2P Network |
| Tagion Virtual Machine | |
| Consensus mechanism: Hashgraph | |
| Storage: Distributed database DART | |
| Blockchain: Epoch Records | |

Table 2: Tagion Node stack

A Tagion Node is divided into units, each handling a service function in the following manner:

* A smart contract is sent to the Transaction service unit, which fetches the input data from the distributed database (DART) unit and verifies the signatures of the inputs.
* The DART unit connects to other DARTs via the P2P unit.
* The Transaction unit forwards the smart contract to the Coordinator unit, which gossips this information to the network via the P2P unit.
* When the Coordinator receives an event with a smart contract, the smart contract is verified and executed via the Tagion Virtual Machine (***TVM***) unit, and the results of the outputs are verified.
* The Coordinator adds it to an event in the Hashgraph and gossips the information via the P2P unit to other nodes in the network.
* When the Coordinator finds an epoch, it makes a list of ordered transactions and forwards this list to the Transcript service unit.
* The Transcript unit executes the smart contracts in order and produces a Recorder list.
* A Recorder contains the list of DART instructions where the inputs will be removed and the outputs added.
* The Coordinator sends the Recorder to the DART unit, which executes this list.
* The DART unit forwards the Recorder to the Recorder unit, which adds it to a blockchain.
* If the network does not reach a consensus, the Coordinator will send an undo instruction to the Recorder unit.
* If the Recorder unit receives an undo instruction, it sends the undo-Recorder list to the DART unit, which performs this action and puts the DART in the state it was in before the last epoch.
* An undo-Recorder list is defined as a Recorder in which the order is reversed and the (remove/add) instructions are inverted to (add/remove), as sketched in the example after this list.
* The Logger and Monitor units are used for debugging and monitoring the network.
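The undo-Recorder inversion can be illustrated with a few lines of D. The type and field names below are illustrative stand-ins, not the actual Tagion types:

```d
// Build an undo-Recorder: reverse the instruction order and swap add/remove.
import std.algorithm : map, reverse;
import std.array : array;
import std.stdio : writeln;

enum Op { add, remove }

struct Instruction
{
    Op op;
    string archive;  // stands in for the DART archive the instruction touches
}

alias Recorder = Instruction[];

Recorder undoRecorder(const Recorder rec)
{
    auto undo = rec
        .map!(i => Instruction(i.op == Op.add ? Op.remove : Op.add, i.archive))
        .array;
    undo.reverse;  // the undo must play back in reverse order
    return undo;
}

void main()
{
    // One epoch: the input is removed and the output added.
    Recorder epoch = [Instruction(Op.remove, "input#1"),
                      Instruction(Op.add, "output#1")];
    // The undo removes the output again and restores the input.
    writeln(undoRecorder(epoch));
}
```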
![](https://hackmd.io/_uploads/B13_lJrgs.png)

Figure 2: The Tagion Node Architecture

Each service runs as an independent task, and the tasks communicate via communication channels. The different service modules perform the services described in the list below.

* **Coordinator** - This service manages the Hashgraph consensus and controls the other related services of the node. The Coordinator generates and receives events and relays them to the network. This service generates the epoch and sends the information to the TVM service.
* **Transaction** - This service receives the incoming transaction script; validates and verifies it, fetching the data from the DART; and sends the information to the Coordinator.
* **DART** - Provides the services of the distributed database.
* **P2P** - This service handles the peer-to-peer communication protocol used to communicate between the nodes.
* **TVM** - Handles the execution of the smart contracts.
* **Transcript** - Services the epoch and orders the smart-contract execution Recorder. This service records the history of the DART.
* **Logger** - This service handles information logging for the different services.
* **Monitor** - The Monitor service shows the activities of the node in graphical form.

The estimated bandwidth requirement and the average propagation time for a transaction follow the formulas in appendix E, based on a simple experimental model. For example, for a network with $N = 11$ nodes, an event size of $E_{size} = 500\,bytes$, and a network delay of $t_{net} = 300\,ms$, the estimated epoch propagation delay and bandwidth are:

$$\begin{align*}
n_{round} &= 2.2 \cdot \ln(11) & \approx 5.27 \\
t_{epoch} &= 3.5 \cdot n_{round} \cdot 300\,ms & \approx 5.5\,s \\
B &= 500\,bytes \cdot 11^{2} & \approx 60.5\,kbytes \\
BW &= 8 \cdot B/(n_{round} \cdot 300\,ms) & \approx 28\,kbit/s
\end{align*}$$

And with $N = 31$:

$$\begin{align*}
n_{round} &= 2.2 \cdot \ln(31) & \approx 7.6 \\
t_{epoch} &= 3.5 \cdot n_{round} \cdot 300\,ms & \approx 8\,s \\
B &= 500\,bytes \cdot 31^{2} & \approx 481\,kbytes \\
BW &= 8 \cdot B/(n_{round} \cdot 300\,ms) & \approx 152\,kbit/s
\end{align*}$$

And with $N = 101$:

$$\begin{align*}
n_{round} &= 2.2 \cdot \ln(101) & \approx 10.15 \\
t_{epoch} &= 3.5 \cdot n_{round} \cdot 300\,ms & \approx 10.6\,s \\
B &= 500\,bytes \cdot 101^{2} & \approx 5.1\,Mbytes \\
BW &= 8 \cdot B/(n_{round} \cdot 300\,ms) & \approx 1.2\,Mbit/s
\end{align*}$$

And with $N = 1001$:

$$\begin{align*}
n_{round} &= 2.2 \cdot \ln(1001) & \approx 15 \\
t_{epoch} &= 3.5 \cdot n_{round} \cdot 300\,ms & \approx 16\,s \\
B &= 500\,bytes \cdot 1001^{2} & \approx 501\,Mbytes \\
BW &= 8 \cdot B/(n_{round} \cdot 300\,ms) & \approx 80\,Mbit/s
\end{align*}$$

In this example, the epoch delay increases by around 5 s each time $N$ increases by a decade, while the bandwidth requirement increases by around 50 times or more per decade.
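These scaling relations are easy to evaluate for other network sizes. The short D sketch below recomputes the round count $n_{round}$, the epoch delay $t_{epoch}$, and the gossip volume $B$ for the four network sizes above, using the same model constants:

```d
// Evaluate the epoch estimates for a given network size N, with the model
// constants used above: event size 500 bytes and network delay 300 ms.
import std.math : log;
import std.stdio : writefln;

void main()
{
    enum eventSize = 500.0;  // E_size in bytes
    enum tNet = 0.3;         // t_net in seconds

    foreach (n; [11, 31, 101, 1001])
    {
        immutable nRound = 2.2 * log(cast(double) n);  // gossip rounds per epoch
        immutable tEpoch = 3.5 * nRound * tNet;        // epoch delay in seconds
        immutable traffic = eventSize * n * n;         // gossip volume B in bytes
        writefln("N=%4d  n_round=%5.2f  t_epoch=%4.1f s  B=%10.0f bytes",
                 n, nRound, tEpoch, traffic);
    }
}
```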