# Experimental report
## Resources
1) Code repository: https://github.com/EtherCS/recovery/tree/rec/con
2) Experimental result figures: https://docs.google.com/presentation/d/1A7lm7CSYhArzke9BCNpZxPAwH9Ui63gOENgrhmE9LsI/edit#slide=id.g1b9fc842d7f_2_67
3) Experimental result data (can be used to generate result figures): https://docs.google.com/spreadsheets/d/1I2vbDY3NWRHbkP_R2CNsZI3xfgPQR0qaSLQUC4LtDBY/edit#gid=568259864
4) Experimental metadata (where you can compute experimental results): https://drive.google.com/drive/folders/1Yo4lVFxYVe2k8bh8UclvxQcqeN04QhUr?usp=sharing
## Requirements
| Requirement | version |
|-------------|---------|
| Tendermint | v0.35.8 |
| Go | Go 1.18 |
## Setup
| Machine | Description | Amount |
|--------------------|---------------------------------------|--------|
| EC2 t2.micro | 1 vCPU core, 1GB RAM, us-east-2 (Ohio) | 6 |
| Arch Linux Server | 48 CPU cores, 128GB RAM | 1 |
*Note: the EC2 t2.micro instances run the consensus nodes; the Arch Linux server runs the clients responsible for sending transactions.*
## Designs
### Metrics
We measure the performance of our Tendermint-Recovery (Tendermint-REC for short) using the following metrics:
(1) **Transaction throughput**: the throughput of confirmed transactions, measured in transactions per second (TPS);
(2) **Confirmation latency**: the delay from the time a transaction is issued by a client until it is confirmed by the protocol. In our experiments, we compute this latency by tracing labeled transactions;
(3) **Recovery overhead**: the time taken by the recovery procedure.
### Experimental results
1) **The relationship between TPS and confirmation latency**: We compare the TPS of our Tendermint-REC with the TPS of the original Tendermint, which does not implement rollback detection and handling.

**Conclusion**: Tendermint-REC trades a moderate performance decrease for recoverability. As shown in the figure above, Tendermint-REC achieves ~4000 TPS, while the peak TPS of Tendermint is ~4400. Furthermore, the confirmation latencies of the two protocols are close when their TPS is close.
2) **TPS and confirmation latency under different numbers of clients**: We run multiple clients that send transactions to the consensus nodes, to evaluate the performance of the two protocols (Tendermint and Tendermint-REC) under different workloads. Specifically, each client establishes a connection with every consensus node and continuously sends (valid) transactions over these connections.
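The client's sending loop can be sketched with Tendermint's standard `broadcast_tx_async` RPC endpoint. This is a sketch under assumptions: the node addresses are placeholders, and the transaction labeling scheme (`label=seq`) is ours for illustration, not taken from the repository; in the experiment the loop runs continuously rather than for a bounded number of iterations.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"sync"
	"time"
)

// broadcastURL builds a Tendermint RPC broadcast_tx_async request URL
// for one node. The label lets the client trace the transaction later
// for latency measurement. (Labeling scheme is a placeholder.)
func broadcastURL(node, label string, seq int) string {
	tx := fmt.Sprintf("%s=%d", label, seq)
	return fmt.Sprintf("http://%s/broadcast_tx_async?tx=%s",
		node, url.QueryEscape(`"`+tx+`"`))
}

func main() {
	// Placeholder addresses; 26657 is Tendermint's default RPC port.
	nodes := []string{"10.0.0.1:26657", "10.0.0.2:26657"}
	client := &http.Client{Timeout: 500 * time.Millisecond}
	var wg sync.WaitGroup
	for _, n := range nodes {
		wg.Add(1)
		go func(node string) { // one sender goroutine per node
			defer wg.Done()
			// Bounded here for the sketch; continuous in the experiment.
			for seq := 0; seq < 3; seq++ {
				resp, err := client.Get(broadcastURL(node, "client0", seq))
				if err != nil {
					return // node unreachable in this sketch
				}
				resp.Body.Close()
			}
		}(n)
	}
	wg.Wait()
}
```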


**Conclusion**: Similar to the TPS-latency result, Tendermint-REC's performance stays close to Tendermint's across the different workloads.
3) **Microbenchmark (recovery overhead)**: In the microbenchmark experiments, we focus on the overhead incurred when Tendermint-REC triggers the recovery procedure. Consider a consensus group consisting of two Byzantine nodes (n_1 and n_2) and two honest nodes (n_3 and n_4). To construct inconsistent states in n_3 and n_4, each Byzantine node n_1 (resp. n_2) runs two instances, n_1 and n'_1 (resp. n_2 and n'_2), which communicate with n_3 and n_4 respectively. The network connections are illustrated as follows:

Besides, to trigger recovery, we run a client that connects to n_3 and n_4 and monitors their states. Once inconsistent states are detected, the client informs the nodes to enter recovery. The test results are given below. Note that the legend indicates the number of blocks to be rolled back, e.g., |*b*| = 1 means 1 block will be pruned after recovery.
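The monitoring client's fork check can be sketched as a comparison of the block hashes the two honest nodes report at the same height. This is an assumption-laden sketch: in the real setup the hashes would be fetched from each node's RPC interface; here they are passed in directly, and the function name is ours.

```go
package main

import (
	"bytes"
	"fmt"
)

// inconsistent reports whether two nodes disagree on the block at the
// same height, i.e. the fork condition that should trigger recovery.
// (Sketch: hashes would come from each node's RPC in the real setup.)
func inconsistent(hashA, hashB []byte) bool {
	return !bytes.Equal(hashA, hashB)
}

func main() {
	// n_3 and n_4 report different block hashes at the same height,
	// so the monitor informs all nodes to enter recovery.
	h3 := []byte{0xaa, 0xbb}
	h4 := []byte{0xaa, 0xcc}
	if inconsistent(h3, h4) {
		fmt.Println("fork detected: informing nodes to enter recovery")
	}
}
```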

**Conclusion**: Compared to the consensus latency (about 500ms ~ 1s per block), the recovery overhead is acceptable (millisecond level). For example, when |*b*| = 7 and *x* = 500, meaning the protocol rolls back to block height *H*-7 (*H* is the latest block height) and each block contains ~500 transactions, the recovery procedure costs ~70ms.
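The rollback arithmetic in the example above can be made explicit. This is illustrative only (the function is ours, not part of the repository): with |*b*| blocks pruned and ~*x* transactions per block, the protocol returns to height *H* - |*b*| and roughly |*b*| * *x* transactions must be re-proposed.

```go
package main

import "fmt"

// rollbackTarget returns the height the protocol rolls back to when b
// blocks must be pruned, and the approximate number of transactions
// pruned, given an average of txPerBlock transactions per block.
// (Illustrative arithmetic matching the |b|=7, x=500 example above.)
func rollbackTarget(latest, b, txPerBlock int64) (height, prunedTxs int64) {
	return latest - b, b * txPerBlock
}

func main() {
	h, txs := rollbackTarget(1000, 7, 500)
	fmt.Println(h, txs) // prints 993 3500: back to height 993, ~3500 txs pruned
}
```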