### Factors that affect block time & block size

* Consensus params that affect p2p:
  * mempool size
  * `block.max_bytes`, `block.max_gas`
* Node performance:
  * hardware (single vs. multi core)
  * DBs (goleveldb, pebble, ...)
  * iavl (full node & pruned)
* Other factors:
  * geolocation: set up nodes in different regions to compare.

### Some gathered questions

* Does increasing the mempool size from x to 200 lead to increased block times? What specific incident did you observe?
* A handful of txs produces the same block times as 200 txs -> what is the relation between txs submitted and block sizes? It might be that with a handful of txs the network carries the same mempool size as with 200 txs, and the difference is only in which txs get finalized into the block.
* Why is the network slow (for all nodes) at certain times, and what factor drives it? We should investigate mempool and block size.
* Extended blocks: is this the same case as the one above?
* Attack vector: long memos -> this goes back to the problem of high load / large mempool. Our experience: block size affects block time; we have observed several networks delaying block production because of block size.

### Our solution

* For the `mempool size`, `block.max_bytes`, `block.max_gas` investigation, we will set up a testnet (locally) or use the current testnet (how many nodes do we have / how decentralized is it?). It's best if we can use the Injective testnet.
* If we use your testnet, we will also add some nodes of ours.
* We need a way to reconfigure nodes without halting/resetting the chain. Maybe a module that can change config params on-chain?
* Try different configs on the testnet.
* Write a load-test script. These txs need to be submitted in parallel, because we want to include hundreds of txs in one block (can't do that sequentially); a minimal broadcast sketch is included at the end of these notes:
  * Random txs for the mempool test
  * Exchange txs for the max_gas test
  * IBC txs for the IBC DDoS test
* After running the spam script, query block time and block size over a period of time. The result will be a correlation chart between the params under test and the block size and block time (see the block-stats query sketch at the end of these notes). Example:
  ![265780912-23233227-1a4c-4176-8895-846a7b51fd74](https://hackmd.io/_uploads/rkYJb2VTT.png)
* For the node-performance investigation, we benchmark commit time with different prune modes / DB backends / hardware (see the config sketch at the end of these notes).
* For extended blocks, we still don't know the root cause. We tracked all block times on 14/2/2024; the result is here: https://drive.google.com/file/d/1wpkTcXBbbcFXfvVB6xJKcnZ2fByDp3K0/view?usp=sharing It doesn't seem to happen often; we are focusing on the 16:00 time frame, when the gap between blocks is very long. Will research further.
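
For the load-test script, one way to get hundreds of txs into a single block is to broadcast pre-signed raw transactions concurrently against a node's CometBFT RPC. Below is a minimal sketch, assuming a hypothetical file `txs.hex` with one hex-encoded signed tx per line and a node RPC at `localhost:26657`; the `/broadcast_tx_sync` endpoint is standard CometBFT RPC, everything else is placeholder.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"sync"
)

// Broadcasts pre-signed transactions concurrently so that many of them
// land in the mempool within a single block window.
func main() {
	rpc := "http://localhost:26657" // assumed local node RPC
	f, err := os.Open("txs.hex")    // hypothetical file: one hex-encoded signed tx per line
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var wg sync.WaitGroup
	scanner := bufio.NewScanner(f)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long lines (large txs / long memos)

	for scanner.Scan() {
		tx := scanner.Text()
		wg.Add(1)
		go func(tx string) {
			defer wg.Done()
			// /broadcast_tx_sync returns after CheckTx, so it is cheap enough to fire in parallel.
			resp, err := http.Get(fmt.Sprintf("%s/broadcast_tx_sync?tx=0x%s", rpc, url.QueryEscape(tx)))
			if err != nil {
				fmt.Println("broadcast error:", err)
				return
			}
			resp.Body.Close()
		}(tx)
	}
	wg.Wait()
}
```

The same loop can be reused for all three tx types (random, exchange, IBC); only the pre-signed tx files differ.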
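
For the correlation chart, a small script can walk a height range and pull the header time and total tx bytes from the RPC `/block` endpoint, emitting CSV that can be plotted against the config under test. A sketch, assuming the same `localhost:26657` RPC; the height range is a placeholder for the blocks around the spam run, and tx bytes approximates block size (it excludes header/commit overhead).

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Minimal view of the /block RPC response: header time plus raw txs.
type blockResp struct {
	Result struct {
		Block struct {
			Header struct {
				Time time.Time `json:"time"`
			} `json:"header"`
			Data struct {
				Txs []string `json:"txs"` // base64-encoded txs
			} `json:"data"`
		} `json:"block"`
	} `json:"result"`
}

func fetchBlock(rpc string, height int64) (blockResp, error) {
	var b blockResp
	resp, err := http.Get(fmt.Sprintf("%s/block?height=%d", rpc, height))
	if err != nil {
		return b, err
	}
	defer resp.Body.Close()
	err = json.NewDecoder(resp.Body).Decode(&b)
	return b, err
}

func main() {
	rpc := "http://localhost:26657"      // assumed node RPC
	start, end := int64(100), int64(200) // placeholder height range around the spam run

	prev, err := fetchBlock(rpc, start-1)
	if err != nil {
		panic(err)
	}
	fmt.Println("height,block_time_ms,num_txs,tx_bytes")
	for h := start; h <= end; h++ {
		cur, err := fetchBlock(rpc, h)
		if err != nil {
			panic(err)
		}
		txBytes := 0
		for _, tx := range cur.Result.Block.Data.Txs {
			raw, _ := base64.StdEncoding.DecodeString(tx)
			txBytes += len(raw)
		}
		// Block time = difference between consecutive header timestamps.
		dt := cur.Result.Block.Header.Time.Sub(prev.Result.Block.Header.Time)
		fmt.Printf("%d,%d,%d,%d\n", h, dt.Milliseconds(), len(cur.Result.Block.Data.Txs), txBytes)
		prev = cur
	}
}
```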
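
For the commit-time benchmark, the prune mode and DB backend are node-local settings in `app.toml` (Cosmos SDK). The keys below are a sketch assuming a recent SDK version and should be checked against the Injective node's own `app.toml`; the values shown are just one benchmark permutation.

```toml
# app.toml (per node; one benchmark permutation, not a recommendation)

# Pruning strategy: "default", "nothing" (archive), "everything", or "custom".
pruning = "custom"
pruning-keep-recent = "100"
pruning-interval = "10"

# Application DB backend to compare, e.g. "goleveldb" or "pebbledb"
# (the binary must be built with support for the chosen backend).
app-db-backend = "pebbledb"
```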