# Week 11 Update: Odinson

Last week was a bit slow: Dapplion is currently OOO, so I was waiting for Michael's comments on the work and on how to proceed. This week, however, there has been significant progress on the project. I wrote benchmarks for the `state_cache.rs` file, ran Lighthouse and Reth nodes, and spammed HTTP queries to test the performance of the `StateCache` `memory_size` accounting. I went through the Rust book and the Lighthouse book and figured out how to run the nodes on Holesky with the various configurations. And finally, I studied a bit of the KZG polynomial commitment scheme, because I have been planning to implement it in Rust to learn and practice.

## Work for the week

Following up on the previous work, Michael asked me to run the benchmarks and test the performance by spamming HTTP queries after setting a large value for `--epochs-per-migration`, so that a large number of finalized epochs stay in the hot DB. The command I used was

```shell=
lighthouse bn \
  --network holesky \
  --execution-endpoint http://127.0.0.1:8551 \
  --execution-jwt /tmp/jwt.hex \
  --checkpoint-sync-url "https://checkpoint-sync.holesky.ethpandaops.io" \
  --datadir "$HOME/lh-test" \
  --epochs-per-migration 999999 \
  --state-cache-size 128 \
  --state-cache-max-bytes 134217728 \
  --http --http-address 127.0.0.1 --http-port 5052 \
  --metrics --metrics-address 127.0.0.1 --metrics-port 5054
```

i.e. a `state_cache_max_bytes` limit of 128 MB (134217728 bytes), a `state_cache_size` of 128 entries, and `--epochs-per-migration` set to 999999. The metrics I got back were

```sh
odinson lighthouse$ curl -s http://127.0.0.1:5054/metrics | egrep -i \
  'store_beacon_state_cache_memory_size|beacon_state_memory_size_calculation_time|store_beacon_state_cache_size'
# HELP beacon_state_memory_size_calculation_time Time taken to calculate the memory size of a beacon state.
# TYPE beacon_state_memory_size_calculation_time histogram
beacon_state_memory_size_calculation_time_bucket{le="0.005"} 43
beacon_state_memory_size_calculation_time_bucket{le="0.01"} 43
beacon_state_memory_size_calculation_time_bucket{le="0.025"} 43
beacon_state_memory_size_calculation_time_bucket{le="0.05"} 43
beacon_state_memory_size_calculation_time_bucket{le="0.1"} 43
beacon_state_memory_size_calculation_time_bucket{le="0.25"} 43
beacon_state_memory_size_calculation_time_bucket{le="0.5"} 43
beacon_state_memory_size_calculation_time_bucket{le="1"} 43
beacon_state_memory_size_calculation_time_bucket{le="2.5"} 43
beacon_state_memory_size_calculation_time_bucket{le="5"} 43
beacon_state_memory_size_calculation_time_bucket{le="10"} 43
beacon_state_memory_size_calculation_time_bucket{le="+Inf"} 43
beacon_state_memory_size_calculation_time_sum 0.00008412399999999994
beacon_state_memory_size_calculation_time_count 43
# HELP store_beacon_state_cache_memory_size Memory consumed by items in the beacon store state cache
# TYPE store_beacon_state_cache_memory_size gauge
store_beacon_state_cache_memory_size 1016
# HELP store_beacon_state_cache_size Current count of items in beacon store state cache
# TYPE store_beacon_state_cache_size gauge
store_beacon_state_cache_size 128

odinson lighthouse$ seq -f %.0f "$LOW" "$HEAD" | xargs -P 6 -n1 -I{} \
  curl --compressed -sS -m 60 -o /dev/null \
  "http://127.0.0.1:5052/eth/v1/beacon/states/{}/validators?status=active_ongoing&id=0,1,2,3,4"
seq -f %.0f "$LOW" "$HEAD" | xargs -P 8 -n1 -I{} \
  curl --compressed -sS -m 30 -o /dev/null \
  "http://127.0.0.1:5052/eth/v1/beacon/states/{}/committees"

odinson lighthouse$ seq -f %.0f "$LOW" "$HEAD" | xargs -P 6 -n1 -I{} \
  curl --compressed -sS -m 60 -o /dev/null \
  "http://127.0.0.1:5052/eth/v1/beacon/states/{}/validators?status=active_ongoing&id=0,1,2,3,4"

# metrics scraped again after the query spam:
# TYPE beacon_state_memory_size_calculation_time histogram
beacon_state_memory_size_calculation_time_bucket{le="0.005"} 49
beacon_state_memory_size_calculation_time_bucket{le="0.01"} 49
beacon_state_memory_size_calculation_time_bucket{le="0.025"} 49
beacon_state_memory_size_calculation_time_bucket{le="0.05"} 49
beacon_state_memory_size_calculation_time_bucket{le="0.1"} 49
beacon_state_memory_size_calculation_time_bucket{le="0.25"} 49
beacon_state_memory_size_calculation_time_bucket{le="0.5"} 49
beacon_state_memory_size_calculation_time_bucket{le="1"} 49
beacon_state_memory_size_calculation_time_bucket{le="2.5"} 49
beacon_state_memory_size_calculation_time_bucket{le="5"} 49
beacon_state_memory_size_calculation_time_bucket{le="10"} 49
beacon_state_memory_size_calculation_time_bucket{le="+Inf"} 49
beacon_state_memory_size_calculation_time_sum 0.00009045799999999994
beacon_state_memory_size_calculation_time_count 49
# HELP store_beacon_state_cache_memory_size Memory consumed by items in the beacon store state cache
# TYPE store_beacon_state_cache_memory_size gauge
store_beacon_state_cache_memory_size 968
# HELP store_beacon_state_cache_size Current count of items in beacon store state cache
# TYPE store_beacon_state_cache_size gauge
store_beacon_state_cache_size 128
```

The thing to focus on here is the `store_beacon_state_cache_memory_size` metric, which dropped from 1016 to 968. This means the `recompute_cached_bytes` function I wrote in the [PR](https://github.com/sigp/lighthouse/pull/7803) to recalculate the cache's `cached_bytes` value is working, while `store_beacon_state_cache_size` stays at 128 because the earlier states get evicted to make space for the new ones.
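To make the bookkeeping concrete, here is a minimal, self-contained sketch of the general idea: a byte-bounded cache that keeps a running `cached_bytes` total, evicts the oldest entries once either the entry-count or the byte limit is exceeded, and can recount the total from scratch. The type and method names (`MiniStateCache`, `put`, `recompute_cached_bytes`) are illustrative stand-ins, not the actual Lighthouse `StateCache` API.

```rust
use std::collections::VecDeque;

/// A toy entry; in Lighthouse this would be a cached beacon state.
struct Entry {
    id: u64,
    /// Approximate heap size of the cached item, in bytes.
    approx_bytes: usize,
}

/// Minimal byte-bounded FIFO cache (illustrative only, not the real `StateCache`).
struct MiniStateCache {
    entries: VecDeque<Entry>,
    /// Running total of the approximate bytes held in `entries`.
    cached_bytes: usize,
    /// Maximum number of entries (analogous to `--state-cache-size`).
    max_entries: usize,
    /// Maximum total bytes (analogous to `--state-cache-max-bytes`).
    max_bytes: usize,
}

impl MiniStateCache {
    fn new(max_entries: usize, max_bytes: usize) -> Self {
        Self {
            entries: VecDeque::new(),
            cached_bytes: 0,
            max_entries,
            max_bytes,
        }
    }

    /// Insert an entry, then evict the oldest entries until both the
    /// entry-count limit and the byte limit are respected again.
    fn put(&mut self, entry: Entry) {
        self.cached_bytes += entry.approx_bytes;
        self.entries.push_back(entry);

        while self.entries.len() > self.max_entries || self.cached_bytes > self.max_bytes {
            match self.entries.pop_front() {
                Some(evicted) => self.cached_bytes -= evicted.approx_bytes,
                None => break,
            }
        }
    }

    /// Recount the total from scratch, correcting any drift in the running
    /// counter (the role played by a `recompute_cached_bytes`-style pass).
    fn recompute_cached_bytes(&mut self) -> usize {
        self.cached_bytes = self.entries.iter().map(|e| e.approx_bytes).sum();
        self.cached_bytes
    }
}

fn main() {
    // 128 entries / 128 MiB, mirroring the flags used above.
    let mut cache = MiniStateCache::new(128, 128 * 1024 * 1024);
    for id in 0..200 {
        cache.put(Entry { id, approx_bytes: 1024 * 1024 });
    }
    let recount = cache.recompute_cached_bytes();
    println!(
        "entries = {}, oldest id = {:?}, cached_bytes = {} (recount = {})",
        cache.entries.len(),
        cache.entries.front().map(|e| e.id),
        cache.cached_bytes,
        recount,
    );
}
```

Running this with 200 one-MiB inserts against a 128-entry / 128 MiB limit ends with exactly 128 entries, analogous to `store_beacon_state_cache_size` staying pinned at 128 in the output above while the oldest states are dropped.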
To confirm that this metric is not redundant or stale, we can also look at the graph below:

![telegram-cloud-photo-size-5-6168141566628186183-y](https://hackmd.io/_uploads/r1ebrj-cel.jpg)

The `beacon_state_memory_size_calculation_time_count` keeps increasing, meaning the full memory-size recount really is being run again and again as states move through the cache, with each run recording a new sample in the histogram. I showed these logs to Michael and he confirmed that the metrics and the calculations look good!

Apart from this, I wrote some benchmarks for `state_cache.rs` in the `store/benchmarks` directory, and this is what I got:

```shell=
     Running benches/state_cache.rs (target/release/deps/state_cache-46117f5a31f61609)
Gnuplot not found, using plotters backend
state_cache_insert_without_memory_limit
                        time:   [9.0244 µs 9.0964 µs 9.1473 µs]
                        change: [+0.1206% +1.7307% +3.1607%] (p = 0.04 < 0.05)
                        Change within noise threshold.
Found 1 outliers among 10 measurements (10.00%)
  1 (10.00%) low mild

state_cache_insert_with_memory_limit
                        time:   [8.8781 µs 8.9653 µs 9.0206 µs]
                        change: [-1.3848% -0.1398% +1.1336%] (p = 0.84 > 0.05)
                        No change in performance detected.
```

This shows that there has been no performance regression after introducing `memory_size` tracking in the `StateCache` (a simplified sketch of such a benchmark is included at the end of this update).

Finally, I changed the status of my PR from draft to ready for review!

## Resources

1. Project [PR](https://github.com/sigp/lighthouse/pull/7803)
2. [Reth Docs](https://reth.rs/run/ethereum/)
3. [Lighthouse Book](https://lighthouse-book.sigmaprime.io/run_a_node.html)
4. Benchmarking in [Rust](https://nnethercote.github.io/perf-book/benchmarking.html)
5. KZG Polynomial Commitment Scheme by [LearnWeb3](https://learnweb3.io/lessons/kzg-polynomial-commitment-scheme/)
6. Scroll's intro to [KZG](https://docs.scroll.io/en/learn/zero-knowledge/kzg-commitment-scheme/)

## Conclusion

Pretty good progress! Next week, I am looking forward to Dapplion's and Michael's comments on what's next: what I should look into, what to run, and how I can test things out further.
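As referenced in the benchmark section above, here is a simplified, self-contained sketch of what a Criterion benchmark in that spirit could look like. It measures inserts into a toy byte-bounded cache rather than the real `StateCache`, so the `Item` type and the `insert_n` helper are illustrative assumptions; only the two benchmark names are taken from the output above. Running something like this requires `criterion` as a dev-dependency and a `[[bench]]` entry with `harness = false` in `Cargo.toml`.

```rust
// benches/state_cache.rs (illustrative sketch, not the actual Lighthouse benchmark)
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use std::collections::VecDeque;

/// Toy stand-in for a cached state: just a sized byte buffer.
struct Item {
    bytes: Vec<u8>,
}

/// Insert `n` items into a FIFO bounded by entry count and, optionally, total bytes.
fn insert_n(n: usize, max_entries: usize, max_bytes: Option<usize>) -> usize {
    let mut entries: VecDeque<Item> = VecDeque::new();
    let mut cached_bytes = 0usize;
    for _ in 0..n {
        let item = Item { bytes: vec![0u8; 1024] };
        cached_bytes += item.bytes.len();
        entries.push_back(item);
        // Evict the oldest items until both limits are respected again.
        while entries.len() > max_entries
            || max_bytes.map_or(false, |limit| cached_bytes > limit)
        {
            match entries.pop_front() {
                Some(evicted) => cached_bytes -= evicted.bytes.len(),
                None => break,
            }
        }
    }
    entries.len()
}

fn state_cache_benches(c: &mut Criterion) {
    // Insert path with only the entry-count limit enforced.
    c.bench_function("state_cache_insert_without_memory_limit", |b| {
        b.iter(|| insert_n(black_box(256), 128, None))
    });
    // Same insert path, but with a byte limit so eviction by size also runs.
    c.bench_function("state_cache_insert_with_memory_limit", |b| {
        b.iter(|| insert_n(black_box(256), 128, Some(64 * 1024)))
    });
}

criterion_group!(benches, state_cache_benches);
criterion_main!(benches);
```

Comparing the two benchmark functions is what makes the "no regression" claim checkable: both exercise the same insert path, and only the second one pays for the byte accounting and size-based eviction.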