# PoV reclaim and storage root
This document describes the current results of research into more accurate PoV size calculation during block building, achieved by computing the storage root after every transaction.
## Environment for benchmarking
I used the [block production benchmark](https://github.com/paritytech/polkadot-sdk/blob/53e30e5c60bdef92ae46f2f9b6d29a4d113e7419/cumulus/test/service/benches/block_production.rs#L30) to initially evaluate the impact on performance. The test uses a state with [10k accounts](https://github.com/paritytech/polkadot-sdk/blob/53e30e5c60bdef92ae46f2f9b6d29a4d113e7419/cumulus/test/service/src/bench_utils.rs#L50) and a simple extrinsic: [`transfer_keep_alive`](https://github.com/paritytech/polkadot-sdk/blob/53e30e5c60bdef92ae46f2f9b6d29a4d113e7419/cumulus/test/service/src/bench_utils.rs#L164).
## Base results
The results of running the benchmark with no modifications:
```
Block production/(proof = true, transfers = 544) block production
time: [116.49 ms 117.05 ms 117.62 ms]
thrpt: [4.6249 Kelem/s 4.6478 Kelem/s 4.6699 Kelem/s]
```
The `storage_root` call itself is not expensive compared to the total cost of all extrinsics (`544 * 250µs = 136ms`):
```
durations: block_builder::push: duration: 250.72µs result: Ok(())
...
durations: block_builder::push: duration: 246.18µs result: Ok(())
durations: block_builder::push: duration: 249.489µs result: Ok(())
durations: storage_root: duration=8.399046ms
durations: block_builder::build: duration: 11.466643ms, finalized_duration:11.067813ms
```
The _flamegraph_ for this scenario:

## Naive approach
As a first shot, I just added a call to [storage root](https://github.com/michalkucharczyk/polkadot-sdk/commit/4bee52570b49cf1876c395d8c661789ee1d2f51a) in the runtime (roughly sketched below).
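Conceptually the change boils down to something like this (a hedged sketch, not the linked commit verbatim; exact paths may differ):
```rust
// Hedged sketch: after every applied extrinsic, ask the host to compute
// the storage root. The host re-hashes every key modified since the
// beginning of the block, which is exactly what makes this naive variant
// quadratic. Runtime-only code: it needs externalities, so it is not a
// standalone program.
use sp_core::storage::StateVersion;

fn storage_root_after_each_extrinsic() {
    let _root: Vec<u8> = sp_io::storage::root(StateVersion::V1);
}
```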
Ekhm... well, the results are a bit scary:
```
Block production/(proof = true, transfers = 544) block production
time: [4.8737 s 4.8973 s 4.9232 s]
thrpt: [110.50 elem/s 111.08 elem/s 111.62 elem/s]
```
(which is about `2.4%` of the original throughput)
But this is kinda expected - with every new transaction we re-evaluate all the changes introduced by the previous transactions. Smells like an `n*n` problem.
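To make that concrete: if each of the `n` transactions touches about `k` keys, recomputing the root after transaction `i` re-processes all `i * k` keys accumulated so far, so the total work is quadratic in `n`:
$$
\sum_{i=1}^{n} i \cdot k \;=\; k \cdot \frac{n(n+1)}{2} \;=\; O(n^2)
$$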
Also, the list of keys modified during every transaction is quite impressive:
```
0x26aa394eea5630e07c48ae0c9558cef70a98fdbe9ce6c55837576c60c7af3850 system eventCount
0x26aa394eea5630e07c48ae0c9558cef70ccf055743738b7a91a6fb88ece33cac system ExtrinsicWeightReclaimed
0x26aa394eea5630e07c48ae0c9558cef734abf5cb34d6244378cddbf18e849d96 system blockWeight
0x26aa394eea5630e07c48ae0c9558cef780d41e5e16056765bc8461851072c9d7 system events
0x26aa394eea5630e07c48ae0c9558cef7a86da5a932684f199539836fcb8c886f system allExtrinsicsLen
0x26aa394eea5630e07c48ae0c9558cef7b99d880ec681799c0cf30e8886371da987ca7263f2b39f8d0aaf796bdfa9e630b616e29423cd2033c363d424a55d6f4c21ca22c8158b2ddf2a367a3cff715f0f system account
0x26aa394eea5630e07c48ae0c9558cef7b99d880ec681799c0cf30e8886371da9ce0ab18ef7f884d6b8a5e13cecbde2a866f2bb30c55af6fa8498da01a357d568482826af6036dc3f68c05983ed9ba13c system account
0x26aa394eea5630e07c48ae0c9558cef7df1daeb8986837f21cc5d17596bb78d166ccada06515787c10000000 system extrinsicData
0x26aa394eea5630e07c48ae0c9558cef7ff553b5a9862a516939d82b3d3d8661a system executionPhase
0x3a65787472696e7369635f696e646578 :extrinsic_index
0x3a7472616e73616374696f6e5f6c6576656c3a :transaction_level:
0xc2261276cc9d1f8598ea4b6a74b15c2f57c875e4cff74148e4628f264b974c80 balances totalIssuance
```
A _flamegraph_ for this follows. Basically, almost all of the time was spent in `sp_io::storage::ExtStorageRootVersion2::call`:

## Incremental storage root building
Well, we need something more sophisticated. I spent a while experimenting with this, and have some [dirty](https://github.com/michalkucharczyk/polkadot-sdk/commits/storage-root-optimizations-dev/) working code.
#### Summary of work done so far
An incremental storage root optimization was implemented with the following improvements:
- **Snapshot-based delta key tracking**: Tracks only the keys that have been modified since the last snapshot, rather than iterating over all keys modified since the beginning of the block. The core challenge is managing the contract between storage operations (`get`/`append`/`remove`), transaction management (`commit`/`rollback`), and storage root requests that create *snapshots* (look [here](https://github.com/michalkucharczyk/polkadot-sdk/blob/ea5a371c876fe6d932a98b4ad6222a22bcd21aa3/substrate/primitives/state-machine/src/overlayed_changes/xxx.rs#L422) to get the idea). This is supported by the [`xxx::Changeset`](https://github.com/michalkucharczyk/polkadot-sdk/blob/ea5a371c876fe6d932a98b4ad6222a22bcd21aa3/substrate/primitives/state-machine/src/overlayed_changes/xxx.rs#L10) helper, which maintains dirty key sets and creates snapshots on demand.
- **`changes_mut` optimization**: Avoids expensive filtering operations on the `HashMap` containing all modified keys by using `changes_mut2()`, which directly looks up only the keys from the snapshot set (this requires materialization of appended keys).
- **Backend snapshots**: [`xxx::BackendSnapshots`](https://github.com/michalkucharczyk/polkadot-sdk/blob/ea5a371c876fe6d932a98b4ad6222a22bcd21aa3/substrate/primitives/state-machine/src/overlayed_changes/xxx.rs#L172) was implemented to manage backend transaction snapshots, along with [`TrieBackendStorageWithReadOnlyOverlay`](https://github.com/michalkucharczyk/polkadot-sdk/blob/ea5a371c876fe6d932a98b4ad6222a22bcd21aa3/substrate/primitives/state-machine/src/trie_backend_essence.rs#L748) for layered storage access. As discussed with Sebastian: the performance gains from these components are not significant, so they could probably be skipped. However, they help reduce the size of backend snapshots when there are many rollbacks.
- Call `sp_io::storage::root` only once, in `post_dispatch_details`, on the runtime side.
- Use `foldhash` as the default hasher for `MemoryDB` (the one used in its internal `HashMap`, not the one used to compute trie node hashes).
All this work reduces the complexity: the _per transaction_ storage root computation goes from `O(all_modified_keys)` to `O(modified_keys_since_snapshot)`, providing some performance gains. A toy sketch of the delta tracking follows.
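For illustration, a minimal, self-contained model of the snapshot-based delta tracking (the names `DeltaTracker`/`take_snapshot` are made up for this example; the real `xxx::Changeset` also has to deal with `commit`/`rollback` and appended keys):
```rust
use std::collections::{HashMap, HashSet};

/// Toy model of snapshot-based delta tracking: a storage-root request only
/// processes the keys dirtied since the previous request, not all keys
/// modified since the beginning of the block.
#[derive(Default)]
struct DeltaTracker {
    values: HashMap<Vec<u8>, Vec<u8>>,
    dirty_since_snapshot: HashSet<Vec<u8>>,
}

impl DeltaTracker {
    fn set(&mut self, key: Vec<u8>, value: Vec<u8>) {
        self.dirty_since_snapshot.insert(key.clone());
        self.values.insert(key, value);
    }

    /// Drains the dirty set and returns the delta:
    /// O(modified_keys_since_snapshot) instead of O(all_modified_keys).
    fn take_snapshot(&mut self) -> Vec<(Vec<u8>, Vec<u8>)> {
        let values = &self.values;
        self.dirty_since_snapshot
            .drain()
            .map(|key| {
                let value = values.get(&key).cloned().unwrap_or_default();
                (key, value)
            })
            .collect()
    }
}

fn main() {
    let mut tracker = DeltaTracker::default();
    tracker.set(b"a".to_vec(), b"1".to_vec());
    assert_eq!(tracker.take_snapshot().len(), 1);
    tracker.set(b"b".to_vec(), b"2".to_vec());
    // Only `b` is in the new delta; `a` was consumed by the first snapshot.
    assert_eq!(tracker.take_snapshot(), vec![(b"b".to_vec(), b"2".to_vec())]);
}
```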
#### Results
But we are still not there:
```
Block production/(proof = true, transfers = 544) block production
time: [248.33 ms 250.24 ms 252.54 ms]
thrpt: [2.1541 Kelem/s 2.1739 Kelem/s 2.1907 Kelem/s]
```
(which is `~47%` of the original throughput).
The `perf` _flamegraph_ for this approach follows. `sp_io::storage::ExtStorageRootVersion2::call` is marked with an arrow and also zoomed in below for reference:


There is one more potential optimization - we could use [`bytes::Bytes`](https://crates.io/crates/bytes) to avoid copying bytes in the `sp_trie::delta_trie_root` function.
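The appeal of `bytes::Bytes` is that clones are reference-counted and O(1), while `Vec<u8>` clones copy the whole buffer. A trivial sketch:
```rust
use bytes::Bytes;

fn main() {
    let value = Bytes::from(vec![0u8; 4096]);
    // Cheap: bumps a reference count instead of copying 4096 bytes.
    let shared = value.clone();
    assert_eq!(value.len(), shared.len());
}
```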
But something was wrong with this graph. Sadly, at that point I realized that the _flamegraph_ script did not correctly resolve all call stacks, and there was one more significant `storage_root`-related pile hanging around:

I don't see any options to optimize this: `blake2` hashing is called on the trie nodes containing the updated values, so it must be executed.
## Where can we go from here?
It seems that computing the **right** storage root for every transaction is not an option.
What can be done:
1. We can estimate the proper PoV size (assuming no other _surprises_) by using a different hashing algorithm in the `MemoryDB` used by `OverlayedChanges`. This would be used **only** to access the right nodes in the backend and record them in the PoV. The value of the storage root and the entire content of the `MemoryDB` would be useless. At the end of the block we would still need to compute the storage root again (using the right hashing) - but this should not trigger any new accesses.
This solution, however, requires the introduction of a new host function (or better: redefining the existing PoV-size host function) which would call this fake storage-root machinery. I will now work on a PoC to see what we can get in terms of performance - most of the work done so far can be reused. This approach is super complex and feels a bit hacky. (*My gut feeling is that we could get maybe 90-95% of the initial bandwidth.*)
2. Accept the fact that the PoV size is just an estimation, and implement touching of items that are read or deleted - this was already proposed by Sebastian in [#6020](https://github.com/paritytech/polkadot-sdk/issues/6020).
3. Do nothing - wait for NOMT.
4. Do not call the storage root for every transaction. If I recall correctly, Sebastian proposed calling the storage root only when we approach (or exceed) the PoV size limit (a sketch of this follows).
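A minimal sketch of the idea behind option 4; the limit, threshold, and all names are hypothetical, just to illustrate the shape of the logic:
```rust
/// Hypothetical PoV budget; the real limit lives in the node/runtime config.
const POV_LIMIT: usize = 5 * 1024 * 1024;

/// Only pay for an exact (storage-root based) measurement when the cheap
/// estimate gets close to the limit; otherwise keep using the estimate.
fn effective_pov_size(estimate: usize, measure_exact: impl FnOnce() -> usize) -> usize {
    // The 90% threshold is arbitrary, for illustration only.
    if estimate * 10 >= POV_LIMIT * 9 {
        measure_exact()
    } else {
        estimate
    }
}

fn main() {
    // Far from the limit: the expensive measurement closure is never called.
    assert_eq!(effective_pov_size(1024, || unreachable!()), 1024);
    // Close to the limit: fall back to the exact measurement.
    assert_eq!(effective_pov_size(POV_LIMIT, || 42), 42);
}
```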
## Getting the right flamegraph
My `perf` flow did not feel right. The flamegraphs were a bit scattered and also did not seem trustworthy. In particular, the logs were showing that triggering the storage root building should account for ~20% (`65.91µs` of `323µs`) of the entire transaction processing:
```
2025-09-03T13:53:37.767503Z DEBUG durations: trigger_storage_root_size_estimation: duration=65.91µs snapshot_len=3 transcation_nodes=0
2025-09-03T13:53:37.767556Z DEBUG durations: block_builder::push: duration: 323.039µs result: Ok(())
```
while the flamegraph was not reflecting this ratio. This was a serious discrepancy.
The individual backtraces on the flamegraph also seemed to be *scattered*, many of them starting at an `unknown` mapping; refer to this [example](https://michalkucharczyk.github.io/files/23-pov-reclaim/dwarf-flamegraph.svg?s=trigger).
After a short investigation I managed to improve the graphs. Leaving these *obvious* technical details here for future investigators:
- add support for [`perfmap` in wasmtime](https://github.com/michalkucharczyk/polkadot-sdk/commit/2e24e34be80be3eb16bc05c33b9222be0efb95b6),
- the main issue was using `dwarf` for resolving the stacks - this method was not perfect, and more accurate results can be achieved using *frame pointers*. This requires building the codebase with `force-frame-pointers`.
- build benchmark:
```bash
export RUSTFLAGS="-C force-frame-pointers=yes"
cargo bench --bench block_production
```
- profile benchmark:
```bash
# note: some OSs may require this:
# sysctl -w kernel.perf_event_paranoid=0
# sysctl -w kernel.kptr_restrict=0
# /tmp/perf-pid.map will be created
export WASMTIME_PROFILING_STRATEGY=perfmap
perf record -F 999 -g -- /path-to/target/release/deps/block_production-xxxx --bench
```
- converting to flamegraph:
```bash
#pre-requisites:
cargo install addr2line --features=bin
cargo install rustfilt
git clone https://github.com/brendangregg/FlameGraph.git
#processing (/tmp/perf-pid.map will be read by perf automatically)
perf script | rustfilt > perf.script
FlameGraph/stackcollapse-perf.pl perf.script > perf.folded
FlameGraph/flamegraph.pl perf.folded > flamegraph.svg
```
The new flamegraphs shine. [Here](https://michalkucharczyk.github.io/files/23-pov-reclaim/original-flamegraph.svg) is the snapshot for the unchanged (*master-like*) runtime:

## Fake storage root: `FoldHash` instead of `Blake2` and key filtering
Switching from `Blake2` to `FoldHash` did not bring the expected gain:
```
Block production/(proof = true, transfers = 544) block production
time: [230.73 ms 231.51 ms 232.33 ms]
thrpt: [2.3415 Kelem/s 2.3498 Kelem/s 2.3577 Kelem/s]
```
The next step is filtering the keys in the snapshot: once a key has been used for computing the storage root, there is no need to use it again for subsequent transactions (a short sketch of this filter follows).
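A self-contained sketch of that filtering; the names are invented for the example, and the real code operates on the snapshot sets described earlier:
```rust
use std::collections::HashSet;

/// Remembers every key that already contributed to a storage root, so
/// subsequent snapshots only process keys seen for the first time.
#[derive(Default)]
struct SeenKeys(HashSet<Vec<u8>>);

impl SeenKeys {
    /// `HashSet::insert` returns `true` only for keys not seen before,
    /// which makes the filter a one-liner.
    fn filter_new(&mut self, snapshot: Vec<Vec<u8>>) -> Vec<Vec<u8>> {
        snapshot.into_iter().filter(|k| self.0.insert(k.clone())).collect()
    }
}

fn main() {
    let mut seen = SeenKeys::default();
    let first = seen.filter_new(vec![b"a".to_vec(), b"b".to_vec()]);
    assert_eq!(first.len(), 2);
    // `a` already contributed to a previous root; only `c` remains.
    let second = seen.filter_new(vec![b"a".to_vec(), b"c".to_vec()]);
    assert_eq!(second, vec![b"c".to_vec()]);
}
```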
I also used [`RapidHash`](https://docs.rs/rapidhash/latest/rapidhash/), which is slightly better than `FoldHash`. These improvements bring us to:
```
Block production/(proof = true, transfers = 544) block production
time: [182.98 ms 183.61 ms 184.37 ms]
thrpt: [2.9507 Kelem/s 2.9628 Kelem/s 2.9731 Kelem/s]
```
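Side note: once the map type is generic over the `BuildHasher`, swapping hashers is a one-line change. A sketch with `foldhash` (assuming its `fast::RandomState`; another hasher plugs in the same way):
```rust
use std::collections::HashMap;

// One type alias decides the bucket hasher for the whole map; the trie
// node hashes (Blake2) are untouched by this swap.
type FastMap<K, V> = HashMap<K, V, foldhash::fast::RandomState>;

fn main() {
    let mut nodes: FastMap<Vec<u8>, Vec<u8>> = FastMap::default();
    nodes.insert(b"node-hash".to_vec(), b"encoded-node".to_vec());
    assert_eq!(nodes.len(), 1);
}
```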
And here is the new [flamegraph](https://michalkucharczyk.github.io/files/23-pov-reclaim/update2-flamegraph.svg):
