# Week 13 Update: Odinson
It was a comparatively slow week as I have been waiting for further reviews on my [PR](https://github.com/sigp/lighthouse/pull/7803), but in the meantime I did some other work in the Lighthouse repository. On my PR fixing the extra fields in logs, [here](https://github.com/sigp/lighthouse/pull/8009), Michael and Jimmy left some comments, which I addressed the same day, and it got merged. I also had some discussions with Eitan from Lighthouse about a previous issue I was trying to solve, the different approaches to solving it, and PeerDAS as well. Separately, a while back I had worked on refactoring the Validator Attestation Service in this [PR](https://github.com/sigp/lighthouse/pull/7649), where Eitan had left some comments. I got some time to work on it, and it turned out to be a pretty big refactor with a large number of changes across the validator attestation and proposal paths. So I pushed the commit and asked Eitan to review it further.
Apart from that, I came across the article on [EIP 7503](https://eips.ethereum.org/EIPS/eip-7503), gave it a read, and got a sense of how it differs from (and improves on) Tornado Cash. Finally, I read [this article](https://x.com/hazeflow_xyz/status/1966526662809358754) by Hazeflow on why based rollups are the future of Ethereum: how interoperability has been a challenge, why centralized sequencers made sense previously, what problems based rollups solve, etc. It's a good read!
Finally, I got my Devconnect Argentina [ticket](https://x.com/impoulav/status/1966404729308922314) and am looking forward to booking my flights, getting my visa approved, and attending my first Devconnect with all the client teams and fellow fellows!
## Work for the week
There has not been much change to my state cache PR because I am awaiting further reviews, but I have been running the node for long stretches every day to test out the changes, and I observed some positive results!
So these are the metrics that I got after running the Lighthouse + Reth node for a few hours:
```shell
odinson lighthouse$ curl -s http://127.0.0.1:5054/metrics | egrep -i 'store_beacon_state_cache_memory_size|beacon_state_memory_size_calculation_time|store_beacon_state_cache_size'
# HELP beacon_state_memory_size_calculation_time Time taken to calculate the memory size of a beacon state.
# TYPE beacon_state_memory_size_calculation_time histogram
beacon_state_memory_size_calculation_time_bucket{le="0.005"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.01"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.025"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.05"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.1"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.25"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.5"} 0
beacon_state_memory_size_calculation_time_bucket{le="1"} 0
beacon_state_memory_size_calculation_time_bucket{le="2.5"} 487
beacon_state_memory_size_calculation_time_bucket{le="5"} 489
beacon_state_memory_size_calculation_time_bucket{le="10"} 489
beacon_state_memory_size_calculation_time_bucket{le="+Inf"} 489
beacon_state_memory_size_calculation_time_sum 625.1362320440004
beacon_state_memory_size_calculation_time_count 489
# HELP store_beacon_state_cache_memory_size Memory consumed by items in the beacon store state cache
# TYPE store_beacon_state_cache_memory_size gauge
store_beacon_state_cache_memory_size 1066
# HELP store_beacon_state_cache_size Current count of items in beacon store state cache
# TYPE store_beacon_state_cache_size gauge
store_beacon_state_cache_size 2
```
1. `store_beacon_state_cache_memory_size 1066` is now reported in MiB, where it was previously bytes. I updated `hot_cold_store.rs` to convert the byte count to MiB before exporting it.
2. The memory size normally stays within range. The one main concern I have raised with Michael is the low number of states held in the cache, indicated here by `store_beacon_state_cache_size 2`.
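The unit conversion in point 1 is simple; here is a minimal sketch of the idea (the function name, constant, and gauge-printing are illustrative, not the actual `hot_cold_store.rs` code):

```rust
// Convert a byte count to MiB before exporting it as a Prometheus gauge.
// Integer division is enough here, since the gauge only needs MiB granularity.
const BYTES_PER_MIB: u64 = 1024 * 1024;

fn cache_memory_size_mib(memory_size_bytes: u64) -> u64 {
    memory_size_bytes / BYTES_PER_MIB
}

fn main() {
    // Roughly 1066 MiB of cached states, as in the first metrics dump above.
    let bytes = 1066 * BYTES_PER_MIB + 12_345;
    assert_eq!(cache_memory_size_mib(bytes), 1066);
    println!(
        "store_beacon_state_cache_memory_size {}",
        cache_memory_size_mib(bytes)
    );
}
```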
Next, when I hammered it with some HTTP queries, this is what I saw:
```shell
# HELP beacon_state_memory_size_calculation_time Time taken to calculate the memory size of a beacon state.
# TYPE beacon_state_memory_size_calculation_time histogram
beacon_state_memory_size_calculation_time_bucket{le="0.005"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.01"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.025"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.05"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.1"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.25"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.5"} 0
beacon_state_memory_size_calculation_time_bucket{le="1"} 0
beacon_state_memory_size_calculation_time_bucket{le="2.5"} 1364
beacon_state_memory_size_calculation_time_bucket{le="5"} 1370
beacon_state_memory_size_calculation_time_bucket{le="10"} 1374
beacon_state_memory_size_calculation_time_bucket{le="+Inf"} 1383
beacon_state_memory_size_calculation_time_sum 2074.078908337999
beacon_state_memory_size_calculation_time_count 1383
# HELP store_beacon_state_cache_memory_size Memory consumed by items in the beacon store state cache
# TYPE store_beacon_state_cache_memory_size gauge
store_beacon_state_cache_memory_size 1684
# HELP store_beacon_state_cache_size Current count of items in beacon store state cache
# TYPE store_beacon_state_cache_size gauge
store_beacon_state_cache_size 5
```
Here, `store_beacon_state_cache_memory_size` did spike up to 1684 MiB, but on the next recount in the `recompute_cached_bytes` function, since the memory size went above the threshold, the pruning kicked in:
```shell
odinson lighthouse$ curl -s http://127.0.0.1:5054/metrics | egrep -i 'store_beacon_state_cache_memory_size|beacon_state_memory_size_calculation_time|store_beacon_state_cache_size'
# HELP beacon_state_memory_size_calculation_time Time taken to calculate the memory size of a beacon state.
# TYPE beacon_state_memory_size_calculation_time histogram
beacon_state_memory_size_calculation_time_bucket{le="0.005"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.01"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.025"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.05"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.1"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.25"} 0
beacon_state_memory_size_calculation_time_bucket{le="0.5"} 0
beacon_state_memory_size_calculation_time_bucket{le="1"} 7
beacon_state_memory_size_calculation_time_bucket{le="2.5"} 2453
beacon_state_memory_size_calculation_time_bucket{le="5"} 2487
beacon_state_memory_size_calculation_time_bucket{le="10"} 2572
beacon_state_memory_size_calculation_time_bucket{le="+Inf"} 2604
beacon_state_memory_size_calculation_time_sum 4778.682415834011
beacon_state_memory_size_calculation_time_count 2604
# HELP store_beacon_state_cache_memory_size Memory consumed by items in the beacon store state cache
# TYPE store_beacon_state_cache_memory_size gauge
store_beacon_state_cache_memory_size 1064
# HELP store_beacon_state_cache_size Current count of items in beacon store state cache
# TYPE store_beacon_state_cache_size gauge
store_beacon_state_cache_size 7
```
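The pruning behaviour seen in these dumps can be sketched as follows. This is a toy model, not Lighthouse's actual cache: the struct, the 1100 MiB threshold, and the oldest-first eviction policy are all illustrative assumptions; only the idea (recompute total size, evict until under budget) reflects the PR.

```rust
use std::collections::VecDeque;

// Illustrative stand-in for a cached beacon state; only its size matters here.
struct CachedState {
    size_mib: u64,
}

// Minimal sketch of threshold-based pruning: recompute the total cached
// size and evict the oldest entries until the cache fits the budget.
struct StateCache {
    states: VecDeque<CachedState>, // front = oldest
    threshold_mib: u64,
}

impl StateCache {
    fn recompute_cached_size(&self) -> u64 {
        self.states.iter().map(|s| s.size_mib).sum()
    }

    fn prune(&mut self) {
        while self.recompute_cached_size() > self.threshold_mib {
            // Evict the oldest cached state first.
            self.states.pop_front();
        }
    }
}

fn main() {
    let mut cache = StateCache {
        states: VecDeque::from(vec![
            CachedState { size_mib: 620 },
            CachedState { size_mib: 540 },
            CachedState { size_mib: 524 },
        ]),
        threshold_mib: 1100,
    };
    // 1684 MiB total, as in the spiked dump above: pruning evicts the oldest
    // state and brings the total back under the threshold.
    assert_eq!(cache.recompute_cached_size(), 1684);
    cache.prune();
    assert!(cache.recompute_cached_size() <= 1100);
}
```

With these numbers the first eviction alone drops the total to 1064 MiB, which is consistent with the final dump above.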
So I believe the dynamic memory size approach is working. With the updated subtrees implementation I spoke about [last week](https://hackmd.io/@Odinson/SkMVH3q9xx?stext=3357%3A8%3A0%3A1757852224%3AUPpMI8), `beacon_state_memory_size_calculation_time_count` comes to about 2604, with a sum of 4,778.682 s, i.e. an average of ≈ 1.84 s per measurement.
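The average follows directly from the Prometheus histogram convention that `_sum / _count` gives the mean observation:

```rust
fn main() {
    // Values taken from the histogram dump above.
    let sum_seconds = 4778.682415834011_f64;
    let count = 2604.0_f64;
    // Mean time per memory-size measurement.
    let avg = sum_seconds / count;
    assert!((avg - 1.835).abs() < 0.01);
    println!("avg ≈ {:.2} s per measurement", avg);
}
```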
Apart from this, in another piece of work, this [commit](https://github.com/sigp/lighthouse/pull/7649/commits/4117e41b36ba69df7bff28513a2f06d5cef158dc) focuses on refactoring the Validator Attestation Service to construct `SingleAttestation` objects directly. Previously, the service produced an `Attestation<E>` and converted it to `SingleAttestation` as an intermediate step.
1. `beacon_node/beacon_chain/src/beacon_chain.rs`: Modifications are made to functions that produce attestations, changing their return types and internal logic to work directly with `SingleAttestation` instead of `Attestation<E>`. This includes changes in `impl<T: BeaconChainTypes> BeaconChain<T>` methods related to attestation production.
2. `beacon_node/beacon_chain/src/early_attester_cache.rs`: The `EarlyAttesterCache` implementation was updated to handle `SingleAttestation` directly. This involves changes in the `try_attest` method, where `Attestation<E>` was replaced with `SingleAttestation` in its return type and internal construction.
3. `beacon_node/beacon_chain/src/test_utils.rs`: Similar refactoring was applied in `test_utils.rs` to align with the new `SingleAttestation` direct construction approach, ensuring consistency across the codebase for attestation handling.
This refactoring simplifies the attestation process by removing unnecessary conversions and directly utilizing `SingleAttestation`, which should lead to cleaner code and potentially improved performance in the Validator Attestation Service.
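The gist of the refactor can be sketched like this. The types below are heavily simplified stand-ins, not the real Lighthouse consensus types (whose field sets differ); what the sketch shows is only the shape of the change: building a `SingleAttestation` directly instead of going through a bitfield-based `Attestation<E>` and converting.

```rust
// Simplified stand-in for the attestation data being signed over.
#[derive(Debug, Clone, PartialEq)]
struct AttestationData {
    slot: u64,
    index: u64,
}

// Simplified single-validator attestation: it carries the committee and
// attester indices explicitly, instead of an aggregation bitfield.
#[derive(Debug, PartialEq)]
struct SingleAttestation {
    committee_index: u64,
    attester_index: u64,
    data: AttestationData,
}

// Sketch of the refactor's idea: the production path returns a
// `SingleAttestation` directly, so no intermediate conversion is needed.
fn produce_single_attestation(
    data: AttestationData,
    committee_index: u64,
    attester_index: u64,
) -> SingleAttestation {
    SingleAttestation {
        committee_index,
        attester_index,
        data,
    }
}

fn main() {
    let data = AttestationData { slot: 100, index: 0 };
    let att = produce_single_attestation(data, 3, 42);
    assert_eq!(att.committee_index, 3);
    assert_eq!(att.attester_index, 42);
}
```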
## Resources
1. Project [PR](https://github.com/sigp/lighthouse/pull/7803)
2. Validator Attestation Service refactor [PR](https://github.com/sigp/lighthouse/pull/7649)
3. Removal of extra fields in logs [PR](https://github.com/sigp/lighthouse/pull/8009)
4. [EIP 7503](https://eips.ethereum.org/EIPS/eip-7503)
5. [Article](https://x.com/hazeflow_xyz/status/1966526662809358754) on why Based rollups are the future of Ethereum
6. [Analysis](https://ethresear.ch/t/an-analysis-of-attestation-timings-in-a-6-s-slot/23016) of attestation timings in a 6s slot
## Conclusion
Looking forward to completing the VAS refactor PR and getting to other work in the Lighthouse codebase, while also awaiting the further updates and work to be done on the state cache PR. Apart from that, I'm hoping to spend some more time on my Rust skills next week!