# Week 20 Update: Odinson
And November is here... Just a few more updates and weeks remain. Early in the week, Michael pushed a few changes to the default memory size of the state cache, left some review comments, and asked me to add more tests to verify that the implementation works correctly. I did that, and have also been picking up other legwork in the Lighthouse repository amidst the team's push to ship Fusaka to mainnet. CI passed when he ran the tests, and I have been running my local branch's beacon node against Reth; the benchmarks show that the pruning, recomputation, and memory-aware caching have kept memory usage in check and headed off any potential OOM situations, which is a positive sign!
The [PR](https://github.com/sigp/lighthouse/pull/8291) I made last week fixing incorrect time estimation for custody sync got merged. I also came across this issue [regarding range sync modifications](https://github.com/sigp/lighthouse/issues/8341) and did some research on it; more on that below.
Apart from that, I attended [ACDC#168](https://www.youtube.com/watch?v=JelYN_iyU84) and caught up on the CL teams' current status and the plans for Glamsterdam. We also got a new name for the H-star upgrade: `Heka` :D
## Work for the week
Michael left some reviews on the project [PR](https://github.com/sigp/lighthouse/pull/7803), where he changed `DEFAULT_STATE_CACHE_MAX_BYTES` from the previous 3GB to 4GB and renamed the flag from `state_cache_max_size` to `state_cache_max_mb`, since users could mix it up with `state_cache_size`, which counts the number of beacon states. In his comments he asked me to cover a number of cache fields of the `BeaconState` in the new `memsize.rs` in the `types` crate, such as `committee_caches`, `progressive_balances_cache`, `pubkey_cache`, etc. Some of these use structural sharing, some use `Arc`, and some use `rpds`. For the ones backed by `rpds`, there was no sensible way to measure their structural sharing, as we can't implement `MemorySize` for them, so I had to leave some of the cache fields out.
I made those changes, implementing `MemorySize` for several cache fields and adding a test for the `state_cache_max_bytes_flag` CLI flag in `tests/beacon_node.rs`, which verifies that the supplied value is stored in the config.
After this, I also added a test in `beacon_state/tests.rs` based on the `MemorySize` implementation for `committee_caches` and `epoch_fields`, exercising structural sharing and memory tracking.
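The shared-allocation accounting these tests exercise can be sketched roughly like this. To be clear, this is a toy model: the trait name `MemorySize` comes from the PR, but the signature, the `seen`-set approach, and the `CommitteeCache` shape here are my simplifications, not Lighthouse's actual code:

```rust
use std::collections::HashSet;
use std::sync::Arc;

/// Hypothetical sketch of a memory-measurement trait; the real
/// `MemorySize` trait in Lighthouse's `memsize.rs` may differ.
trait MemorySize {
    /// Bytes attributable to this value, counting each shared
    /// allocation only once via the `seen` pointer set.
    fn memory_size(&self, seen: &mut HashSet<usize>) -> usize;
}

/// Simplified stand-in for one of the `BeaconState` cache fields.
struct CommitteeCache {
    shuffling: Vec<u64>,
}

impl MemorySize for CommitteeCache {
    fn memory_size(&self, _seen: &mut HashSet<usize>) -> usize {
        std::mem::size_of::<Self>() + self.shuffling.capacity() * std::mem::size_of::<u64>()
    }
}

impl<T: MemorySize> MemorySize for Arc<T> {
    fn memory_size(&self, seen: &mut HashSet<usize>) -> usize {
        let ptr = Arc::as_ptr(self) as usize;
        if seen.insert(ptr) {
            // First time we see this allocation: count it fully.
            (**self).memory_size(seen)
        } else {
            // Structurally shared with a value we already counted.
            0
        }
    }
}

fn main() {
    let cache = Arc::new(CommitteeCache { shuffling: vec![0u64; 1024] });
    let clone = Arc::clone(&cache); // shares the same allocation
    let mut seen = HashSet::new();
    let first = cache.memory_size(&mut seen);
    let second = clone.memory_size(&mut seen);
    // The shared allocation is only counted once across the two handles.
    println!("{} {}", first > 0, second == 0); // true true
}
```

The point the tests check is the second measurement: two handles to the same cache must not double-count the underlying allocation, otherwise the state cache's byte limit would trip far too early.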
Now, regarding the "range sync modifications" [issue](https://github.com/sigp/lighthouse/issues/8341) that ethDreamer created for Gloas: I looked into the range-sync pipeline and the files containing the functions that would need to change, and this is what I came up with:
I found that in `beacon_chain.rs`, [process_chain_segment](https://github.com/sigp/lighthouse/blob/v8.0.0-rc.2/beacon_node/beacon_chain/src/beacon_chain.rs#L2823) passes the `chain_segment` to the [filter_chain_segment](https://github.com/sigp/lighthouse/blob/b59feb042c13fa74304acb920e720efde885d3bd/beacon_node/beacon_chain/src/beacon_chain.rs#L2845) function, and the [send_chain_segment](https://github.com/sigp/lighthouse/blob/b59feb042c13fa74304acb920e720efde885d3bd/beacon_node/network/src/network_beacon_processor/mod.rs#L513) method in `network_beacon_processor/mod.rs` uses the [process_chain_segment](https://github.com/sigp/lighthouse/blob/v8.0.0-rc.2/beacon_node/network/src/network_beacon_processor/sync_methods.rs#L531) method.
In [range.rs](https://github.com/sigp/lighthouse/blob/v8.0.0-rc.2/beacon_node/network/src/sync/range_sync/range.rs), [blocks_by_range_response](https://github.com/sigp/lighthouse/blob/b59feb042c13fa74304acb920e720efde885d3bd/beacon_node/network/src/sync/range_sync/range.rs#L204) receives the RPC blocks straight from the network and processes them, so it could also be considered for a rename.
The [send_chain_segment](https://github.com/sigp/lighthouse/blob/b59feb042c13fa74304acb920e720efde885d3bd/beacon_node/network/src/network_beacon_processor/mod.rs#L513) method is also used in the [process_batch](https://github.com/sigp/lighthouse/blob/b59feb042c13fa74304acb920e720efde885d3bd/beacon_node/network/src/sync/range_sync/chain.rs#L317) method.
Also in `chain.rs`, the [on_block_response](https://github.com/sigp/lighthouse/blob/b59feb042c13fa74304acb920e720efde885d3bd/beacon_node/network/src/sync/range_sync/chain.rs#L262) method receives the `Vec` of blocks. These are the functions that might need changes for Gloas. I have asked ethDreamer and will look into it once he gets back with further instructions.
And, I wrote a small [thread](https://x.com/impoulav/status/1984881738334576739) on the upcoming Fusaka upgrade, which gave me a reason to deep dive into [EIP-7918](https://eips.ethereum.org/EIPS/eip-7918), which adds a reserve price so that a blob must always cost at least `BLOB_BASE_COST` worth of L1 execution gas; [EIP-7825](https://eips.ethereum.org/EIPS/eip-7825), which introduces a protocol-level cap of 16,777,216 (2^24) gas on any single transaction, good for DoS resistance and client parallelization; and [EIP-7642](https://eips.ethereum.org/EIPS/eip-7642), which cleans up the p2p protocol, dropping pre-merge bloat for leaner sync.
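If I'm reading EIP-7918 correctly, the reserve-price condition it adds can be sketched as a single comparison; constants are taken from the EIP, and this is a simplified mirror of the check in the spec's fee-update pseudocode, not a full implementation:

```rust
// Constants per EIP-7918 / EIP-4844.
const BLOB_BASE_COST: u128 = 1 << 13; // 8192: reserve, in execution gas
const GAS_PER_BLOB: u128 = 1 << 17;   // 131072 blob gas per blob

/// True when the reserve price binds: a full blob would otherwise cost
/// less than BLOB_BASE_COST worth of execution gas, so the blob fee
/// update rule stops letting the blob base fee decay further.
fn reserve_price_active(base_fee_per_gas: u128, blob_base_fee: u128) -> bool {
    BLOB_BASE_COST * base_fee_per_gas > GAS_PER_BLOB * blob_base_fee
}

fn main() {
    let base_fee = 10_000_000_000u128; // 10 gwei execution base fee
    // Deeply discounted blobs: the reserve kicks in.
    println!("{}", reserve_price_active(base_fee, 1)); // true
    // Blob fee at parity with the execution fee: no reserve needed.
    println!("{}", reserve_price_active(base_fee, base_fee)); // false
}
```

Rearranged, the condition says the reserve binds whenever the blob base fee drops below `base_fee * BLOB_BASE_COST / GAS_PER_BLOB`, i.e. 1/16 of the execution base fee, which is what ties blob pricing to execution cost.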
## Resources
1. Project [PR](https://github.com/sigp/lighthouse/pull/7803)
2. [Gloas Range Sync Modifications](https://github.com/sigp/lighthouse/issues/8341)
3. [ACDC#168](https://www.youtube.com/watch?v=JelYN_iyU84)
4. [EIP-7918: Blob base fee bounded by execution cost](https://eips.ethereum.org/EIPS/eip-7918)
5. [EIP-7825: Transaction Gas Limit Cap](https://eips.ethereum.org/EIPS/eip-7825)
6. [EIP-7642: eth/69](https://eips.ethereum.org/EIPS/eip-7642)
## Conclusion
Only one week left until my flight to Buenos Aires and two weeks until DevConnect, and honestly, I am very excited. I'm hoping for my project to be merged, and I'm a bit nervous and hyped about getting to interact with the core devs and so many other people from the protocol teams and the Foundation. I'm also looking forward to preparing my slides and talk for the final presentation, and hoping to line up good work for after the fellowship ends in a few weeks' time!