# Week 8 Update: [Odinson](https://github.com/PoulavBhowmick03)
This last week went well in terms of development of the project and the [PR]() in Lighthouse. I received comments from dapplion and Michael, and had discussions with them in the group about the direction of the caching work, the changes required, and the problems involved, which I will cover below.
Apart from that, this week I spent some time on libp2p work, not directly in their repo but building on libp2p: a spreadsheet application that uses it. I got to learn a lot, not only about gossipsub and peer-to-peer communication, but also about how each component works in detail.
I also joined the office hours, where Justin Drake spoke about a lot of topics: the current state of Ethereum, the lean chain, and how to approach others for opportunities and things to work on.
## Resources
1. [Project PR](https://github.com/sigp/lighthouse/pull/7803)
2. [Arc and double Arc memory sharing](https://doc.rust-lang.org/std/sync/struct.Arc.html)
## Work for the week
Dapplion made a comment raising a potential issue in the PR: memory tracking in the case of shared `Arc`s, or to quote his [comment](https://github.com/sigp/lighthouse/pull/7803#discussion_r2252133516) specifically, whether `memory_size only count the size of the data owned by the first copy of the Arc`.
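To illustrate the double-counting problem, here is a minimal, self-contained sketch (the `CacheEntry` type and function names are hypothetical, not Lighthouse's actual types): a naive sum counts a shared allocation once per `Arc` clone, while deduplicating by allocation address counts it once.

```rust
use std::collections::HashSet;
use std::sync::Arc;

// Hypothetical stand-in for a cached state: entries may share their
// payload with other entries via Arc.
struct CacheEntry {
    payload: Arc<Vec<u8>>,
}

// Naive size: sums every entry's payload, so memory behind a shared
// Arc is counted once per clone.
fn naive_size(entries: &[CacheEntry]) -> usize {
    entries.iter().map(|e| e.payload.len()).sum()
}

// Deduplicated size: remember each allocation's address, so shared
// memory is only counted for the first entry that reaches it.
fn deduped_size(entries: &[CacheEntry]) -> usize {
    let mut seen: HashSet<*const Vec<u8>> = HashSet::new();
    entries
        .iter()
        .filter(|e| seen.insert(Arc::as_ptr(&e.payload)))
        .map(|e| e.payload.len())
        .sum()
}

fn main() {
    let shared = Arc::new(vec![0u8; 1000]);
    let entries = vec![
        CacheEntry { payload: shared.clone() },
        CacheEntry { payload: shared.clone() },
        CacheEntry { payload: Arc::new(vec![0u8; 500]) },
    ];
    // Naive counts the shared 1000-byte buffer twice: 2500.
    // Deduped counts it once: 1500.
    println!("naive={} deduped={}", naive_size(&entries), deduped_size(&entries));
}
```

milhouse's `MemoryTracker` solves this more generally by tracking subtrees it has already seen, but the pointer-dedup idea above captures the core of why a per-copy sum overestimates.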
He also suggested adding a new metric to track how long `memory_size` takes, to see how expensive it is.
Later, Michael commented on a potential issue with the initial incremental updates to `memory_size` in the `put_state` method: the `memory_size` of a cached state is probably larger than the amount of space we save by deleting it, because some of its memory might be shared with other states. This is true; unless we recount after adding to or removing from `cached_bytes`, we won't necessarily know how much `memory_size` has actually changed.
Later, in the dev group, he suggested an iterative approach: if the `memory_size` of the cache exceeds the threshold, evict a certain number of states, then measure `memory_size` again, repeating as needed.
So I implemented those changes. I added a new function with logic to avoid double-counting memory shared across `Arc`s, so `memory_size` no longer blindly sums owned bytes without considering shared ownership.
I replaced the previous incremental `cached_bytes` approach with a full measurement path: a new `measure_cached_memory_size()` helper uses milhouse's `MemoryTracker::track_item` on each `BeaconState` to compute a differential total and the elapsed measurement time. `recompute_cached_bytes()` now calls that helper, sets `self.cached_bytes` from the measured total, and then iteratively culls small batches of states until the cache is under `max_cached_bytes`, remeasuring after each cull so shared memory is accounted for correctly.
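The measure-then-cull loop can be sketched as follows. This is a simplified model, not the actual Lighthouse code: the cache is modeled as a list of `Arc`-shared byte buffers, the measurement dedupes shared allocations by address (standing in for milhouse's differential tracking), and the names `Cache`, `CULL_BATCH`, and `max_cached_bytes` are illustrative.

```rust
use std::collections::HashSet;
use std::sync::Arc;

// Hypothetical model of the state cache, oldest state first.
struct Cache {
    states: Vec<Arc<Vec<u8>>>,
    max_cached_bytes: usize,
    cached_bytes: usize,
}

// Illustrative batch size for each cull pass.
const CULL_BATCH: usize = 2;

impl Cache {
    // Full measurement pass: dedupe by allocation address so memory
    // shared between states is counted once.
    fn measure_cached_memory_size(&self) -> usize {
        let mut seen = HashSet::new();
        self.states
            .iter()
            .filter(|s| seen.insert(Arc::as_ptr(s)))
            .map(|s| s.len())
            .sum()
    }

    // Measure, then iteratively cull small batches until under the
    // limit, remeasuring after each cull: deleting a state only frees
    // the memory it did NOT share with the surviving states.
    fn recompute_cached_bytes(&mut self) {
        self.cached_bytes = self.measure_cached_memory_size();
        while self.cached_bytes > self.max_cached_bytes && !self.states.is_empty() {
            let n = CULL_BATCH.min(self.states.len());
            self.states.drain(..n); // drop the oldest batch
            self.cached_bytes = self.measure_cached_memory_size();
        }
    }
}
```

The remeasure-after-each-cull step is what makes the loop correct under sharing: a single subtraction of the evicted state's standalone size would overestimate the memory actually freed.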
Lastly, I added a new timing metric, `BEACON_STATE_MEMORY_SIZE_CALCULATION_TIME`, so we can see both the true cache size and how expensive the full `memory_size` calculation is in practice.
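Timing the measurement itself is straightforward; a minimal sketch with `std::time::Instant` (the helper name is hypothetical, and the real metric is recorded through Lighthouse's metrics crate rather than returned):

```rust
use std::time::{Duration, Instant};

// Run a measurement closure and return both its result and how long
// it took, the elapsed time being what a metric like
// BEACON_STATE_MEMORY_SIZE_CALCULATION_TIME would observe.
fn timed_memory_size<F: Fn() -> usize>(measure: F) -> (usize, Duration) {
    let start = Instant::now();
    let size = measure();
    (size, start.elapsed())
}
```

Recording the elapsed time on every full recount makes it easy to spot if the `O(cache size)` measurement ever becomes a bottleneck.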
## Conclusion
This week was a little slow, as my laptop crashed and I had to buy a new one after a few days, but I'm awaiting the Lighthouse team's comments and further reviews. This week, Potuz will be joining us in the office hours; I would love to hear from him on Ethereum consensus and ePBS. I'm also looking forward to studying the latest work and continuing on Lighthouse and the project as well.