# Ethereum CDAP Development Update #4
## Ongoing iteration of PortalStorage
In my last update, I shared a first iteration of a Kademlia-based storage engine, implemented in Rust for Trin, the flagship Portal Network client.
Since then, I've updated this implementation to make it more robust and user-friendly for the other layers of Trin that will use it. The main change was defining a `PortalStorageError` type that all `PortalStorage` methods return inside a `Result`. This type wraps the various error types thrown internally within `PortalStorage`. Previously, I had been calling `unwrap` on many `Option`s and `Result`s while getting the capacity-management logic working.
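To illustrate the pattern, here is a minimal sketch of an error type that wraps internal errors. The variants, names, and `From` conversion are assumptions for illustration only, not the actual `PortalStorageError` in Trin:

```rust
use std::fmt;

/// Hypothetical error type wrapping the failures that can occur inside storage.
#[derive(Debug)]
pub enum PortalStorageError {
    /// Wraps an error from the underlying database (variant assumed).
    Db(String),
    /// Wraps an I/O error, e.g. from inspecting the data directory.
    Io(std::io::Error),
}

impl fmt::Display for PortalStorageError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            PortalStorageError::Db(msg) => write!(f, "database error: {}", msg),
            PortalStorageError::Io(err) => write!(f, "io error: {}", err),
        }
    }
}

impl std::error::Error for PortalStorageError {}

impl From<std::io::Error> for PortalStorageError {
    fn from(err: std::io::Error) -> Self {
        PortalStorageError::Io(err)
    }
}

/// With `From` impls in place, internal `?` operators replace the old `unwrap` calls.
fn total_entries(path: &str) -> Result<usize, PortalStorageError> {
    let entries = std::fs::read_dir(path)?; // io::Error auto-converts
    Ok(entries.count())
}
```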
I also addressed a lot of other great feedback received during review, and then merged this into Trin's master branch: https://github.com/ethereum/trin/pull/69
This PR lived for far too long. I should have taken a more iterative approach, which I plan to do going forward.
## Next Goals
The primary goal now is to integrate `PortalStorage` into the rest of Trin for the launch of the Trin testnet. This will involve refactoring the `OverlayProtocol` struct to use a `PortalStorage` instance instead of using a `DB` directly, roughly as sketched below.
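As a rough before/after sketch of that refactor (all type and field names here are placeholders, not Trin's actual definitions):

```rust
/// Placeholder types, standing in for Trin's actual DB handle and storage engine.
struct Db;
struct PortalStorage {
    db: Db,
}

/// Before: the overlay talks to the database directly.
struct OverlayProtocolBefore {
    db: Db,
}

/// After: the overlay delegates to PortalStorage, which owns the DB handle
/// along with the distance and capacity logic.
struct OverlayProtocolAfter {
    storage: PortalStorage,
}
```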
Other tasks planned for the testnet launch:
Currently, `PortalStorage` can only find the farthest content ID in the database when using the history network's standard XOR distance function. I am adding a SQL implementation of the state network's custom distance function so that the struct will be usable by the state network as well; the sketch below shows the general mechanism.
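One way to push a custom distance function into SQL is to register a Rust closure as a SQLite scalar function. The sketch below assumes rusqlite with its `functions` feature enabled, uses illustrative `content`/`content_id` table and column names, and substitutes a simplified XOR prefix for the distance; the state network's actual formula would be swapped in:

```rust
use rusqlite::{functions::FunctionFlags, Connection, Result};

/// Simplified distance: XOR the first 4 bytes and read them as a big-endian
/// u32, so SQLite can ORDER BY the result. Illustrative only; assumes
/// 32-byte IDs and ignores the lower 28 bytes.
fn distance_prefix(a: &[u8], b: &[u8]) -> u32 {
    let mut out = [0u8; 4];
    for i in 0..4 {
        out[i] = a[i] ^ b[i];
    }
    u32::from_be_bytes(out)
}

fn register_distance_fn(conn: &Connection, node_id: [u8; 32]) -> Result<()> {
    // Expose the Rust function to SQL as `portal_distance(content_id)`.
    conn.create_scalar_function(
        "portal_distance",
        1,
        FunctionFlags::SQLITE_UTF8 | FunctionFlags::SQLITE_DETERMINISTIC,
        move |ctx| {
            let content_id: Vec<u8> = ctx.get(0)?;
            Ok(distance_prefix(&node_id, &content_id) as i64)
        },
    )
}

// Finding the farthest content ID then becomes a single query:
// SELECT content_id FROM content ORDER BY portal_distance(content_id) DESC LIMIT 1;
```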
I have modified the expected signature of the `content_key` -> `content_id` conversion function, but after doing so I realized that there is a much simpler approach: `PortalStorage`'s `store`, `should_store`, and `get` methods will take a `ContentKey` type that has a `to_content_id()` method. This type and method are going to be merged by Victor this week, after which I will make the update.
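The shape I have in mind is roughly the following. This is a sketch only; the trait, the example key type, and the SHA-256 derivation are assumptions, not the actual type being merged:

```rust
use sha2::{Digest, Sha256};

/// Hypothetical trait mirroring the described API: every content key knows
/// how to derive its own 32-byte content ID.
pub trait ContentKey {
    fn to_content_id(&self) -> [u8; 32];
}

/// Illustrative key type; the real networks define their own key encodings.
pub struct BlockHeaderKey {
    pub block_hash: [u8; 32],
}

impl ContentKey for BlockHeaderKey {
    fn to_content_id(&self) -> [u8; 32] {
        // Content ID derivation assumed to be SHA-256 here for the sketch.
        let mut hasher = Sha256::new();
        hasher.update(self.block_hash);
        hasher.finalize().into()
    }
}
```

With something like this in place, `store`, `should_store`, and `get` can accept `&impl ContentKey` and never handle raw key-to-ID conversion themselves.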
Currently, the `PortalStorage` instances in each Portal subnetwork (history, state) will all read from and write to a single database instance. I think that having this work properly with each network running in a different thread will be as simple as wrapping the database instance in an `RwLock` and using the `write().await` and `read().await` syntax, as in the sketch below. I plan to implement this while also researching whether it is the best approach from a disk I/O optimization perspective.
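A minimal sketch of that sharing pattern, assuming Tokio's `RwLock` and a placeholder `Db` type standing in for Trin's actual database handle:

```rust
use std::sync::Arc;
use tokio::sync::RwLock;

/// Placeholder database type for the sketch.
struct Db;

impl Db {
    fn put(&mut self, _key: Vec<u8>, _value: Vec<u8>) {}
    fn get(&self, _key: &[u8]) -> Option<Vec<u8>> {
        None
    }
}

#[tokio::main]
async fn main() {
    // One shared database handle, cloned into each subnetwork's task.
    let db = Arc::new(RwLock::new(Db));

    let history_db = Arc::clone(&db);
    let history_task = tokio::spawn(async move {
        // write() takes the lock exclusively.
        history_db.write().await.put(b"key".to_vec(), b"value".to_vec());
    });

    let state_db = Arc::clone(&db);
    let state_task = tokio::spawn(async move {
        // read() allows concurrent readers across tasks.
        let _value = state_db.read().await.get(b"key");
    });

    let _ = tokio::join!(history_task, state_task);
}
```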
I am also going to make the capacity management run as a background thread within `PortalStorage`, so that neither `new` nor `store` ever blocks while managing capacity.
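One possible shape for this, sketched with a Tokio task standing in for the background thread (all names here are assumptions, and the eviction logic is elided):

```rust
use std::sync::Arc;
use tokio::sync::Notify;

/// Sketch: `store` only signals; a background task does the actual eviction.
struct PortalStorage {
    prune_signal: Arc<Notify>,
}

impl PortalStorage {
    fn new() -> Self {
        let prune_signal = Arc::new(Notify::new());
        let signal = Arc::clone(&prune_signal);
        // Spawned once; loops for the life of the storage instance.
        tokio::spawn(async move {
            loop {
                signal.notified().await;
                // Evict farthest content until back under capacity.
                // (Eviction logic elided in this sketch.)
            }
        });
        PortalStorage { prune_signal }
    }

    fn store(&self, _key: &[u8], _value: &[u8]) {
        // Write the value synchronously, then hand capacity management
        // off to the background task so this call never blocks on pruning.
        self.prune_signal.notify_one();
    }
}

#[tokio::main]
async fn main() {
    let storage = PortalStorage::new();
    storage.store(b"key", b"value");
}
```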