# EPF5 Week 4 Updates

**update before ethcc**

EthCC is coming up in less than a week! Very exciting, but it also means I need to hurry up with my project proposal, since I'm presenting at [EPF Day](https://efdn.notion.site/EPF5-EthCC-in-Brussels-a75d77440b014461b24e3e9c7c8765be) on Sunday.

After conversations with mentors like Mario and Manu, as well as Dan Cline, who works on reth, I decided to shift my attention to SSZ, since the community seems to want to switch to SSZ in the long run. Working on improvements to SSZ makes the most sense. With that in mind, I have two short projects that I think can be impactful while remaining feasible to work on part-time.

[Here are the slides!](https://www.figma.com/deck/25jyYJshW2xJ0Q14IPXbpS/EPF-Day-%40-EthCC?node-id=1-34&t=E466WwOmgKEkB9Im-1&scaling=min-zoom&content-scaling=fixed&page-id=0%3A1)

**during ethcc**

EPF Day went great! I got lots of great feedback, mostly about talking to Peter on the geth team, since he's also working on SSZ. I'm going through his implementation at the moment to understand what he's doing to outperform other golang implementations. I'm also talking to a few folks who are interested in an optimized rust implementation.

I chatted briefly with Henry de Valence at Penumbra, and he told me that streaming the encodings into keccak might not be worthwhile, especially considering the encodings might not be that large. This ties back into one of the challenges I pointed out in my talk: the overhead of encoding + hashing in lockstep might not even pay off. I'll have to look into that.

## Serialize Profiler

I want to build a simple and pretty rust benchmarking/profiling suite that works over abstract `Encode` and `Decode` traits. These traits are common among serialization crates (both RLP and SSZ), so it makes sense to do the same here. Here's how I want it to work:

- Works with any crate that implements `Encode`/`Decode` traits.
- Can take in data from some dataset or in real time from the blockchain.
- Encodes and decodes objects for all tested implementations.
- Compares their performance and plots them against each other. It would be cool to support flamegraph diffs to see *why* they perform differently.

## High Performance SSZ Implementation

I have some ideas for how to build a highly performant implementation of SSZ in rust. If it works, that alone will be worthwhile. Maybe working on it will also yield ideas for how to improve the serialization scheme itself. These are the issues I want to tackle with my implementation:

- zero-copy
- streaming
- efficient access (ideally random/direct)
- faster little-endian decoding with SIMD
- other places vectorization can be leveraged
- lockstep encoding and absorption for a faster encode -> keccak process. this takes advantage of [sponge hash constructions](https://en.wikipedia.org/wiki/Sponge_function) to iteratively absorb the encoded input *as it's being encoded* in a single pass, finally squeezing the absorbed input once the encoding is done. this means we don't have to encode *then* hash, which requires two passes over the data.

**more on the lockstep encode + hash**

encoding then hashing is something that happens a lot on ethereum. we go through the effort of optimizing both the encoding and the hashing on their own, but i wonder if we should also be accounting for the entire process and working on what i'll call "hashing-aware" ssz implementations.
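To make the profiler idea a bit more concrete, here's a minimal sketch of the kind of harness I'm imagining, generic over abstract `Encode`/`Decode` traits. Everything here is hypothetical: the trait shapes, the `bench_roundtrip` helper, and the toy `u64` impls are placeholders standing in for whatever the real serialization crates expose, not any existing crate's API.

```rust
use std::time::Instant;

// Hypothetical abstract traits the profiler would be generic over;
// real RLP/SSZ crates expose similar ones under various names.
trait Encode {
    fn encode(&self) -> Vec<u8>;
}
trait Decode: Sized {
    fn decode(bytes: &[u8]) -> Option<Self>;
}

// Time round-tripping a dataset through one implementation.
// A real suite would use a proper benchmarking crate (warmup,
// statistics, plots); this just shows the generic shape.
fn bench_roundtrip<T: Encode + Decode>(name: &str, data: &[T]) -> f64 {
    let start = Instant::now();
    for item in data {
        let bytes = item.encode();
        let _back = T::decode(&bytes).expect("decode failed");
    }
    let secs = start.elapsed().as_secs_f64();
    println!("{name}: {secs:.6}s for {} items", data.len());
    secs
}

// Toy implementation so the harness runs end to end.
impl Encode for u64 {
    fn encode(&self) -> Vec<u8> {
        self.to_le_bytes().to_vec()
    }
}
impl Decode for u64 {
    fn decode(bytes: &[u8]) -> Option<Self> {
        Some(u64::from_le_bytes(bytes.try_into().ok()?))
    }
}

fn main() {
    let data: Vec<u64> = (0..10_000).collect();
    bench_roundtrip("u64-le", &data);
}
```

Because every implementation is driven through the same two traits, comparing crates would just mean calling `bench_roundtrip` once per implementation over the same dataset.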
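As a toy illustration of the lockstep idea, here's the two-pass encode-then-hash flow next to a single-pass encode-and-absorb flow. The `Encode` trait, the `Pair` struct, and both helper functions are made up for this sketch, and stdlib's `DefaultHasher` is only a stand-in for an incremental keccak sponge (the real thing would absorb into keccak's state and squeeze at the end).

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Hypothetical trait with both hashing styles side by side.
trait Encode {
    // Two-pass style: materialize the full encoding, then hash it.
    fn encode(&self) -> Vec<u8>;
    // Lockstep style: feed encoded bytes into the hasher as they
    // are produced, skipping the intermediate buffer entirely.
    fn encode_into<H: Hasher>(&self, h: &mut H);
}

struct Pair {
    a: u64,
    b: u64,
}

impl Encode for Pair {
    fn encode(&self) -> Vec<u8> {
        let mut out = Vec::with_capacity(16);
        out.extend_from_slice(&self.a.to_le_bytes());
        out.extend_from_slice(&self.b.to_le_bytes());
        out
    }
    fn encode_into<H: Hasher>(&self, h: &mut H) {
        // absorb each field's little-endian bytes as we encode it
        h.write(&self.a.to_le_bytes());
        h.write(&self.b.to_le_bytes());
    }
}

fn hash_two_pass(v: &impl Encode) -> u64 {
    let bytes = v.encode(); // pass 1: encode into a buffer
    let mut h = DefaultHasher::new();
    h.write(&bytes); // pass 2: hash the buffer
    h.finish()
}

fn hash_lockstep(v: &impl Encode) -> u64 {
    let mut h = DefaultHasher::new();
    v.encode_into(&mut h); // single pass: encode + absorb
    h.finish() // "squeeze" once encoding is done
}

fn main() {
    let p = Pair { a: 1, b: 2 };
    // both flows see the same byte stream, so digests agree
    assert_eq!(hash_two_pass(&p), hash_lockstep(&p));
    println!("digests match");
}
```

Whether skipping that intermediate buffer actually wins anything is exactly the open question from the Henry de Valence conversation above: for small encodings the two-pass cost may be negligible.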