# EPF5 Week 18 Update

- Debugged the encoding derive macro, it works now!
- I need to go back and rework my decoding implementation since I messed up variable-size type decoding. That's coming next week.
- Did some preliminary tests on list encoding and found a ~15% speed increase on my implementation vs lighthouse's.

---

# First Look at Encoding Speeds

Unfortunately I couldn't get decoding to fully work this week, but I wanted to validate any improvements sooner rather than later, so I devised a simple test to see if we're encoding faster than lighthouse does. We instantiate and then encode a list of max length `C` with `N` elements, where:

```rust
type C = typenum::U1099511627776;
const N: u64 = 1_000_000;
```

This is how lighthouse's encoding performs:

![Screenshot 2024-10-14 at 12.09.05 AM](https://hackmd.io/_uploads/BkTwtz9yke.png)

And this is how sszb (my implementation) stacks up naively:

![Screenshot 2024-10-14 at 12.12.42 AM](https://hackmd.io/_uploads/HJBrcfckJg.png)

The naive method allocates (and reallocates) a `Vec` buffer *inside* the encoding method, which means part of the time it takes to allocate the `Vec` is counted toward the total encoding time. Not to mention that a `Vec` gets passed around on each encoding call. We can do better by *preallocating* a big enough `Vec` and passing around a mutable *slice* into it, like so:

```rust
let len = list.ssz_bytes_len();
let mut buf: Vec<u8> = vec![0u8; len];
list.ssz_write(&mut buf.as_mut_slice());
```

This lets us shave a whole millisecond off lighthouse's encoding time, or roughly **15%**!

![Screenshot 2024-10-14 at 12.18.10 AM](https://hackmd.io/_uploads/rypFizc11x.png)

---

I'm cutting it close with my fellowship project. Once I finish the decoding portion of my implementation, I'll finally be able to run some proper tests. For the sake of time, I'll be omitting the merkleization part: my main concern with this project is showing I can improve on encoding/decoding, and merkleization isn't central to that.
Beyond the basic improvements I want to show, I want to try pushing further with optimizations that let the compiler elide some work (like bounds checks). If I have time left over after that and after preparing my presentation, I'll tackle merkleization.
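To illustrate the kind of bounds-check elision I have in mind, here's a minimal sketch (not sszb's actual code; the `encode_indexed`/`encode_chunked` names are made up for this example). Both functions serialize a slice of `u64` values into a preallocated buffer. The first indexes into the buffer with computed offsets, where each slice operation can carry a bounds check; the second walks the buffer with the standard library's `chunks_exact_mut`, which hands the optimizer a fixed-size window per element and lets it drop the per-write checks:

```rust
/// Naive variant: index into `buf` with computed offsets.
/// Each `buf[start..start + 8]` can incur a bounds check.
fn encode_indexed(values: &[u64], buf: &mut [u8]) {
    for (i, v) in values.iter().enumerate() {
        let start = i * 8;
        buf[start..start + 8].copy_from_slice(&v.to_le_bytes());
    }
}

/// Chunked variant: `chunks_exact_mut(8)` yields exactly-8-byte windows,
/// so the compiler can prove each `copy_from_slice` is in bounds.
fn encode_chunked(values: &[u64], buf: &mut [u8]) {
    for (chunk, v) in buf.chunks_exact_mut(8).zip(values) {
        chunk.copy_from_slice(&v.to_le_bytes());
    }
}

fn main() {
    let values: Vec<u64> = (0..4).collect();
    let mut a = vec![0u8; values.len() * 8];
    let mut b = vec![0u8; values.len() * 8];
    encode_indexed(&values, &mut a);
    encode_chunked(&values, &mut b);
    // Both variants produce identical little-endian output.
    assert_eq!(a, b);
}
```

Whether the check actually disappears depends on what the optimizer can prove, so this kind of change needs to be confirmed in the benchmarks rather than assumed.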