# Week 15 update

Hi there, I was investigating the `one_byte_more` tests for my [Progressive List PR](https://github.com/ChainSafe/ssz-z/pull/64). I focused on one test in particular, `proglist_uint16_1_max_one_byte_more`, where the decompressed data came out as `ffff` instead of `fffff`. I confirmed with Etan from Nimbus that these buggy spec test vectors were generated due to a multithreading issue. Etan also found an issue in the bool tests, where invalid values were not [detected as invalid](https://github.com/ethereum/remerkleable/pull/9/files). I am currently using `v1.6.0-alpha.4` for my spec test data; the updated spec test data without the buggy vectors will land in the release after `v1.6.0-beta.0`.

I also started working on [ProgressiveBitlist](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-7916.md). `ProgressiveBitlist` is the progressive version of the SSZ `Bitlist`. Serialization and deserialization are identical to `Bitlist[N]`: bits packed into bytes with a 1-bit terminator. The main difference is that `Bitlist[N]` requires a fixed maximum capacity `N`, and its merkleization pads up to that capacity. `ProgressiveBitlist` removes the capacity bound and uses progressive merkleization, so proofs remain stable and hashing cost scales with the actual length, not the maximum capacity.
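To make the shared wire format concrete, here is a minimal Python sketch of the bit packing with a 1-bit terminator (illustrative only; the actual implementation in ssz-z is in Zig, and the function names here are my own):

```python
def serialize_bitlist(bits: list[bool]) -> bytes:
    # Pack bits little-endian within each byte, then set a single
    # delimiter bit just past the last data bit.
    n = len(bits)
    out = bytearray(n // 8 + 1)  # always leaves room for the delimiter
    for i, bit in enumerate(bits):
        if bit:
            out[i // 8] |= 1 << (i % 8)
    out[n // 8] |= 1 << (n % 8)  # the 1-bit terminator
    return bytes(out)

def deserialize_bitlist(data: bytes) -> list[bool]:
    # The highest set bit of the last byte is the delimiter;
    # everything below it is payload.
    assert data and data[-1] != 0, "missing delimiter bit"
    n = (len(data) - 1) * 8 + data[-1].bit_length() - 1
    return [bool((data[i // 8] >> (i % 8)) & 1) for i in range(n)]
```

Because the delimiter encodes the length, no capacity `N` is needed to round-trip the bytes, which is why `ProgressiveBitlist` can reuse this format unchanged.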
In `bit_list.zig`:

```zig
pub fn hashTreeRoot(allocator: std.mem.Allocator, value: *const Type, out: *[32]u8) !void {
    const chunks = try allocator.alloc([32]u8, (chunkCount(value) + 1) / 2 * 2);
    defer allocator.free(chunks);
    @memset(chunks, [_]u8{0} ** 32);
    @memcpy(@as([]u8, @ptrCast(chunks))[0..value.data.items.len], value.data.items);
    try merkleize(@ptrCast(chunks), chunk_depth, out);
    mixInLength(value.bit_len, out);
}
```

In `progressive_bit_list.zig`:

```zig
pub fn hashTreeRoot(allocator: std.mem.Allocator, value: *const Type, out: *[32]u8) !void {
    const chunks = try allocator.alloc([32]u8, chunkCount(value));
    defer allocator.free(chunks);
    @memset(chunks, [_]u8{0} ** 32);
    @memcpy(@as([]u8, @ptrCast(chunks))[0..value.data.items.len], value.data.items);
    try progressive.merkleizeChunks(allocator, chunks, out);
    mixInLength(value.bit_len, out);
}
```

Notice that `chunk_depth` is not used in `progressive_bit_list.zig`: `ProgressiveBitList` uses progressive merkleization that scales with the actual length. This removes the padding to a fixed depth regardless of size, so only the actual chunks are processed.

# Work for next week

- Investigate the 80 spec test failures from `progressive_bit_list`
- Start implementing [Progressive Container](https://eips.ethereum.org/EIPS/eip-7495)
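As an aside, the progressive merkleization behind `progressive.merkleizeChunks` can be sketched in Python, based on my reading of EIP-7916 (illustrative only; these helper names are not the ssz-z API). Subtrees grow by 4x, so only the chunks that actually exist get hashed:

```python
import hashlib

ZERO = bytes(32)

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def merkleize(chunks: list[bytes], limit: int) -> bytes:
    # Classic SSZ binary merkleization: pad with zero chunks up to
    # `limit` (a power of two) and fold pairwise.
    padded = chunks + [ZERO] * (limit - len(chunks))
    while len(padded) > 1:
        padded = [h(padded[i], padded[i + 1]) for i in range(0, len(padded), 2)]
    return padded[0]

def merkleize_progressive(chunks: list[bytes], num_leaves: int = 1) -> bytes:
    # Subtree capacities of 1, 4, 16, ... chunks: the right child is a
    # fixed binary subtree over the first `num_leaves` chunks, the left
    # child recurses over the remaining chunks with 4x the capacity.
    rest = (
        merkleize_progressive(chunks[num_leaves:], num_leaves * 4)
        if len(chunks) > num_leaves
        else ZERO
    )
    return h(rest, merkleize(chunks[:num_leaves], num_leaves))
```

A real `hashTreeRoot` would then mix in the bit length on top of this root, as both Zig functions above do with `mixInLength`. The key property is that the recursion stops as soon as the chunks run out, so hashing cost tracks the actual size instead of a fixed `chunk_depth`.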