# EPF6 - Week 14 Updates
## Summary
1. Review Jun's PR
2. 1:1 meeting with Mario
3. AMA with Roberto Saltini
4. Coding
## Details
### 1. Review Jun's PR
I was waiting for bitlist and bitvector support, so I gladly reviewed Jun's PR. I left only minor comments and double-checked the offset computation.
### 2. 1:1 meeting with Mario
I’m currently running a bit behind schedule: what I’ve done so far is closer to a proof of concept, and the main next step is a major refactor to leverage optimizations from fastssz and dynssz. There are no external blockers; the work depends on me building up more knowledge in Go.

Collaborating with Jun has been a great experience, and I’ve learned a lot about the Prysm client and Go libraries (such as sync, and the fastssz/dynssz hashing internals), as well as memory management and performance tricks. Beyond technical growth, this project has also helped me with project management, time management, and even lifestyle routines (like English lessons, training, and multitasking).

At this point, I mainly need dedicated time to finish, and I’ll reach out whenever I need support.
### 3. AMA with Roberto Saltini
The Fast Confirmation Rule (FCR) is a topic I really like, so I took the chance to ask Roberto how the Holesky incident would have been handled with the FCR.
Roberto shared many resources after the AMA:
- Dafny tutorial: https://dafny.org/dafny/OnlineTutorial/guide.html
- Tutorial on the FV of concurrent systems (the same principles can be applied to distr. systems): https://leino.science/papers/krml260.pdf
- Formal verification of the QBFT consensus protocol (a variant of PBFT) in Dafny:
  - Presentation: https://www.youtube.com/watch?v=qcK7TxHQAxI
  - Repo: https://github.com/ConsenSys/qbft-formal-spec-and-verification
- Formal verification of DVT:
  - Presentation: https://youtu.be/BC-McCK_dk4?si=dzBy-TAVqsnPWEXB
  - Repo: https://github.com/ConsenSys/distributed-validator-formal-specs-and-verification
### 4. Coding
The Great Refactor. During this and the previous week, I realized that to truly benefit from performance optimizations, I must rely on either **fastssz** or **dynssz**. This week I shifted focus toward their implementations. Below is a summary of key learnings and potential changes to be applied in the codebase:
---
```golang
var pool *hasher.HasherPool
if d.NoFastHash {
	pool = &hasher.DefaultHasherPool
} else {
	pool = &hasher.FastHasherPool
}
hh := pool.Get()
defer func() {
	pool.Put(hh)
}()
```
1. Pool selection (default hasher or fast hasher); **in our case, to begin with, only the default hasher**.
2. `pool.Get()` retrieves a reusable hasher instance from the pool.
3. `defer ... pool.Put(hh)` ensures the hasher is returned to the pool, even if the function returns early.
Together, these steps make for very performant code that avoids creating a new hasher object for every hash operation.
Key benefits of this pattern:
- Memory efficiency: the same hasher objects are reused instead of allocating new ones.
- Reduced garbage collector pressure: fewer allocations mean less GC overhead.
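The same pattern can be reproduced with Go's standard `sync.Pool`. Below is a minimal, self-contained sketch; the `Hasher` struct, its buffer, and `borrowAndRelease` are illustrative assumptions, not the actual fastssz/dynssz types:

```golang
package main

import (
	"fmt"
	"sync"
)

// Hasher is a stand-in for the fastssz/dynssz hasher type; the real
// implementation carries the internal hash computation state.
type Hasher struct {
	buf []byte
}

// pool mirrors the DefaultHasherPool pattern: hasher instances are
// reused across hash operations instead of being allocated per call.
var pool = sync.Pool{
	New: func() any {
		return &Hasher{buf: make([]byte, 0, 1024)}
	},
}

// borrowAndRelease gets a hasher from the pool, uses it, and returns it.
func borrowAndRelease() string {
	hh := pool.Get().(*Hasher) // reuse an existing hasher if one is free
	defer func() {
		hh.buf = hh.buf[:0] // reset state before handing it back
		pool.Put(hh)
	}()
	hh.buf = append(hh.buf, 0x01) // stand-in for the real hash work
	return "ok"
}

func main() {
	fmt.Println(borrowAndRelease())
}
```

The `defer` matters here for the same reason as in the original snippet: the hasher goes back to the pool on every exit path, including early returns on error.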
---
```golang
// - hh: The Hasher instance managing the hash computation state
// - pack: Whether to pack the value into a single tree leaf
// - idt: Indentation level for verbose logging (when enabled)
func (d *DynSsz) buildRootFromType(sourceType *TypeDescriptor, sourceValue reflect.Value, hh *hasher.Hasher, pack bool, idt int) error {
	// dereference pointers, handling nil pointers properly
	(...)
	// prioritize optimizations
	if useFastSsz {
		(...)
	}
	if !useFastSsz && useDynamicHashRoot {
		(...)
	}
	if !useFastSsz && !useDynamicHashRoot {
		(...)
	}
}
```
This code snippet shows the structure of the core recursion function:
- dereference pointers, properly handling the nil-pointer case
- prioritize optimized algorithms
In our case, we established that the problem we are about to solve takes as input:
- an **sszinfo object**
- the **SSZ-serialized bytes** of the object
We begin without optimizations, but the project could later be extended to support sszinfo + Go object, following the dynssz approach to unlock further optimizations.
---
```golang
switch sourceType.SszType {
case SszTypeWrapperType:
(...)
case SszContainerType:
(...)
case SszProgressiveContainerType:
(...)
case SszVectorType, SszBitvectorType:
(...)
case SszListType, SszProgressiveListType:
(...)
case SszBitlistType, SszProgressiveBitlistType:
(...)
case SszCompatibleUnionType:
(...)
case SszBoolType:
(...)
case SszUint8Type:
(...)
case SszUint16Type:
(...)
case SszUint32Type:
(...)
case SszUint64Type:
(...)
case SszUint128Type:
(...)
default:
return fmt.Errorf("unknown type: %v", sourceType)
}
```
This `switch/case` is where our hash tree root computation deviates most from dynssz. In our case, the data from which we compute the hash tree root is already serialized. Unmarshalling it first would be an option, but we would lose efficiency. Our target here is to:
- parse exactly the right amount of serialized data
- hash it properly
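For fixed-size basic types, "parsing the right amount" just means consuming a fixed number of bytes per element. A minimal sketch for a uint64 vector follows; `readUint64Vector` is a hypothetical helper for illustration, not part of dynssz:

```golang
package main

import (
	"encoding/binary"
	"fmt"
)

// readUint64Vector consumes exactly 8 bytes per element from the SSZ
// serialization, without unmarshalling into an intermediate Go object.
func readUint64Vector(data []byte, n int) ([]uint64, error) {
	const elemSize = 8 // uint64 is serialized as 8 little-endian bytes
	if len(data) != n*elemSize {
		return nil, fmt.Errorf("expected %d bytes, got %d", n*elemSize, len(data))
	}
	out := make([]uint64, n)
	for i := range out {
		out[i] = binary.LittleEndian.Uint64(data[i*elemSize:])
	}
	return out, nil
}

func main() {
	// serialization of [1, 2]: two 8-byte little-endian words
	data := []byte{1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0}
	vals, err := readUint64Vector(data, 2)
	fmt.Println(vals, err) // [1 2] <nil>
}
```

The length check is the important part: each case of the switch must know exactly how many bytes its type owns before recursing further.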
---
Had I known about this type wrapper earlier, I would have used it for testing.
```golang
// TypeWrapper represents a wrapper type that can provide SSZ annotations for non-struct types.
// It uses Go generics where D is a WrapperDescriptor struct that must have exactly 1 field,
// and T is the actual value type. The descriptor struct is never instantiated but provides
// type information with annotations.
```
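A hedged sketch of how such a generics-based wrapper can be shaped; the descriptor struct, its field, and the tag below are illustrative assumptions, not the dynssz definitions:

```golang
package main

import "fmt"

// IndicesDesc is a hypothetical descriptor struct: it is never
// instantiated; its single field only exists to carry SSZ annotations
// for a non-struct type such as []uint64.
type IndicesDesc struct {
	Data []uint64 `ssz-max:"2048"`
}

// TypeWrapper pairs a descriptor type D (annotations) with the actual
// value type T, so non-struct types can still carry SSZ metadata.
type TypeWrapper[D any, T any] struct {
	Data T
}

func main() {
	w := TypeWrapper[IndicesDesc, []uint64]{Data: []uint64{1, 2, 3}}
	fmt.Println(len(w.Data)) // 3
}
```

This is handy for testing because a bare `[]uint64` has nowhere to hang an `ssz-max` tag; the descriptor type supplies it at compile time without any runtime cost.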
---
- Basic vector root computation:
  - `AttestingIndices = [1,2,3,4,5]`
  - `serialization = 0x01000000000000000200000000000000030000000000000004000000000000000500000000000000`
  - Since the elements are of a basic type: `merkleize(pack(value))` if value is a basic object or a vector of basic objects.
- Composite vector root computation:
  - `merkleize([hash_tree_root(element) for element in value])` if value is a vector of composite objects or a container.
  - For each element: individually hash each field and merkleize up to get the hash tree root of the element.
  - Merkleize the vector composed of the hash tree root of each element.
### Next steps
- Continue the refactor