# EPF Week 1 Update
This week I dived deep into the current zkVM landscape and began laying the foundations for benchmarking and integration.
## Exploration and Research
I started by exploring the existing SP1 and RISC Zero implementations that Grandine has been experimenting with for Beacon Chain snarkification. However, after examining the codebase I could not find either implementation, so I suspect they are kept in private repositories. I spent considerable time understanding how each system handles serialization and deserialization of beacon state objects, as this appears to be a critical performance bottleneck.
One interesting observation was how differently each zkVM handles memory constraints during proof generation. SP1 seems more forgiving with larger state objects, while RISC Zero requires more careful memory management. I started implementing a basic benchmarking harness to quantify these differences.
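The core of such a harness is just a generic timer wrapped around an arbitrary proving workload. A minimal sketch (`time_it` is my own helper name, not part of any zkVM SDK):

```rust
use std::time::{Duration, Instant};

/// Time an arbitrary closure and return its result together with the elapsed
/// wall-clock time. The zkVM-specific proving calls plug into `f` later.
fn time_it<T>(f: impl FnOnce() -> T) -> (T, Duration) {
    let start = Instant::now();
    let out = f();
    (out, start.elapsed())
}

fn main() {
    // Stand-in for a proving workload: sum a large range.
    let (sum, elapsed) = time_it(|| (0u64..1_000_000).sum::<u64>());
    println!("sum = {}, took {:?}", sum, elapsed);
}
```

Keeping the timer generic means the same harness can wrap SP1, RISC Zero, or any other backend without duplicating measurement logic.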
The following struct captures the key parameters that drive computational complexity in beacon chain state transitions, namely `validator_count`, `slot`, and `epoch`:
```rust
use serde::{Deserialize, Serialize};

#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct BeaconStateBenchmark {
    /// Number of validators in the state; the dominant cost driver.
    pub validator_count: usize,
    pub slot: u64,
    pub epoch: u64,
}
```
The test suite uses progressive complexity levels to identify performance cliff points where zkVMs might struggle.
```rust
pub struct ZkVMBenchmarkSuite {
    pub test_cases: Vec<BeaconStateBenchmark>,
}
```
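One way to generate those progressive complexity levels is a constructor that doubles the validator count at each step. The growth schedule below is my own placeholder, not Grandine's; the structs are re-declared so the snippet compiles standalone:

```rust
pub struct BeaconStateBenchmark {
    pub validator_count: usize,
    pub slot: u64,
    pub epoch: u64,
}

pub struct ZkVMBenchmarkSuite {
    pub test_cases: Vec<BeaconStateBenchmark>,
}

impl ZkVMBenchmarkSuite {
    /// Build cases starting at `start` validators, doubling `steps` times.
    pub fn progressive(start: usize, steps: u32) -> Self {
        let test_cases = (0..steps)
            .map(|i| {
                let slot = 1000 + i as u64;
                BeaconStateBenchmark {
                    validator_count: start << i, // doubles each step
                    slot,
                    epoch: slot / 32, // 32 slots per epoch on mainnet
                }
            })
            .collect();
        Self { test_cases }
    }
}

fn main() {
    let suite = ZkVMBenchmarkSuite::progressive(1_000, 5);
    for c in &suite.test_cases {
        println!("{} validators at slot {}", c.validator_count, c.slot);
    }
}
```

Doubling keeps the number of runs small while still sweeping a wide enough range to expose cliff points.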
This method captures end-to-end proving performance by timing the full workflow from input serialization to proof generation. `setup()` generates the proving and verifying keys, while `prove()` performs the actual SNARK computation.
```rust
use sp1_sdk::{ProverClient, SP1Stdin};

/// Minimal result record; proving time and proof size are the key metrics for now.
pub struct BenchmarkResult {
    pub proving_time: std::time::Duration,
    pub proof_size: usize,
    pub zkvm_type: String,
}

impl ZkVMBenchmarkSuite {
    pub fn new() -> Self {
        Self {
            test_cases: vec![
                // EXAMPLE test case: BeaconStateBenchmark { validator_count: 10000, slot: 1000, epoch: 31 },
            ],
        }
    }

    pub async fn benchmark_sp1(&self, test_case: &BeaconStateBenchmark) -> BenchmarkResult {
        let start_time = std::time::Instant::now();
        let client = ProverClient::new();
        let mut stdin = SP1Stdin::new();
        stdin.write(&test_case);
        // Keys may be cached across runs, assuming the circuit doesn't change.
        let (pk, _vk) = client.setup(BEACON_STATE_TRANSITION_ELF);
        let proof = client.prove(&pk, stdin).run().unwrap();
        BenchmarkResult {
            proving_time: start_time.elapsed(),
            proof_size: proof.bytes().len(),
            zkvm_type: "SP1".to_string(),
        }
    }
}
```
I also began researching OpenVM and Zisk. OpenVM's continuation support is particularly interesting because it could allow us to handle much larger validator sets by breaking the computation into smaller, manageable chunks. So far I have been working through their documentation to understand the overall architecture.
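To build intuition for the continuation idea, here is a toy sketch of my own (not OpenVM's API): a per-validator computation is split into fixed-size segments, with a running state carried between them the way a continuation proof carries the VM's state commitment between provable segments:

```rust
/// Toy model of continuation-style chunking: process validators in fixed-size
/// segments, threading a running state (here just a checksum) between chunks.
/// Each loop iteration stands in for one independently provable segment.
fn process_in_chunks(validators: &[u64], chunk_size: usize) -> (u64, usize) {
    let mut state = 0u64;
    let mut segments = 0;
    for chunk in validators.chunks(chunk_size) {
        state = chunk.iter().fold(state, |acc, v| acc.wrapping_add(*v));
        segments += 1;
    }
    (state, segments)
}

fn main() {
    let validators: Vec<u64> = (0..1_000_000).collect();
    let (state, segments) = process_in_chunks(&validators, 65_536);
    println!("{} segments, final state {}", segments, state);
}
```

The appeal for mainnet-scale state is that peak memory is bounded by the segment size rather than by the full validator set.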
## Resources
In the past week, I went through the following videos, codebases, and documentation:
- [CL Intro](https://www.youtube.com/live/FqKjWYt6yWk?si=Qx7Fw0Unt7Q2hKBh)
- [Research and Overview](https://www.youtube.com/live/UClaoL12W00?si=sFuSpoQEB1GZTm_Z)
- [Gasper](https://youtu.be/cOivWPEBEMo?si=5fzQWk8AQ3diQt2h)
- [RISC Zero Documentation](https://dev.risczero.com/api/zkvm/)
- [SP1 Documentation](https://docs.succinct.xyz/docs/sp1/what-is-a-zkvm)
- [Grandine Consensus Client Repository](https://github.com/grandinetech/grandine)
## Challenges
The biggest challenge this week was understanding the nuances of how each zkVM handles Rust's standard library support and syscalls. Some operations that work in SP1 require different approaches in RISC Zero, and newer systems like OpenVM and Zisk have their own quirks. I'm sketching a compatibility matrix to track these differences, which should help with the optimization work in the coming weeks.
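As a starting point, the matrix can be a simple map from a (zkVM, feature) pair to a support status. The axes below are the ones I plan to track; every status starts as `Unknown` and gets upgraded only once benchmarking confirms the actual behavior (none of these are verified results yet):

```rust
use std::collections::HashMap;

/// Support status for a (zkVM, feature) pair.
#[allow(dead_code)]
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Support {
    Full,
    Partial,
    Unknown,
}

/// Build the initial matrix with every cell marked Unknown; cells get filled
/// in as testing confirms how each zkVM actually behaves.
pub fn build_matrix() -> HashMap<(&'static str, &'static str), Support> {
    let mut matrix = HashMap::new();
    for vm in ["SP1", "RISC Zero", "OpenVM", "Zisk"] {
        for feature in ["std", "syscalls", "continuations"] {
            matrix.insert((vm, feature), Support::Unknown);
        }
    }
    matrix
}

fn main() {
    let matrix = build_matrix();
    println!("{} cells to fill in", matrix.len());
}
```

Starting from `Unknown` everywhere keeps the matrix honest: a cell only claims support after a test actually demonstrates it.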
## Conclusion
Next week, I plan to complete the basic benchmarking suite and start collecting initial performance data across different validator set sizes. I am also going to dive deeper into the continuation mechanisms in OpenVM and Zisk to understand how we might use them for handling Ethereum mainnet-scale validator sets. The goal is to have a clear performance baseline established so I can start meaningful optimization work in the upcoming weeks.