# Napkin Math: Communication Complexity of Decentralized Proving
*This note looks at the communication complexity required to parallelize a proof for a computation of 1 billion instruction cycles across a cluster of 1000 machines.*
*We first discuss the [communication complexity in the context of our recursive architecture on RISC Zero's Bonsai](#Parallelizing-recursive-proving), and then consider the [communication complexity of parallelizing folding](#Parallelizing-folding-based-proving). The main intention is for the author to gather his own thoughts and to check understanding; hopefully, it will also be useful to others.*
## Summary of impressions
Our initial impression is that managing the communication complexity in folding poses non-trivial challenges.
The intention here is to understand whether these challenges are real or imagined, and to understand the state of the art with respect to a horizontally scalable approach to folding. Please point out issues with assumptions here and make suggestions to improve the correctness/fairness/utility of this analysis.
It has been suggested that we can easily manage the challenges of horizontal scaling by selecting the folding witnesses to be as small as needed. Indeed, our napkin math below suggests that if we can tune down the folding witness size to 250 kB, we can match the communication complexity of a FRI-based system.
The question, then, is whether it's actually practical to tune the folding witness to be this small, and whether such small witness size results in a performant system.
The lower bound on witness size comes from the recursive verifier circuit that enforces the correctness of the folding. In Nova, this verifier circuit is ~10,000 multiplication gates. CycleFold improves this to closer to ~1,000 multiplication gates. Assuming each gate consists of three 255-bit field elements and rounding a bit for simplicity, we estimate ~750 bits per gate in the circuit. This suggests the absolute minimum size for a folding witness in Nova is 7.5 MB. CycleFold offers a 10x improvement here, at 750 kB.
![](https://hackmd.io/_uploads/SJ_gJ-J-p.png)
We encounter an engineering tradeoff as we move toward very small witnesses: the performance gains of folding are strongest when the verification-of-folding is a small proportion of the work at each step.
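As a rough illustration of this tradeoff, take the ~1,000-gate CycleFold-style verifier circuit estimated above and vary the number of application gates folded per step. The step sizes below are illustrative, not measured:

```python
# Illustrative only: how the folding-verifier overhead grows as the per-step witness shrinks.
RECURSION_GATES = 1_000  # ~CycleFold-style folding-verifier circuit (estimate from above)

for step_gates in (1_000_000, 100_000, 10_000, 1_000):
    overhead = RECURSION_GATES / (RECURSION_GATES + step_gates)
    print(f"{step_gates:>9,} application gates per step -> "
          f"{overhead:.1%} of each step is folding verification")
```

At 1,000,000 application gates per step the folding verification is ~0.1% of the work; at 1,000 gates per step it is 50%.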
The rest of this note lays out our [approach to parallelizing recursive proving on Bonsai](#Parallelizing-recursive-proving) and a [naive sketch of a folding-based analogue](#Parallelizing-folding-based-proving), in order to work through some napkin math for each approach.
## Parallelizing recursive proving
We begin by presenting the current proving architecture we're using for RISC Zero's Bonsai. [Below, we consider folding](#Parallelizing-folding-based-proving).
*Put briefly, our rough accounting of our current architecture estimates ~51 GB of network communication to prove a 1 TB witness across a network of 1000 machines, which corresponds to a zkVM computation of 1 billion cycles. To put these numbers into context, this size is typical for proving the construction of an Ethereum block.*
### Overview of the proving architecture
When we ask Bonsai to prove a large RISC-V program, we can divide the work into four categories: execution, segment proving, aggregation, and a Groth16 wrapper.
1. **Execution** - The executor runs through the full zkVM execution, identifying **segments** to prove, and then assigns each **segment** to a machine on a dynamically-sized AWS cluster. The execution happens on a single machine.
2. **Segment Proving** - Each segment prover generates the full witness for their segment, and then proves correctness of their segment. Segment proving can be fully parallelized.
3. **Proof Aggregation** - Proofs are aggregated in a binary tree using FRI-based recursion. Pairs of segment proofs are verified in a recursive prover until we reach a single STARK. The proving work for aggregation can be parallelized layer-by-layer.
4. **SNARK wrapping** - To facilitate on-chain proving, we (optionally) wrap the STARK in a Groth16 SNARK.
The details of these steps are shown in the picture below, and discussed in more detail in my [talk at zk10](https://www.youtube.com/watch?v=wkIBN2CGJdc).
![](https://hackmd.io/_uploads/HJUtistla.png)
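For intuition, here is a minimal, toy sketch of this four-stage pipeline in Python. The function names (`execute`, `prove_segment`, `join`, `wrap_groth16`) and the thread pool standing in for the AWS cluster are illustrative only, not the Bonsai API:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the real components; none of these names are the Bonsai API.
def execute(program, inputs):
    """1. Execution: a single machine runs the zkVM and emits segment snapshots."""
    return [f"segment-{i}" for i in range(8)]  # toy count; Bonsai would emit ~1000

def prove_segment(segment):
    """2. Segment proving: re-create the witness from the snapshot and produce a STARK."""
    return f"stark({segment})"

def join(left, right):
    """3. Aggregation: a recursive prover verifies two STARKs and emits a single STARK."""
    return f"join({left}, {right})"

def wrap_groth16(stark):
    """4. Optional Groth16 wrap for cheap on-chain verification."""
    return f"groth16({stark})"

def prove_program(program, inputs):
    segments = execute(program, inputs)                    # single-machine execution
    with ThreadPoolExecutor() as pool:                     # stands in for the AWS cluster
        proofs = list(pool.map(prove_segment, segments))   # fully parallel segment proving
        while len(proofs) > 1:                             # binary aggregation tree, layer by layer
            pairs = list(zip(proofs[0::2], proofs[1::2]))
            next_layer = list(pool.map(lambda p: join(*p), pairs))
            if len(proofs) % 2 == 1:                       # an odd proof carries up to the next layer
                next_layer.append(proofs[-1])
            proofs = next_layer
    return wrap_groth16(proofs[0])

print(prove_program("guest.elf", inputs=b""))
```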
### Communication complexity
Before running through the full [napkin math for network costs](#Napkin-Math-for-Network-Costs), we highlight a few key points:
1. **Witnesses are never sent over the network.**
Rather than sending full witnesses or sections of full witnesses over the network, on Bonsai we take memory snapshots every 1 million computation cycles, and distribute those snapshots to Provers, who then re-create and prove the witness data.
2. **Sending zkVM image data is the bulk of the network cost.** Re-creating witness data requires a zkVM image snapshot at a moment in time. Using our current image encoding, each of these snapshots can be up to 200 MB, depending on RAM allocation at the moment of the snapshot. We use 50 MB for the analysis that follows, which is sufficient for proving most blocks with Zeth.
3. **Costs of proof aggregation are negligible.** Of the ~51 GB of network costs in the napkin math that follows, only ~1 GB goes to returning and aggregating proofs; the remaining 50 GB is spent distributing zkVM image snapshots.
The approach here leverages succinctness at each step of the process, which massively reduces any concerns with respect to communication complexity.
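As a quick sanity check on the first point, here is the napkin arithmetic comparing shipping snapshots to shipping the full witnesses directly, using the 50 MB snapshot and ~1 GB-per-segment witness figures assumed in this note:

```python
# Napkin math for point 1: re-creating witnesses from snapshots vs. shipping them directly.
SEGMENTS = 1000
SNAPSHOT_MB = 50            # assumed zkVM image snapshot size (can be up to ~200 MB)
WITNESS_GB_PER_SEGMENT = 1  # ~1 million cycles at ~1 kB of witness per cycle

snapshots_gb = SEGMENTS * SNAPSHOT_MB / 1000
full_witnesses_gb = SEGMENTS * WITNESS_GB_PER_SEGMENT
print(f"snapshots: {snapshots_gb:.0f} GB")          # 50 GB
print(f"full witnesses: {full_witnesses_gb} GB")    # 1000 GB, i.e. 20x more traffic
```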
### Napkin Math for Network Costs
In this section, we give some quick estimates for the total communication complexity for proving a computation of 1 billion cycles.
We assume the computation is divided into 1000 segments, each consisting of 1 million computation cycles.
For simplicity, we assume the binary file being executed and the inputs to that binary are of negligible size.
- **Initial Execution generates segments.**
- The executor produces 1000 **segments**, each of size ~50MB. The **segment** includes the full state of the machine: all register values and an encoding of memory.
- **Segments are assigned to provers.**
- 1000 segments x 50 MB = 50 GB over the network.
    - The segment prover uses the 50 MB segment data (the state of the machine registers and of RAM at a given moment in time) to reconstruct the ~1 GB witness, and then runs a STARK prover on that witness.
- **Segment provers return proofs.**
    - 1000 STARKs x 250 kB = 250 MB over the network
- **Pairs of STARKs are assigned for recursion**
- In the first layer, 1000 STARKs are sent out; 500 are returned. In the next layer, 500 STARKs are sent out; 250 are returned. ...
    - This totals to ~2000 STARKs assigned and ~1000 STARKs returned during the proof aggregation process. Assuming ~250 kB per STARK, this amounts to 750 MB over the network. We'll call it 1 GB for simplicity.
- **STARK-to-SNARK Translation**
- We treat this step as negligible for the purposes of communication complexity.
*TODO: 250 kB is just the STARK... How much extra overhead is there from receipt metadata? I guess this is totally application-dependent?*
In total, proving a 1-billion-cycle computation on a cluster of 1000 machines involves a communication complexity of ~51 GB. Note that nearly all of that (50 of the 51 GB) comes from assigning the initial tasks to the segment provers. After the initial segments are assigned, we're able to manage the entire process of organizing workers and aggregating work within a single gigabyte of communication.
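For concreteness, the arithmetic above can be packed into a short script; all sizes are the assumptions stated in this section (1000 segments, 50 MB snapshots, ~250 kB STARKs):

```python
# Napkin math for the FRI-based architecture described above.
SEGMENTS = 1000
SNAPSHOT_MB = 50   # zkVM image snapshot sent to each segment prover
STARK_KB = 250     # assumed size of one recursion-friendly STARK

distribute_snapshots_gb = SEGMENTS * SNAPSHOT_MB / 1000   # 50 GB
segment_proofs_gb = SEGMENTS * STARK_KB / 1e6             # 0.25 GB returned by segment provers

# Aggregation tree: each layer sends out the current proofs and gets back roughly half as many.
sent, returned, layer = 0, 0, SEGMENTS
while layer > 1:
    sent += layer
    layer = (layer + 1) // 2
    returned += layer
aggregation_gb = (sent + returned) * STARK_KB / 1e6       # ~2000 sent + ~1000 returned => ~0.75 GB

total_gb = distribute_snapshots_gb + segment_proofs_gb + aggregation_gb
print(f"snapshots {distribute_snapshots_gb:.0f} GB + segment proofs {segment_proofs_gb:.2f} GB "
      f"+ aggregation {aggregation_gb:.2f} GB = {total_gb:.1f} GB")
# snapshots 50 GB + segment proofs 0.25 GB + aggregation 0.75 GB = 51.0 GB
```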
## Parallelizing folding-based proving
We sketch a simplistic scheme for parallelized folding. For a more detailed scheme, refer to [Paranova](https://zkresear.ch/t/parallelizing-nova-visualizations-and-mental-models-behind-paranova/198). The simplistic scheme described here should be sufficient for napkin math.
[As above](#Napkin-Math-for-Network-Costs), we consider a 1 billion cycle computation, where each row of the witness is 1 kB. The total witness size here is 1 TB.
We begin as above, splitting the computation into **segments** that correspond to 1 million computation cycles (1 GB witness size). As above, we construct a memory snapshot (50MB) for each segment, and we distribute one snapshot to each machine in our cluster. Each machine uses the memory snapshot to re-construct the 1 GB witness.
Now, each machine needs to do something with their 1 GB witness, and send something back over the network. Let's say each machine splits their 1 GB witness into 1000 smaller witnesses and uses folding to combine them. After this work, each machine is now holding a folded witness (1 MB) for the 1 million cycles they were asked to prove.
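Schematically, the per-machine work might look like the sketch below; `fold` is a hypothetical placeholder for a Nova/CycleFold folding step, not a real API:

```python
# Per-machine sketch: split the segment's 1 GB witness into ~1 MB chunks and fold them
# into a single running accumulator. `fold` is a placeholder, not a real folding API.
def fold(accumulator, chunk):
    return f"fold({accumulator},{chunk})"  # stands in for the actual folding operation

chunks = [f"chunk-{i:04d}" for i in range(1000)]  # ~1 MB of witness data each
accumulator = chunks[0]
for chunk in chunks[1:]:
    accumulator = fold(accumulator, chunk)
# The machine now holds one folded witness (~1 MB) for its 1 million cycles.
```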
Now, each machine returns its folded witness over the network; the communication complexity is manageable because the folded witnesses are only 1 MB.
Now, our task is to aggregate 1000 witnesses, each of size 1 MB. Since we are sending the witnesses over the network at this stage, we don't need to continue distributing the memory images.
The communication complexity here is only marginally different from that of aggregating 1000 STARKs, and matching it appears to be within reach by tuning some parameters in the design above.
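Under the same assumptions as the FRI-based napkin math (1000 segments, 50 MB snapshots, and an aggregation tree of the same shape, which is itself an assumption), the totals work out as follows for two choices of folded-witness size:

```python
# Folding napkin math, parameterized by the folded-witness size.
SEGMENTS = 1000
SNAPSHOT_MB = 50

def total_gb(folded_witness_kb):
    distribute = SEGMENTS * SNAPSHOT_MB / 1000           # 50 GB of zkVM image snapshots, as before
    first_returns = SEGMENTS * folded_witness_kb / 1e6   # each machine sends back one folded witness
    # Assume a binary aggregation tree of the same shape as the STARK case (~2000 sent, ~1000 returned).
    sent, returned, layer = 0, 0, SEGMENTS
    while layer > 1:
        sent += layer
        layer = (layer + 1) // 2
        returned += layer
    aggregation = (sent + returned) * folded_witness_kb / 1e6
    return distribute + first_returns + aggregation

print(f"1 MB folded witnesses:   {total_gb(1000):.1f} GB")  # ~54 GB
print(f"250 kB folded witnesses: {total_gb(250):.1f} GB")   # ~51 GB, matching the FRI-based total
```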
We conclude that the communication complexity of distributing a folding can match that of a FRI-based system if you can tune the witness size to match the size of the STARK. For some discussion of whether such small folding witnesses are possible/practical, see the [Summary of impressions](#Summary-of-impressions) at the top of this note.
## References
- Twitter discussion that prompted this note
- [Paranova](https://zkresear.ch/t/parallelizing-nova-visualizations-and-mental-models-behind-paranova/198)
- CycleFold
- [My zk10 talk](https://www.youtube.com/watch?v=wkIBN2CGJdc) & slides
## Contact
For corrections, questions, and comments, contact paul@risczero.com.