This note looks at the communication complexity required to parallelize a proof for a computation of 1 billion instruction cycles across a cluster of 1000 machines.
We first discuss the communication complexity in the context of our recursive architecture on RISC Zero's Bonsai, and then consider the communication complexity of parallelizing folding. The main intention is for the author to gather his own thoughts and check his understanding; hopefully, it will also be useful to others.
Our initial impression is that managing the communication complexity in folding poses a non-trivial challenge.
The intention here is to understand whether these challenges are real or imagined, and to understand the state of the art with respect to a horizontally scalable approach to folding. Please point out issues with assumptions here and make suggestions to improve the correctness/fairness/utility of this analysis.
It has been suggested that we can easily manage the challenges of horizontal scaling by selecting the folding witnesses to be as small as needed. Indeed, our napkin math below suggests that if we can tune down the folding witness size to 250 kB, we can match the communication complexity of a FRI-based system.
The question, then, is whether it's actually practical to tune the folding witness to be this small, and whether such a small witness size results in a performant system.
The lower bound on witness size comes from the recursive verifier circuit that enforces the correctness of the folding. In Nova, this verifier circuit is ~10,000 multiplication gates. CycleFold improves this to closer to ~1,000 multiplication gates. Assuming each gate consists of three 255-bit field elements and rounding a bit for simplicity, we estimate ~750 bits (~100 bytes) per gate in the circuit. This puts the absolute minimum size for a folding witness in Nova at ~7.5 Mb, i.e., roughly 1 MB. CycleFold offers a 10x improvement here, at roughly 100 kB.
We encounter an engineering tradeoff as we move toward very small witnesses: the performance gains of folding are strongest when the verification-of-folding is a small proportion of the work at each step.
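To make that tension concrete, here is a small napkin-math sketch in Python. It uses the estimates above (a CycleFold-style verifier circuit of ~1,000 gates, ~three 255-bit field elements per gate); the application gate counts are made-up examples for illustration, not measurements of any real circuit.

```python
# Napkin math: per-step witness size vs. folding overhead.
# Assumptions (from the estimates above): ~1,000-gate CycleFold-style
# verifier circuit, ~3 x 255-bit field elements (~96 bytes) per gate.
VERIFIER_GATES = 1_000
BYTES_PER_GATE = 3 * 255 / 8      # ~96 bytes

# Lower bound: the witness must at least cover the verifier circuit.
print(f"min witness ≈ {VERIFIER_GATES * BYTES_PER_GATE / 1e3:.0f} kB")

# Tradeoff: shrinking the per-step witness means shrinking the application
# logic per step, which raises the share of work spent verifying the fold.
for app_gates in (1_000, 10_000, 100_000):   # hypothetical step sizes
    total_gates = VERIFIER_GATES + app_gates
    witness_kb = total_gates * BYTES_PER_GATE / 1e3
    overhead = VERIFIER_GATES / total_gates
    print(f"app gates {app_gates:>7}: witness ≈ {witness_kb:>6.0f} kB, "
          f"folding overhead ≈ {overhead:.0%}")
```

Under these rough assumptions, keeping the per-step witness near 250 kB forces the verification-of-folding circuit to be on the order of half of each step, which is exactly the regime where folding's performance advantage erodes.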
The rest of this note describes our approach to parallelizing recursive proving on Bonsai, along with a naive sketch of a folding-based analogue, in order to lay out some napkin math for each approach.
We begin by presenting the current proving architecture we're using for RISC Zero's Bonsai. Below, we consider folding.
Put briefly, our rough accounting of our current architecture estimates a network communication of 51 GB in order to use a cluster of 1000 machines to prove a 1 TB witness, which corresponds to a zkVM computation of 1 billion computation cycles. To put these numbers into context, a computation of this size is roughly what's required to prove the construction of an Ethereum block.
When we ask Bonsai to prove a large RISC-V program, we can divide the work into four categories: execution, segment proving, aggregation, and a Groth16 wrapper.
The details of these steps are shown in the picture below, and discussed in more detail in my talk at zk10.
Before running through the full napkin math for network costs, we highlight a few key points:
The approach here leverages succinctness at each step of the process, which greatly reduces concerns about communication complexity.
In this section, we give some quick estimates for the total communication complexity for proving a computation of 1 billion cycles.
We assume the computation is divided into 1000 segments, each consisting of 1 million computation cycles.
For simplicity, we assume the binary file being executed and the inputs to that binary are of negligible size.
TODO: 250 kB is just the STARK… How much extra overhead is there from receipt metadata? I guess this is totally application-dependent?
In total, proving a 1-billion-cycle computation on a cluster of 1000 machines involves a communication complexity of roughly 51 GB. Note that over 95% of the communication complexity here is assigning the initial tasks to the segment provers. After the initial segments are assigned, we're able to manage the entire process of organizing workers and aggregating work within a single gigabyte of communication.
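As a sanity check, here is a minimal napkin-math sketch in Python. The 50 MB memory snapshot and 250 kB proof sizes are taken from elsewhere in this note; the binary aggregation tree and its per-node proof size are assumptions of this sketch, not a description of Bonsai's actual scheduler.

```python
# Napkin math for the FRI/STARK-based architecture on 1000 machines.
SEGMENTS = 1000
SNAPSHOT_BYTES = 50 * 10**6    # memory snapshot distributed per segment
PROOF_BYTES = 250 * 10**3      # succinct proof returned per segment

# 1. Distribute one memory snapshot to each segment prover.
distribute = SEGMENTS * SNAPSHOT_BYTES            # 50 GB

# 2. Each prover returns one succinct proof.
collect = SEGMENTS * PROOF_BYTES                  # 0.25 GB

# 3. Aggregate proofs pairwise; roughly one proof sent per internal node.
aggregate = (SEGMENTS - 1) * PROOF_BYTES          # ~0.25 GB

total = distribute + collect + aggregate
print(f"distribute ≈ {distribute / 1e9:.1f} GB")
print(f"collect + aggregate ≈ {(collect + aggregate) / 1e9:.1f} GB")
print(f"total ≈ {total / 1e9:.1f} GB")
```

This roughly reproduces the ~51 GB figure above; the remainder is per-message overhead (e.g., the receipt metadata flagged in the TODO above) that this sketch ignores.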
We sketch a simplistic scheme for parallelized folding. For a more detailed scheme, refer to Paranova. The simplistic scheme described here should be sufficient for napkin math.
As above, we consider a 1 billion cycle computation, where each row of the witness is 1 kB. The total witness size here is 1 TB.
We begin as above, splitting the computation into segments that correspond to 1 million computation cycles (1 GB of witness each). As above, we construct a memory snapshot (50 MB) for each segment, and we distribute one snapshot to each machine in our cluster. Each machine uses its memory snapshot to reconstruct the 1 GB witness.
Now, each machine needs to do something with its 1 GB witness, and send something back over the network. Let's say each machine splits its 1 GB witness into 1000 smaller witnesses and uses folding to combine them. After this work, each machine is holding a folded witness (1 MB) for the 1 million cycles it was asked to prove.
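The key property for communication is that the folded accumulator stays roughly the size of a single sub-witness, no matter how many sub-witnesses are folded into it. A schematic sketch of the per-machine work (sizes only; `fold` here is a stand-in for a real Nova/CycleFold-style folding step, not an implementation of one):

```python
# Per-machine work in the simplistic scheme: fold 1000 sub-witnesses of ~1 MB
# into a single accumulator. We track only sizes, since communication is the
# concern here; the cryptography is out of scope.
SUB_WITNESSES = 1000
SUB_WITNESS_BYTES = 10**6

def fold(acc_bytes: int, witness_bytes: int) -> int:
    # A folded accumulator stays roughly the size of one sub-witness.
    return max(acc_bytes, witness_bytes)

acc = 0
for _ in range(SUB_WITNESSES):
    acc = fold(acc, SUB_WITNESS_BYTES)

print(f"each machine sends ≈ {acc / 1e6:.0f} MB back over the network")
```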
Now, each machine returns its folded witness over the network; the communication cost is manageable because the folded witnesses are only 1 MB each.
Now, our task is to aggregate 1000 witnesses, each of size 1 MB. Since we are sending the witnesses over the network at this stage, we don't need to continue distributing the memory images.
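Putting the pieces together, here is a rough total for this folding scheme, under the same hedged assumptions as before (50 MB snapshots, 1 MB folded witnesses, and a pairwise aggregation tree that sends one folded witness per fold):

```python
# Napkin math for the simplistic parallel-folding scheme on 1000 machines.
SEGMENTS = 1000
SNAPSHOT_BYTES = 50 * 10**6        # memory snapshot distributed per segment
FOLDED_WITNESS_BYTES = 1 * 10**6   # folded witness returned per segment

distribute = SEGMENTS * SNAPSHOT_BYTES             # 50 GB, same as before
collect = SEGMENTS * FOLDED_WITNESS_BYTES          # 1 GB
aggregate = (SEGMENTS - 1) * FOLDED_WITNESS_BYTES  # ~1 GB for the folding tree

total = distribute + collect + aggregate
print(f"total ≈ {total / 1e9:.0f} GB (vs. ~51 GB for the FRI-based sketch)")
```

Shrinking the 1 MB folded witness toward the 250 kB proof size would close most of the remaining gap, which is exactly the tuning question raised in the summary above.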
The communication complexity here is only marginally different from that of aggregating 1000 STARKs, and closing the gap appears to be within reach by tuning some parameters in the design above.
We conclude that the communication complexity of a distributed folding scheme can match that of a FRI-based system if the folded witness size can be tuned to match the size of a STARK. For some discussion of whether such small folding witnesses are possible/practical, see the Summary of impressions at the top of this note.
Twitter discussion that prompted this note
Paranova
CycleFold
My zk10 talk & slides
For corrections, questions, and comments, contact paul@risczero.com.