By Binyi Chen
SumCheck [LFKN92] is a classical protocol that lets a prover convince a verifier that the evaluations of a multivariate polynomial on the boolean hypercube sum to a specific value, without the verifier having to redo the evaluations. It has many applications in industry and is a crucial building block in the recent HyperPlonk SNARK, which eliminates the use of large FFTs. HyperPlonk is designed as an efficient SNARK for proving complex statements (e.g., circuits with more than \(2^{30}\) gates), which motivates studying hardware acceleration of the SumCheck proving algorithm. Recently, Shlomovits published an excellent post that analyzes SumCheck from a hardware-optimization perspective and encourages more research. In this post, we propose a few new techniques for hardware-accelerating SumCheck. The authors are not hardware experts, but we hope the ideas can inspire experts to further improve the hardware-friendliness of SumCheck.
We briefly review the SumCheck protocol and refer to HyperPlonk and Shlomovits’ post for more background.
Recall that the goal is to prove \(\sum_{b \in \{0,1\}^\mu} f(b) = s\) for a (committed) polynomial \(f \in \mathbb{F}^{\le d}[X_1,\dots,X_\mu]\) with \(\mu\) variables, where each variable has degree \(d=O(1)\). For brevity, we set \(d=1\) in what follows; the protocol extends easily to \(d>1\).
The protocol runs in \(\mu\) rounds, binding the variables one by one from \(X_\mu\) down to \(X_1\). In the round for variable \(X_i\) \((i = \mu, \mu-1, \dots, 1)\),
The prover sends the univariate round polynomial \(r_i(X) := \sum_{\vec{b} \in \{0,1\}^{i-1}} f(\vec{b}, X, \alpha_{i+1}, \dots, \alpha_\mu)\).
The verifier checks that \(r_i(0) + r_i(1)\) equals the current claim (which is \(s\) in the first round and \(r_{i+1}(\alpha_{i+1})\) thereafter), samples a random challenge \(\alpha_i\), sends it to the prover, and updates the claim to \(r_i(\alpha_i)\).
Finally, the problem is reduced to checking \(f(\alpha_{1}, \dots, \alpha_\mu) = r_{1}(\alpha_{1})\), for which the prover provides a polynomial commitment opening for the evaluation \(f(\alpha_{1}, \dots, \alpha_\mu)\). The protocol can be made non-interactive via the classical Fiat-Shamir transform, where the challenges are obtained by hashing the transcript.
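To make the protocol concrete, here is a minimal Python sketch of the prover and verifier for the multilinear case (\(d=1\)), binding the last variable first as above. The prime modulus, the SHA-256-based challenge derivation, and the direct final evaluation (which stands in for the polynomial commitment opening) are illustrative choices, not the ones used by HyperPlonk.

```python
import hashlib

P = 2**61 - 1  # illustrative prime modulus; a real system would use a SNARK-friendly field


def challenge(transcript: bytes) -> int:
    """Fiat-Shamir: derive a challenge scalar by hashing the transcript."""
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % P


def prove(evals, s):
    """evals[j] = f(b_1, ..., b_mu), where j has binary digits (b_1, ..., b_mu), b_mu least significant."""
    transcript = s.to_bytes(32, "big")
    rounds = []
    while len(evals) > 1:
        # Round polynomial: r(0) sums the entries whose current last variable is 0, r(1) those where it is 1.
        r0, r1 = sum(evals[0::2]) % P, sum(evals[1::2]) % P
        transcript += r0.to_bytes(32, "big") + r1.to_bytes(32, "big")
        alpha = challenge(transcript)
        # Linear folding: f(..., alpha) = (1 - alpha) * f(..., 0) + alpha * f(..., 1).
        evals = [((1 - alpha) * evals[2 * j] + alpha * evals[2 * j + 1]) % P
                 for j in range(len(evals) // 2)]
        rounds.append((r0, r1))
    return rounds, evals[0]  # f at the challenge point; opened via the PCS in a real system


def verify(rounds, s, final_eval):
    transcript, claim = s.to_bytes(32, "big"), s
    for r0, r1 in rounds:
        assert (r0 + r1) % P == claim                 # consistency with the previous claim
        transcript += r0.to_bytes(32, "big") + r1.to_bytes(32, "big")
        alpha = challenge(transcript)
        claim = ((1 - alpha) * r0 + alpha * r1) % P   # the next claim is r(alpha)
    return claim == final_eval                        # stands in for checking the PCS opening


evals = [3, 1, 4, 1, 5, 9, 2, 6]                      # f on the cube {0,1}^3
s = sum(evals) % P
rounds, final_eval = prove(list(evals), s)
assert verify(rounds, s, final_eval)
```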
As described in Shlomovits's post, the naive SumCheck proving algorithm is not hardware-friendly: in every round, the prover reads and writes a large evaluation table from main memory.
We resolve the issue by observing that the SumCheck computation is highly local. Suppose there are multiple chips; each chip only needs to read a small portion of the evaluation table at the beginning. After that, it can derive a small digest from its partial table and send the digest back to obtain the next round challenge. After receiving the challenge, it can reuse the values in its local memory to quickly update its partial table for the next round. Note that there is no need to read/write large tables beyond reading the input in the initial round.
In this section, we describe a hardware-friendly algorithm for SumCheck. We will later adapt it to enable fast proving even when the total memory across all chips is small.
We first set up a few parameters. Suppose the hardware connects \(L=2^{\ell}\) chips with total memory size \(O(N)\) (where \(N=2^\mu\)); that is, each chip has memory size \(O(N/L)\) (e.g., SRAM with capacity \(N/L\)). Moreover, the hardware embeds a gadget with constant memory size that can be used to compute hashes.
Remark: Sometimes, it can be expensive to embed a hash gadget in the hardware (e.g., a SHA-256 gadget accounts for a large chip area in ASICs). There are two options for resolving this issue: (i) we can use a more hardware-friendly hashing scheme (e.g., Poseidon) with a small chip-area cost; or (ii) we can let the CPU compute the hashes directly; the extra information read/written from the CPU's memory per round is only a challenge scalar and a constant-degree univariate polynomial (or its commitment), rather than a large evaluation table.
Next, we describe the algorithm. The workflow is also illustrated in the Figure below.
The \(i\)th \((0\le i<L)\) chip reads and stores the table of evaluations \(\{f(\langle i \rangle_{\ell},\vec{b})\}_{\vec{b} \in \{0,1\}^{\mu-\ell}}\), where \(\langle i\rangle_{\ell}\) denotes the \(\ell\)-bit binary representation of \(i\). Note that the table has size \(N/L\).
The \(i\)th \((0\le i<L)\) chip also accumulates the sum of the evaluations it stores. In particular, it computes \(r_{\mu, i}(0) := \sum_{\vec{b} \in \{0,1\}^{\mu-\ell-1}} f(\langle i\rangle, \vec{b}, 0)\) and \(r_{\mu, i}(1) := \sum_{\vec{b} \in \{0,1\}^{\mu-\ell-1}} f(\langle i\rangle, \vec{b}, 1)\); it then sends \(r_{\mu, i}(0)\) and \(r_{\mu, i}(1)\) to the hash gadget.
The hash gadget accumulates the r-values it receives from the chips. After receiving all the r-values, the hash gadget obtains the univariate round polynomial \(r_\mu(X)\) (represented by the two values \(r_\mu(0):=\sum_{i=0}^{L-1}r_{\mu,i}(0)\) and \(r_\mu(1):=\sum_{i=0}^{L-1}r_{\mu,i}(1)\)). The gadget injects the two values (or their univariate commitment, depending on the instantiation) into the sponge hash state and obtains a challenge scalar \(\alpha_\mu\). It then sends \(\alpha_\mu\) back to all of the \(L\) chips.
The \(i\)th \((0\le i<L)\) chip, after receiving \(\alpha_\mu\), updates the table of evaluations from \(\{f(\langle i \rangle_{\ell},\vec{b})\}_{\vec{b} \in \{0,1\}^{\mu-\ell}}\) to \(\{f(\langle i \rangle_{\ell},\vec{b'}, \alpha_\mu)\}_{\vec{b'} \in \{0,1\}^{\mu-\ell-1}}\) by “linear folding”. Let \(L_0(X), L_1(X)\) be the Lagrange basis polynomials (w.r.t. the set \(\{0,1\}\)); for every \(\vec{b'} \in \{0,1\}^{\mu-\ell-1}\), the chip computes
\[ f(\langle i \rangle_{\ell},\vec{b'}, \alpha_\mu)=f(\langle i \rangle_{\ell},\vec{b'}, 0) \cdot L_0(\alpha_\mu) + f(\langle i \rangle_{\ell},\vec{b'}, 1) \cdot L_1(\alpha_\mu)\,. \]
The problem is now reduced to proving the claim \(\sum_{b \in \{0,1\}^{\mu-1}} f(b,\alpha_\mu) = s'\) with \(s' := r_\mu(\alpha_\mu)\), where the chips already hold the evaluations \(\{f(b,\alpha_\mu)\}\) and thus have no need to read/write from the main memory.
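To illustrate the data flow, below is a minimal Python simulation of this multi-chip algorithm; it reuses the illustrative modulus and SHA-256 challenge derivation from the earlier sketch (redefined so the snippet is self-contained). In real hardware each call to chip_round would run on a separate chip, with only the two partial sums and the challenge scalar crossing the chip boundary per round; how the final \(\ell\) rounds over the chip-index bits are handled (here: simply gathered by the aggregator) is our own assumption and is not specified above.

```python
import hashlib

P = 2**61 - 1  # same illustrative prime modulus as before


def challenge(transcript: bytes) -> int:
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % P


def chip_round(local_evals, alpha):
    """One round on a single chip: fold the local table with the previous challenge (if any),
    then return the updated table and the local partial sums (r(0), r(1))."""
    if alpha is not None:
        local_evals = [((1 - alpha) * local_evals[2 * j] + alpha * local_evals[2 * j + 1]) % P
                       for j in range(len(local_evals) // 2)]
    return local_evals, sum(local_evals[0::2]) % P, sum(local_evals[1::2]) % P


def multichip_sumcheck(evals, s, ell):
    L = 2 ** ell
    chunk = len(evals) // L
    # Chip i stores the evaluations whose ell high-order index bits equal <i>_ell
    # (the one-time read from main memory).
    chips = [evals[i * chunk:(i + 1) * chunk] for i in range(L)]
    transcript, rounds, alpha = s.to_bytes(32, "big"), [], None
    for _ in range(chunk.bit_length() - 1):            # mu - ell rounds, all on local memory
        parts = [chip_round(c, alpha) for c in chips]
        chips = [p[0] for p in parts]
        r0 = sum(p[1] for p in parts) % P              # hash gadget accumulates the r-values
        r1 = sum(p[2] for p in parts) % P
        transcript += r0.to_bytes(32, "big") + r1.to_bytes(32, "big")
        alpha = challenge(transcript)                  # challenge broadcast back to all chips
        rounds.append((r0, r1))
    # Each chip ends with a single value f(<i>_ell, challenges); the remaining ell rounds
    # run over these L values (here we simply hand them to the aggregator).
    remaining = [((1 - alpha) * c[0] + alpha * c[1]) % P for c in chips]
    return rounds, remaining


evals = [(7 * j + 3) % P for j in range(32)]           # f on {0,1}^5
s = sum(evals) % P
rounds, remaining = multichip_sumcheck(evals, s, ell=2)
assert (rounds[0][0] + rounds[0][1]) % P == s          # the first round polynomial matches the claim
```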
Complexity: We note that the above algorithm can be highly efficient in hardware. All accumulation/folding operations can be pipelined per round. The computation can quickly go from round \(i\) to round \(i+1\), as a single hash on a short input can be blazingly fast (even on CPUs).
Limitation: The above scheme requires each of the \(L\) chips to have memory size \(N/L\). This becomes an issue when a chip cannot be equipped with a moderate amount of memory and the hardware cannot fit many chips. One workaround is to split the original SumCheck claim into multiple smaller SumCheck claims; the hardware then solves the smaller claims one by one and outputs all of the SumCheck proofs. This solution increases the proof size and the verification time by a factor of \(c\), where \(c\) is the ratio between \(N\) and the total memory size of the chips.
In this section, we describe a variant of SumCheck that resolves the issue above. The intuition is to quickly reduce the SumCheck problem size by a factor of \(K \gg 2\) after two passes over the original evaluation table. After that, the hardware no longer needs to read/write values from the main memory.
Similarly, we first set up a few parameters. Suppose the hardware connects \(L=2^{\ell}\) chips with total memory size \(\approx N/K\) (where \(N=2^\mu\) and \(K=2^{\kappa}\)); e.g., \(K\) can be set to \(32\) in practice. Note that now each chip has memory size \(O(N/(KL))\) (e.g., SRAM with capacity \(N/(KL)\)). Like before, the hardware embeds a gadget that can compute hashes (we can remove the gadget if the chip-area cost is high).
We now slightly modify the SumCheck claim. The goal is to prove that
\[ \sum_{b \in \{0,1\}^{\mu-\kappa},\,y \in \{0,1,\dots, K-1\}} f(b, y) = s \]
for a multivariate polynomial \(f \in \mathbb{F}[X_1,\dots,X_{\mu-\kappa},Y]\) where the individual degrees of \(X_1,\dots,X_{\mu-\kappa}\) are \(1\) and the individual degree of \(Y\) is at most \(K-1\). Note that we can build a polynomial commitment scheme (PCS) that works for polynomials of this form (e.g., by combining the univariate KZG scheme with the multilinear version of KZG [PST13]). The PCS complexity is approximately the same as that of the multilinear KZG scheme [PST13], as long as \(K\) is moderately small (e.g., \(K = 32\)). Hence, we can adapt HyperPlonk to work in this scenario.
Next, we describe the hardware-friendly algorithm for proving the SumCheck claim:
In the initial round:
The \(i\)th \((0\le i<L)\) chip reads in a stream (but does not store) the table of evaluations \(\{f(\langle i \rangle_{\ell},\vec{b}, 0)\}_{\vec{b} \in \{0,1\}^{\mu-\ell-\kappa}}\); \(\{f(\langle i \rangle_{\ell},\vec{b}, 1)\}_{\vec{b} \in \{0,1\}^{\mu-\ell-\kappa}}\); …; \(\{f(\langle i \rangle_{\ell},\vec{b}, K-1)\}_{\vec{b} \in \{0,1\}^{\mu-\ell-\kappa}}\), where \(\langle i\rangle_{\ell}\) denotes the \(\ell\)-bit binary representation of \(i\). Moreover, after reading a value \(f(\langle i\rangle, \vec{b}, y)\) (where \(0\le y < K\)), it immediately accumulates it into a running sum \(s_{i}(y)\) and discards the f-value so that it can continue reading evaluations from the stream. Whenever a sum \(s_{i}(y) := \sum_{\vec{b} \in \{0,1\}^{\mu-\ell-\kappa}} f(\langle i\rangle, \vec{b}, y)\) is fully computed, the chip sends the s-value to the hash gadget.
The hash gadget accumulates the s-values it receives from the chips. After receiving all the s-values, the hash gadget obtains the univariate round polynomial \(s(Y) := s_0(Y) + \dots + s_{L-1}(Y)\) (represented by the \(K\) values \(s(0), s(1), \dots, s(K-1)\)). The gadget injects the \(K\) values (or their univariate commitment, depending on the instantiation) into the sponge hash state and obtains a challenge scalar \(\beta\). It then sends \(\beta\) back to all of the \(L\) chips.
The \(i\)th \((0\le i<L)\) chip, after receiving \(\beta\), reads again the stream of evaluations
\[ \{f(\langle i \rangle_{\ell},\vec{b}, 0)\}_{\vec{b} \in \{0,1\}^{\mu-\ell-\kappa}},\quad\dots,\quad\{f(\langle i \rangle_{\ell},\vec{b}, K-1)\}_{\vec{b} \in \{0,1\}^{\mu-\ell-\kappa}} \]
but in a slightly different order. In particular, for every \(\vec{b}\in\{0,1\}^{\mu-\ell-\kappa}\), it reads \(f(\langle i \rangle_{\ell},\vec{b}, 0),\dots,f(\langle i \rangle_{\ell},\vec{b}, K-1)\) and computes and stores the value
\[ f(\langle i \rangle_{\ell},\vec{b}, \beta)=f(\langle i \rangle_{\ell},\vec{b}, 0) \cdot L_0(\beta) + \dots + f(\langle i \rangle_{\ell},\vec{b}, K-1) \cdot L_{K-1}(\beta)\,. \]
Here \(L_0(X),\dots,L_{K-1}(X)\) are the Lagrange basis polynomials (w.r.t. the set \(\{0,1,\dots, K-1\}\)). Since only \(N/(KL)\) values are stored, the chip only needs memory size \(\approx N/(KL)\).
The problem is now reduced to proving the claim \(\sum_{b \in \{0,1\}^{\mu-\kappa}} f(b,\beta) = s'\) with \(s' := s(\beta)\), where the chips already hold the evaluations \(\{f(b,\beta)\}\) and thus have no need to read/write from the main memory. The hardware can then run the same algorithm as in the previous section.
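Below is a minimal Python simulation of the two streaming passes, again under the illustrative modulus and SHA-256 challenge derivation from the earlier sketches. For simplicity it keeps the whole table (indexed as table[y][j] = f(b_j, y)) in a list and does not split the work across chips; in hardware, each chip would stream its slice of the table from main memory twice and store only the \(N/(KL)\) folded values.

```python
import hashlib

P = 2**61 - 1  # same illustrative prime modulus as before


def challenge(transcript: bytes) -> int:
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % P


def lagrange_coeffs(beta, K):
    """L_y(beta) for y = 0, ..., K-1, w.r.t. the interpolation domain {0, 1, ..., K-1}."""
    coeffs = []
    for y in range(K):
        num, den = 1, 1
        for m in range(K):
            if m != y:
                num = num * (beta - m) % P
                den = den * (y - m) % P
        coeffs.append(num * pow(den, P - 2, P) % P)    # den^(P-2) = den^(-1) since P is prime
    return coeffs


def kary_reduce(table, s):
    """table[y][j] = f(b_j, y), where b_j ranges over {0,1}^(mu - kappa)."""
    K = len(table)
    # Pass 1 (streaming): accumulate s(y) = sum_b f(b, y); only K scalars are kept.
    s_vals = [sum(col) % P for col in table]
    assert sum(s_vals) % P == s                        # the verifier's extra O(K) check
    transcript = s.to_bytes(32, "big") + b"".join(v.to_bytes(32, "big") for v in s_vals)
    beta = challenge(transcript)
    # Pass 2 (streaming, b-major order): fold the K values at each point b into one and store it.
    coeffs = lagrange_coeffs(beta, K)
    folded = [sum(coeffs[y] * table[y][j] for y in range(K)) % P
              for j in range(len(table[0]))]
    s_next = sum(c * v for c, v in zip(coeffs, s_vals)) % P   # s' = s(beta)
    return folded, s_next                              # now run the previous section's algorithm


table = [[(5 * y + j * j) % P for j in range(8)] for y in range(4)]   # K = 4, eight points b
s = sum(sum(col) for col in table) % P
folded, s_next = kary_reduce(table, s)
assert sum(folded) % P == s_next                       # the reduced SumCheck claim holds
```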
Complexity: The number of folding/accumulating operations in the first round is approximately \(N\), and the number of folding/accumulating operations in the \((i+1)\)-th round \((1 \le i \le \mu-\kappa)\) is approximately \(\frac{N/K}{2^{i-1}}\). Thus the total proving complexity is approximately \((1+2/K)N \approx N\). The verifier complexity incurs an additive overhead of \(K\) because in the first round it needs to check an equation with \(K\) addends, namely that \(s(0) + s(1) + \dots + s(K-1)\) equals the claimed sum.
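For concreteness, summing these per-round operation counts (a geometric series) gives the stated total:
\[ N + \sum_{i=1}^{\mu-\kappa} \frac{N/K}{2^{i-1}} \;\le\; N + \frac{N}{K}\sum_{j \ge 0} 2^{-j} \;=\; N + \frac{2N}{K} \;=\; \Big(1+\frac{2}{K}\Big)N, \]
which is roughly \(1.06N\) for \(K=32\).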
Similar to the algorithm in the previous section, the above algorithm is hardware-friendly. We can pipeline all of the accumulations/folding operations in a round; the computation can also quickly go from round \(i\) to round \(i+1\) because a single hash is cheap.
In this post, we introduced a few ideas for optimizing SumCheck in hardware. We hope they encourage and inspire further research on hardware optimizations of SumCheck. We tend to agree with Justin Thaler's view that sum-check-based SNARKs remain the most promising for minimizing total prover work, and that good hardware implementations will follow.
Acknowledgment. We thank Zhenfei Zhang for pointing out that some hashing schemes have large chip-area cost, and for bringing up the idea of using hardware-friendly hashing schemes (e.g. Poseidon).