---
tags: Sum/Zero-Check
---
# Sum-Check protocol notes
> ++Table of contents++
> [TOC]
:::danger
Fix the offsets at which the randomness starts.
Check Chiesa's talk to widen the explanations: https://www.youtube.com/watch?v=N1-67VPrsbA
:::
# The Sum-Check protocol
First formalized by Lund, Fortnow, Karloff and Nisan [[LFKN92]](https://arxiv.org/abs/1704.02086).
Suppose we're given a $v$-variate polynomial $g$ defined over a finite field $\mathbb{F}$.
The main purpose of the Sum-Check is for the Prover to convince the Verifier of the value of the following sum:
$$
H := \sum_{b_1\in \{0,1\}}\sum_{b_2\in \{0,1\}}\cdots\sum_{b_v\in \{0,1\}}g(b_1,\dots,b_v)
$$
## What does Verifier gain with that?
The Verifier could compute $H$ on its own by evaluating $g$ at $2^v$ points (all the inputs in $\{0,1\}^v$).
But this is of course unacceptable performance-wise. Hence, the sum-check protocol allows the Verifier to reduce this problem to a single evaluation of $g$ at a random point $\vec{r} \in \mathbb{F}^v$.
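To make the cost concrete, here is a minimal Python sketch (illustrative only) of the naive computation the Verifier wants to avoid, using the polynomial $g(x,y,z) = 2x^3+xz+yz$ from the example below:

```python
from itertools import product

def g(x, y, z):
    # Example polynomial used later in these notes: g = 2x^3 + xz + yz
    return 2 * x**3 + x * z + y * z

# The Verifier's naive option: evaluate g at all 2^v points of {0,1}^v
H = sum(g(*point) for point in product((0, 1), repeat=3))
print(H)  # → 12, but the cost grows as 2^v evaluations
```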
## Protocol description
1. Prover sends a value $C_1$ claimed to equal the result of the sum defined by $H$.
:::warning
Now a multi-round process starts, which repeats until we have gone through all the variables of $g$.
Each round consists of the following:
:::
2. Prover sends to Verifier the univariate polynomial $g_1(X_1)$ claimed to equal: $\sum_{(x_2,\dots,x_v)\in\{0,1\}^{v-1}}g(X_1,x_2,\dots,x_v)$.
Then, the Verifier checks that $C_1 = g_1(0) + g_1(1)$ and also ensures that $g_1$ is a univariate polynomial of degree at most $deg_1(g)$ (the degree of $g$ in its first variable).
This actually translates to:
### Example Round 1
Let $g(x, y, z) = 2x^3+xz+yz$. Let $H$ be the sum of all $g$'s evaluations over the Boolean hypercube ($H=12$).
This refers to the same equation we saw previously:
$$
H := \sum_{b_1\in \{0,1\}}\sum_{b_2\in \{0,1\}}\cdots\sum_{b_v\in \{0,1\}}g(b_1,\dots,b_v)
$$
When the sum-check protocol is applied to $g$, the Prover generates a univariate polynomial $s_0$ which is based on $g$ and has only one variable ($x$ in this case).
This polynomial is the sum of $g$'s evaluations over all possible $\{0,1\}$ assignments of the remaining $v-1$ variables, with $x$ left free (which is why it is univariate).
:::info
This translates to the following:
$$
g(x,0,0) + g(x,0,1) + g(x,1,0) + g(x,1,1) = \\
(2x^3) + (2x^3+x) + (2x^3) + (2x^3 + x + 1) =\\
(8x^3) + 2x + 1
$$
:::
As can be seen in the example, the Prover left $x$ untouched and computed the sum over all remaining evaluations of $g$, creating the new univariate polynomial $s_0(x) = 8x^3 + 2x + 1$.
:::info
The Prover sends $s_0$ to the Verifier, who has 2 tasks:
- Ensure that $s_0(0) + s_0(1) = 12$. This checks the claimed sum $H = 12$.
- Check that the degree of the polynomial $s_0$ is at most the degree of $g$ in the free variable. ie. $deg(s_0(x)) \le deg_x(g)$ -> $deg(8x^3+2x+1) \le deg(2x^3) = 3$
:::
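A quick sanity check of this round in Python (illustrative; the prime 97 is an arbitrary stand-in for $\mathbb{F}$):

```python
from itertools import product

def g(x, y, z):
    # The running example: g(x, y, z) = 2x^3 + xz + yz
    return 2 * x**3 + x * z + y * z

def s0(x):
    # The univariate polynomial the Prover claims: 8x^3 + 2x + 1
    return 8 * x**3 + 2 * x + 1

P = 97  # a small prime standing in for |F| (illustrative choice)

# s0 must agree with the sum of g over the last two Boolean variables...
for x in range(P):
    assert s0(x) % P == sum(g(x, y, z) for y, z in product((0, 1), repeat=2)) % P

# ...and pass the Verifier's first check: s0(0) + s0(1) == H == 12
assert s0(0) + s0(1) == 12
```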
---
3. Now, the Verifier samples a random element $r_1 \in \mathbb{F}$ based on the Prover's previous message, and sends $r_1$ to the Prover.
4. The Prover now sends the Verifier another univariate polynomial $g_j(X_j)$ claimed to equal:
$$
\sum_{(x_{j+1},...,x_v)\in\{0,1\}^{v-j}}g(r_1,...,r_{j-1},X_j,x_{j+1},...,x_v)
$$
which should make the following statements hold:
$$
g_{j-1}(r_{j-1}) = g_j(0) + g_j(1)\\
deg(g_j) \le deg_j(g) \quad \text{and} \quad g_j \text{ is univariate}
$$
:::info
**Or said with the same names as in the previous example:**
$$
s_1(0) + s_1(1) = s_0(r_1)\\
deg(s_1(y)) \le deg_y(g)
$$
:::
### Example Round 2..v-1
:::warning
Remember that we ended the first round with $s_0(x) = 8x^3 + 2x + 1$ and knowing that $s_0(0) + s_0(1) = 12$.
:::
Now, let's work through the numbers of the next round.
Remember that the Verifier just sampled $r_1$ (which we will suppose equals 2).
:::info
Now the Prover constructs a univariate polynomial $s_1$ which leaves only the second variable of $g$ free and fixes the first one to the value $r_1$.
$$
s_1(y) = g(2,y,0) + g(2,y,1) = 16+(16+2+y)=34+y
$$
So as you can see, now we have half of the evaluations to compute and sum up together.
The Prover now sends $s_1$ to the Verifier, who has 2 tasks:
- Ensure that $s_1(0) + s_1(1) = s_0(r_1)$. (Indeed, $34 + 35 = 69 = s_0(2)$.)
- Check that the degree of the polynomial $s_1$ is at most the degree of $g$ in the free variable. ie. $deg(s_1(y)) \le deg_y(g)$ -> $deg(34+y) \le deg_y(yz) = 1$
:::
---
### Final round
5. The process is repeated until we reach the last variable of the polynomial $g$, for which the math looks the same:
:::info
Suppose that the Prover just sent $s_2(z)$, so that we're at the very last round.
This polynomial needs to satisfy the same properties for the degree & being univariate, and the Verifier samples one final random element $r_3$ to check that $s_2(r_3) = g(r_1,r_2,r_3)$.
**Note that no evaluations over the Boolean hypercube are performed in the last round**. Rather, the Verifier uses its oracle-query capabilities to query $g$ at $(r_1,r_2,r_3)$, and compares the response against $s_2(r_3)$.
:::
:::danger
**If the Verifier checked the consistency of all $v$ rounds and accepted the Prover's responses, then by this point the Verifier is convinced of the correctness of the Sum-Check.**
We will see why in the next section.
:::
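The whole interaction can be sketched end-to-end. The following is a toy simulation (not a production implementation): $\mathbb{F}$ is modeled as $\mathbb{F}_{97}$, and each round polynomial is shipped as its evaluations at $0,\dots,3$, which the Verifier interpolates at the challenge point:

```python
from itertools import product
import random

P = 97   # a small prime standing in for |F| (illustrative choice)
DEG = 3  # max degree of g in any single variable (from the 2x^3 term)

def g(x, y, z):
    # The running example: g(x, y, z) = 2x^3 + xz + yz over F_97
    return (2 * x**3 + x * z + y * z) % P

def round_poly(fixed, v=3):
    """Prover: return the evaluations of s_j(X) at X = 0..DEG, where the
    first j variables are fixed to earlier challenges and the remaining
    ones are summed over the Boolean hypercube."""
    j = len(fixed)
    evals = []
    for t in range(DEG + 1):
        acc = 0
        for tail in product((0, 1), repeat=v - j - 1):
            acc = (acc + g(*fixed, t, *tail)) % P
        evals.append(acc)
    return evals

def lagrange_eval(evals, r):
    """Verifier: evaluate, at r, the unique degree <= DEG polynomial
    passing through the points (0, evals[0]), ..., (DEG, evals[DEG])."""
    total = 0
    for i, yi in enumerate(evals):
        num, den = 1, 1
        for k in range(len(evals)):
            if k != i:
                num = num * (r - k) % P
                den = den * (i - k) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# --- run the protocol on the honest claim C1 = H = 12 ---
claim = sum(g(*pt) for pt in product((0, 1), repeat=3)) % P
challenges = []
for _ in range(3):
    s = round_poly(challenges)
    assert (s[0] + s[1]) % P == claim   # round consistency check
    r = random.randrange(P)             # Verifier's random challenge
    claim = lagrange_eval(s, r)         # reduced claim for the next round
    challenges.append(r)

# Final check: one oracle query to g at the random point
assert g(*challenges) == claim
```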
---
## A quick recap on what just happened
In the first round, the Prover sent a polynomial $g_0(x)$ claimed to be equal to the true polynomial $s_0(x)$.
The idea of the sum-check is that the Verifier will probabilistically check that this equality between polynomials holds by picking a random element $r_0 \in \mathbb{F}$ and confirming: $g_0(r_0) = s_0(r_0)$.
:::warning
Be aware that here, if $g_0 \neq s_0$, then with probability at least $1-deg_0(g)/|\mathbb{F}|$ over the Verifier's choice of $r_0$, the previous equation fails to hold.
:::
__Note that here, $deg_0(g)$ denotes the degree of $g$ in its first variable__
:::info
**The check of the degree of $s$ is done because two different univariate polynomials of degree at most $d$ agree on at most $d$ inputs.
So if $|{\mathbb{F}}| \gg deg_i(g)$ then the probability of a false positive, at most $deg_0(g)/|\mathbb{F}|$, is very small**
:::
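A tiny numeric illustration of this fact (the two polynomials and the prime 97 are arbitrary picks for the demo):

```python
P = 97  # small prime field, illustrative

# Two distinct univariate polynomials of degree 3 over F_97 (arbitrary picks)
f = lambda x: (8 * x**3 + 2 * x + 1) % P
h = lambda x: (8 * x**3 + 5 * x**2 + 2 * x + 1) % P

# Distinct polynomials of degree <= 3 agree on at most 3 of the 97 field
# elements, so a random challenge exposes a difference with prob >= 1 - 3/97.
agreements = sum(1 for x in range(P) if f(x) == h(x))
print(agreements)  # here they differ by 5x^2, which vanishes only at x = 0
assert agreements <= 3
```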
Now it's important to realize that **it would be very costly for the Verifier to evaluate $s_0(r_0)$ directly, as it would still have to sum evaluations of $g(x_1,..,x_v)$** (it would still need to perform $2^{v-1}$ evaluations).
But as we saw, the sum-check protocol is really good at doing exactly this!! If you look at the problem now, $s_0(r_0)$ is a sum of evaluations of a $(v-1)$-variate polynomial.
So the goal for the Verifier now is to recursively perform the same process as in Round 1, so that the number of evaluations the Verifier needs to perform keeps shrinking.
# Soundness & Correctness
## Completeness
Completeness is obvious: as long as the Prover sends, in each round $i \in \{1,\dots,v\}$, the prescribed polynomial
$$
g_i(X_i) = \sum_{(x_{i+1},\dots,x_v)\in\{0,1\}^{v-i}}g(r_1,\dots,r_{i-1},X_i,x_{i+1},\dots,x_v)
$$
then the Verifier will accept with probability 1.
## Soundness
A good proof of soundness by induction can be found in [Thaler's book](https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.pdf) on p. 37.
This is the capture:
![](https://hackmd.io/_uploads/Hk9WPb2rn.png)
# Sum-Check + Low-Degree Multilinear Extensions.
## Multilinear Extensions
##### Definition:
Let $\mathbb{F}$ be any finite field and let $f: \{0,1\}^v \rightarrow \mathbb{F}$ be any function mapping the $v$-dimensional Boolean hypercube to $\mathbb{F}$.
A $v$-variate polynomial $g$ is said to be an extension of $f$ if $g$ agrees with $f$ at all Boolean-valued inputs. Ie. $\forall x \in\{0,1\}^v: g(x)=f(x)$
Any function $f$ mapping $\{0,1\}^v \rightarrow \mathbb{F}$ has an extension polynomial that is multilinear (a polynomial that has degree at most 1 in each variable).
This implies that the total degree of the polynomial is at most $v$, which is logarithmic in the domain size ($2^v$).
In contrast, univariate low-degree extensions over a domain of size $n$ have degree $n-1$.
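A sketch of how a multilinear extension can be evaluated in practice, via the `eq` weights introduced further down in these notes (the function `f`, the evaluation point, and the prime are arbitrary picks for the demo):

```python
from itertools import product

P = 97  # small prime standing in for |F| (illustrative choice)

def eq(x, y):
    # Multilinear 'equality' polynomial over F_P
    acc = 1
    for xi, yi in zip(x, y):
        acc = acc * (xi * yi + (1 - xi) * (1 - yi)) % P
    return acc

def mle(values, point):
    # Evaluate the multilinear extension of f: {0,1}^v -> F at `point`,
    # where values[b] holds f(b) for each hypercube vertex b.
    return sum(values[b] * eq(b, point) for b in values) % P

# An arbitrary function on {0,1}^2, picked for the demo
f = {(0, 0): 3, (0, 1): 7, (1, 0): 0, (1, 1): 5}

# The extension agrees with f on every Boolean input...
for b in product((0, 1), repeat=2):
    assert mle(f, b) == f[b]

# ...and is also defined outside the hypercube, e.g. at (2, 3):
print(mle(f, (2, 3)))  # → 15
```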
:::info
As with univariate low-degree extensions, one can think of a (low-degree) extension $g$ of a function $f: \{0,1\}^v \rightarrow \mathbb{F}$ as a distance-amplifying encoding of $f$.
This means that if two functions $f, f': \{0,1\}^v \rightarrow \mathbb{F}$ disagree at even a single input, then any extensions $g$, $g'$ of total degree at most $d$ will differ _almost everywhere_.
:::
### Lagrange interpolation for Multilinear polynomials
Lagrange interpolation is a mathematical technique that allows us to reconstruct a polynomial of degree $(n-1)$ given $n$ data points. By using Lagrange interpolation, the verifier can interpolate the original multilinear polynomial P from a subset of the points claimed by the prover, reducing the number of required evaluations.
The process involves constructing $n$ Lagrange basis polynomials, denoted as $L_1, L_2, \ldots, L_n$, which have the property that $L_i(x_i) = 1$ and $L_i(x_j) = 0$ for $i \neq j$ ($i, j = 1$ to $n$).
These basis polynomials can be defined as: $L_i(x) = \prod_{j=1,\ j\neq i}^n \frac{x-x_j}{x_i-x_j}$
Using these basis polynomials, the verifier can then interpolate the original polynomial P as follows:
$$
P(x)=\sum_{i=1}^n y_i \cdot L_i(x)
$$
Now, by evaluating the reconstructed polynomial $P$ at points different from the originally claimed ones, the verifier can efficiently check whether the prover's claim is valid.
In summary, Lagrange interpolation lets the verifier reconstruct the unique low-degree polynomial passing through a set of claimed points and then test it anywhere, which keeps the number of required evaluations small without compromising correctness.
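As a sketch, here is univariate Lagrange interpolation over the rationals (field arithmetic omitted for brevity; the example polynomial is an arbitrary pick):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate, at x, the unique degree < n polynomial through the n
    points (xs[i], ys[i]), using the Lagrange basis
    L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Three points on P(x) = x^2 determine it uniquely (degree 2 -> 3 points)
xs, ys = [0, 1, 2], [0, 1, 4]
print(lagrange_interpolate(xs, ys, 5))  # → 25.0
```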
:::spoiler **Let's dive deeper into the reasoning behind the reduced number of evaluations required by the verifier when using Lagrange interpolation:**
When using Lagrange interpolation in the sum-check protocol, **the verifier constructs the Lagrange basis polynomials $\{L_1,...,L_n\}$ from the points claimed by the prover**. Each basis polynomial evaluates to 1 at its own point $x_i$ and to 0 at every other point $x_j$ for $j \neq i$.
Because these basis polynomials span the space of polynomials of degree at most $n-1$, the $n$ claimed points uniquely determine $P$: the verifier can reconstruct it once and then **evaluate it at a fresh, typically smaller, set of points instead of re-evaluating the original polynomial at every claimed point**.
Since the reconstructed $P$ encapsulates all the information in the claimed evaluations, checking it at this smaller set of fresh points (at most one per basis polynomial) indirectly verifies the prover's claim. This is what reduces the number of evaluations the verifier must perform, without compromising correctness.
:::
# Extra topics
- The number of evaluations the Verifier needs to compute decreases geometrically with each recursive round of sum-check. It is estimated that the Prover can compute all of the necessary points for the sum-check in $O(2^v)$.
- The Sum-Check protocol only requires the Verifier to know the degree of $g$ in each variable and to be able to evaluate $g$ at a random point $\vec{r} \in \mathbb{F}^v$ (soundness granted by Schwartz-Zippel). **This means that the Verifier is able to carry out the Sum-Check on $g$ without actually knowing it.**
# The Zero-Check protocol
The zero-check protocol is a technique used, for example, in [HyperPlonk](https://eprint.iacr.org/2022/1355), and is explained in sec. 3.2 of that paper.
The zero-check protocol is a multilinear PIOP built on top of sum-check which allows proving that a polynomial identity evaluates to 0 everywhere on the Boolean hypercube.
:::info
This is particularly useful to prove/check that a gate identity within a circuit evaluates to zero.
:::
## Protocol overview
1. Verifier sends a random vector $\vec{r}$ to the Prover with at least as many elements as there are variables in the multivariate polynomial $f(\vec{\boldsymbol{X}})$.
2. After that, we multiply our multivariate polynomial by the `eq` poly (we'll see later what this is and what it means), ending up with the claim:
$$
\sum_{\vec{x} \in H^{\mu}} f(\vec{x}) \cdot \text{eq}(\vec{x},\vec{r}) = 0
$$
3. Run the sum-check protocol over the multivariate polynomial that results from the previous product. This should convince the Verifier that the poly sums to 0.
## A real world example for an easy multiplication gate
Let's assume we have the following multiplication gate identities:
| a | b | c |
| - | - | - |
| 2 | 3 | 6 |
| 0 | 1 | 0 |
| 2 | 1 | 2 |
| 0 | 0 | 0 |
This is of course checking that $a \cdot b = c$, which is transformed into an identity that evaluates to 0 when it holds: $(a \cdot b) - c = 0$.
## TODO: Introduce eq poly
$eq(x,y) \text{=} \prod_{i=1}^{\mu} (x_iy_i \text{+} (1 \text{-} x_i)(1 \text{-} y_i))$
Link it to the Lagrange interpolation section.
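Meanwhile, a minimal sketch of the `eq` formula above and its selector behaviour on Boolean inputs:

```python
from itertools import product

def eq(x, y):
    """eq(x, y) = prod_i (x_i*y_i + (1 - x_i)*(1 - y_i)).
    On Boolean inputs it is the indicator of x == y."""
    acc = 1
    for xi, yi in zip(x, y):
        acc *= xi * yi + (1 - xi) * (1 - yi)
    return acc

# Selector behaviour on the 2-dimensional hypercube
for x in product((0, 1), repeat=2):
    for y in product((0, 1), repeat=2):
        assert eq(x, y) == (1 if x == y else 0)
```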
### Get the multilinear extensions (MLEs) of our polynomials.
The first thing that we need to do if we want to be able to sum-check our identity is to find the multilinear extensions (MLEs) of our polynomials.
To do so, we proceed as follows:
Notice that $a(x)$ has 4 entries. To encode it as a multilinear polynomial we will make use of 2 variables, as 2 bits in the Boolean hypercube allow us to encode 4 different elements (notice I say encode, not represent).
So, let's do it!
To start, we will call the two variables that form $\vec{\boldsymbol{x}} \rightarrow \{x_1,x_2\}$.
Hence, the operation to uplift our vector as a multilinear polynomial works as follows:
$$
\tilde{a}(\vec{\boldsymbol{X}}) = \sum_{\vec{\boldsymbol{x}} \in \{0,1\}^{\mu}} a(\vec{\boldsymbol{x}})\cdot eq(\vec{\boldsymbol{x}}, \vec{\boldsymbol{X}}) \\ \text{where} \\
\vec{\boldsymbol{X}} = [x_1, x_2]
\text{, } a(\vec{\boldsymbol{x}}) = a[x_1 + 2 x_2], \\ eq(x,y) = \prod_{i=1}^{\mu} (x_iy_i + (1 - x_i)(1 - y_i))
$$
We can already see the shape that $\tilde{a}(\vec{\boldsymbol{x}})$ will take:
$$
\tilde{a}(\vec{\boldsymbol{x}}) = 2(1-x_1)(1-x_2) + 0 + 2(1-x_1)x_2 + 0 = 2-2x_1
$$
:::info
Notice that what we are doing here is simply encoding into coefficient form a vector in a multilinear polynomial style.
As we said, to encode 4 elements we need 2 bits. Therefore, our $x$ now becomes a vector for which each variable is a bit (remember we're in the boolean hypercube and so variables are ranged between $[0,1]$).
**To formulate it better, we're moving the elements of the vector to being encoded as the vertex of the 2-dimensional boolean hypercube $H^2$.**
This means Vertex(0,0) = 2, Vertex(1,0) = 0, Vertex(0,1) = 2 and Vertex(1,1) = 0.
These are the four vertices of the 2-dimensional HyperCube.
:::
:::success
For instance, you can actually see that $\tilde{a}(\vec{\boldsymbol{x}})$ has two coefficients at 2 and two more at 0. Notice how this maps exactly the actual vector $a(x)$ of our multiplication gate table.
**What we are doing here is set the polynomial in coefficient form similarly to how we do it in the univariate setting using Lagrange basis interpolation with our polynomials.**
:::
Now, we can do the same for the other two vectors, ending up with:
$$
\tilde{b}(\vec{\boldsymbol{x}}) = 3(1-x_1)(1-x_2) + x_1(1-x_2) + x_2(1-x_1) + 0 = x_2 x_1 - 2 x_1 - 2 x_2 + 3 \\
\tilde{c}(\vec{\boldsymbol{x}}) = 6(1-x_1)(1-x_2) + 0 + 2 x_2(1-x_1) + 0 = -4 x_2 + x_1 (4 x_2 - 6) + 6
$$
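These expansions can be cross-checked mechanically. The sketch below recomputes the three MLEs straight from the gate table (mapping each row to the vertex with index $x_1 + 2x_2$, an assumption consistent with the expansions used here) and verifies that the gate identity vanishes on the hypercube:

```python
from itertools import product

# Gate table values mapped to vertex (x1, x2) via the index x1 + 2*x2
a = {(0, 0): 2, (1, 0): 0, (0, 1): 2, (1, 1): 0}
b = {(0, 0): 3, (1, 0): 1, (0, 1): 1, (1, 1): 0}
c = {(0, 0): 6, (1, 0): 0, (0, 1): 2, (1, 1): 0}

def eq(x, y):
    # eq(x, y) = prod_i (x_i*y_i + (1 - x_i)*(1 - y_i))
    acc = 1
    for xi, yi in zip(x, y):
        acc *= xi * yi + (1 - xi) * (1 - yi)
    return acc

def mle(values, point):
    # Multilinear extension of a table via the eq basis
    return sum(values[v] * eq(v, point) for v in values)

# Closed forms obtained by expanding the eq-basis sums
a_tilde = lambda x1, x2: 2 - 2 * x1
b_tilde = lambda x1, x2: x1 * x2 - 2 * x1 - 2 * x2 + 3
c_tilde = lambda x1, x2: 6 - 6 * x1 - 4 * x2 + 4 * x1 * x2

# The closed forms agree with the eq-basis construction on a grid of points
for pt in product(range(-3, 4), repeat=2):
    assert mle(a, pt) == a_tilde(*pt)
    assert mle(b, pt) == b_tilde(*pt)
    assert mle(c, pt) == c_tilde(*pt)

# The gate identity a*b - c vanishes on every hypercube vertex
for v in product((0, 1), repeat=2):
    assert a_tilde(*v) * b_tilde(*v) - c_tilde(*v) == 0
```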
Now, we get from the $\mathbb{V}$erifier our challenge $\vec{\gamma} = (\gamma_1,\gamma_2) = (-1, 1)$
:::info
Notice that the challenge gamma is composed of as many elements as there are variables in our polynomials.
Also, notice that gamma is a random challenge sampled from $\mathbb{F}^2$, so it is not restricted to the Boolean hypercube.
:::
The last MLE to compute is our $\tilde{eq}(\vec{x},\vec{\gamma}) = (-x_1 + 2(1-x_1))x_2 = (2-3x_1)x_2$
## Run the sum-check against our multivariate polynomial
Now that we have computed all of our MLEs, it's time to run the sum-check.
First, notice that we need to have our full polynomial computed.
Since we already have our MLEs, we can simply operate and expand the computation:
$$
\tilde{f}(\vec{x}) = (\tilde{a}(\vec{x}) \cdot \tilde{b}(\vec{x}) - \tilde{c}(\vec{x})) \cdot \tilde{eq}(\vec{x}, \vec{\gamma}) = \\
(-6 x_1^3 + 10 x_1^2 - 4 x_1) \cdot (2 x_2 - x_2^2)
$$
### First round
Now that we have computed our $\tilde{f}(\vec{x})$ polynomial, we should run the sum over the Boolean hypercube $H^2$ for the non-fixed variables.
This means we leave $x_1$ free and compute the sum over $x_2 \in \{0,1\}$.
$$
S_0(x_1) := \sum_{x_2\in \{0,1\}} \tilde{f}(x_1,x_2) = \tilde{f}(x_1, 0) + \tilde{f}(x_1, 1) = \\
0 + (-6 x_1^3 + 10 x_1^2 - 4 x_1) = -6 x_1^3 + 10 x_1^2 - 4 x_1
$$
The next thing to do now is for the Verifier to run its checks.
:::info
The $\mathbb{P}$rover sends $S_0(x_1)$ to the $\mathbb{V}$erifier who has 2 tasks:
- Ensure that $S_0(0) + S_0(1) = 0$. This checks the claimed sum $H = 0$. **You can easily verify it: $0 + (-6 + 10 - 4) = 0$**
- Check that indeed the degree of the polynomial $S_0$ is at most the degree of $\tilde{f}$ in the free variable. ie. $deg(S_0(x_1)) \le deg_{x_1}(\tilde{f}) = 3$
:::
### Second round
After the $\mathbb{V}$erifier checked the correctness of both claims, it samples yet another random challenge $r_0 = 2$.
Knowing that, the Prover computes:
$$
S_1(x_2) = \tilde{f}(2,x_2) = 16 x_2^2 - 32 x_2\\
\text{where}:\space S_1(0) = 0, \space S_1(1) = -16
$$
:::info
Now Prover sends this polynomial to the Verifier who needs to ensure:
- $S_1(0) + S_1(1) = S_0(r_0)$:
$$
S_1(0) + S_1(1) = -16 = S_0(2) = -48 + 40 - 8 = -16
$$
- Check that indeed the degree of the polynomial $S_1$ is at most the degree of $\tilde{f}$ in the free variable. ie. $deg(S_1(x_2)) \le deg_{x_2}(\tilde{f}) = 2$
:::
### Last round
Now, we ran out of variables.
:::success
What do we do then? We simply have the Verifier sample one last challenge (let's say $r_1 = 5$) and check, via an oracle query to $\tilde{f}$, that $\tilde{f}(r_0, r_1) = S_1(r_1)$.
Which, if we try it, results in:
$$
\tilde{f}(r_0,r_1) = 240 = S_1(r_1) = 16 \cdot 25 - 32 \cdot 5 = 240
$$
:::
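The whole zero-check run can be replayed numerically. The sketch below recomputes everything from the MLEs over the integers, using the challenges $\vec{\gamma}=(-1,1)$, $r_0=2$ and $r_1=5$ from the text:

```python
from itertools import product

# MLEs recomputed from the gate table (vertex index x1 + 2*x2)
a = lambda x1, x2: 2 - 2 * x1
b = lambda x1, x2: x1 * x2 - 2 * x1 - 2 * x2 + 3
c = lambda x1, x2: 6 - 6 * x1 - 4 * x2 + 4 * x1 * x2

def eq(x, y):
    acc = 1
    for xi, yi in zip(x, y):
        acc *= xi * yi + (1 - xi) * (1 - yi)
    return acc

gamma = (-1, 1)  # the Verifier's zero-check challenge

def f(x1, x2):
    # The polynomial the sum-check runs over: (a*b - c) * eq(x, gamma)
    return (a(x1, x2) * b(x1, x2) - c(x1, x2)) * eq((x1, x2), gamma)

# The gate identity vanishes on the hypercube, so the zero-check sum is 0
assert sum(f(*v) for v in product((0, 1), repeat=2)) == 0

# Round 1: S0(x1) = f(x1, 0) + f(x1, 1); Verifier checks S0(0) + S0(1) == 0
S0 = lambda x1: f(x1, 0) + f(x1, 1)
assert S0(0) + S0(1) == 0

# Round 2, challenge r0 = 2: S1(x2) = f(2, x2)
r0 = 2
S1 = lambda x2: f(r0, x2)
assert S1(0) + S1(1) == S0(r0)

# Final round, challenge r1 = 5: oracle query to f
r1 = 5
assert f(r0, r1) == S1(r1)
print(S0(r0), f(r0, r1))  # → -16 240
```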
# The Prod-Check protocol
# Credits
Most of this writeup has been taken from [J. Thaler's book](https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.pdf) and re-written with special emphasis on the areas I found more complex to follow.
I also took help from [ChatGPT](https://chat.openai.com/) to find a good example for Multilinear extension inequality based on function inequality at a particular point.
# Questions left
- In the end of the [Example round 2](https://hackmd.io/JOKCtRxGQM-HysukpW_yqQ#Example-Round-2), when we prove $s_1(0) + s_1(1) = s_0(r_1)$ what are we specifically proving? _We're simply adding rounds to the protocol so that the probability of finding polynomials that satisfy the relations enforced by previous evaluations which weren't correct is almost impossible right?_
- Would be nice to add a section on sumcheck combination (Multiple Sum-checks performed within one using whatever technique to mix them.)
- How to non-interactive sum-check.
- How does the prover know that the polynomial is multilinear if we are in ZK/non-interactive setting (polynomial is not sent, only commitments are).
> We actually will need `x` random values to be able to bind all initial claims that the prover does about the polynomial. One for each degree of it. This means that if we have 3 claims we can only sumcheck a multivariate polynomial of at most degree 3. Otherwise, the prover will be missing values to complete the sum-check protocol.
[name=CPerezz]