- each layer of gkr runs [[sumcheck-protocol]] over the following function
- $f_g(b, c) = \tilde{add}_i(g, b, c)(w(b) + w(c)) + \tilde{mul}_i(g, b, c)(w(b) \cdot w(c))$
- at the end of each sumcheck, we have a final oracle check
- i.e. every variable of the polynomial is assigned a random value, and we are left with an expected evaluation of the polynomial at that random point
- $f_g(r_b, r_c) = E$ (the expected evaluation)
- the verifier needs to compute the following
- $f_g(r_b, r_c) = \tilde{add_i}(g, r_b, r_c)(w(r_b) + w(r_c)) + \tilde{mul_i}(g, r_b, r_c)(w(r_b) \cdot w(r_c))$
- the verifier can compute $\tilde{add_i}(g, r_b, r_c)$ and $\tilde{mul_i}(g, r_b, r_c)$ by themselves, because they have access to those polynomials (circuit definition).
- the verifier needs the values for $w(r_b)$ and $w(r_c)$
- note: the verifier doesn't have access to the w polynomials for layers other than the input (when combined with a polynomial commitment scheme the verifier doesn't even have access to the input w poly)
- hence the prover just sends the values for $w(r_b)$ and $w(r_c)$, allowing the verifier to complete the $f(r_b, r_c)$ computation
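- a minimal sketch of this final check (Python; the function name, variable names, and modulus are illustrative, not taken from any particular implementation):

```python
# Sketch of the verifier's final oracle check, assuming the sumcheck has
# already reduced everything to one expected evaluation E at (r_b, r_c).
# add_eval / mul_eval stand for add_i(g, r_b, r_c) and mul_i(g, r_b, r_c),
# which the verifier computes from the circuit definition; B and C are the
# prover-supplied values for w(r_b) and w(r_c).
P = 2**61 - 1  # illustrative prime modulus


def final_oracle_check(E, add_eval, mul_eval, B, C):
    # f_g(r_b, r_c) = add_i(g, r_b, r_c) * (w(r_b) + w(r_c))
    #               + mul_i(g, r_b, r_c) * (w(r_b) * w(r_c))
    f_eval = (add_eval * (B + C) + mul_eval * (B * C)) % P
    return f_eval == E % P
```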
- now the verifier needs to validate the values sent by the prover
- we can assume the prover sent the following values
- $w(r_b) = B$
- $w(r_c) = C$
- recall that the w polynomial of a layer can be expressed as a sum over the boolean hypercube involving the w polynomial of the next layer
- so:
- $w_i(r_b) = \sum_{b,c \in \{0,1\}} \tilde{add_{i+1}}(r_b, b, c)(w_{i +1}(b) + w_{i+1}(c)) + \tilde{mul_{i+1}}(r_b, b,c)(w_{i+1}(b) \cdot w_{i+1}(c))$
- $w_i(r_c) = \sum_{b,c \in \{0,1\}} \tilde{add_{i+1}}(r_c, b, c)(w_{i +1}(b) + w_{i+1}(c)) + \tilde{mul_{i+1}}(r_c, b,c)(w_{i+1}(b) \cdot w_{i+1}(c))$
- hence the claims $w(r_b) = B$ and $w(r_c) = C$ can each be verified via a sumcheck
- the goal is to verify both claims with just one sumcheck as opposed to two.
- [[random-linear-combination]] solution
- sample two random values $\alpha$ and $\beta$
- set the combined claim to the following
- $\alpha \cdot B + \beta \cdot C$ = $\alpha \cdot w(r_b) + \beta \cdot w(r_c)$
- the LHS represents the claimed sum of the RHS poly over the hypercube
- the prover is responsible for generating a valid proof for RHS
- expanding the RHS
- $\alpha \cdot w(r_b)$ = $\sum_{b,c \in \{0,1\}} \alpha \cdot \tilde{add_{i+1}}(r_b, b, c)(w_{i +1}(b) + w_{i+1}(c)) + \alpha \cdot \tilde{mul_{i+1}}(r_b, b,c)(w_{i+1}(b) \cdot w_{i+1}(c))$
- $\beta \cdot w(r_c)$ = $\sum_{b,c \in \{0,1\}} \beta \cdot \tilde{add_{i+1}}(r_c, b, c)(w_{i +1}(b) + w_{i+1}(c)) + \beta \cdot \tilde{mul_{i+1}}(r_c, b,c)(w_{i+1}(b) \cdot w_{i+1}(c))$
- adding the two and grouping terms, the combined sum equals $\sum_{b,c \in \{0,1\}} (\alpha \cdot \tilde {add_{i+1}}(r_b, b, c) + \beta \cdot \tilde {add_{i + 1}}(r_c, b, c)) \cdot (w_{i+1}(b) + w_{i+1}(c)) + (\alpha \cdot \tilde {mul_{i+1}}(r_b, b, c) + \beta \cdot \tilde {mul_{i + 1}}(r_c, b, c)) \cdot (w_{i+1}(b) \cdot w_{i+1}(c))$
- looking at the equation above, we realize that compared to the single-claim equation, the only thing that really changes is the $add_{i+1}$ and $mul_{i+1}$ [[multilinear-extension|mle]]s
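- a quick sanity check of this regrouping (Python; all values are arbitrary field elements, only meant to confirm the pointwise identity used above):

```python
# For a single point (b, c): alpha*(A1*s + M1*p) + beta*(A2*s + M2*p)
# equals (alpha*A1 + beta*A2)*s + (alpha*M1 + beta*M2)*p, where
# A1 = add_{i+1}(r_b, b, c), A2 = add_{i+1}(r_c, b, c),
# M1 = mul_{i+1}(r_b, b, c), M2 = mul_{i+1}(r_c, b, c),
# s = w_{i+1}(b) + w_{i+1}(c) and p = w_{i+1}(b) * w_{i+1}(c).
import random

P = 2**61 - 1  # illustrative prime modulus
alpha, beta, A1, A2, M1, M2, s, p = (random.randrange(P) for _ in range(8))

lhs = (alpha * (A1 * s + M1 * p) + beta * (A2 * s + M2 * p)) % P
rhs = ((alpha * A1 + beta * A2) * s + (alpha * M1 + beta * M2) * p) % P
assert lhs == rhs
```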
- algorithm perspective
- the prover wants to prove the two claims $w(r_b) = B$ and $w(r_c) = C$ as one
- to do this, the prover should get the following
- $add_{i+1}$ partially evaluated at both $r_b$ and $r_c$ to get $add_{i+1}(r_b, b, c)$ and $add_{i+1}(r_c, b, c)$
- $mul_{i+1}$ partially evaluated at both $r_b$ and $r_c$ to get $mul_{i+1}(r_b, b, c)$ and $mul_{i+1}(r_c, b, c)$
- then construct a new $add_{i+1}$ and $mul_{i+1}$ poly the following way:
- new_add = $\alpha \cdot add_{i+1}(r_b, b, c) + \beta \cdot add_{i+1}(r_c, b, c)$
- new_mul = $\alpha \cdot mul_{i+1}(r_b, b, c) + \beta \cdot mul_{i+1}(r_c, b, c)$
- finally the f(b, c) poly to run sumcheck over is given as follows
- new_add(b, c)$\cdot (w(b) + w(c))$ + new_mul(b, c)$\cdot (w(b) \cdot w(c))$
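- a minimal end-to-end sketch of this reduction (Python; the evaluation tables, modulus, and helper names are illustrative stand-ins rather than values derived from a real circuit):

```python
# Sketch: fold the two claims w(r_b) = B and w(r_c) = C into a single claim.
# MLEs are stored as evaluation tables over the boolean hypercube, with the
# first variable as the most significant bit of the index.
import random

P = 2**61 - 1  # illustrative prime modulus


def partial_eval(table, point):
    """Fix the first len(point) variables of an MLE to the field elements in
    `point`, returning the evaluation table of the smaller polynomial."""
    for r in point:
        half = len(table) // 2
        table = [((1 - r) * table[j] + r * table[half + j]) % P for j in range(half)]
    return table


def layer_sum(add_t, mul_t, w_t):
    """sum over (b, c) in {0,1}^2 of add(b,c)(w(b)+w(c)) + mul(b,c)(w(b)*w(c)),
    where add_t / mul_t are 2-variable tables (their first variable already fixed)."""
    total = 0
    for b in range(2):
        for c in range(2):
            idx = 2 * b + c
            total += add_t[idx] * (w_t[b] + w_t[c]) + mul_t[idx] * (w_t[b] * w_t[c])
    return total % P


# toy instance: layer i+1 has 2 gates, so w_{i+1} has 1 variable and the
# wiring predicates add_{i+1}, mul_{i+1} have 3 variables (z, b, c)
w_next = [3, 7]                                      # w_{i+1} on {0,1}
add_next = [random.randrange(P) for _ in range(8)]   # add_{i+1} on {0,1}^3
mul_next = [random.randrange(P) for _ in range(8)]   # mul_{i+1} on {0,1}^3
r_b, r_c = [random.randrange(P)], [random.randrange(P)]
alpha, beta = random.randrange(P), random.randrange(P)

# the two claims the prover would otherwise prove with two separate sumchecks
B = layer_sum(partial_eval(add_next, r_b), partial_eval(mul_next, r_b), w_next)
C = layer_sum(partial_eval(add_next, r_c), partial_eval(mul_next, r_c), w_next)

# fold the wiring predicates with alpha and beta
new_add = [(alpha * x + beta * y) % P
           for x, y in zip(partial_eval(add_next, r_b), partial_eval(add_next, r_c))]
new_mul = [(alpha * x + beta * y) % P
           for x, y in zip(partial_eval(mul_next, r_b), partial_eval(mul_next, r_c))]

# one sumcheck over the folded f(b, c) now proves both claims at once:
# its claimed sum over the hypercube is exactly alpha*B + beta*C
combined = layer_sum(new_add, new_mul, w_next)
assert combined == (alpha * B + beta * C) % P
```
- the assert at the end is the reduced claim itself: the sum of the folded $f(b, c)$ over the hypercube equals $\alpha \cdot B + \beta \cdot C$; the actual protocol would now run one sumcheck to prove it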