# Origami -- A Folding Scheme for Halo2 Lookups
[Yan X Zhang](https://twitter.com/krzhang) and [Aard Vark](https://twitter.com/aard_k_vark)
(with thanks to [Yi Sun](https://twitter.com/theyisun), [Jonathan Wang](https://twitter.com/jonathanpwang), [Lev Soukhanov](https://twitter.com/levs57), [Nicolas Mohnblatt](https://github.com/nmohnblatt))
:::warning
This hasn't been checked. Use at your own risk.
:::
Halo2 introduced a ZK [lookup](https://zcash.github.io/halo2/design/proving-system/lookup.html) scheme, building on the ideas of [Plookup](https://eprint.iacr.org/2020/315.pdf). Given two vectors of numbers (actually, elements of some large finite field $\mathbb{F}$) $A = (a_1, \ldots, a_m)$ and $S = (s_1, \ldots, s_n)$, a _lookup argument_ is an argument that
:::success
each $a_i$ is equal to some $s_j$.
:::
$A$ does not have to contain every element of $S$. Also, $A$ can contain duplicates (as can $S$). But every element of $A$ has to appear somewhere in $S$.
[Nova](https://eprint.iacr.org/2021/370) introduced the idea of _folding_: a way to aggregate SNARKs (roughly, by taking random linear combinations of witness vectors) so that a large number of incremental computations can be verified all at once. Nova described an explicit folding scheme for R1CS systems. [Sangria](https://geometry.xyz/notebook/sangria-a-folding-scheme-for-plonk) gave a folding scheme for PLONKish arithmetizations, but also suggested (under Future Work -- Higher Degree Custom Gates) that the same folding technique could work for any polynomial "custom gates".
The purpose of this note is to describe an explicit folding scheme for a Halo2 lookup argument. This is a special case of a general [folding scheme for polynomial custom gates](https://hackmd.io/@aardvark/Hk5UtwDl2).
## The Setup
Generally, we use capital letters for vectors, and the matching lower case letters for their coordinates. For example, a vector $S$ has entries $(s_1, \ldots, s_n)$.
Imagine we have $N$ lookup arguments that we want to combine into one. In other words, we have $N$ vectors $A^{(1)}, A^{(2)}, \ldots, A^{(N)}$, and $N$ vectors $S^{(1)}, S^{(2)}, \ldots, S^{(N)}$. We know that each entry of $A^{(1)}$ is equal to some entry of $S^{(1)}$, each entry of $A^{(2)}$ is equal to some entry of $S^{(2)}$, and so forth.
(One common use case is evaluating a function by lookup table. In this case, the vectors $S^{(1)}, S^{(2)}, \ldots, S^{(N)}$ are all equal to some publicly known $S$ which encodes the values of the function; the vectors $A^{(1)}, A^{(2)}, \ldots, A^{(N)}$ encode some claimed evaluations of the function at specific points.)
In any case, we have $N$ lookup arguments that we want to fold together. The main ideas are as follows.
1. [Halo2](https://zcash.github.io/halo2/design/proving-system/lookup.html) tells us how to verify a single lookup, using permutations of $A^{(i)}$ and $S^{(i)}$ and a number of polynomial constraints.
In more detail, for each $i$, the prover finds a permutation $A'^{(i)}$ of $A^{(i)}$ and a permutation $S'^{(i)}$ of $S^{(i)}$ such that, for each $j$, either
* $a'^{(i)}_j = a'^{(i)}_{j-1}$, or
* $a'^{(i)}_j = s'^{(i)}_j$.
2. For each $i$, the prover and verifier carry out a _grand product protocol_ to prove that $A'^{(i)}$ is a permutation of $A^{(i)}$ and $S'^{(i)}$ is a permutation of $S^{(i)}$.
- First, the verifier sends the prover random challenges $\beta^{(i)}$ and $\gamma^{(i)}$ (for each $i$).
- Based on those random challenges, the prover creates two new vectors $W^{(i)}$ and $Z^{(i)}$, the "grand product vectors".
3. In order to verify the lookup, the verifier checks a list of polynomial identities involving the vectors $A^{(i)}$, $A'^{(i)}$, $S^{(i)}$, $S'^{(i)}$, $W^{(i)}$, and $Z^{(i)}$, and the random challenge scalars $\beta^{(i)}$ and $\gamma^{(i)}$.
4. To _fold_ lookups together, we take a random linear combination of all the associated vectors $A^{(i)}, A'^{(i)}, S^{(i)}, S'^{(i)}, W^{(i)}, Z^{(i)}$, as well as the challenges $\beta^{(i)}$ and $\gamma^{(i)}$.
5. Verifying the folded lookup requires a bit of algebra with the polynomial constraints, since nonlinear polynomials do not play well with linear combinations.
:::success
### An example
Suppose we want to verify that $3, 7, 3, 5$ are odd numbers, while $6, 4, 4, 4$ are even.
|$A^{(1)}$|$S^{(1)}$||$A^{(2)}$|$S^{(2)}$|
|-|-|-|-|-|
|3|1||6|2|
|7|3||4|4|
|3|5||4|6|
|5|7||4|8|
First of all, the prover computes permutations $A'^{(1)}, S'^{(1)}, A'^{(2)}, S'^{(2)}$ such that each entry of $A'^{(1)}$ is equal either to the previous entry of $A'^{(1)}$ or to the corresponding entry of $S'^{(1)}$, and similarly for $A'^{(2)}$ and $S'^{(2)}$. Here's one way to do this:
|$A'^{(1)}$|$S'^{(1)}$||$A'^{(2)}$|$S'^{(2)}$|
|-|-|-|-|-|
|3|3||4|4|
|3|1||4|8|
|5|5||4|2|
|7|7||6|6|
You can check that:
- $A'^{(1)} = (3, 3, 5, 7)$ is a permutation of $A^{(1)} = (3, 7, 3, 5)$, and similarly for the other three columns.
- Each entry of $A'^{(1)}$ is either the same as the entry of $S'^{(1)}$ next to it, or a repeat of the previous entry of $A'^{(1)}$. The first entry $3$, the $5$ and the $7$ are all the same as the adjacent entries of $S'^{(1)}$, while the second $3$ is a repeat of the first.
- The same holds for $A'^{(2)}$.
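For concreteness, here is one way to build such permuted columns (a Python sketch of the idea, not Halo2's actual implementation; it assumes both columns have the same length $n$, as in Halo2): sort $A$, align a matching table entry with the first entry of each run of equal values, and fill the remaining slots of $S'$ with the unused table entries.
```python
from collections import Counter

def permute_for_lookup(A, S):
    """One way (a sketch) to build permutations A', S' of A and S such that
    each a'_j equals either a'_{j-1} or s'_j. Assumes len(A) == len(S)."""
    A_perm = sorted(A)                         # group equal values of A together
    leftover = Counter(S)                      # table entries not yet placed into S'
    S_perm = [None] * len(S)
    for j, a in enumerate(A_perm):
        if j == 0 or a != A_perm[j - 1]:       # first entry of each run of equal values
            assert leftover[a] > 0, "lookup fails: value missing from the table"
            leftover[a] -= 1
            S_perm[j] = a                      # align s'_j with a'_j
    rest = iter(leftover.elements())           # unused table entries, in any order
    S_perm = [s if s is not None else next(rest) for s in S_perm]
    return A_perm, S_perm

print(permute_for_lookup([3, 7, 3, 5], [1, 3, 5, 7]))   # ([3, 3, 5, 7], [3, 1, 5, 7])
print(permute_for_lookup([6, 4, 4, 4], [2, 4, 6, 8]))   # ([4, 4, 4, 6], [4, 2, 8, 6])
```
The first call reproduces $A'^{(1)}$ and $S'^{(1)}$ from the table above; the second produces a different, but equally valid, ordering of the leftover table entries than the one shown for $S'^{(2)}$.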
Next, the prover sends the verifier commitments to all eight vectors above. The verifier replies with random challenges $\beta^{(1)}, \gamma^{(1)}, \beta^{(2)}, \gamma^{(2)}$.
The prover uses the random challenges to compute _grand products_ $Z^{(1)}, W^{(1)}, Z^{(2)}, W^{(2)}$. Since the algebra is messy, we won't show an example here. What's important to know is just that the lookup problem can be translated into checking some polynomial conditions $$f_i(\beta^{(1)}, \gamma^{(1)}, A^{(1)}, S^{(1)}, A'^{(1)}, S'^{(1)}, W^{(1)}, Z^{(1)}) = 0$$ and $$f_i(\beta^{(2)}, \gamma^{(2)}, A^{(2)}, S^{(2)}, A'^{(2)}, S'^{(2)}, W^{(2)}, Z^{(2)}) = 0.$$
So far we have two lookup arguments: one for the odd numbers, and one for the evens. Now let's fold them together. First, the prover will send the verifier commitments to some _cross-terms_ $B_1, B_2, B_3, B_4$. (These are an algebraic artifact related to the nonlinearity of the polynomials $f_1, f_2, f_3, f_4$.)
Then the verifier sends the prover a random challenge $r$. (In our example, let's imagine $r = 100$.)
Finally, the prover computes the linear combinations $A^{(1)} + r A^{(2)}, S^{(1)} + r S^{(2)}$ and so forth.
|$A^{(1)} + r A^{(2)}$|$S^{(1)} + r S^{(2)}$||$A'^{(1)} + r A'^{(2)}$|$S'^{(1)} + r S'^{(2)}$|
|-|-|-|-|-|
|603|201||403|403|
|407|403||403|801|
|403|605||405|205|
|405|807||607|607|
Something strange happened here: The "lookup" and "permutation" relations are no longer satisfied. The number $603$ appears in $A^{(1)} + r A^{(2)}$ but not in $S^{(1)} + r S^{(2)}$. The vector $A'^{(1)} + r A'^{(2)}$ is not a permutation of $A^{(1)} + r A^{(2)}$. It looks like we've thrown out all the nice properties that made the lookup argument work.
But in fact polynomials save the day! The "folded" vectors will still satisfy a polynomial identity that we'll be able to write down, and that's what the prover will verify.
:::
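The folded columns above are easy to reproduce. The following Python sketch uses ordinary integers for readability (in the actual protocol, all of this arithmetic happens in the field $\mathbb{F}$) and confirms the two observations made in the example.
```python
# Columns from the example above, as plain integers.
A1, S1 = [3, 7, 3, 5], [1, 3, 5, 7]          # the "odd" lookup
A2, S2 = [6, 4, 4, 4], [2, 4, 6, 8]          # the "even" lookup
Ap1, Ap2 = [3, 3, 5, 7], [4, 4, 4, 6]        # the permuted columns A'^(1), A'^(2)
r = 100                                      # the folding challenge from the example

def fold(X1, X2):
    return [x1 + r * x2 for x1, x2 in zip(X1, X2)]

A_folded, S_folded, Ap_folded = fold(A1, A2), fold(S1, S2), fold(Ap1, Ap2)
print(A_folded)                              # [603, 407, 403, 405]
print(S_folded)                              # [201, 403, 605, 807]
print(603 in S_folded)                       # False: the lookup relation is broken
print(sorted(Ap_folded) == sorted(A_folded)) # False: no longer a permutation either
```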
## The Procedure
### Relaxed AIRs
To start, we introduce the main object, a _relaxed AIR_, in the style of Sangria. The main idea of a relaxed AIR, like the relaxed R1CS analogue from Nova, is a polynomial constraint that involves a "slack term." While we can do this for [general custom gates](https://hackmd.io/@aardvark/Hk5UtwDl2) following the outline of Sangria, for this presentation we will specialize to lookups.
In the Halo2 setup, the prover $\mathcal{P}$ starts with $(A, S)$ and ends up with vectors $(A, S, A', S', Z, W)$, using 2 scalars $\beta$ and $\gamma$ supplied by the verifier $\mathcal{V}$ (or Fiat-Shamir) to construct the grand products $Z$ and $W$. The prover $\mathcal{P}$ then needs to prove that the following 8 equations hold:
1. $(1 - Q^{blind} - Q^{last}) \cdot \left ( Z[-1] (A' + \beta) - Z (A + \beta) \right ) = 0$
2. $(1 - Q^{blind} - Q^{last}) \cdot \left ( W[-1] (S' + \gamma) - W (S + \gamma) \right ) = 0$
3. $Q^{last} \cdot (Z^2 - Z) = 0$
4. $Q^{last} \cdot (W^2 - W) = 0$
5. $(1 - Q^{blind} - Q^{last}) (A' - S') (A' - A'[-1]) = 0$
6. $Q^0 \cdot (A' - S') = 0$
7. $Q^0 \cdot (Z - 1) = 0$
8. $Q^0 \cdot (W - 1) = 0$.
Call these equations $f_1 = 0$ through $f_8 = 0$ respectively. In these equations, the $Q^*$ vectors are constant and public. Just like in [Halo2](https://zcash.github.io/halo2/design/proving-system/lookup.html#zero-knowledge-adjustment), they are used to implement zero knowledge.
- Let $t=2$. (In general, $t$ is the maximum number of distinct "shifts" that occur in any of the polynomial constraints. Since we have $Z$ and $Z[-1]$ in the same constraint -- meaning the polynomial $f_{i, j}$ involves both $z_j$ and $z_{j-1}$ -- we take $t=2$ for this protocol.)
- To implement zero knowledge, only the first $n-t$ rows of the vectors $A, S, \ldots$ will participate in the lookup. The last $t$ rows will be chosen by the prover $\mathcal{P}$ at random.
- $Q^0$: $q^0_0 = 1$ and $q^0_i = 0$ for all $i \neq 0$. (In the Halo2 writeup, this $Q^0$ is denoted $\ell_0$ and called a _Lagrange basis polynomial_ -- which of course is what $Q^0$ becomes when you encode a column as a polynomial.)
- $Q^{blind}$: $q^{blind}_i = 1$ for $n-t \leq i \leq n-1$, and $0$ otherwise (i.e. $Q^{blind}$ selects the last $t$ blinding rows).
- $Q^{last}$: $q^{last}_i = 1$ for $i = n-t-1$, and $0$ otherwise (i.e. $Q^{last}$ selects the $(n-t-1)$-st row, the last row before the "blind" rows).
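For reference, here is a small Python sketch of these three selector columns for given $n$ and $t$, with rows indexed $0, \ldots, n-1$ as in the definitions above.
```python
def selector_columns(n, t=2):
    """The three fixed selector columns Q^0, Q^blind, Q^last (a sketch)."""
    q0     = [1 if i == 0 else 0 for i in range(n)]           # selects row 0
    qblind = [1 if i >= n - t else 0 for i in range(n)]       # selects the last t (blinding) rows
    qlast  = [1 if i == n - t - 1 else 0 for i in range(n)]   # the last row before the blinding rows
    return q0, qblind, qlast

print(selector_columns(6))   # ([1,0,0,0,0,0], [0,0,0,0,1,1], [0,0,0,1,0,0])
```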
The vector-equation notation in $f_1, \ldots, f_8$ is very compressed. A couple of points:
1. Each polynomial equation $f_i = 0$ corresponds to $n$ polynomial equations of the same form
$$f_{i, j} (a_1, \ldots, a_n, a'_1, \ldots, a'_n, \ldots, w_1, \ldots, w_n, \beta, \gamma) = 0,$$
where, for each capital letter $X \in \{A, A', S, S', Z, W\}$ appearing in $f_i$, we substitute into $f_{i, j}$ the $j$-th coordinate $x_j$ of the vector $X$. The coordinates of the $Q^*$ vectors, the $q^*_i$'s, do not appear as arguments since they are constants.
2. We use $Z[\pm 1]$ to mean that when we unpack equation $f_{i, j}$, instead of substituting $z_{j}$, we substitute $z_{j \pm 1}$.
3. For each vector $X \in \{A, A', S, S', Z, W\}$, each $f_{i, j}$ will only depend on entries $X_j$, and possibly $X_{j-1}$.
In any single $f_{i, j}$, most of the arguments would not be used. Examples:
1. Consider the third equation $f_3 = Q^{last} \cdot (Z^2 - Z) = 0.$ This is shorthand for $n$ constraints of the form $f_{3, i}(\cdots) = q^{last}_i (z_i^2 - z_i) = 0$ for each $i$, where $z_i$ is the $i$-th coordinate of $Z$.
2. The first equation $f_1$ corresponds to $n$ constraints $f_{1, j}$, where
$$f_{1, j} (\cdots) = (1 - q^{blind}_j - q^{last}_j)\left ( z_{j-1} (a'_j + \beta) - z_j (a_j + \beta) \right ) = 0.$$
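To make the row-by-row unpacking concrete, here is a Python sketch that evaluates the $n$ constraints $f_{1, j}$. The modulus `p` is a stand-in for the large field $\mathbb{F}$, and purely for illustration the $[-1]$ shift is taken cyclically at $j = 0$ (conveniently, Python's `Z[j - 1]` already wraps around). An honest (non-relaxed) instance should make every returned entry $0$; the relaxed version instead compares these residuals against $E_1$.
```python
p = 2**61 - 1   # stand-in prime for the large field F

def f1_rows(qblind, qlast, A, Ap, Z, beta):
    """Row-by-row unpacking of equation 1 (a sketch); Ap plays the role of A'."""
    n = len(A)
    return [((1 - qblind[j] - qlast[j])
             * (Z[j - 1] * (Ap[j] + beta) - Z[j] * (A[j] + beta))) % p
            for j in range(n)]
```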
For lookups, we define the following "relaxed versions" of the equations above:
1. $(1 - Q^{blind} - Q^{last}) \cdot \left ( Z[-1] (A' + \beta) - Z (A + \beta) \right ) = E_1$
2. $(1 - Q^{blind} - Q^{last}) \cdot \left ( W[-1] (S' + \gamma) - W (S + \gamma) \right ) = E_2$
3. $Q^{last} \cdot (Z^2 - u Z) = E_3$
4. $Q^{last} \cdot (W^2 - u W) = E_4$
5. $(1 - Q^{blind} - Q^{last}) (A' - S') (A' - A'[-1]) = E_5$
6. $Q^0 \cdot (A' - S') = 0$
7. $Q^0 \cdot (Z - u) = 0$
8. $Q^0 \cdot (W - u) = 0$.
We define a _relaxed lookup instance_ to be a tuple $(u, \beta, \gamma, A, S, A', S', Z, W, E)$ satisfying the above 8 equations (where $E$ is shorthand for the five vectors $E_1, E_2, E_3, E_4, E_5$). (In contrast to a "standard" lookup instance, which contains only the data $(\beta, \gamma, A, S, A', S', Z, W)$, a relaxed lookup instance also contains the scaling factor $u$ and slack vectors $E_i$.)
Whenever we have two instances $(u^{(1)}, \beta^{(1)}, \gamma^{(1)}, A^{(1)}, S^{(1)}, A'^{(1)}, S'^{(1)}, Z^{(1)}, W^{(1)}, E^{(1)})$ and $(u^{(2)}, \beta^{(2)}, \gamma^{(2)}, A^{(2)}, S^{(2)}, A'^{(2)}, S'^{(2)}, Z^{(2)}, W^{(2)}, E^{(2)})$ to fold together, we also define 5 "cross terms"
\begin{eqnarray}
B_{1} & = & (1 - Q^{blind} - Q^{last}) \cdot \\
& & \big( Z^{(1)}[-1] (A'^{(2)} + \beta^{(2)}) + Z^{(2)}[-1] (A'^{(1)} + \beta^{(1)}) \\
& & - Z^{(1)} (A^{(2)} + \beta^{(2)}) - Z^{(2)} (A^{(1)} + \beta^{(1)}) \big) \\
B_{2} & = & (1 - Q^{blind} - Q^{last}) \cdot \\
& & \big( W^{(1)}[-1] (S'^{(2)} + \gamma^{(2)}) + W^{(2)}[-1] (S'^{(1)} + \gamma^{(1)}) \\
& & - W^{(1)} (S^{(2)} + \gamma^{(2)}) - W^{(2)} (S^{(1)} + \gamma^{(1)}) \big) \\
B_{3} & = & Q^{last} \cdot (2 Z^{(1)} Z^{(2)} - u^{(1)} Z^{(2)} - u^{(2)} Z^{(1)}) \\
B_{4} & = & Q^{last} \cdot (2 W^{(1)} W^{(2)} - u^{(1)} W^{(2)} - u^{(2)} W^{(1)}) \\
B_5 & = & (1 - Q^{blind} - Q^{last}) \cdot \\
& & (2 A'^{(1)} A'^{(2)} - S'^{(1)} A'^{(2)} - S'^{(2)} A'^{(1)} - A'^{(1)} A'^{(2)}[-1] - A'^{(2)} A'^{(1)}[-1] + \\
& & S'^{(1)} A'^{(2)}[-1] + S'^{(2)} A'^{(1)}[-1])
\end{eqnarray}
corresponding to the first $5$ relaxed equations. (This is because those five equations are quadratic, while the remaining three are linear; we explain this a bit more below.)
We now explain the new notation. First, $u$ is a "homogenizing" scalar, considered as an additional input to each $f_j$, so each relaxed $f_j$ is actually $n$ constraints of the form
$$f_{j, i} (a_1, \ldots, a_n, a'_1, \ldots, a'_n, \ldots, w_1, \ldots, w_n, u, \beta, \gamma) = e_{j, i},$$
where:
1. each $E_j$ is a vector $(e_{j, 1}, \ldots, e_{j, n})$;
2. for $j \geq 6$, we have $E_j = (0, \ldots, 0)$.
For example, recall that $f_3$ used to encode $n$ constraints of form
$$f_{3, i} = q_i^{last} (z_i^2 - z_i) = 0;$$
the new relaxed $f_3$ would instead encode $n$ constraints of the form
$$f_{3, i} = q_i^{last} (z_i^2 - z_iu) = e_{3, i}.$$
It remains to understand the roles of the cross terms $B_i$ and the slack terms $E_i$, which is the key insight in Nova / Sangria. The main idea here is that we really want the $f_i$ equations to satisfy constraints of the form
$$f_i(X_1 + rX_2) = f_i(X_1) + r f_i(X_2),$$
where $X_1$ and $X_2$ are shorthand meaning "all the arguments." Concretely, this expression is $n$ equalities of the form
\begin{align} & f_{i, j}(a_{1,1} + ra_{1, 2}, a_{2, 1} + ra_{2,2}, \ldots, z_{n, 1} + rz_{n, 2}, u_1 + ru_2, \beta_1 + r\beta_2, \gamma_1 + r\gamma_2) \\
= & f_{i, j}(a_{1,1}, a_{2, 1}, \ldots, z_{n, 1}, u_1, \beta_1, \gamma_1) + rf_{i, j}(a_{1, 2}, a_{2,2}, \ldots, z_{n, 2}, u_2, \beta_2, \gamma_2),
\end{align}
where the two arguments $X_1$ and $X_2$ encode $(A_1, S_1, \ldots)$ and $(A_2, S_2, \ldots)$, with, for example, $A_1 = (a_{1, 1}, \ldots, a_{n, 1})$ and $A_2 = (a_{1, 2}, \ldots, a_{n, 2})$.
It turns out that the last $3$ polynomials $f_6$ through $f_8$ already satisfy this relationship, because they are linear. Because $f_1$ through $f_5$ are not linear, the relationship does not hold for them. However, we can introduce the cross terms $B_i$ to get:
$$ f_i(X_1 + rX_2) = f_i(X_1) + r^2 f_i(X_2) + r B_{i}(X_1, X_2).$$
Each $B_i$ corresponds to $n$ equations, and takes all the inputs from both $X_1$ and $X_2$. For example, $B_3$ corresponds to, as $j$ runs from $1$ to $n$,
$$ B_{3, j}(a_{1, 1}, \ldots, z_{n, 1}, a_{1, 2}, \ldots, z_{n, 2}, u_1, u_2, \beta_1, \beta_2, \gamma_1, \gamma_2) = q^{last}_j (2z_{j, 1}z_{j, 2} - u_1 z_{j, 2} - u_2 z_{j, 1}).$$
To see this explicitly with $f_3 = Q^{last} \cdot (Z^2 - uZ)$ (dropping the constant factor $Q^{last}$ below for readability), we compute
$\begin{align}
f_{3}(X_1 + rX_2) & = (Z_1 + rZ_2)^2 - (u_1 + ru_2) (Z_1 + rZ_2) \\
& = (Z_1^2 + 2r Z_1 Z_2 + r^2Z_2^2) - u_1Z_1 - ru_1Z_2 - ru_2Z_1 - r^2 u_2 Z_2 \\
& = (Z_1^2 - u_1Z_1) + r^2(Z_2^2 - u_2Z_2) + r(2Z_1Z_2 - u_1Z_2 - u_2Z_1) \\
& = f_{3}(X_1) + r^2 f_{3}(X_2) + r B_{3}(X_1, X_2),
\end{align}$
as desired. In this way, each $B_i$ acts as an "error term" arising from the nonlinearity of $f_i$, and each $E_i$ accumulates the errors contributed by the $B_i$'s across folding steps.
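As a quick sanity check, the identity for $f_3$ can be verified numerically. The Python sketch below works over a stand-in prime modulus; since the identity is a polynomial identity, it holds for any choice of values.
```python
import random

p = 2**61 - 1   # stand-in prime for the large field F

def f3(qlast, Z, u):
    """Row-wise relaxed f_3: q^last_j * (z_j^2 - u * z_j), reduced mod p."""
    return [(q * (z * z - u * z)) % p for q, z in zip(qlast, Z)]

def B3(qlast, Z1, u1, Z2, u2):
    """Row-wise cross term B_3: q^last_j * (2 z_{j,1} z_{j,2} - u_1 z_{j,2} - u_2 z_{j,1})."""
    return [(q * (2 * a * b - u1 * b - u2 * a)) % p for q, a, b in zip(qlast, Z1, Z2)]

n = 8
qlast = [1 if j == n - 3 else 0 for j in range(n)]    # Q^last with t = 2
Z1 = [random.randrange(p) for _ in range(n)]
Z2 = [random.randrange(p) for _ in range(n)]
u1, u2, r = (random.randrange(p) for _ in range(3))

folded_Z = [(a + r * b) % p for a, b in zip(Z1, Z2)]
lhs = f3(qlast, folded_Z, (u1 + r * u2) % p)
rhs = [(x + r * r * y + r * w) % p
       for x, y, w in zip(f3(qlast, Z1, u1), f3(qlast, Z2, u2), B3(qlast, Z1, u1, Z2, u2))]
print(lhs == rhs)   # True: f_3(X_1 + r X_2) = f_3(X_1) + r^2 f_3(X_2) + r B_3(X_1, X_2)
```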
### Details of the Protocol (Halo2 lookups)
We first define a "one-round protocol" as follows:
:::success
**Protocol 1**:
INPUT (to $\mathcal{P}$): 2 relaxed instance-witness pairs $I^{(i)} = (u^{(i)}, \beta^{(i)}, \gamma^{(i)}, A^{(i)}, S^{(i)}, (A')^{(i)}, (S')^{(i)}, Z^{(i)}, W^{(i)}, E^{(i)})$, $i \in \{1, 2\}$.
OUTPUT (of $\mathcal{P}$): 1 relaxed instance-witness pair $I = (u, \beta, \gamma, A, S, A', S', Z, W, E)$.
INPUT (to $\mathcal{V}$): 2 relaxed committed instances $\overline{I^{(i)}} = (u^{(i)}, \beta^{(i)}, \gamma^{(i)},\overline{A^{(i)}}, \overline{S^{(i)}}, \overline{(A')^{(i)}}, \overline{(S')^{(i)}}, \overline{Z^{(i)}}, \overline{W^{(i)}}, \overline{E^{(i)}})$, $i \in \{1, 2\}$.
OUTPUT (of $\mathcal{V}$): 1 relaxed committed instance $\overline{I} = (u, \beta, \gamma, \overline{A}, \overline{S}, \overline{A'}, \overline{S'}, \overline{Z}, \overline{W}, \overline{E})$.
1. Let $T = \{Z, W, E\}$. $\mathcal{P}$ computes all commitments $\{\overline{X^{(1)}}, \overline{X^{(2)}}\}_{X \in T}$ and sends them (including opening randomness) to $\mathcal{V}$.
- In detail, $\mathcal{P}$ includes values $\{\rho_{X^{(1)}}, \rho_{X^{(2)}}\}_{X \in T}$, where $\rho_{X^{(i)}}$ is the commitment randomness for $\overline{X^{(i)}}$; that is, for all $X \in T$ and $i \in \{1, 2\}$, $\overline{X^{(i)}} = \operatorname{Com}(pp, X^{(i)}, \rho_{X^{(i)}})$.
- **Important**: we assume commitments for $A, S, A', S'$ have already been given to $\mathcal{V}$ (this will be evident once we get to Protocol 2).
2. $\mathcal{P}$ computes the _cross terms_ $B_{j}$ for $j \in \{1, \ldots, 5\}$, and commits to all the $\overline{B_{j}}$ as well.
3. $\mathcal{V}$ samples a random _challenge_ $r \in \mathbb{F}$, and sends $r$ to $\mathcal{P}$.
4. For each $X \in \{A, S, A', S', Z, W, u, \beta, \gamma\}$, $\mathcal{P}$ computes the "folded" $X = X^{(1)} + r X^{(2)}.$ For each $j \in \{1, \ldots, 5\}$, $\mathcal{P}$ computes $E_j = E_{j}^{(1)} + r^2 E_{j}^{(2)} + r B_{j},$ obtaining $E = (E_1, \ldots, E_5)$.
- For example, $\beta = \beta^{(1)} + r \beta^{(2)}$ and $Z = Z^{(1)} + rZ^{(2)}$.
- Recall that $E^{(1)} = (E^{(1)}_{1}, \ldots, E^{(1)}_{5})$ and $E^{(2)} = (E^{(2)}_{1}, \ldots, E^{(2)}_{5})$. Each of the $E^{(i)}_j$'s is itself a vector.
5. $\mathcal{P}$ returns $I = (u, \beta, \gamma, A, S, A', S', Z, W, E)$.
6. For each $X \in \{A, S, A', S', Z, W\}$, $\mathcal{V}$ computes the "folded" commitment $\overline{X} = \overline{X^{(1)}} + r \overline{X^{(2)}}$, and folds the scalars $u = u^{(1)} + r u^{(2)}$, $\beta = \beta^{(1)} + r \beta^{(2)}$, $\gamma = \gamma^{(1)} + r \gamma^{(2)}$. For each $j \in \{1, \ldots, 5\}$, $\mathcal{V}$ computes $\overline{E_j} = \overline{E_{j}^{(1)}} + r^2 \overline{E_{j}^{(2)}} + r \overline{B_{j}},$ obtaining $\overline{E} = (\overline{E_1}, \ldots, \overline{E_5})$.
7. $\mathcal{V}$ returns $\overline{I} = (u, \beta, \gamma, \overline{A}, \overline{S}, \overline{A'}, \overline{S'}, \overline{Z}, \overline{W}, \overline{E})$.
:::
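Here is a minimal Python sketch of the prover's folding arithmetic in step 4, assuming the cross terms $B_1, \ldots, B_5$ have already been computed. The dictionary layout and key names (e.g. `'Ap'` and `'Sp'` for $A'$ and $S'$) are illustrative choices, not part of the protocol, and `p` again stands in for $\mathbb{F}$.
```python
p = 2**61 - 1   # stand-in prime for the field F

def fold_vectors(X1, X2, r):
    return [(a + r * b) % p for a, b in zip(X1, X2)]

def fold_instances(I1, I2, B, r):
    """Prover-side folding step of Protocol 1 (a sketch).
    I1, I2: dicts with scalars 'u', 'beta', 'gamma', vectors 'A', 'S', 'Ap',
            'Sp', 'Z', 'W', and 'E' (a list of the five slack vectors).
    B: the five cross-term vectors B_1, ..., B_5, already computed."""
    folded = {}
    for k in ('u', 'beta', 'gamma'):
        folded[k] = (I1[k] + r * I2[k]) % p            # scalars fold linearly
    for k in ('A', 'S', 'Ap', 'Sp', 'Z', 'W'):
        folded[k] = fold_vectors(I1[k], I2[k], r)      # witness columns fold linearly
    folded['E'] = [
        [(e1 + r * r * e2 + r * b) % p for e1, e2, b in zip(E1j, E2j, Bj)]
        for E1j, E2j, Bj in zip(I1['E'], I2['E'], B)   # E_j = E_j^(1) + r^2 E_j^(2) + r B_j
    ]
    return folded
```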
Protocol 1 encodes a single folding step. We now give the entire protocol:
:::success
**Protocol 2**
INPUT (to $\mathcal{P}$): $N$ lookup instances $\widetilde{I^{(i)}} = (A^{(i)}, S^{(i)})$, for $i \in \{1, \ldots, N\}$.
OUTPUT (of $\mathcal{P}$): 1 relaxed lookup instance-witness pair $I^{cml} = (u, \beta, \gamma, A, S, A', S', Z, W, E)$.
INPUT (to $\mathcal{V}$): $N$ relaxed committed instances $\overline{I^{(i)}} = (u^{(i)}, \beta^{(i)}, \gamma^{(i)},\overline{A^{(i)}}, \overline{S^{(i)}}, \overline{(A')^{(i)}}, \overline{(S')^{(i)}}, \overline{Z^{(i)}}, \overline{W^{(i)}}, \overline{E^{(i)}})$.
OUTPUT (of $\mathcal{V}$): 1 folded relaxed committed instance $\overline{I^{cml}} = (u, \beta, \gamma,\overline{A}, \overline{S}, \overline{A'}, \overline{S'}, \overline{Z}, \overline{W}, \overline{E})$.
1. To initialize folding, $\mathcal{P}$ initializes a "cumulative lookup instance" $I^{cml}$ where $u, \beta, \gamma, A, \ldots, E$ are all equal to zero.
2. $\mathcal{V}$ initializes a cumulative committed instance $\overline{I^{cml}}$ where $u, \beta, \gamma, \overline{A}, \ldots, \overline{E}$ are all equal to zero.
3. When we fold in a new lookup instance $\widetilde{I^{(i)}} = (A^{(i)}, S^{(i)})$, $\mathcal{P}$ constructs a *relaxed* lookup instance $I^{(i)} = (u^{(i)}, \beta^{(i)}, \gamma^{(i)}, A^{(i)}, S^{(i)}, \ldots, E^{(i)})$ by:
- constructing permutations $(A')^{(i)}$ and $(S')^{(i)}$ as in Halo2,
- sending commitments (including opening randomness) of $A^{(i)}, A'^{(i)}, S^{(i)}, S'^{(i)}$ to $\mathcal{V}$,
- requesting random challenges $\beta^{(i)}$ and $\gamma^{(i)}$ from $\mathcal{V}$ (or Fiat-Shamir) for that round,
- constructing $Z^{(i)}$ and $W^{(i)}$ from $((A')^{(i)}, (S')^{(i)}, \beta^{(i)}, \gamma^{(i)})$ as in Halo2,
- setting $u^{(i)} = 1$ and $E_j^{(i)} = (0, \ldots, 0)$ for all $j$.
4. $\mathcal{P}$ and $\mathcal{V}$ run Protocol 1.
5. $\mathcal{P}$ overwrites
$$ I^{cml} \leftarrow \texttt{Protocol 1}_{\mathcal{P}}(I^{cml}, I^{(i)}),$$
that is: apply Protocol 1, using the current $I^{cml}$ as the first input and the new relaxed instance $I^{(i)}$ as the second input. During each step, we need a new $r = r^{(i)}$ for our "folding randomness".
6. During Protocol 1, $\mathcal{V}$ builds the committed relaxed instance
$\overline{I^{(i)}} = (u^{(i)}, \beta^{(i)}, \gamma^{(i)},\overline{A^{(i)}}, \overline{S^{(i)}}, \overline{(A')^{(i)}}, \overline{(S')^{(i)}}, \overline{Z^{(i)}}, \overline{W^{(i)}}, \overline{E^{(i)}})$
out of commitments sent from $\mathcal{P}$. Using $\overline{I^{(i)}}$, $\mathcal{V}$ overwrites
$$ \overline{I^{cml}} \leftarrow \texttt{Protocol 1}_{\mathcal{V}}(\overline{I^{cml}}, \overline{I^{(i)}}).$$
7. After $N$ steps, the folding is complete. All that remains is for $\mathcal{P}$ to convince $\mathcal{V}$ that the final folded tuple $I^{cml}$ is a legitimate witness for the committed instance $\overline{I^{cml}}$, which is done just as in ordinary Halo2.
:::
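Putting the pieces together, the prover's side of Protocol 2 is essentially a loop over Protocol 1. The sketch below reuses `fold_instances` from above; the helpers `new_relaxed_instance`, `cross_terms`, and `sample_challenge` are hypothetical placeholders for the steps described in the protocol (building $A', S', Z, W$ with fresh $\beta, \gamma$; computing $B_1, \ldots, B_5$; and drawing the folding challenge $r$).
```python
def fold_all(lookups, new_relaxed_instance, cross_terms, sample_challenge):
    """Prover's side of Protocol 2 (a sketch). `lookups` is a list of (A, S) pairs."""
    n = len(lookups[0][0])
    I_cml = {'u': 0, 'beta': 0, 'gamma': 0,
             **{k: [0] * n for k in ('A', 'S', 'Ap', 'Sp', 'Z', 'W')},
             'E': [[0] * n for _ in range(5)]}         # all-zero cumulative instance
    for A, S in lookups:
        I_new = new_relaxed_instance(A, S)             # u = 1, E = 0, as in step 3
        B = cross_terms(I_cml, I_new)                  # cross terms B_1, ..., B_5 for this pair
        r = sample_challenge()                         # verifier / Fiat-Shamir randomness
        I_cml = fold_instances(I_cml, I_new, B, r)     # one run of Protocol 1
    return I_cml
```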
# Comparison with Sangria
## Scope
Sangria outlined an approach to folding custom gates. Our lookup protocol is almost, but not quite, a special case of that procedure, mainly because of the roles of $\beta$ and $\gamma$. Instead, what we have here is a special case of "custom gates with verifier randomness," a slight generalization of Sangria's approach to custom gates.
For completeness, we make explicit many of the ideas in Sangria's outline in our writeup of the [general folding scheme](https://hackmd.io/@aardvark/Hk5UtwDl2).
## Knowledge soundness
This protocol satisfies _knowledge soundness_: a cheating prover cannot convince the verifier to accept a folded proof unless the prover actually knows $N$ satisfying witnesses. We give a proof of knowledge soundness in our [writeup](https://hackmd.io/@aardvark/Hk5UtwDl2).
In outline, the proof (very similar to the proofs in Nova and Sangria) is as follows. The idea is to imagine an _extractor_ that interacts with the prover. The extractor is allowed to rewind the prover to a previous state. In practice, this means the extractor (playing the role of the verifier) can send the prover different challenges $r$ and see how the prover responds. As in Nova and Sangria, by testing enough different values of $r$ and doing a bit of algebra, the extractor can recover the $N$ witnesses that were folded together. And since Halo2 lookups are themselves knowledge sound, once the extractor recovers the $N$ folded witnesses, we know the prover must know $N$ valid lookup witnesses.
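As a toy illustration of that "bit of algebra": if the extractor obtains the same pair of columns folded under two distinct challenges, it can recover both original columns by solving a linear system. The Python sketch below does this for a single column over a stand-in prime; a real extractor must also handle the cross terms and the commitments.
```python
p = 2**61 - 1   # stand-in prime for F

def extract_pair(folded_a, r_a, folded_b, r_b):
    """Recover X1, X2 from X1 + r_a*X2 and X1 + r_b*X2 (distinct challenges)."""
    inv = pow(r_a - r_b, -1, p)                        # (r_a - r_b)^{-1} mod p
    X2 = [((xa - xb) * inv) % p for xa, xb in zip(folded_a, folded_b)]
    X1 = [(xa - r_a * x2) % p for xa, x2 in zip(folded_a, X2)]
    return X1, X2

X1, X2 = [3, 7, 3, 5], [6, 4, 4, 4]                    # the example columns from earlier
r_a, r_b = 100, 7
fold_a = [(x + r_a * y) % p for x, y in zip(X1, X2)]
fold_b = [(x + r_b * y) % p for x, y in zip(X1, X2)]
print(extract_pair(fold_a, r_a, fold_b, r_b))          # ([3, 7, 3, 5], [6, 4, 4, 4])
```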
Our proof of knowledge soundness for the general scheme is only a bit more complicated than Sangria's, partly because the polynomials can have arbitrary degree (although Sangria already sketches this case in Section 3.3) and partly because we have "verifier randomness," as stated above. We have to take care that the verifier-provided randomness $\beta$ and $\gamma$ does not mess things up.