# Let's Labrador

#### Author: Ashutosh Marwah (ash@ingonyma.com)

Labrador [1] is the first practical lattice-based zkSNARK. The protocol generates compact proofs of approximately 50 kB across a wide range of practically relevant R1CS sizes. It has become an important tool in lattice cryptography, already seeing applications in tasks like post-quantum signature aggregation [3]. It also serves as the foundation for Greyhound [2], the most practical lattice-based polynomial commitment scheme to date. This blog explores Labrador's construction from the ground up, starting from a simple baby protocol and progressively building toward the full recursive Labrador protocol.

## Security

Before we go deep into lattice cryptography, it is important to understand how norms are defined for vectors over the ring $\mathbb{Z}_q$. For $\vec{v} = (v_1, \cdots, v_n) \in \mathbb{Z}_q^n$, we define the 2-norm of $\vec{v}$ as: $$\Vert \vec{v} \Vert = \left(\sum_{i=1}^n |v_i|^2\right)^{1/2}$$ where we assume that each $v_i \in \left(-\lceil q/2 \rceil,\, \lceil q/2 \rceil \right]$. The operations on the RHS above are performed viewing each $v_i$ as a real number. Similarly, we can define the infinity norm as: $$\Vert \vec{v} \Vert_{\infty} = \max_{1\leq i\leq n}|v_i|.$$ The Labrador protocol relies on the module short integer solution (MSIS) assumption for its security, which builds on the short integer solution (SIS) assumption. The SIS assumption states that it is computationally hard for a polynomial-time (possibly quantum) adversary to find a nonzero solution $\vec{s}$ to the equation $A\vec{s} = 0$ such that $\Vert \vec{s} \Vert \leq \beta$ for some small $\beta$ (compared to $q$), where $A$ is a random matrix in $\mathbb{Z}_q^{m \times n}$. Module SIS generalizes this problem to polynomial rings. Instead of working with $\mathbb{Z}_q$ matrices, MSIS operates over the ring $\mathcal{R}_q := \mathbb{Z}_q[X]/(X^d + 1)$, where matrix entries are polynomials of degree less than $d$.
The assumption states that for appropriately chosen parameters $(m, n, \beta)$, finding a short solution $\vec{s} \in \mathcal{R}_q^{n}$ to $A\vec{s} = 0$ such that $\Vert \vec{s} \Vert \leq \beta$ remains computationally hard, where $A \in \mathcal{R}_q^{m \times n}$. Here the norm of the $n$-length polynomial vector $\vec{s}$ is computed by viewing it as an $nd$-length vector of $\mathbb{Z}_q$ elements. This assumption can be used to construct one of the most central tools in lattice cryptography, and a key building block of the Labrador protocol: the Ajtai commitment. Suppose MSIS is hard for the parameters $(m, n, \beta)$; then for vectors $\vec{s} \in \mathcal{R}_q^{n}$ of norm at most $\beta/2$, $t := A\vec{s}$ serves as a binding commitment. If a polynomial-time algorithm were able to find two vectors $\vec{s}_1 \neq \vec{s}_2$ that both produced the commitment $t$, then it would also be able to solve MSIS, since $$ A(\vec{s}_1-\vec{s}_2) = A\vec{s}_1 - A\vec{s}_2 = t - t = 0 $$ and $\vec{s}_1-\vec{s}_2\neq 0$ and $\Vert \vec{s}_1-\vec{s}_2 \Vert \leq \beta$. Though we have used the 2-norm above for defining the MSIS problem, the problem also remains hard for a large parameter range when the norm is measured using the infinity norm.

## Labrador constraint system

The witness for Labrador is assumed to be of the form $(\vec{s}_i)_{i=1}^r$ where each $\vec{s}_i \in \mathcal{R}_q^{n}$ (an $n$-length vector of polynomials of degree less than $d$). There are two kinds of constraints in Labrador: equality constraints and const-zero constraints. An equality constraint consists of the polynomials $\{a_{ij}: 1\leq i,j\leq r\} \subseteq \mathcal{R}_q$, the polynomial vectors $\{\vec{\varphi}_i : 1\leq i \leq r\} \subseteq \mathcal{R}_q^n$ and the polynomial $b \in \mathcal{R}_q$. The witness satisfies this constraint if $$ \sum_{i,j=1}^r a_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}_i, \vec{s}_i\rangle + b =0.
$$ Here the inner product between two polynomial vectors is simply $\langle \vec{p}, \vec{q}\rangle = \sum_{i=1}^n p_i(X)\,q_i(X)$, computed in $\mathcal{R}_q$. Similarly, a const-zero constraint consists of the polynomials $\{a_{ij}: 1\leq i,j\leq r\} \subseteq \mathcal{R}_q$, the polynomial vectors $\{\vec{\varphi}_i : 1\leq i \leq r\} \subseteq \mathcal{R}_q^n$ and the ring element $b \in \mathbb{Z}_q$. The witness satisfies this constraint if $$ \text{const}\left(\sum_{i,j=1}^r a_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}_i, \vec{s}_i\rangle + b \right)=0. $$ The $\text{const}$ function above extracts the constant coefficient of a polynomial. The Labrador protocol allows a Prover to show that it holds a witness which satisfies multiple equality and const-zero constraints and also has a small norm, specifically $\sum_{i=1}^r \Vert \vec{s}_i \Vert^2 \leq \beta^2$ for some given $\beta$. Section 6 of the Labrador paper [1] shows that one can reduce binary and $\mathbb{Z}_{2^d +1}$ R1CS constraints to constraints of the above form. We won't go into the details here. For intuition, the fact that the constraints above are quadratic provides sufficient generality to describe the R1CS constraints. Moreover, the binary nature of the witnesses in the R1CS allows for small-norm witnesses for Labrador.

## High level idea

The Labrador protocol works recursively. One can divide it into two parts: the base protocol and the recursive reshaping. The base protocol allows the Prover to prove knowledge of a witness $\{\vec{s}_i : 1\leq i \leq r\} \subseteq \mathcal{R}_q^{n}$ for a collection of equality and const-zero constraints (these can be viewed as the "circuit" for the protocol). If the total witness size of $(\vec{s}_i)_{i=1}^r$ is $N = n r$ polynomials, then the base protocol creates a proof of size $O(N^{2/3})$, which alone isn't particularly small. This is where the recursion comes in.
The relations satisfied by the proof (equivalently, the Verifier's checks for the base proof) are once again linear and quadratic constraints, so they can be represented using the equality constraints above. One can then view the proof as a witness for another set of equality constraints and reapply the Labrador protocol to the appropriate constraints. If one appropriately reshapes the witness and constructs the corresponding problem, then the proof size following this recursion is $O(|P|^{2/3})$, where $|P|$ is the size of the proof at the end of the base protocol. So, with one recursion iteration the proof size becomes $O(N^{4/9})$. Each iteration reduces the proof size rapidly: with only $O(\log\log N)$ iterations the proof shrinks to constant $O(1)$ size. For R1CS sizes relevant in practice, one only needs to recurse the protocol $\sim 7$ times to achieve a proof of $\sim 50$ kB. It is also worth noting here that the Verifier needs to do linear work in this protocol. This is one of the major shortcomings of the protocol.

## Baby Labrador

We will begin with a simple protocol which is at the heart of Labrador. This will enable the Prover to aggregate and prove the constraints. Then, we will modify and improve it to enable recursion.
Let's say the Prover wants to prove that he holds a witness $\{\vec{s}_i : 1\leq i \leq r\} \subseteq \mathcal{R}_q^{n}$ (typically, you can choose $n=N^{2/3}$ and $r = N^{1/3}$, where $N$ is the total witness size) which satisfies the $K$ equality constraints: $$\sum_{i,j=1}^r a^{(k)}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{(k)}_i, \vec{s}_i\rangle + b^{(k)} =0$$ for every $1\leq k \leq K$, the $L$ const-zero constraints: $$\text{const}\left(\sum_{i,j=1}^r a^{(l)}_{ij} \langle \vec{s}_i, \vec{s}_j \rangle + \sum_{i=1}^r \langle \vec{\varphi}^{(l)}_i, \vec{s}_i\rangle + b^{(l)} \right)=0$$ for every $1\leq l \leq L$, and also the norm constraint: $$\sum_{i=1}^r \Vert \vec{s}_i \Vert^2 \leq \beta^2.$$ The $a_{ij}$, $\vec{\varphi}_i$ and $b$ terms for all the constraints are public parameters and, in particular, are held by the Verifier. Operating on these directly is what makes the Verifier linear time. The Prover and the Verifier carry out the following interactive procedure to prove this:

1. **Ajtai commitment to the witness:** For every $1\leq i \leq r$, the Prover sends $\vec{t}_i = A\vec{s}_i$ to the Verifier, where $A$ is a $\kappa \times n$ matrix with elements in the polynomial ring $\mathcal{R}_q$. For this simplified base protocol, we will only use the homomorphic property of these commitments. Later on, we will see that the fact that the Ajtai commitment is linear can be used to write the constraints for the recursion instance as equality constraints. Note that the size of the Prover messages in this step is $\kappa r$ ring elements. $\kappa$ can be chosen to be a constant, so these messages will be sublinear.
1. **Const-zero constraint aggregation:**
   1. The Verifier sends $L$ random challenges $(\psi_l)_{l=1}^L \in_r \mathbb{Z}_q$ to the Prover.
   2.
The Prover aggregates the $L$ const-zero constraints into a single const-zero constraint, simply by taking a random linear combination over them: $$\text{const}\left[\sum_{l=1}^L \psi_l \left( \sum_{i,j=1}^r a^{(l)}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{(l)}_i, \vec{s}_i\rangle + b^{(l)} \right)\right]=0$$ Thus, the aggregated const-zero constraint given by $$a^{\text{agg}}_{ij} := \sum_{l=1}^L \psi_l\ a^{(l)}_{ij} \qquad \vec{\varphi}^{\text{agg}}_i := \sum_{l=1}^L \psi_l\ \vec{\varphi}^{(l)}_i \qquad b^{\text{agg}} := \sum_{l=1}^L \psi_l\ b^{(l)}$$ should also be satisfied by the witness: $$\text{const}\left(\sum_{i,j=1}^r a^{\text{agg}}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{\text{agg}}_i, \vec{s}_i\rangle + b^{\text{agg}} \right)=0.$$ Note that the Verifier can compute this new aggregated constraint himself. At this point the Prover and Verifier simply discard the original const-zero constraints. Intuitively, if one of them was not satisfied, then with high probability the aggregated constraint is not satisfied either.
   3. The Prover then converts this to an equality constraint simply by sending the polynomial: $$B^{\text{agg}} = - \sum_{i,j=1}^r a^{\text{agg}}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle - \sum_{i=1}^r \langle \vec{\varphi}^{\text{agg}}_i , \vec{s}_i\rangle.$$
   4. The Verifier checks that the constant term of this polynomial is correct (equal to $b^{\text{agg}}$). This is why the random challenges were chosen from $\mathbb{Z}_q$ and not $\mathcal{R}_q$.
   5. Both the Prover and the Verifier then add the equality constraint $$\sum_{i,j=1}^r a^{\text{agg}}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{\text{agg}}_i , \vec{s}_i\rangle + B^{\text{agg}} =0$$ to their list of equality constraints. They also discard the newly constructed aggregated const-zero constraint.
To see that this maintains security, note that adding a constraint only makes it more difficult for the Prover to cheat. Further, since the Verifier checks that the constant term of $B^{\text{agg}}$ is computed correctly, we are guaranteed that the aggregated const-zero constraint was satisfied. While in the above we aggregate all the const-zero constraints into a single equality constraint, ensuring security actually requires repeating the steps above a small constant number of times; we skip this here for simplicity.

1. **Equality constraint aggregation:** Now we only have equality constraints left. We will aggregate all of these into a single equality constraint. Let $K'$ be the number of equality constraints ($K' = K+1$ since we added a constraint above).
   1. The Verifier randomly samples challenge polynomials $(\alpha_k)_{k=1}^{K'} \in_r \mathcal{R}_q$ and sends them to the Prover.
   1. The Prover and Verifier aggregate the equality constraints by forming a random linear combination of the existing constraints. If the constraints, for $1\leq k\leq K'$, are: $$\sum_{i,j=1}^r a^{(k)}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{(k)}_i, \vec{s}_i\rangle + b^{(k)} =0$$ then the (final) aggregated equality constraint is given by: $$a^{\text{(fin)}}_{ij} := \sum_{k=1}^{K'} \alpha_k\ a^{(k)}_{ij} \qquad \vec{\varphi}^{\text{(fin)}}_i := \sum_{k=1}^{K'} \alpha_k\ \vec{\varphi}^{(k)}_i \qquad b^{\text{(fin)}} := \sum_{k=1}^{K'} \alpha_k\ b^{(k)}$$ and the following should be satisfied by the witness: $$\sum_{i,j=1}^r a^{\text{(fin)}}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{\text{(fin)}}_i, \vec{s}_i\rangle + b^{\text{(fin)}} =0.$$ Similar to the previous step, it is easy to argue that if the witness failed to satisfy one of the equality constraints, then with high probability it would also fail this aggregated constraint.
Thus, the Prover now simply needs to prove that his witness satisfies this final equality constraint.

1. **Sending "garbage" polynomials:**
   1. For $1\leq i, j\leq r$, the Prover sends the polynomials: $$g_{ij} := \langle\vec{s}_i, \vec{s}_j \rangle \qquad h_{ij} := \langle \vec{\varphi}^{\text{(fin)}}_i, \vec{s}_j \rangle$$ to the Verifier.
   2. The Verifier checks that $$\sum_{i,j=1}^r a^{\text{(fin)}}_{ij} g_{ij} + \sum_{i=1}^r h_{ii} + b^{\text{(fin)}} =0.$$

Now, observe that the Prover simply needs to prove that $g_{ij}$ and $h_{ij}$ were computed honestly. If he can show that, then he has proven that the witness satisfies the constraint. Also, note that the Prover sends $2 r^2$ polynomials in this round. This can be roughly halved by observing that $g_{ij} = g_{ji}$ and by symmetrising $h_{ij}$, so that it too satisfies a similar equation. However, we ignore these optimisations here.

1. **Amortized opening:** The Verifier sends challenge polynomials $(c_i)_{i=1}^r$ to the Prover and asks the Prover to open the commitment $\sum_{i=1}^r c_i \vec{t}_i$ under the Ajtai matrix $A$. The Prover can always open this using the polynomial vector $\sum_{i=1}^r c_i \vec{s}_i$. However, if the challenge polynomials are too "big", a dishonest Prover will also be able to open the commitment to other vectors, because the Ajtai commitment is binding only if the input has a small norm. Thus, the Verifier will need to sample challenges from a special subset $\mathcal{C}\subseteq \mathcal{R}_q$. This subset is specifically chosen so that:
   1. $\vert \mathcal{C} \vert$ is exponentially large ($\sim 2^{128}$).
   2.
For any polynomial $c \in \mathcal{C}$ and any polynomial $r \in \mathcal{R}_q$, we have $$\Vert c \cdot r \Vert \leq T \Vert r \Vert$$ for some small $T \in \mathbb{R}^+$ ($T = 15$ for Labrador). Such challenge spaces often appear in lattice cryptography (for example, in Dilithium), as it is necessary to ensure that vectors combined using challenges retain their small norms. In Labrador, the challenge space $\mathcal{C}$ consists of polynomials with 23 zero coefficients, 31 $\pm 1$ coefficients and 10 $\pm 2$ coefficients. One further needs to use rejection sampling to ensure that the operator norm condition (condition 2 above) is satisfied while sampling from this set. Finally, we have the Prover and Verifier perform the following steps:
   1. The Verifier randomly samples $(c_i)_{i=1}^r$ from the challenge space $\mathcal{C}$ and sends them to the Prover.
   1. The Prover responds with $\vec{z} = \sum_{i=1}^r c_i \vec{s}_i$.
   1. The Verifier checks that $$\Vert \vec{z} \Vert \leq T\beta\sqrt{r} =: \gamma$$ and that the following equations are satisfied: $$A \vec{z} = \sum_{i=1}^r c_i \vec{t}_i \qquad \langle \vec{z}, \vec{z} \rangle = \sum_{i,j=1}^r g_{ij} c_i c_j \qquad \sum_{i=1}^r c_i \langle \vec{\varphi}_i^{\text{(fin)}}, \vec{z} \rangle = \sum_{i,j=1}^r h_{ij} c_i c_j $$

Roughly speaking, the norm bound on $\vec{z}$ and the fact that it opens the commitment to $\sum_{i=1}^r c_i \vec{t}_i$ imply that $\vec{z}$ was produced honestly by the Prover. The other two equations guarantee that the polynomials $g_{ij}$ and $h_{ij}$ were produced honestly. It is easy to check, by plugging the honest $\vec{z}$ into these equations, that the coefficient of $c_i c_j$ on the LHS is the honest value of $g_{ij}$ (respectively $h_{ij}$). Since the $c_i$ are chosen randomly over a large space, if the Prover didn't provide the correct $g_{ij}$ or $h_{ij}$, one of these equations would fail with high probability.
The protocol stated above can be formalised and used to provide a sublinear-size proof that the Prover holds a valid witness. Observe that the Prover's proof consists of the commitments $\{\vec{t}_i: 1\leq i\leq r\}$, the garbage polynomials $\{g_{ij}, h_{ij}: 1\leq i, j \leq r\}$ and the polynomial vector $\vec{z}$. The Verifier checks the conditions in Step 5.3 above for these proof elements. It is not too hard to see that these verification conditions (only quadratic and linear constraints; $c_i$ can be part of the constraint definition since the Verifier knows them already) can be reframed as equality constraints where the witness consists of the above proof elements. This is almost what Labrador does, except that we cannot effectively recurse the protocol at this point, because the norm relaxation required for extracting a valid witness from $\vec{z}$ in the above protocol is too large (something like $O(T \beta r)$ for $\Vert (\vec{s}_i)_{i=1}^r \Vert$). To circumvent this issue, in Labrador the Verifier makes use of a random projection called the "JL projection" to estimate the norm of the witness. He makes sure that the Prover computes this projection honestly by adding some linear constraints to the set of const-zero constraints. Finally, we also need to ensure that the communication per recursion round is small. For this purpose, the Prover will only Ajtai commit to $(\vec{t}_i, g_{ij}, h_{ij})$ in each round.

## Labrador base protocol

We are now ready to describe the base protocol used in Labrador. The proof produced at the end of this protocol can either be sent to the Verifier as a proof or be used as the witness for the next recursion round.
Similar to the protocol above, let's say that the Prover wishes to prove that he holds a witness $\{\vec{s}_i : 1\leq i \leq r\} \subseteq \mathcal{R}_q^{n}$ which satisfies the $K$ equality constraints: $$\sum_{i,j=1}^r a^{(k)}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{(k)}_i, \vec{s}_i\rangle + b^{(k)} =0$$ for every $1\leq k \leq K$, the $L$ const-zero constraints: $$\text{const}\left(\sum_{i,j=1}^r a^{(l)}_{ij} \langle \vec{s}_i, \vec{s}_j \rangle + \sum_{i=1}^r \langle \vec{\varphi}^{(l)}_i, \vec{s}_i\rangle + b^{(l)} \right)=0$$ for every $1\leq l \leq L$, and also the norm constraint: $$\sum_{i=1}^r \Vert \vec{s}_i \Vert^2 \leq \beta^2.$$ For this, the Prover and the Verifier perform the following steps:

1. **Ajtai commit to the witness:** As in the Baby Labrador protocol above, the Prover computes the commitments to $\vec{s}_i$ as $\vec{t}_i = A \vec{s}_i$. However, as pointed out above, we would like to avoid sending all these $\vec{t}_i$ to the Verifier. So, instead, the Prover sends a commitment to these $(\vec{t}_i)_{i=1}^r$. Note that the vectors $\vec{t}_i$ need not have the small norm required for the binding property of Ajtai commitments to hold. For this reason, we need to first _decompose_ each $\vec{t}_i$. A decompose operation takes a base $b$ and a polynomial vector $\vec{p}$ and returns $(\vec{p}^{(0)}, \vec{p}^{(1)}, \ \cdots\ , \vec{p}^{(l-1)})$ such that $$ \vec{p} = \sum_{i=0}^{l-1} b^i \vec{p}^{(i)} $$ and $\Vert \vec{p}^{(i)} \Vert_{\infty} \leq b/2$ (similar to a base-$b$ decomposition). We will call the inverse of this operation, which combines $(\vec{p}^{(0)}, \vec{p}^{(1)}, \ \cdots\ , \vec{p}^{(l-1)})$ to produce $\vec{p}$, the _recompose_ operation. Note that the recompose operation is linear. Let's say that $$ \vec{{t'}} := \text{decompose}((\vec{t}_i)_{i=1}^r, \text{base} = b_1). $$ Then, $\Vert \vec{{t'}} \Vert_{\infty}\leq b_1/2$.
This norm bound is sufficient to ensure binding for the Ajtai commitment. So, the Prover sends the commitment $\vec{u}_1 = B \vec{{t'}}$ to the Verifier. Note that $\vec{u}_1$ has a small constant size.

1. **JL projection:** In this step, the Verifier uses the Johnson-Lindenstrauss (JL) lemma to ensure that the Prover's witness has a small norm. The JL lemma roughly says that for a vector $\vec{v}\in \mathbb{Z}^m$ and a random matrix $\Pi$ of dimensions $256 \times m$ with elements chosen randomly from $\{-1,0,1\}$ with probabilities $P(1)=P(-1)=1/4$ and $P(0)=1/2$, the norm of the constant-size vector $\Pi \vec{v}$ tracks $\Vert \vec{v} \Vert$ (it concentrates around $\sqrt{128}\,\Vert \vec{v} \Vert$). Specifically, $$\text{Pr}\left[\Vert \Pi \vec{v} \Vert \leq \sqrt{30}\Vert \vec{v} \Vert\right] \leq \text{negl}.$$ The matrix $\Pi$ is usually referred to as the "JL projection". We briefly note here that this term is a misnomer: this matrix, in general, is not a projection in the linear-algebraic sense. Here's how the Prover and the Verifier use this lemma:
   1. The Verifier randomly samples a $256 \times rnd$ JL projection $\Pi$ and sends it to the Prover.
   1. Let $\vec{v} \in \mathbb{Z}_q^{rnd}$ be the flattened witness $(\vec{s}_i)_{i=1}^r$. The Prover responds with $\vec{p} = \Pi \vec{v}$.
   1. The Verifier checks that the norm of $\vec{p}$ is small: $\Vert \vec{p}\Vert \leq \sqrt{128}\beta$. If the Prover computed $\vec{p}$ honestly and this check passes, then the Verifier is guaranteed that the entire witness has norm less than $\approx 2\beta$. This is a significant improvement over the $O(T\beta r)$ in the Baby protocol above. The bound $\sqrt{128}\beta$ is chosen so that there is sufficient probability that an honest $\vec{p}$ passes this test (the Prover and Verifier need to repeat this with fresh $\Pi$ until the test succeeds).
   1. Now, we must ensure that the Prover has computed $\vec{p}$ correctly. For this, the Prover and Verifier both add some constraints to those satisfied by the witness.
Simply put, the Prover claims that the witness satisfies the $\mathbb{Z}_q$ equation: $$\Pi\cdot\text{flatten}(\vec{s}_1,\vec{s}_2,\ \cdots\ , \vec{s}_r) = \vec{p}.$$ These are simply $256$ linear constraints on the witness and they can be represented as $256$ const-zero constraints (one for each row), where $a_{ij} = 0$, the $\vec{\varphi}_i$ are composed of elements from the rows of $\Pi$, and $b = -p_k$ for the $k$-th row. We defer the exact forms of these to the paper. The rest of the protocol is almost the same as the Baby protocol above.

1. **Const-zero constraint aggregation:** The Prover and the Verifier aggregate their const-zero constraints into equality constraints exactly as in the Baby protocol above.
1. **Equality constraint aggregation:** They also aggregate their equality constraints into a single constraint of the form: $$\sum_{i,j=1}^r a^{\text{(fin)}}_{ij} \langle \vec{s}_i, \vec{s}_j\rangle + \sum_{i=1}^r \langle \vec{\varphi}^{\text{(fin)}}_i, \vec{s}_i\rangle + b^{\text{(fin)}} =0$$ following the procedure in the Baby protocol.
1. **Sending "garbage" polynomials:** As mentioned earlier, for recursive rounds of Labrador, the Prover doesn't send the garbage polynomials. Instead, he commits to them using an Ajtai commitment. Once again, the norms of $g_{ij}$ and $h_{ij}$ can be large, so they have to be decomposed using suitable bases (say $b_2$ and $b_3$). Let's say that the decomposed versions of these are stored in $\vec{g'}$ and $\vec{h'}$. Then, the Prover commits to these vectors as: $$\vec{u}_2 := C\vec{g'} + D\vec{h'}$$ where $C$ and $D$ are some random public matrices. Since the Verifier doesn't have $g_{ij}$ and $h_{ij}$ in the clear, he defers the final constraint verification.
1. **"Amortized opening":** Similar to the Baby protocol, the Verifier sends the challenge polynomials $(c_i)_{i=1}^r$ from $\mathcal{C}$ to the Prover. However, now the Prover doesn't send $\vec{z}$ as before.
He also doesn't need to commit to $\vec{z}$, because the Verifier already knows that $A\vec{z} = \sum_{i=1}^r c_i \vec{t}_i$. So, in this sense, there is no "opening" yet.

## Recursion

At the end of the base protocol above, the Prover claims knowledge of $(\vec{t'}, \vec{g'}, \vec{h'}, \vec{z})$ such that the following are satisfied:

1. Norm conditions: $$ \begin{align*} &\Vert \vec{z} \Vert \leq \gamma,\quad \Vert \vec{t'} \Vert_{\infty} \leq b_1/2, \quad \Vert \vec{g'} \Vert_{\infty} \leq b_2/2, \quad \Vert \vec{h'} \Vert_{\infty} \leq b_3/2, \end{align*} $$
1. We can recompose and recover: $$ \begin{align*} & \{\vec{t}_{i}: 1\leq i \leq r\} \leftarrow \text{recompose}(\vec{t'}, \text{base} = b_1) \\ & \{{g}_{ij}: 1\leq i,j \leq r\} \leftarrow \text{recompose}(\vec{g'}, \text{base} = b_2) \\ & \{{h}_{ij}: 1\leq i,j \leq r\} \leftarrow \text{recompose}(\vec{h'}, \text{base} = b_3) \end{align*} $$ which satisfy the commitment equations: $$ \begin{align*} &\vec{u}_1 = B \vec{t'} \qquad \vec{u}_2 = C \vec{g'} + D \vec{h'} \qquad A \vec{z} = \sum_{i=1}^r c_i \vec{t}_i \end{align*} $$ and the constraints: $$ \begin{align*} &\sum_{i=1}^r c_i \langle \vec{\varphi}^{\text{(fin)}}_i, \vec{z} \rangle = \sum_{i,j=1}^r h_{ij} c_i c_j \\ &\sum_{i,j=1}^r a^{\text{(fin)}}_{ij} g_{ij} + \sum_{i=1}^r h_{ii} + b^{\text{(fin)}} = 0 \\ &\langle \vec{z}, \vec{z} \rangle = \sum_{i,j=1}^r g_{ij} c_i c_j \end{align*} $$

Our arguments above guarantee that proving knowledge of $(\vec{t'}, \vec{g'}, \vec{h'}, \vec{z})$ is sufficient for the Prover. If the size of these vectors is small enough, the Prover can send them to the Verifier directly. This is the base case for the recursion. On the other hand, observe that the constraints above can be put into the form of equality constraints.
All of them apart from the last one are linear constraints, and the last one is only a quadratic constraint (the recompose operation is linear, and the constraints can be rewritten directly in terms of $\vec{t'}, \vec{g'}, \vec{h'}$). Further, the witness for these constraints, $(\vec{t'}, \vec{g'}, \vec{h'}, \vec{z})$, can be shown to have a small 2-norm (for example, by using that for a dimension-$m$ vector $\vec{v}$, $\Vert \vec{v}\Vert \leq \sqrt{m} \Vert \vec{v} \Vert_{\infty}$). Therefore, one can rewrite the problem of verifying a Labrador base proof as an instance of the original problem and recurse the protocol. A little more elaborately, one flattens the base proof $(\vec{t'}, \vec{g'}, \vec{h'}, \vec{z})$ into an array of polynomials, which is then reshaped into the witness $\{\vec{s}'_i : 1\leq i\leq r'\}$ where each $\vec{s}'_i \in \mathcal{R}_q^{n'}$. The parameters $r'$ and $n'$ are carefully chosen to maximally reduce the proof size in each iteration (typically $n' = \Theta(r'^2)$). Each of the constraints above is rewritten in terms of $\vec{s}'_i$ and the corresponding equality constraint is stored. We note that the Verifier can perform this rearrangement without any help from the Prover. Following this rearrangement, the Prover and Verifier can again perform an iteration of the base protocol.

## Implementation with ICICLE v4

[ICICLE v4](https://github.com/ingonyama-zk/icicle/releases/tag/v4.0.0) introduces support for common lattice primitives. We use these primitives to create a simple and highly readable C++ implementation of the Labrador protocol ([look here](https://github.com/ingonyama-zk/icicle/pull/983)) in a backend-agnostic manner (it can run on both CPU and GPU). In our Labrador implementation, we choose $q = p_{KB}\, p_{BB}$ where $p_{KB}$ is the KoalaBear prime and $p_{BB}$ is the BabyBear prime. Hence, we refer to $q$ as the Baby Koala ring. The choice of these primes for defining $q$ ensures efficient arithmetic and negacyclic NTTs.
This ring is represented by the type `Zq` in the code. Similarly, we use the type `Rq` for the polynomial ring $\mathcal{R}_q$ and `Tq` for the same polynomial ring represented in the NTT domain. Let's look at the C++ APIs for some of the major operations we mentioned above.

1. **Decomposition:** The following code decomposes the vectors $(\vec{t}_i)_{i=1}^r$:

```cpp
// computes the number of digits required to decompose a Zq element with base1
size_t l1 = balanced_decomposition::compute_nof_digits<Zq>(base1);
std::vector<Rq> T_tilde(l1 * r * kappa);
balanced_decomposition::decompose(
  T.data(),        // input
  r * kappa,       // input size
  base1,           // base
  {},              // default config
  T_tilde.data(),  // output
  T_tilde.size()   // output size
);
```

We note here that if the user knows that the output size required by the decomposition is less than that dictated by `compute_nof_digits`, they can use that as the output size. The `balanced_decomposition::recompose` function, which follows the same API as above, can be used to assert that this is the case.

1. **Ajtai commitment:** The following code computes the Ajtai commitment of $\tilde{t}$ using the matrix $B$. We need to transform $\tilde{t}$ to the NTT domain before computing the polynomial matrix product. The matrix $B$ is assumed to be stored in NTT form.

```cpp
// T_tilde_hat = NTT(T_tilde)
ntt(
  T_tilde.data(),     // input
  T_tilde.size(),     // input size
  NTTDir::kForward,   // NTT direction
  {},                 // default config
  T_tilde_hat.data()  // output
);

// compute u1 = T_tilde_hat @ B
matmul(
  T_tilde_hat.data(), // A
  1,                  // A_nof_rows
  T_tilde_hat.size(), // A_nof_cols
  B.data(),           // B
  T_tilde_hat.size(), // B_nof_rows
  kappa1,             // B_nof_cols
  {},                 // default config
  u1.data()           // output
);
```

1. **JL projection:** The JL projection is used in two ways in the Labrador protocol. In the first, the Prover is given a seed and asked to compute the product of the JL projection computed from that seed with a vector.
Here is how it looks in code:

```cpp
// p = Pi * flatten(S)
labrador::jl_projection(
  reinterpret_cast<const Zq*>(S.data()),  // input
  n * r * d,       // input size
  jl_seed.data(),  // seed
  jl_seed.size(),  // seed length
  {},              // default config
  p.data(),        // output
  JL_out           // output size
);
```

Second, we also need to be able to extract the rows of the JL matrix. ICICLE has an API for this as well. This API additionally allows you to conjugate the rows as they are extracted, as required in Labrador.

```cpp
// get the jth row of the JL matrix, conjugated
labrador::get_jl_matrix_rows(
  jl_seed.data(),  // seed
  jl_seed.size(),  // seed length
  r * n,           // row_size
  j,               // row_index
  1,               // num_rows
  true,            // conjugate
  {},              // default config
  Q_j.data());     // output
```

1. **Norm bounds:** ICICLE also provides APIs to check whether the 2-norm and the infinity norm of $\mathbb{Z}_q$ vectors are bounded by some fixed bound:

```cpp
// JL_check = || p || < sqrt(JL_out / 2) * beta
norm::check_norm_bound(
  p.data(),                           // input
  JL_out,                             // input size
  eNormType::L2,                      // norm type: L2 or LInfinity
  uint64_t(sqrt(JL_out / 2) * beta),  // norm bound
  {},                                 // default config
  &JL_check                           // store result
);
```

1. **Challenge space sampling:** Finally, ICICLE provides an API to sample Labrador-like challenge space polynomials efficiently.

```cpp
sample_challenge_space_polynomials(
  seed,             // seed
  seed_len,         // seed length
  r,                // number of polynomials to sample
  31,               // number of ±1s
  10,               // number of ±2s
  OP_NORM_BOUND,    // operator norm bound
  {},               // default config
  challenge.data()  // output
);
```

To improve efficiency, the operator norm check is fused with the sampling. For protocols that do not require an operator norm bound, one can simply set the norm bound above to 0. For an overview of ICICLE APIs in Rust, view our documentation [here](https://dev.ingonyama.com/api/rust-bindings/lattice/lattice-snarks).

## References

[1] Beullens, W., Seiler, G. (2023).
LaBRADOR: Compact Proofs for R1CS from Module-SIS. In: Handschuh, H., Lysyanskaya, A. (eds) Advances in Cryptology – CRYPTO 2023. Lecture Notes in Computer Science, vol 14085. Springer, Cham. https://doi.org/10.1007/978-3-031-38554-4_17

[2] Nguyen, N. K., Seiler, G. (2024). Greyhound: Fast Polynomial Commitments from Lattices. In: Advances in Cryptology – CRYPTO 2024, Proceedings, Part X. Springer-Verlag, Berlin, Heidelberg, 243–275. https://doi.org/10.1007/978-3-031-68403-6_8

[3] Aardal, M. A., Aranha, D. F., Boudgoust, K., Kolby, S., Takahashi, A. (2024). Aggregating Falcon Signatures with LaBRADOR. https://eprint.iacr.org/2024/311