# Introduction to Latticefold

Lattice-based cryptography is a branch of cryptography that builds secure cryptographic primitives using the mathematics of lattices: discrete, grid-like structures in high-dimensional vector spaces. It is particularly notable as a strong candidate for post-quantum cryptography, i.e., cryptographic systems secure against both classical and quantum computers.

Some SNARKs yield the entire proof at once, while others split the task of constructing a proof into small steps and prove each step separately. The latter approach is called IVC (incrementally verifiable computation). IVC reduces the memory consumption of a SNARK; however, it requires applying a SNARK verifier to the statement being proven at every step, which introduces additional overhead. Folding mitigates this issue by merging the SNARK verification work at each step into the SNARK verification work for all previous steps at once.

There exist many folding schemes, such as HyperNova and ProtoStar. However, most of them are not quantum secure. They also require a large finite field (256 bits), which makes arithmetic operations quite costly. This post is a gentle introduction to [Latticefold](https://eprint.iacr.org/2024/257.pdf), a novel folding scheme that is quantum secure and can operate over a small field (64 bits). Credit goes to the Nethermind research team for the Latticefold presentation given at the [ZKProof](https://zkproof.org/) conference in March 2025.

## Background

### Folding

Folding reduces the task of proving two instance-witness pairs to proving one instance-witness pair:

### $(w_1, x_1) \in R$ and $(w_2, x_2) \in R$ are folded into $(w_3, x_3) \in R$

**Statement 1**: If the witness-instance pair $(w_3, x_3)$ is valid, then the initial pairs $(w_1, x_1)$ and $(w_2, x_2)$ are valid as well.

* [Commitments](https://hackmd.io/VpgY5zopToi_lkZRlRF0Bg?both) play a crucial role in folding schemes.
* Instances $x_1, x_2, x_3$ contain a commitment to $w_1, w_2, w_3$, respectively:
$(x_i; w_i) = (x_i', Com(w_i); w_i)$

### $(x_1, Com(w_1); w_1) \in R$ and $(x_2, Com(w_2); w_2) \in R$ are folded into $(x_3, Com(w_3); w_3) \in R$

Usually the commitment is **homomorphic**, $Com(w_1 + w_2) = Com(w_1) * Com(w_2)$, but in [Arc](https://eprint.iacr.org/2024/1731.pdf) this is not the case.

---

### Commitments in folding schemes

Usually the commitment is a Pedersen or KZG scheme (or similar).

Drawbacks:

* the witness lives in a big field (>= 250 bits)
* the commitment is expensive if the committed vector is large
* recursion is expensive: proving that the folding has been performed correctly is complex due to arithmetization in a foreign field

## An alternative: lattice-based commitments

A cyclotomic ring has the form $R = F_q[X] / (f(X))$ = [all polynomials of degree < d], where

* $F_q[X]$ is the ring of polynomials with coefficients in the field $F_q$
* $f(X)$ is a cyclotomic polynomial, e.g., $f(X) = X^{2^t} + 1$

#### NTT isomorphism

The ring $R$ is isomorphic to a product of $\tau$ copies of the field extension $F_{q^t}$:

$R \cong F_{q^t} \times \cdots \times F_{q^t}$ ($\tau$ factors)

where $t$ and $\tau$ depend on $f(X)$ and $q$.

#### Ring packing

Ring packing is relevant for assessing the performance of lattice-based folding.

Goal: pack $\tau$ field elements $a_1, \dots, a_\tau$ into a single ring element $NTT^{-1}(a_1, \dots, a_\tau)$.

The NTT (Number Theoretic Transform) is an analogue of the Fast Fourier Transform (FFT) that works over finite fields. It transforms a polynomial into a pointwise representation where multiplication is faster. After multiplication in the NTT domain, one applies the inverse NTT to come back to the coefficient representation.
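To make ring packing concrete, here is a toy sketch in Python over a deliberately small ring. The modulus $q = 17$ and ring $F_{17}[X]/(X^4 + 1)$ are illustrative choices (here $\tau = 4$, $t = 1$), not the parameters used by Latticefold.

```python
# Toy ring packing via NTT in R = F_17[X]/(X^4 + 1).
# Since 17 ≡ 1 (mod 8), X^4 + 1 splits into linear factors mod 17,
# so R is isomorphic to four copies of F_17 (tau = 4 "slots").
q = 17
ROOTS = [9, 15, 8, 2]  # the primitive 8th roots of unity mod 17 (roots of X^4 + 1)

def ntt(poly):
    """Map a ring element (coefficient list) to its 4 field slots."""
    return [sum(c * pow(r, i, q) for i, c in enumerate(poly)) % q for r in ROOTS]

def inverse_ntt(values):
    """Pack 4 field elements into one ring element (Lagrange interpolation)."""
    coeffs = [0, 0, 0, 0]
    for rj, vj in zip(ROOTS, values):
        num, denom = [1], 1  # build prod_{m != j} (X - r_m)
        for rm in ROOTS:
            if rm == rj:
                continue
            num = [((num[i - 1] if i > 0 else 0)
                    - rm * (num[i] if i < len(num) else 0)) % q
                   for i in range(len(num) + 1)]
            denom = denom * (rj - rm) % q
        scale = vj * pow(denom, -1, q) % q
        for i in range(4):
            coeffs[i] = (coeffs[i] + scale * num[i]) % q
    return coeffs

def ring_mul(f, g):
    """Multiply in R: negacyclic convolution, using X^4 = -1."""
    c = [0] * 4
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            sign = 1 if i + j < 4 else -1
            c[(i + j) % 4] = (c[(i + j) % 4] + sign * fi * gj) % q
    return c

# Packing round-trips, and ring multiplication acts slot-wise:
a, b = [3, 1, 4, 1], [5, 9, 2, 6]
assert ntt(inverse_ntt(a)) == a
assert ntt(ring_mul(inverse_ntt(a), inverse_ntt(b))) == [x * y % q for x, y in zip(a, b)]
```

The last assertion is exactly the packing property: one ring multiplication performs $\tau$ field multiplications at once.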
Hence, each ring element encodes $\tau$ field elements.

### Ajtai commitment

* Setup: a randomly sampled matrix $A \in R^{k \times n}$, where $k$ is a parameter (the number of rows in $A$) and $n$ is the size of the committed vector
* Commitment: input $v \in R^n$, output $A v^T \in R^k$
* **Big caveat: the commitment is only binding if $v$ has norm $||v|| < B$ for a certain $B = O(2^{\sqrt{k}})$**

The **norm of $v$** is the largest coefficient of $v$, seen as a polynomial in $R$.

* To commit to a vector $v$ with a large norm, decompose it into several chunks, each with norm at most $B$,
  $v = v_1 + B v_2 + \dots + B^{t-1} v_t$, $||v_i|| \le B$,
  and commit to the chunks $v_i$.

**Tradeoff between $k$, $B$ and the number of chunks**

1. If you take a smaller $k$ (the number of rows in $A$), you get a smaller $B$ and have to decompose $v$ into more chunks.
2. If you are willing to pay more when you commit, you increase $k$, get a larger $B$, and have to decompose $v$ into fewer chunks.

**Homomorphism**

The Ajtai commitment is homomorphic only in a limited sense: $A(v + u)^T = A v^T + A u^T$, but possibly $||v + u|| > B$.

### Advantages of the Ajtai commitment

* $F_q$ can be a smaller field of 32 or 64 bits (e.g., Goldilocks or BabyBear)
* each ring element packs $\tau$ field elements
* the commitment computation is easy to arithmetize over $F_q$
* post-quantum secure
* commitments can be cheap

### Efficiency of Ajtai

Configuration used in the Nethermind implementation:

* $F_q$ = Goldilocks field (64 bits)
* $R = F_q[X] / (X^{24} + X^{12} + 1)$, so $R \cong F_{q^3} \times \cdots \times F_{q^3}$ (8 factors), i.e., $\tau = 8$, $t = 3$
* $k = 10$, $B = 2^8$
* commitment $A v^T \in R^k$
* size of the committed vector $|v| = 2^{16}$

The performance findings of the Nethermind research team:

1. if $v$ has a small norm, Ajtai is 2x faster than MT-Keccak (a Merkle-tree commitment using Keccak)
2. if $v$ has a large norm, Ajtai is 2-3x slower than MT-Keccak

---

## Latticefold

[Latticefold](https://eprint.iacr.org/2024/257.pdf) (Boneh, Chen, 2024) is a folding scheme for Customizable Constraint System (CCS) relations over a cyclotomic ring $R$. CCS is a compact representation of constraints.
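Since Latticefold builds directly on the Ajtai commitment, here is a minimal sketch of that commitment, simplified to plain $F_q$ vectors instead of ring elements; the modulus, dimensions, and norm bound below are illustrative only, nowhere near secure parameters.

```python
import random

# Toy Ajtai commitment, simplified to plain F_q vectors (a real
# instantiation commits to vectors of ring elements).
q = 2**31 - 1       # toy prime modulus
k, n = 4, 16        # k rows in A, n = length of the committed vector
B = 2**8            # norm bound: binding only holds for ||v|| < B

rng = random.Random(0)
A = [[rng.randrange(q) for _ in range(n)] for _ in range(k)]

def commit(v):
    """Ajtai commitment: c = A * v^T over F_q."""
    return [sum(a * x for a, x in zip(row, v)) % q for row in A]

v = [200] * n       # small-norm vectors: ||v|| = 200 < B, ||u|| = 100 < B
u = [100] * n

# The commitment is additively homomorphic:
s = [(x + y) % q for x, y in zip(v, u)]
assert commit(s) == [(a + b) % q for a, b in zip(commit(v), commit(u))]

# ...but the sum already violates the norm bound (||v + u|| = 300 > B),
# which is why large-norm vectors are first decomposed into small chunks.
assert max(s) > B
```

The two assertions capture the caveat from the previous section: the map is linear, but binding rests on the norm bound, and sums of valid witnesses can violate it.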
* Uses the Ajtai commitment
* Consists of three parts:
  * linearization (analogous to [HyperNova](https://eprint.iacr.org/2023/573.pdf))
  * decomposition
  * norm bound enforcement

### Linearization

Linearization transforms a CCS relation into claims about evaluations of certain multilinear polynomials. Essentially, it runs the [sumcheck protocol](https://hackmd.io/@aiv768/ryFnB1Flgg) on the CCS relation to reduce it to a set of claims about evaluations of multilinear polynomials at random points.

### Decomposition

A random linear combination is used to fold two witnesses $w_1, w_2$ into $w_3$:

$w_3 = w_1 + \alpha w_2$, where $\alpha$ is random.

**Problem**: it may happen that $||w_1 + \alpha w_2|| > B$ even if $||w_1||, ||w_2|| \le B$, where $||w_i||$ is the norm (largest coefficient) of $w_i$.

**Solution**: split $w_1, w_2$ into chunks with small norm ($\le B$), commit to the chunks, and take a random linear combination of these chunks.

### Norm bound enforcement

To fold two witnesses $w_1, w_2$, one takes a random linear combination $w_1 + \alpha w_2$.

**Idea**: add a range check per chunk to enforce $||w_1||, ||w_2|| \le B$ by running a sumcheck over $F_q$ on the chunks committed in the decomposition step.

## Latticefold+

[Latticefold+](https://eprint.iacr.org/2025/247.pdf) is an enhancement of the Latticefold scheme. Its major contribution is a new way of adding the range proof during a folding step, which makes the prover significantly faster, the verification circuit simpler, and the folding proofs shorter.

## Overall costs summary

The Nethermind research team has found that Latticefold+ seems competitive with [HyperNova](https://eprint.iacr.org/2023/573.pdf) (Sonobe implementation), while Latticefold is significantly slower than HyperNova.
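To close with something concrete, the base-$B$ splitting behind both the Ajtai commitment and the decomposition step can be sketched in a few lines; $B$, $t$, and the sample witness are illustrative choices.

```python
# Sketch of base-B decomposition: a witness whose entries may be as large
# as B^t - 1 is split into t chunks of norm < B, so a random linear
# combination of chunks stays small.
B, t = 16, 4

def decompose(v):
    """Return chunks v_1..v_t with v = v_1 + B*v_2 + ... + B^(t-1)*v_t."""
    chunks = []
    for _ in range(t):
        chunks.append([x % B for x in v])
        v = [x // B for x in v]
    return chunks

def recompose(chunks):
    n = len(chunks[0])
    return [sum(chunks[j][i] * B**j for j in range(len(chunks))) for i in range(n)]

w = [40000, 12345, 65535]               # large-norm witness (entries < B^t = 65536)
chunks = decompose(w)
assert recompose(chunks) == w           # decomposition is exact
assert all(max(c) < B for c in chunks)  # every chunk satisfies the norm bound

# Folding two decomposed witnesses chunk-by-chunk keeps norms small:
# each folded chunk w1_i + alpha * w2_i has entries < (1 + alpha) * B,
# rather than blowing up toward alpha * B^t.
```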