# Understanding Ligerito: A Fast, Recursive Polynomial Commitment Scheme
This note explains the paper
> Ligerito: A Small and Concretely Fast Polynomial Commitment Scheme
> Andrija Novakovic, Guillermo Angeris, May 2025
> Link: https://eprint.iacr.org/2025/1187
in a way that is suitable for engineers who know basic linear algebra and finite fields, and who are at least somewhat familiar with polynomial commitment schemes and sumcheck-type protocols.
Roughly speaking, Ligerito is:
* A polynomial commitment scheme and inner product argument.
* Designed for univariate and multilinear polynomial evaluations.
* Flexible in the choice of code and field.
* Asymptotically small: proof size about
\begin{equation}
O\left(\frac{\log(N)^2}{\log\log(N)}\right)
\end{equation}
where $N$ is the number of coefficients (the input size).
* Fast in practice on consumer hardware.
For example, the paper reports:
* Polynomial with $2^{24}$ coefficients over a 32-bit binary field (about 64 MiB of data).
* Julia implementation running on an M1 MacBook Pro.
* Proving time about 1.3 seconds.
* Proof size about 255 KiB.
The goal is not to reproduce every line of the paper but to give you a clear mental model of what the protocol is doing and why it works.
## Notation and tricks
We follow the paper’s notation, but in a slightly more informal style.
* Let $\mathbb{F}$ be a finite field. Sometimes a subfield $\mathbb{F}' \subseteq \mathbb{F}$ is also used.
* Vectors are column vectors in $\mathbb{F}^n$.
* Matrices are in $\mathbb{F}^{m \times n}$.
We often interpret vectors of length $2^k$ as evaluation tables of multilinear polynomials in $k$ variables.
### vec and Mat
Given a matrix $\tilde{X} \in \mathbb{F}^{m \times n}$:
* $\operatorname{vec}(\tilde{X})$ is the $mn$-vector obtained by stacking the columns of $\tilde{X}$ into a single column.
* Given a vector $v \in \mathbb{F}^{mn}$, $\operatorname{Mat}(v)$ is the matrix obtained by reshaping $v$ into an $m \times n$ matrix in column-major order.
We will often omit explicit dimensions and rely on context. When there is any ambiguity, the paper makes the dimensions explicit; in this note, we will mention them when needed.
:::info
### Example: column-major reshaping
Let
\begin{equation}
\tilde{X} =
\begin{pmatrix}
x_{11} & x_{12} & x_{13} \\
x_{21} & x_{22} & x_{23}
\end{pmatrix}
\in \mathbb{F}^{2 \times 3}.
\end{equation}
Then $\operatorname{vec}(\tilde{X})$ stacks the columns of $\tilde{X}$ into one long vector:
\begin{equation}
\operatorname{vec}(\tilde{X}) =
\begin{pmatrix}
x_{11} \\ x_{21} \\ x_{12} \\ x_{22} \\ x_{13} \\ x_{23}
\end{pmatrix}
\in \mathbb{F}^{6}.
\end{equation}
Conversely, if we start from
\begin{equation}
v =
\begin{pmatrix}
x_{11} \\ x_{21} \\ x_{12} \\ x_{22} \\ x_{13} \\ x_{23}
\end{pmatrix}
\end{equation}
and apply $\operatorname{Mat}(v)$ with shape $2 \times 3$, we recover
\begin{equation}
\operatorname{Mat}(v) =
\begin{pmatrix}
x_{11} & x_{12} & x_{13} \\
x_{21} & x_{22} & x_{23}
\end{pmatrix}.
\end{equation}
So $\operatorname{Mat}(\operatorname{vec}(\tilde{X})) = \tilde{X}$ and $\operatorname{vec}(\operatorname{Mat}(v)) = v$.
:::
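If you prefer to see this in code, here is a minimal NumPy sketch of the same column-major reshaping (the numeric entries are arbitrary stand-ins for the $x_{ij}$):

```python
import numpy as np

# Column-major vec / Mat, mirroring the 2x3 example above.
X = np.array([[11, 12, 13],
              [21, 22, 23]])            # stand-ins for the x_ij

v = X.flatten(order='F')                # vec(X): stack the columns
print(v)                                # [11 21 12 22 13 23]

X_back = v.reshape(2, 3, order='F')     # Mat(v) with shape 2 x 3
assert (X_back == X).all()              # Mat(vec(X)) == X
```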
### Kronecker product and barred vectors
The Kronecker product of matrices $A \in \mathbb{F}^{m\times n}$ and $B \in \mathbb{F}^{m'\times n'}$ is:
\begin{equation}
A \otimes B =
\begin{bmatrix}
A_{11} B & A_{12} B & \dots & A_{1n} B \\
A_{21} B & A_{22} B & \dots & A_{2n} B \\
\vdots & \vdots & \ddots & \vdots \\
A_{m1} B & A_{m2} B & \dots & A_{mn} B
\end{bmatrix}
\end{equation}
The resulting matrix is of size $mm' \times nn'$.
For a vector $r \in \mathbb{F}^k$, the paper defines the “barred” vector
\begin{equation}
\bar{r} = (1 - r_1, r_1) \otimes \dots \otimes (1 - r_k, r_k),
\end{equation}
which is a vector in $\mathbb{F}^{2^k}$. This will be the core object for multilinear interpolation.
:::info
### Example: small Kronecker products and a barred vector
1. Kronecker product of two $2 \times 2$ matrices.
Let
\begin{equation}
A =
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix},
\quad
B =
\begin{pmatrix}
e & f \\
g & h
\end{pmatrix}.
\end{equation}
Then $A \otimes B$ is a $4 \times 4$ matrix:
\begin{equation}
A \otimes B =
\begin{pmatrix}
aB & bB \\
cB & dB
\end{pmatrix}=
\begin{pmatrix}
a e & a f & b e & b f \\
a g & a h & b g & b h \\
c e & c f & d e & d f \\
c g & c h & d g & d h
\end{pmatrix}.
\end{equation}
So each entry of $A$ scales an entire copy of $B$, and these blocks are placed in a grid.
2. Barred vector for $k = 2$.
Let $r = (r_1, r_2) \in \mathbb{F}^2$. Then
\begin{equation}
\bar{r}
= (1 - r_1, r_1) \otimes (1 - r_2, r_2).
\end{equation}
Writing this out explicitly:
\begin{equation}
\bar{r} =
\begin{pmatrix}
(1 - r_1)(1 - r_2) \\
(1 - r_1) r_2 \\
r_1 (1 - r_2) \\
r_1 r_2
\end{pmatrix}
\in \mathbb{F}^4.
\end{equation}
This 4-entry vector will be used to evaluate a 2-variable multilinear polynomial at the point $(r_1, r_2)$ via a simple inner product with its coefficient vector.
:::
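Both operations are one-liners in NumPy; this small sketch (with arbitrary integers standing in for $a,\dots,h$ and $r_1, r_2$) reproduces the two computations above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])          # stand-ins for a, b, c, d
B = np.array([[5, 6], [7, 8]])          # stand-ins for e, f, g, h
print(np.kron(A, B))                    # the 4x4 block matrix [[aB, bB], [cB, dB]]

r1, r2 = 3, 5                           # arbitrary field elements
r_bar = np.kron([1 - r1, r1], [1 - r2, r2])
print(r_bar)                            # [(1-r1)(1-r2), (1-r1)r2, r1(1-r2), r1*r2]
```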
### Interpreting vectors as multilinear polynomials
Let $v \in \mathbb{F}^{2^k}$ be any vector. For $z \in \mathbb{F}^k$, the paper writes
\begin{equation}
v(z) = v^T \bar{z}
= v^T\big((1 - z_1, z_1) \otimes \dots \otimes (1 - z_k, z_k)\big).
\end{equation}
This means:
* We interpret $v$ as the coefficient vector of a multilinear polynomial in $k$ variables; concretely, its entries are the polynomial's values on the Boolean cube $\{0,1\}^k$ (its coefficients in the multilinear Lagrange basis).
* Evaluating the polynomial at $z$ is equivalent to taking the inner product $v^T \bar{z}$.
So “writing $v$ as a function” means “interpreting $v$ as a polynomial and evaluating it via the barred vector”.
:::info
### Example: $k = 2$ variables
Let
\begin{equation}
v =
\begin{pmatrix}
v_0 \\ v_1 \\ v_2 \\ v_3
\end{pmatrix}=
\begin{pmatrix}
P(0,0) \\ P(0,1) \\ P(1,0) \\ P(1,1)
\end{pmatrix}
\in \mathbb{F}^4.
\end{equation}
Then for $z = (z_1, z_2) \in \mathbb{F}^2$, the barred vector is
\begin{equation}
\bar{z} =
\begin{pmatrix}
(1 - z_1)(1 - z_2) \\
(1 - z_1) z_2 \\
z_1 (1 - z_2) \\
z_1 z_2
\end{pmatrix}.
\end{equation}
Taking the inner product gives
\begin{aligned}
v(z) &= v^T \bar{z} \\
&= v_0(1 - z_1)(1 - z_2) +v_1(1 - z_1)z_2 +v_2 z_1(1 - z_2)+v_3 z_1 z_2.
\end{aligned}
This is exactly the multilinear interpolation formula for a bivariate polynomial $P(z_1, z_2)$ defined by its values at the Boolean cube $\{0,1\}^2$.
:::
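A quick numeric check of $v(z) = v^T \bar{z}$ (arbitrary values): at Boolean points the inner product simply reads off the stored table entry, and at any other point it computes the interpolation formula above.

```python
import numpy as np

def bar(z):
    """Barred vector (1 - z1, z1) ⊗ ... ⊗ (1 - zk, zk)."""
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out

v = np.array([4, 7, 1, 9])              # P(0,0), P(0,1), P(1,0), P(1,1)

# At a Boolean point, v^T bar(z) reads off the table (index 2 <-> (z1, z2) = (1, 0)):
assert v @ bar([1, 0]) == v[2]

# At a non-Boolean point, it is exactly the interpolation formula:
z1, z2 = 3, 5
lhs = v @ bar([z1, z2])
rhs = v[0]*(1-z1)*(1-z2) + v[1]*(1-z1)*z2 + v[2]*z1*(1-z2) + v[3]*z1*z2
assert lhs == rhs
```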
### Partial evaluation as a matrix–vector product
Now we want to fix some of the variables and obtain a smaller polynomial.
Let $v \in \mathbb{F}^{2^k}$, and suppose we want to fix the first $k' \le k$ variables at a point $r \in \mathbb{F}^{k'}$. We define
\begin{equation}
y = \operatorname{Mat}(v)\bar{r}.
\end{equation}
By construction:
* $\bar{r}$ has length $2^{k'}$.
* So $\operatorname{Mat}(v)$ must be a matrix of size $2^{k-k'} \times 2^{k'}$, reshaping $v$ in column-major order.
* Therefore $y \in \mathbb{F}^{2^{k-k'}}$.
The key interpretation is:
\begin{equation}
y(z) = v(r, z) \quad \text{for all } z \in \mathbb{F}^{k-k'}.
\end{equation}
So $y$ is exactly the coefficient vector of the partially evaluated polynomial where the first $k'$ variables are fixed to $r$.
:::info
### Example: $k = 3$ variables, fix the first $k' = 2$
Let $v \in \mathbb{F}^{2^3}$ correspond to a polynomial $P(z_1, z_2, z_3)$ with coefficients ordered as
\begin{equation}
v =
\begin{pmatrix}
v_0 \\ v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \\ v_6 \\ v_7
\end{pmatrix}
=
\begin{pmatrix}
P(0,0,0) \\ P(0,0,1) \\ P(0,1,0) \\ P(0,1,1) \\ P(1,0,0) \\ P(1,0,1) \\ P(1,1,0) \\ P(1,1,1)
\end{pmatrix}.
\end{equation}
We reshape this vector in column-major order into a $2^{k - k'} \times 2^{k'} = 2 \times 4$ matrix:
\begin{equation}
\operatorname{Mat}(v) =
\begin{pmatrix}
v_0 & v_2 & v_4 & v_6 \\
v_1 & v_3 & v_5 & v_7
\end{pmatrix}.
\end{equation}
Now fix $r = (r_1, r_2) \in \mathbb{F}^2$, and form the barred vector
\begin{equation}
\bar{r} =
\begin{pmatrix}
(1 - r_1)(1 - r_2) \\
(1 - r_1) r_2 \\
r_1 (1 - r_2) \\
r_1 r_2
\end{pmatrix}.
\end{equation}
Then multiply:
\begin{aligned}
y &= \operatorname{Mat}(v)\bar{r} \\ &=
\begin{pmatrix}
v_0(1 - r_1)(1 - r_2) + v_2(1 - r_1)r_2 + v_4 r_1(1 - r_2) + v_6 r_1 r_2 \\
v_1(1 - r_1)(1 - r_2) + v_3(1 - r_1)r_2 + v_5 r_1(1 - r_2) + v_7 r_1 r_2
\end{pmatrix}.
\end{aligned}
This result $y = (y_0, y_1)^T \in \mathbb{F}^2$ is exactly the coefficient vector of the univariate polynomial
\begin{equation}
z_3 \mapsto P(r_1, r_2, z_3).
\end{equation}
So multiplying by $\bar{r}$ corresponds to “plugging in” the first two variables $(z_1,z_2) = (r_1,r_2)$ and leaving only $z_3$ free.
:::
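The same $k = 3$, $k' = 2$ computation can be checked numerically (arbitrary values; the `bar` helper is the barred-vector construction from the previous sketch): folding with $\operatorname{Mat}(v)\bar{r}$ yields a length-2 vector whose entries are $P(r_1, r_2, 0)$ and $P(r_1, r_2, 1)$.

```python
import numpy as np

def bar(z):
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out

v = np.arange(1, 9)                     # P(0,0,0), ..., P(1,1,1), arbitrary values
r1, r2 = 3, 5

Mat_v = v.reshape(2, 4, order='F')      # 2^(k-k') x 2^k' reshape, column-major
y = Mat_v @ bar([r1, r2])               # partial evaluation at (r1, r2)

# y is the value table of z3 -> P(r1, r2, z3):
for z3 in (0, 1):
    assert y @ bar([z3]) == v @ bar([r1, r2, z3])
```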
We will use this vector–matrix–polynomial dictionary repeatedly in what follows.
## Tool 1: The Ligero-style Matrix–Vector Product (MVP)
The first building block of Ligerito is a Ligero-style matrix–vector product check.
It allows one party (the prover) to convince another (the verifier) that they know a hidden matrix $\tilde{X}$ such that
\begin{equation}
y_r = \tilde{X} \bar{r},
\end{equation}
without revealing $\tilde{X}$ itself.
At a high level, this is a *consistency check* between a committed matrix and a claimed linear operation on it. The verifier never sees $\tilde{X}$ directly; they only see a cryptographic commitment to it and a few selectively opened rows.
### Setting
We begin with three objects:
1. A matrix $\tilde{X} \in \mathbb{F}^{n \times n'}$ known only to the prover.
2. A linear error-correcting code with generator matrix
\begin{equation}
G \in \mathbb{F}^{m \times n}, \quad d(G) = d > 0,
\end{equation}
which defines how data is *encoded* into a larger, redundant space; here $d(G)$ denotes the code's minimum distance.
3. A challenge vector $\bar{r} \in \mathbb{F}^{n'}$, typically derived from a random point $r$ via
\begin{equation}
\bar{r} = (1 - r_1, r_1) \otimes \dots \otimes (1 - r_{k'}, r_{k'}).
\end{equation}
Each column of $\tilde{X}$ can be thought of as a small message, and multiplying by $G$ encodes it into a longer, redundant form.
### Encoding and Commitment
To commit to its hidden matrix, the prover *encodes* every column of $\tilde{X}$ using the code $G$:
\begin{equation}
X = G \tilde{X}.
\end{equation}
This $X \in \mathbb{F}^{m \times n'}$ is a *codeword matrix*: every column of $X$ lies in the code generated by $G$.
The prover then commits to all rows of $X$ by constructing a Merkle tree over the rows and sending the root $C$ to the verifier. This ensures the prover cannot later modify individual rows without detection.
### The Random Linear Check
After receiving $C$, the verifier challenges the prover with a random vector $r$ (and hence $\bar{r}$). Intuitively, this challenge defines one random linear combination of the columns of $\tilde{X}$.
The prover computes
\begin{equation}
y_r = \tilde{X} \bar{r}
\end{equation}
and sends $y_r$ back to the verifier.
This $y_r$ is the claimed result of multiplying the hidden matrix $\tilde{X}$ by the known vector $\bar{r}$.
The verifier now wants to be convinced that $y_r$ really was computed from the committed $\tilde{X}$ — without seeing $\tilde{X}$ directly.
### The Core Check
To verify, the verifier samples a random subset $S$ of row indices and asks the prover to open those rows of $X$, along with their Merkle proofs.
From these opened rows, the verifier computes
\begin{equation}
X_S \bar{r}
\end{equation}
and compares it to the corresponding encoded version of $y_r$:
\begin{equation}
G_S y_r.
\end{equation}
The verifier accepts if and only if
\begin{equation}
\underbrace{X_S \bar{r}}_{\substack{\text{Apply the challenge } \bar{r} \\ \text{to opened rows of } X}}=
\underbrace{G_S y_r}_{\substack{\text{Encode the claimed result } y_r \\ \text{using the same code rows}}}.
\end{equation}
### Why This Works
If the prover is honest, both quantities match exactly:
\begin{equation}
\begin{aligned}
X_S\bar{r}
&= (G\tilde{X})_S \bar{r} \\
&= (G_S \tilde{X}) \bar{r} \\
&= G_S (\tilde{X} \bar{r}) \\
&= G_S y_r.
\end{aligned}
\end{equation}
If the prover cheats—by committing to an $X$ that does not correspond to any $\tilde{X}$ under $G$, or by sending an incorrect $y_r$—then the equality is violated for most random choices of $S$ and $r$. Coding theory ensures that the probability of passing the check by accident is negligible.
:::info
### Example: A MVP check with redundancy
Let the prover’s secret matrix be
\begin{equation}
\tilde{X} =
\begin{pmatrix}
x_{11} & x_{12} \\
x_{21} & x_{22}
\end{pmatrix}
\in \mathbb{F}^{2\times 2}.
\end{equation}
We use a small redundant code matrix
\begin{equation}
G =
\begin{pmatrix}
1 & 0 \\
0 & 1 \\
1 & 1
\end{pmatrix}
\in \mathbb{F}^{3\times 2}.
\end{equation}
Each column of $\tilde{X}$ is now encoded into a 3-dimensional codeword, producing
\begin{equation}
X = G \tilde{X} =
\begin{pmatrix}
x_{11} & x_{12} \\
x_{21} & x_{22} \\
x_{11}+x_{21} & x_{12}+x_{22}
\end{pmatrix}
\in \mathbb{F}^{3\times 2}.
\end{equation}
The prover commits to the rows of $X$ (for example via a Merkle tree).
When the verifier samples a random challenge vector
\begin{equation}
\bar{r} =
\begin{pmatrix}
r_1 \\
r_2
\end{pmatrix},
\end{equation}
the prover computes
\begin{equation}
y_r = \tilde{X}\,\bar{r} =
\begin{pmatrix}
x_{11} r_1 + x_{12} r_2 \\
x_{21} r_1 + x_{22} r_2
\end{pmatrix}
\in \mathbb{F}^{2}.
\end{equation}
Now the verifier wants to check the key consistency equation
\begin{equation}
X\,\bar{r} \stackrel{?}{=} G\,y_r.
\end{equation}
Let’s expand both sides.
**Left-hand side (computed from $X$):**
\begin{equation}
X\,\bar{r} =
\begin{pmatrix}
x_{11}r_1 + x_{12}r_2 \\
x_{21}r_1 + x_{22}r_2 \\
(x_{11}+x_{21})r_1 + (x_{12}+x_{22})r_2
\end{pmatrix}.
\end{equation}
**Right-hand side (computed from $G$ and $y_r$):**
\begin{equation}
G\,y_r =
\begin{pmatrix}
1 & 0 \\
0 & 1 \\
1 & 1
\end{pmatrix}
\begin{pmatrix}
x_{11}r_1 + x_{12}r_2 \\
x_{21}r_1 + x_{22}r_2
\end{pmatrix} =
\begin{pmatrix}
x_{11}r_1 + x_{12}r_2 \\
x_{21}r_1 + x_{22}r_2 \\
(x_{11}+x_{21})r_1 + (x_{12}+x_{22})r_2
\end{pmatrix}.
\end{equation}
The two sides match exactly.
But now, because $G$ has more rows than columns, the commitment $X = G\tilde{X}$ includes redundant information.
If the prover tried to cheat, for instance by answering with some $y_r \neq \tilde{X}\bar{r}$, then $X\bar{r} = G(\tilde{X}\bar{r})$ and $G y_r$ would be two distinct codewords. Distinct codewords disagree in at least $d$ positions, so a randomly queried row exposes the mismatch with high probability.
This is the core idea of the MVP protocol: the verifier checks a small random subset of rows of $X$ against the expected linear relation, and with a well-chosen code $G$, soundness follows from its distance properties.
:::
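Below is a minimal NumPy sketch of the same MVP check over a toy prime field ($p = 97$) with the identity-plus-parity code $G$ from the example. The Merkle commitment, a realistic code, and the soundness-level number of queries are all omitted; the point is only to see the consistency equation and how a wrong $y_r$ trips it.

```python
import numpy as np

p = 97                                          # toy prime field F_p
rng = np.random.default_rng(0)

X_tilde = rng.integers(0, p, size=(2, 2))       # prover's secret matrix
G = np.array([[1, 0], [0, 1], [1, 1]])          # toy code: identity rows + parity row

# Prover: encode the columns and "commit" (Merkle tree omitted in this sketch).
X = (G @ X_tilde) % p

# Verifier: random challenge; prover answers with y_r.
r_bar = rng.integers(0, p, size=2)
y_r = (X_tilde @ r_bar) % p

# Verifier: open a random subset S of rows of X and check consistency.
S = [0, 2]                                      # sampled row indices
assert ((X[S] @ r_bar) % p == (G[S] @ y_r) % p).all()

# A prover who answers with a tweaked y_r is caught on the opened rows:
y_bad = (y_r + np.array([1, 0])) % p
assert not ((X[S] @ r_bar) % p == (G[S] @ y_bad) % p).all()
```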
### Short summary
The Ligero-style MVP check provides a reusable primitive:
$$
\text{Commit to } X = G\tilde{X} \quad \text{and prove } y_r = \tilde{X}\bar{r}.
$$
It guarantees:
* **Completeness:** If the prover is honest, the verifier always accepts.
* **Soundness:** If the verifier accepts, then (except with small probability) the commitment corresponds to a unique $\tilde{X}$ such that $y_r = \tilde{X}\bar{r}$.
## Tool 2: Partial Sumcheck
The second building block is a variant of the classic sumcheck protocol. The standard sumcheck protocol allows a verifier to be convinced that
\begin{equation}
\sum_{z \in \{0,1\}^k} f(z) = \alpha
\end{equation}
for some low-degree polynomial $f$, known to the prover. In Ligerito, we apply this idea to *inner products* between two multilinear functions.
### From sums to inner products
Let $w(z)$ and $\tilde{X}(z)$ be two multilinear functions defined over $\{0,1\}^k$.
We want to verify that
\begin{equation}
\sum_{z \in \{0,1\}^k} w(z)\tilde{X}(z) = \alpha.
\end{equation}
If we flatten these functions into vectors of length $2^k$:
\begin{equation}
w =
\begin{pmatrix}
w(z_1) \\ \vdots \\ w(z_{2^k})
\end{pmatrix},
\qquad
\operatorname{vec}(\tilde{X}) =
\begin{pmatrix}
\tilde{X}(z_1) \\ \vdots \\ \tilde{X}(z_{2^k})
\end{pmatrix},
\end{equation}
then this sum is exactly the standard inner product
\begin{equation}
w^T \operatorname{vec}(\tilde{X}) = \alpha.
\end{equation}
Hence, sumcheck can be understood as a protocol for verifying a large inner product over the Boolean cube.
:::info
### Example: inner product as a Boolean sum
For $k = 2$, the Boolean hypercube $\{0,1\}^2$ consists of the points
$(0,0), (0,1), (1,0), (1,1)$.
The claim
$$
\sum_{z \in \{0,1\}^2} w(z)\,\tilde{X}(z) = \alpha
$$
is simply
\begin{aligned}
&w(0,0)\tilde{X}(0,0) + w(0,1)\tilde{X}(0,1) \\ &+ w(1,0)\tilde{X}(1,0) + w(1,1)\tilde{X}(1,1) = \alpha,
\end{aligned}
which is the inner product of two 4-dimensional vectors.
:::
### Classical sumcheck
In the standard sumcheck protocol, the verifier checks the claim
\begin{equation}
\sum_{z \in \{0,1\}^k} f(z) = \alpha
\end{equation}
through an iterative process that folds one variable at a time.
At round $i$:
1. The prover sends a univariate polynomial $s_i(t)$ defined as
\begin{equation}
s_i(t) =
\sum_{z_{i+1},\dots,z_k \in \{0,1\}}
f(r_1,\dots,r_{i-1}, t, z_{i+1},\dots,z_k),
\end{equation}
where $r_1,\dots,r_{i-1}$ are the random field elements chosen in the previous rounds.
2. The verifier checks the consistency condition
\begin{equation}
s_i(0) + s_i(1) \stackrel{?}{=} s_{i-1}(r_{i-1}),
\end{equation}
where $s_0(t)$ is defined as the constant polynomial $\alpha$.
3. The verifier samples a new random field element $r_i$ and continues to the next round.
After $k$ rounds, all variables are fixed. The verifier is left with a single point check of $f$ at the random point $(r_1,\dots,r_k)$.
If $f$ has degree at most $2$ in each variable, the soundness error is about $2k / |\mathbb{F}|$.
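Here is a toy run of the classical sumcheck over a small prime field ($p = 97$, $k = 2$). For simplicity the sketch takes $f$ to be multilinear (so each $s_i$ has degree 1 in $t$) and computes the $s_i$ by brute force over the remaining Boolean variables, which is exactly the honest prover's job.

```python
import numpy as np

p = 97
rng = np.random.default_rng(1)

def bar(z):
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out % p

f = rng.integers(0, p, size=4)          # value table of a multilinear f(z1, z2)
F = lambda z: int(f @ bar(z)) % p       # its multilinear extension

alpha = int(f.sum()) % p                # claimed sum over {0,1}^2

# Round 1: prover sends s_1(t) = sum_{z2} f(t, z2); verifier checks against alpha.
s1 = lambda t: (F([t, 0]) + F([t, 1])) % p
assert (s1(0) + s1(1)) % p == alpha
r1 = int(rng.integers(0, p))            # verifier's challenge

# Round 2: prover sends s_2(t) = f(r1, t); verifier checks against s_1(r1).
s2 = lambda t: F([r1, t])
assert (s2(0) + s2(1)) % p == s1(r1)
r2 = int(rng.integers(0, p))

# Final step: a single evaluation of f at the random point (r1, r2).
assert s2(r2) == F([r1, r2])
```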
### Partial sumcheck: stopping early
In Ligerito, we often do not need to fold all variables. Instead, we start from a sum over $k + k'$ variables and only fold the first $k'$ variables, leaving the last $k$ variables “alive”.
Formally:
\begin{equation}
\underbrace{
\sum_{z \in \{0,1\}^{k+k'}} w(z)\,\tilde{X}(z)
}_{\text{full sum over } 2^{k+k'} \text{ points}}
\;\xrightarrow[\text{fold first } k']{r = (r_1,\dots,r_{k'})}\;
\underbrace{
\sum_{z' \in \{0,1\}^k} w(r,z')\,\tilde{X}(r,z')
}_{\text{reduced sum over } 2^{k} \text{ points}}.
\end{equation}
### How the protocol unfolds
Partial sumcheck proceeds for $k'$ rounds—one per variable to be folded.
At round $i$ ($1 \le i \le k'$):
1. The prover constructs a polynomial
\begin{aligned}
s_i(t) =
\sum_{z_{i+1},\dots,z_{k+k'} \in \{0,1\}}
&w(r_1,\dots,r_{i-1}, t, z_{i+1},\dots,z_{k+k'}) \\
&\cdot \tilde{X}(r_1,\dots,r_{i-1}, t, z_{i+1},\dots,z_{k+k'}).
\end{aligned}
The polynomial $s_i(t)$ encodes the partial sum when the first $i-1$ variables are fixed and the $i$-th is symbolic.
2. The verifier checks the consistency condition
\begin{equation}
s_i(0) + s_i(1) = s_{i-1}(r_{i-1}),
\end{equation}
with $s_0 = \alpha$.
3. The verifier samples a random $r_i \in \mathbb{F}$ and continues.
After $k'$ rounds, all first $k'$ variables have been fixed to $(r_1,\dots,r_{k'})$, and the verifier holds:
* A polynomial $s_{k'}$,
* The challenges $r_1,\dots,r_{k'}$,
* A final equality to check:
\begin{equation}
s_{k'}(r_{k'}) = \alpha',
\end{equation}
where
\begin{equation}
\alpha' = \underbrace{
\sum_{z' \in \{0,1\}^k} w(r,z')\,\tilde{X}(r,z')
}_{\text{reduced sum over } 2^{k} \text{ points}}.
\end{equation}
Hence, the sum over $2^{k+k'}$ Boolean points has been reduced to a smaller sum over only $2^k$ points.
The soundness error is approximately $2k' / |\mathbb{F}|$, which is negligible for large fields.
:::info
### Example: partial sumcheck with $k' = 1$
Suppose we have three variables $z = (z_1, z_2, z_3)$, and we wish to check
$$
\sum_{z_1,z_2,z_3 \in \{0,1\}} w(z)\,\tilde{X}(z) = \alpha,
$$
but we only want to fold the first variable $z_1$.
1. The prover defines
$$
s_1(t) = \sum_{z_2,z_3 \in \{0,1\}} w(t,z_2,z_3)\,\tilde{X}(t,z_2,z_3),
$$
which is a degree-2 polynomial.
2. The verifier checks
$$
s_1(0) + s_1(1) \stackrel{?}{=} \alpha.
$$
3. The verifier samples a random $r_1 \in \mathbb{F}$.
Now the remaining claim is
$$
\sum_{z_2,z_3 \in \{0,1\}} w(r_1,z_2,z_3)\,\tilde{X}(r_1,z_2,z_3) = \alpha',
$$
where $\alpha' = s_1(r_1)$.
We have reduced a sum over 3 Boolean variables to a sum over 2 variables, evaluated at a random $r_1$.
:::
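Numerically, the same $k' = 1$ fold looks like this (toy prime field, arbitrary value tables). Note that the sketch never writes down the quadratic $s_1$ explicitly; it simply evaluates the defining sum at the points the verifier asks about.

```python
import numpy as np

p = 97
rng = np.random.default_rng(2)

def bar(z):
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out % p

w = rng.integers(0, p, size=8)          # value tables of w and X~ on {0,1}^3
x = rng.integers(0, p, size=8)
W = lambda z: int(w @ bar(z)) % p       # multilinear extensions
X = lambda z: int(x @ bar(z)) % p

alpha = int(w @ x) % p                  # claim: sum_z w(z) X~(z) = alpha

# s_1(t) = sum_{z2,z3} w(t,z2,z3) X~(t,z2,z3), evaluated on demand.
s1 = lambda t: sum(W([t, z2, z3]) * X([t, z2, z3])
                   for z2 in (0, 1) for z3 in (0, 1)) % p

assert (s1(0) + s1(1)) % p == alpha     # round-1 consistency check

r1 = int(rng.integers(0, p))            # verifier's challenge
alpha_prime = s1(r1)                    # reduced claim, now over {0,1}^2

reduced = sum(W([r1, z2, z3]) * X([r1, z2, z3])
              for z2 in (0, 1) for z3 in (0, 1)) % p
assert reduced == alpha_prime
```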
### Matrix viewpoint
Using the vector–matrix notation developed earlier, we can restate the partial sumcheck transformation compactly.
We start with a vectorized claim
\begin{equation}
w^T \operatorname{vec}(\tilde{X}) = \alpha,
\end{equation}
where $w, \operatorname{vec}(\tilde{X}) \in \mathbb{F}^{2^{k+k'}}$.
After folding the first $k'$ variables, we obtain:
$$
\text{Random point } r = (r_1,\dots,r_{k'}) \in \mathbb{F}^{k'}.
$$
We then define the partial evaluations:
\begin{equation}
w_r = \operatorname{Mat}(w)\bar{r}, \qquad y_r = \tilde{X}\bar{r},
\end{equation}
where $\bar{r}$ is the barred vector
\begin{equation}
\bar{r} = (1 - r_1, r_1) \otimes \cdots \otimes (1 - r_{k'}, r_{k'}),
\end{equation}
and $\operatorname{Mat}(w)$ reshapes $w$ into a $2^k \times 2^{k'}$ matrix in column-major order.
Both $w_r$ and $y_r$ lie in $\mathbb{F}^{2^k}$.
The resulting smaller claim is then
\begin{equation}
\alpha' = w_r^T y_r.
\end{equation}
The final equality that the verifier checks is
\begin{equation}
s_{k'}(r_{k'}) = w_r^T y_r.
\end{equation}
Thus, partial sumcheck transforms a large inner product of length $2^{k+k'}$ into a smaller one of length $2^k$, while preserving correctness through a sequence of consistency checks.
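The following sketch checks this matrix viewpoint numerically for $k' = 1$, $k = 2$ (arbitrary small integers; no field reduction is needed since this is a polynomial identity): the reduced Boolean sum equals the inner product of the two folded vectors.

```python
import numpy as np

def bar(z):
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out

rng = np.random.default_rng(3)
w = rng.integers(0, 10, size=8)         # w on {0,1}^(k+k') with k' = 1, k = 2
x = rng.integers(0, 10, size=8)         # vec(X~)
r = [5]                                 # the one folded variable

w_r = w.reshape(4, 2, order='F') @ bar(r)   # Mat(w) r_bar, length 2^k
y_r = x.reshape(4, 2, order='F') @ bar(r)   # X~ r_bar,     length 2^k

# Reduced sum over the remaining k = 2 Boolean variables:
reduced = sum((w @ bar(r + [z2, z3])) * (x @ bar(r + [z2, z3]))
              for z2 in (0, 1) for z3 in (0, 1))
assert reduced == w_r @ y_r
```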
### Batching and gluing
The sumcheck protocol is linear in $w$. This enables two optimizations used throughout Ligerito.
#### Batching multiple inner products
Suppose we want to verify several inner product claims simultaneously:
\begin{equation}
w_1^T \operatorname{vec}(\tilde{X}) = \alpha_1, \quad
w_2^T \operatorname{vec}(\tilde{X}) = \alpha_2, \quad
\dots, \quad
w_q^T \operatorname{vec}(\tilde{X}) = \alpha_q.
\end{equation}
We can combine them into a single batched claim using random coefficients $\beta_1,\dots,\beta_q \in \mathbb{F}$:
\begin{equation}
\tilde{w} = \sum_{j=1}^q \beta_j w_j, \qquad
\tilde{\alpha} = \sum_{j=1}^q \beta_j \alpha_j.
\end{equation}
If any one of the original claims is false, the combined claim will also be false except with probability $1/|\mathbb{F}|$. Thus, only one sumcheck run is needed for all $q$ claims.
:::info
### Example.
For two claims $w_1^T x = \alpha_1$ and $w_2^T x = \alpha_2$, the verifier samples a random $\beta$, and the prover proves
\begin{equation}
(w_1 + \beta w_2)^T x = \alpha_1 + \beta \alpha_2.
\end{equation}
If either of the original equalities is false, this new one fails with probability at least $1 - 1/|\mathbb{F}|$.
:::
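In code, the batching step is a one-line identity (arbitrary vectors; in the protocol $\beta$ is a fresh verifier challenge):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.integers(0, 10, size=8)
w1, w2 = rng.integers(0, 10, size=(2, 8))
a1, a2 = w1 @ x, w2 @ x                 # two true inner-product claims
beta = 7                                # verifier's random batching coefficient
assert (w1 + beta * w2) @ x == a1 + beta * a2
```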
#### Gluing multiple sumchecks
We can also “glue” multiple sumcheck instances that fold different sets of variables. For example, we might first fold from $(k + k')$ variables down to $k$, then from $k$ to $k''$.
Because sumcheck is linear, these two stages can be merged into a single longer sumcheck that folds all $k + k' - k''$ variables in one run.
This ability to batch and glue sumchecks allows Ligerito to verify many algebraic relations together efficiently, with only negligible loss in soundness.
### Short summary
Partial sumcheck provides a recursive mechanism for compressing large inner products into smaller ones:
$$
\underbrace{w^T \operatorname{vec}(\tilde{X})}_{\text{size }2^{k+k'}}
\quad \Longrightarrow \quad
\underbrace{w_r^T y_r}_{\text{size }2^k}.
$$
Each round folds one variable by interpolation and random evaluation, maintaining a consistency polynomial $s_i(t)$ whose checks ensure correctness.
At the end, the verifier only needs to check
\begin{equation}
s_{k'}(r_{k'}) = w_r^T y_r,
\end{equation}
with negligible soundness error.
Combined with the Ligero-style matrix–vector product protocol, this gives Ligerito its core recursive structure: sumcheck folds the problem down to smaller dimensions, and the MVP check ensures that each folded evaluation is consistent with the committed matrix.
## Combining MVP and partial sumcheck (base protocol)
We now combine the two ingredients—the Ligero-style matrix–vector product (MVP) and the partial sumcheck—into a single protocol.
This base protocol allows a prover to convince a verifier that
\begin{equation}
w^T \operatorname{vec}(\tilde{X}) = \alpha,
\end{equation}
for a matrix $\tilde{X}$ that the prover has committed to, but the verifier never sees directly.
### Intuition
- The prover holds a secret matrix $\tilde{X} \in \mathbb{F}^{n \times n'}$.
- The verifier wants to be convinced that the inner product between $w$ and $\operatorname{vec}(\tilde{X})$ equals $\alpha$.
The protocol combines both tools as follows:
1. **Commit to $\tilde{X}$** using the MVP commitment phase, producing a Merkle root $C$. This binds the prover to all entries of $\tilde{X}$.
2. **Run a partial sumcheck** to reduce the global inner product
$$
w^T \operatorname{vec}(\tilde{X}) = \alpha
$$
to a smaller inner product
$$
w_r^T y_r = \alpha',
$$
where
$$
w_r = \operatorname{Mat}(w)\bar{r}, \qquad
y_r = \tilde{X}\bar{r}.
$$
The vector $w_r$ is public and computable by the verifier,
while $y_r$ is known only to the prover.
3. **Verify $y_r$ via MVP.**
The prover sends $y_r$, and the verifier checks its correctness by querying random rows of the committed encoding $X = G\tilde{X}$:
$$
X_S\bar{r} \stackrel{?}{=} G_S y_r.
$$
4. **Final consistency check.**
Once $y_r$ is verified, the verifier checks that
$$
s_{k'}(r_{k'}) = w_r^T y_r,
$$
where $s_{k'}(t)$ is the last polynomial sent in the sumcheck.
If both the MVP and sumcheck checks pass, the verifier is convinced that the original inner product claim holds.
:::info
### Example: two-variable case
Suppose $\tilde{X}$ depends on two Boolean variables $(z_1, z_2)$, and we want to check that
$$
\sum_{z_1,z_2 \in \{0,1\}} w(z_1,z_2)\tilde{X}(z_1,z_2) = \alpha.
$$
The verifier and prover run a one-round partial sumcheck (folding $z_1$):
1. The prover defines
$$
s_1(t) = \sum_{z_2 \in \{0,1\}} w(t,z_2)\tilde{X}(t,z_2),
$$
sends it to the verifier, and the verifier checks
$$
s_1(0) + s_1(1) = \alpha.
$$
2. The verifier samples $r_1 \in \mathbb{F}$, fixing the first variable.
3. The new claim is
$$
\sum_{z_2 \in \{0,1\}} w(r_1,z_2)\tilde{X}(r_1,z_2) = s_1(r_1),
$$
or equivalently
$$
w_r^T y_r = s_1(r_1),
$$
with $y_r = \tilde{X}\bar{r}$, where $\bar{r} = (1 - r_1, r_1)$.
4. The prover sends $y_r$, and the verifier checks it using MVP and the final equality.
This small case illustrates the general pattern: each round of partial sumcheck produces a new “folded” vector $y_r$, verified by MVP.
:::
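To make the combination concrete, here is an end-to-end toy run of the base protocol on a two-variable instance: a one-round partial sumcheck followed by the MVP row check, over a small prime field with the identity-plus-parity code from earlier. Merkle commitments, repetition for soundness, and realistic parameter sizes are all omitted; this is an illustration, not the paper's implementation.

```python
import numpy as np

p = 97
rng = np.random.default_rng(5)

def bar(z):
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out % p

# Setup: secret table x on {0,1}^2, public weight vector w, claim alpha.
x = rng.integers(0, p, size=4)              # vec(X~), known only to the prover
w = rng.integers(0, p, size=4)              # public
alpha = int(w @ x) % p                      # claim: w^T vec(X~) = alpha

X_tilde = x.reshape(2, 2, order='F')        # 2^k x 2^k' matrix view (k = k' = 1)
G = np.array([[1, 0], [0, 1], [1, 1]])      # toy code
X = (G @ X_tilde) % p                       # commit to the rows of X (Merkle tree omitted)

# Step 1: one-round partial sumcheck, folding z1.
W = lambda z: int(w @ bar(z)) % p           # multilinear extensions
Xf = lambda z: int(x @ bar(z)) % p
s1 = lambda t: (W([t, 0]) * Xf([t, 0]) + W([t, 1]) * Xf([t, 1])) % p
assert (s1(0) + s1(1)) % p == alpha         # verifier's round check

r1 = int(rng.integers(0, p))                # verifier's challenge
r_bar = bar([r1])

# Step 2: prover sends the folded vector y_r.
y_r = (X_tilde @ r_bar) % p

# Step 3: MVP check on randomly opened rows of the committed X.
S = [1, 2]
assert ((X[S] @ r_bar) % p == (G[S] @ y_r) % p).all()

# Step 4: final sumcheck consistency check.
w_r = (w.reshape(2, 2, order='F') @ r_bar) % p
assert s1(r1) == int(w_r @ y_r) % p
```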
### Bottleneck and recursion
The above protocol is sound and complete, but has a major cost:
* The vector $y_r$ has length $2^k$, which can be very large. Sending it directly becomes the communication bottleneck.
Ligerito’s idea is to avoid sending $y_r$ explicitly. Instead, it treats $y_r$ itself as a new matrix, commits to it via MVP, and reuses the same protocol recursively on this smaller instance.
Each recursion level reduces the dimension, eventually producing a logarithmic-depth proof with only small vectors ever transmitted.
This recursive folding–verification structure is what makes Ligerito both succinct and efficient.
## The Ligerito protocol (recursive version)
The base protocol we just saw is sound, but it has a major communication bottleneck. At the end of the partial sumcheck, the prover must send the entire vector $y_r = \tilde{X}\bar{r}$. If we fold $k'$ variables from a total of $k+k'$, this $y_r$ is a vector of length $2^k$. If $k$ is large (e.g., $k=20$, $y_r$ has $\sim 1$ million elements), this is far too expensive.
This is where Ligerito's core recursive idea comes in.
**The Main Idea:** Instead of *sending* the large vector $y_r$, the prover will *commit* to it and start a *new* instance of the protocol.
This creates a chain of smaller and smaller problems:
1. The prover starts with a big matrix $\tilde{X}_1$ and a claim about it.
2. They run one step of the protocol, which reduces the claim to a new, smaller vector $y_{r_1}$.
3. Instead of sending $y_{r_1}$, the prover reshapes it into a new, smaller matrix $\tilde{X}_2 = \operatorname{Mat}(y_{r_1})$.
4. They *commit* to $\tilde{X}_2$ and use the protocol *recursively* to prove two things:
* (a) The original claim, which has now been "folded" onto $\tilde{X}_2$.
* (b) That $\tilde{X}_2$ is *consistent* with the $\tilde{X}_1$ they committed to in the previous step.
5. This process repeats $\ell$ times, creating a chain $\tilde{X}_1 \to \tilde{X}_2 \to \dots \to \tilde{X}_\ell$.
6. The final matrix $\tilde{X}_\ell$ is so small that the prover can just send it directly, ending the recursion.
### The Recursive "Glue": Batching Two Claims
At any step $i$ (for $i \ge 2$), the verifier has just received a commitment to $\tilde{X}_i$. They need to be convinced of two distinct types of claims:
1. **The Sumcheck Claim (from round $i-1$):** The partial sumcheck from the previous round $i-1$ ended with a final check:
$$
s_{i-1}(r_{i-1}) = \underbrace{w_{r_{i-1}}^T y_{r_{i-1}}}_{\text{inner product from round } i-1}
$$
Since $y_{r_{i-1}}$ *is* $\operatorname{vec}(\tilde{X}_i)$ by definition, this is an inner product claim about $\tilde{X}_i$:
$$
\underbrace{(\operatorname{Mat}(\tilde{w}_{i-1})\bar{r}_{i-1})^T}_{\text{call this } \tilde{w}_i^{(\text{sumcheck})}} \operatorname{vec}(\tilde{X}_i) = \underbrace{s_{i-1}(r_{i-1})}_{\text{call this } \tilde{\alpha}_i^{(\text{sumcheck})}}
$$
2. **The MVP Claims (new for round $i$):** The verifier must also check that $\tilde{X}_i$ is *consistent* with the *previous* commitment $X_{i-1}$. They do this by running the MVP check on $X_{i-1}$: they sample a set $S_{i-1}$ of rows and challenge the prover. The prover must prove that for *every* row $j \in S_{i-1}$:
$$
(X_{i-1})_j \bar{r}_{i-1} = (G_{i-1})_j y_{r_{i-1}}
$$
Let $g_j^T$ be the $j$-th row of $G_{i-1}$. This is another inner product claim about $\tilde{X}_i$:
$$
\underbrace{g_j^T}_{\text{call this } (\tilde{w}_{ij}^{(\text{mvp})})^T} \operatorname{vec}(\tilde{X}_i) = \underbrace{(X_{i-1})_j \bar{r}_{i-1}}_{\text{call this } \tilde{\alpha}_{ij}^{(\text{mvp})}}
$$
The prover sends the claimed values $\tilde{\alpha}_{ij}^{(\text{mvp})}$, which the verifier checks against the $X_{i-1}$ openings.
The prover now has $1 + |S_{i-1}|$ inner product claims to prove about $\tilde{X}_i$. They prove them all at once using batched sumcheck.
The verifier generates random batching coefficients $\beta_j \in \mathbb{F}$ and forms a single, combined claim:
$$
\underbrace{
\left( \tilde{w}_i^{(\text{sumcheck})} + \sum_{j \in S_{i-1}} \beta_j \tilde{w}_{ij}^{(\text{mvp})} \right)^T
}_{\text{This is the new batched vector } \tilde{w}_i^T}
\operatorname{vec}(\tilde{X}_i)
= \underbrace{
\tilde{\alpha}_i^{(\text{sumcheck})} + \sum_{j \in S_{i-1}} \beta_j \tilde{\alpha}_{ij}^{(\text{mvp})}
}_{\text{This is the new batched scalar } \tilde{\alpha}_i}
$$
This new, single claim $\tilde{w}_i^T \operatorname{vec}(\tilde{X}_i) = \tilde{\alpha}_i$ is the invariant that gets passed to the *next* partial sumcheck for round $i$.
:::info
### Example: The Batched Claim
Let's simplify. At round $i$, the prover must prove:
1. **Sumcheck consistency:** $w_A^T \operatorname{vec}(\tilde{X}_i) = \alpha_A$
2. **MVP consistency (row 1):** $w_B^T \operatorname{vec}(\tilde{X}_i) = \alpha_B$
3. **MVP consistency (row 2):** $w_C^T \operatorname{vec}(\tilde{X}_i) = \alpha_C$
...and so on for all $|S_{i-1}|$ sampled rows.
The verifier samples random $\beta_B, \beta_C, \dots \in \mathbb{F}$ and asks the prover to prove the *single* combined claim:
$$
(w_A + \beta_B w_B + \beta_C w_C + \dots)^T \operatorname{vec}(\tilde{X}_i) = \alpha_A + \beta_B \alpha_B + \beta_C \alpha_C + \dots
$$
The prover now runs the partial sumcheck protocol (Tool 2) on *this* batched claim, folding it down to a new, even smaller claim for round $i+1$. This is the "glue" that holds the recursion together.
:::
### The Ligerito Protocol (Formalized)
Let's set up the recursion parameters:
* **Rounds:** $\ell \ge 2$.
* **Dimensions:** A sequence of dimensions $(k_i, k'_i)$ for $i=1, \dots, \ell$.
* The matrix $\tilde{X}_i$ will be $\mathbb{F}^{2^{k_i} \times 2^{k'_i}}$.
* The total number of coefficients is $2^{k_i + k'_i}$.
* The folding step reduces the problem size, so we require $k_i + k'_i = k_{i-1}$.
* **Codes:** A set of generator matrices $G_i \in \mathbb{F}^{m_i \times 2^{k_i}}$ for $i=1, \dots, \ell-1$.
The protocol proceeds in rounds $i = 1, \dots, \ell$.
#### Round $i$ ($1 \le i < \ell$):
1. **Prover: Commit.** The prover holds $\tilde{X}_i$. They compute $X_i = G_i \tilde{X}_i$ and commit to its rows via a Merkle tree, sending the root $C_i$ to the verifier.
2. **Prover/Verifier: Form Batched Claim.**
* **If $i=1$:** The initial claim is just $\tilde{w}_1^T \operatorname{vec}(\tilde{X}_1) = \tilde{\alpha}_1$, where $\tilde{w}_1 = w$ and $\tilde{\alpha}_1 = \alpha$.
* **If $i > 1$:** The prover and verifier perform the "Recursive Glue" step described above. The verifier samples $S_{i-1}$, challenges the prover, and they form the batched claim $\tilde{w}_i^T \operatorname{vec}(\tilde{X}_i) = \tilde{\alpha}_i$.
3. **Prover/Verifier: Partial Sumcheck.**
* They run the partial sumcheck protocol on the claim $\tilde{w}_i^T \operatorname{vec}(\tilde{X}_i) = \tilde{\alpha}_i$ to fold the first $k'_i$ variables.
* This involves the prover sending $k'_i$ polynomials $s_j(t)$.
* The verifier supplies $k'_i$ random challenges, forming the vector $r_i \in \mathbb{F}^{k'_i}$.
* This reduces the claim to a final check:
$$
s_i(r_i) = (\operatorname{Mat}(\tilde{w}_i)\bar{r}_i)^T (\tilde{X}_i \bar{r}_i)
$$
* The prover defines $y_{r_i} = \tilde{X}_i \bar{r}_i$.
4. **Prover: Define Next Problem.** The prover reshapes this resulting vector $y_{r_i}$ (which has length $2^{k_i}$) into the matrix for the next round:
$$
\tilde{X}_{i+1} = \operatorname{Mat}(y_{r_i}) \in \mathbb{F}^{2^{k_{i+1}} \times 2^{k'_{i+1}}}
$$
...and loops back to step 1 for round $i+1$.
#### Final Round ($\ell$):
The recursion "bottoms out". The prover has the final, small matrix $\tilde{X}_\ell$.
1. **Prover: Send Final Vector.** The matrix $\tilde{X}_\ell$ is small enough (roughly $N^{1/\ell}$ entries, which is $\operatorname{polylog}(N)$ for the paper's choice of $\ell$) that the prover just sends $y_\ell = \operatorname{vec}(\tilde{X}_\ell)$ *in the clear*. No more commitments.
2. **Verifier: Final Checks.** The verifier now holds the vector $y_\ell$ and can *directly* run the final checks that were "pending" from round $\ell-1$:
* **Final MVP Check:** The verifier samples $S_{\ell-1}$, opens the corresponding rows of $X_{\ell-1}$, and directly verifies:
$$
(X_{\ell-1})_{S_{\ell-1}} \bar{r}_{\ell-1} \stackrel{?}{=} (G_{\ell-1})_{S_{\ell-1}} y_\ell
$$
* **Final Sumcheck Check:** The verifier forms the final batched claim $\tilde{w}_\ell^T \operatorname{vec}(\tilde{X}_\ell) = \tilde{\alpha}_\ell$ (just as in the "Recursive Glue" step). Since $y_\ell = \operatorname{vec}(\tilde{X}_\ell)$, they check:
$$
\tilde{w}_\ell^T y_\ell \stackrel{?}{=} \tilde{\alpha}_\ell
$$
(Note: the paper simplifies this by combining the final sumcheck's $\tilde{\alpha}_\ell$ with the $s_{\ell-1}(r_{\ell-1})$ value, but the logic is the same).
If all checks pass, the verifier accepts.
### The Telescoping Guarantee
Why does this work? Because all the partial evaluations chain together perfectly. The final vector $y_\ell$ that the verifier checks is:
$$
\begin{align}
y_\ell &= \operatorname{vec}(\tilde{X}_\ell) \\
&= y_{r_{\ell-1}} \\
&= \tilde{X}_{\ell-1} \bar{r}_{\ell-1} \\
&= \operatorname{Mat}(y_{r_{\ell-2}}) \bar{r}_{\ell-1} = \operatorname{Mat}(\tilde{X}_{\ell-2} \bar{r}_{\ell-2}) \bar{r}_{\ell-1} \\
&= \dots \\
&= \operatorname{Mat}\big(\operatorname{vec}(\tilde{X}_1)\big)\, (\bar{r}_1 \otimes \bar{r}_2 \otimes \cdots \otimes \bar{r}_{\ell-1})
\end{align}
$$
The final $y_\ell$ is a partial evaluation of the *original* matrix $\tilde{X}_1$ at the single point $(r_1, \dots, r_{\ell-1})$ obtained by concatenating all the verifier's challenges (its barred vector is $\bar{r}_1 \otimes \dots \otimes \bar{r}_{\ell-1}$).
Because the batched sumcheck at each step *proves* that the inner product claim is consistent *and* that the MVP (folding) step was done correctly, the final check on $y_\ell$ transitively guarantees the correctness of the original claim $w^T \operatorname{vec}(\tilde{X}_1) = \alpha$.
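The key algebraic fact behind this chain is that two successive folds compose into a single fold by the combined Kronecker challenge $\bar{r}_1 \otimes \bar{r}_2$. A small numeric check (16 arbitrary coefficients, two folds of two variables each):

```python
import numpy as np

def bar(z):
    out = np.array([1])
    for zi in z:
        out = np.kron(out, [1 - zi, zi])
    return out

rng = np.random.default_rng(6)
v = rng.integers(0, 10, size=16)        # vec(X~_1): 4 variables in total
r1, r2 = [2, 3], [5, 7]                 # two rounds of challenges, 2 variables each

# Fold twice, reshaping column-major between the rounds:
y1 = v.reshape(4, 4, order='F') @ bar(r1)   # fold the first two variables
y2 = y1 @ bar(r2)                           # fold the remaining two (final Mat is 1 x 4)

# Fold once, using the combined Kronecker challenge:
assert y2 == v @ bar(r1 + r2)               # bar(r1 + r2) == kron(bar(r1), bar(r2))
```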
## Making the verifier succinct: tensorizable codes
There is one last problem to solve. Look at the verifier's "Final Check":
$$
\tilde{w}_\ell^T y_\ell \stackrel{?}{=} \tilde{\alpha}_\ell
$$
The vector $y_\ell$ is small (polylog size, $N^{1/\ell}$) and sent by the prover. That's fine.
But what is $\tilde{w}_\ell$?
$\tilde{w}_\ell$ is the result of a long chain of batching and partial evaluations:
* It starts with $w$ (size $N$).
* Then we add in rows from $G_1$ (each size $2^{k_1} \approx N^{1/2}$).
* Then we partially evaluate that, add in rows from $G_2$, and so on.
Naively, just to *compute* the vector $\tilde{w}_\ell$ so they can perform the inner product, the verifier would need to do $O(N)$ work. This would make the verifier "fast" (linear time) but not "succinct" (polylog time).
The solution is to choose codes $G_i$ that have a special structure, allowing for *succinct evaluation*. The verifier will *never* build the full $\tilde{w}_\ell$ vector. Instead, they will compute the inner product $\tilde{w}_\ell^T y_\ell$ "on the fly," entry by entry.
### The Verifier's Final Task (Revisited)
The verifier's final task is to compute the sum:
$$
\tilde{w}_\ell^T y_\ell = \sum_{j=1}^{N^{1/\ell}} \tilde{w}_\ell[j] \cdot y_\ell[j]
$$
The verifier knows $y_\ell$ (it was sent by the prover). To make this check succinct, they need a way to compute any *single entry* $\tilde{w}_\ell[j]$ in polylog time (e.g., $O(\log N)$).
If they can do that, the total time for the final check is:
$$
(\text{Size of } y_\ell) \times (\text{Time to compute one entry of } \tilde{w}_\ell) = O(N^{1/\ell}) \times O(\log N) = O(\operatorname{polylog}(N))
$$
This achieves succinctness. The tool that enables this is tensorizable codes.
### Tensorizable codes
A code matrix $G \in \mathbb{F}^{m \times 2^k}$ is tensorizable if each row $g_i^T$ can be written as a tensor (Kronecker) product of $k$ tiny vectors:
$$
g_i = v_{i1} \otimes v_{i2} \otimes \cdots \otimes v_{ik}, \qquad v_{ij} \in \mathbb{F}^2
$$
Instead of storing $2^k$ entries for the row $g_i$, we only need to store the $2k$ entries of its factors. Many important codes (like Reed-Solomon and Reed-Muller) have this property.
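For instance, the "power" rows that show up in a Reed-Solomon-style generator matrix, $(1, x, x^2, \dots, x^{2^k - 1})$ for an evaluation point $x$, factor into $k$ length-2 vectors. A quick check of that factorization (toy values):

```python
import numpy as np

x, k = 3, 4                                   # toy evaluation point; row length 2^k = 16
row = np.array([x**j for j in range(2**k)])   # (1, x, x^2, ..., x^15)

# (1, x^8) ⊗ (1, x^4) ⊗ (1, x^2) ⊗ (1, x) reproduces the row:
tensored = np.array([1])
for i in reversed(range(k)):
    tensored = np.kron(tensored, [1, x**(2**i)])
assert (tensored == row).all()
```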
### Fast inner products with tensor products
The mixed-product property of the Kronecker product allows for extremely fast computations. If we have two tensorizable vectors:
$$
v = v_1 \otimes \cdots \otimes v_k, \qquad u = u_1 \otimes \cdots \otimes u_k
$$
Their inner product, naively a sum of $2^k$ terms, becomes a product of $k$ tiny inner products:
$$
\boxed{
v^T u = (v_1^T u_1) \cdot (v_2^T u_2) \cdot \cdots \cdot (v_k^T u_k)
}
$$
This costs $O(k)$ operations, not $O(2^k)$.
:::info
### Example: $k = 2$
Let $v = \begin{pmatrix} a_0 \\ a_1 \end{pmatrix} \otimes \begin{pmatrix} b_0 \\ b_1 \end{pmatrix} = \begin{pmatrix} a_0 b_0 \\ a_0 b_1 \\ a_1 b_0 \\ a_1 b_1 \end{pmatrix}$
and $u = \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} \otimes \begin{pmatrix} d_0 \\ d_1 \end{pmatrix} = \begin{pmatrix} c_0 d_0 \\ c_0 d_1 \\ c_1 d_0 \\ c_1 d_1 \end{pmatrix}$.
**Naive inner product ($O(2^k)$):**
$$
v^T u = (a_0 b_0 c_0 d_0) + (a_0 b_1 c_0 d_1) + (a_1 b_0 c_1 d_0) + (a_1 b_1 c_1 d_1)
$$
**Fast inner product ($O(k)$):**
$$
\begin{align}
v^T u &= \left( \begin{pmatrix} a_0 \\ a_1 \end{pmatrix}^T \begin{pmatrix} c_0 \\ c_1 \end{pmatrix} \right) \cdot \left( \begin{pmatrix} b_0 \\ b_1 \end{pmatrix}^T \begin{pmatrix} d_0 \\ d_1 \end{pmatrix} \right) \\
&= (a_0 c_0 + a_1 c_1) \cdot (b_0 d_0 + b_1 d_1)
\end{align}
$$
Both expressions are identical, but the second one is exponentially faster to compute.
:::
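The same identity, checked numerically for $k = 3$ with arbitrary factors:

```python
import numpy as np

rng = np.random.default_rng(7)
vs = rng.integers(0, 10, size=(3, 2))         # factors v_1, v_2, v_3
us = rng.integers(0, 10, size=(3, 2))         # factors u_1, u_2, u_3

def tensor(factors):
    out = np.array([1])
    for f in factors:
        out = np.kron(out, f)
    return out

naive = tensor(vs) @ tensor(us)                       # O(2^k) work
fast = np.prod([vi @ ui for vi, ui in zip(vs, us)])   # O(k) work
assert naive == fast
```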
### Fast partial evaluation
This is the most important property. Recall that partially evaluating a vector $v \in \mathbb{F}^{2^k}$ (as a multilinear polynomial) at a point $r = (r_1, \dots, r_{k'})$ is done by:
$$
y = \operatorname{Mat}(v)\bar{r}
$$
If $v$ is tensorizable ($v = v_1 \otimes \cdots \otimes v_k$), this operation is also incredibly fast. The resulting vector $y \in \mathbb{F}^{2^{k-k'}}$ is *also* tensorizable:
$$
\begin{align}
y &= \operatorname{Mat}(v) \left( (1-r_1, r_1) \otimes \cdots \otimes (1-r_{k'}, r_{k'}) \right) \\
&= \underbrace{ \left( (1-r_1, r_1)^T v_1 \right) }_{\text{scalar}} \cdot \cdots \cdot \underbrace{ \left( (1-r_{k'}, r_{k'})^T v_{k'} \right) }_{\text{scalar}} \cdot \underbrace{ \left( v_{k'+1} \otimes \cdots \otimes v_k \right) }_{\text{tensor product}}
\end{align}
$$
The first $k'$ factors are "folded" into scalars, and the remaining $k-k'$ factors form a new, smaller tensor product. This computation takes $O(k)$ time, not $O(2^k)$.
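A numeric check of this folding rule for $k = 3$, $k' = 2$ (arbitrary factors): the first two factors collapse to scalars and the third factor survives unchanged.

```python
import numpy as np

rng = np.random.default_rng(8)
v1, v2, v3 = rng.integers(0, 10, size=(3, 2))     # v = v1 ⊗ v2 ⊗ v3
r1, r2 = 4, 9                                     # point for the first two variables

v = np.kron(np.kron(v1, v2), v3)                  # the full length-8 vector
r_bar = np.kron([1 - r1, r1], [1 - r2, r2])

# Generic partial evaluation: Mat(v) r_bar, a length-2 vector.
y_generic = v.reshape(2, 4, order='F') @ r_bar

# Tensor shortcut: two scalar folds times the remaining factor.
y_fast = (np.array([1 - r1, r1]) @ v1) * (np.array([1 - r2, r2]) @ v2) * v3
assert (y_generic == y_fast).all()
```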
### Putting It All Together: The Succinct Verifier
This is the final piece of the puzzle.
1. The verifier's final task is computing $\tilde{w}_\ell^T y_\ell$.
2. The vector $\tilde{w}_\ell$ is a linear combination of (a) the original $w$ and (b) rows from $G_1, \dots, G_{\ell-1}$, all partially evaluated at the challenge points $r_1, \dots, r_{\ell-1}$.
3. We *require* that the initial $w$ is structured (e.g., sparse or tensorizable) and that all code matrices $G_i$ are tensorizable.
4. Because of the "Fast partial evaluation" property, *every term* that makes up $\tilde{w}_\ell$ can be represented in $O(\log N)$ space and evaluated at any point $j$ in $O(\log N)$ time.
5. Therefore, the verifier can compute any entry $\tilde{w}_\ell[j]$ in $O(\log N)$ time.
6. To compute the full inner product $\sum \tilde{w}_\ell[j] \cdot y_\ell[j]$, the verifier loops from $j=1$ to $N^{1/\ell}$:
* Compute $\tilde{w}_\ell[j]$ (takes $O(\log N)$ time using the tensor properties).
* Get $y_\ell[j]$ (from the prover's message).
* Multiply and add to the sum.
7. The verifier's total work is $O(N^{1/\ell} \cdot \log N)$, which is $O(\operatorname{polylog}(N))$.
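Finally, here is a sketch of the entry-by-entry access this relies on: entry $j$ of a tensor-structured vector $u_1 \otimes \dots \otimes u_k$ is a product of one component from each factor, selected by the bits of $j$ (most significant bit first, matching the Kronecker ordering used throughout). Computing it takes $O(k) = O(\log N)$ work, with no need to materialize the length-$2^k$ vector.

```python
import numpy as np

def tensor_entry(factors, j):
    """Entry j of factors[0] ⊗ ... ⊗ factors[k-1], computed in O(k) time."""
    k = len(factors)
    out = 1
    for i, f in enumerate(factors):
        bit = (j >> (k - 1 - i)) & 1      # bit of j selecting within factor i (MSB first)
        out *= f[bit]
    return out

rng = np.random.default_rng(9)
factors = [rng.integers(0, 10, size=2) for _ in range(5)]   # k = 5, vector length 32

# Materialize the full vector only to check the shortcut against it:
full = np.array([1])
for f in factors:
    full = np.kron(full, f)

assert all(tensor_entry(factors, j) == full[j] for j in range(32))
```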