---
tags: kzg
---
# Fast Amortized KZG Proofs
This post provides an overview and implementation of the algorithm presented in:
- [Fast Amortized KZG Proofs](https://eprint.iacr.org/2023/033)
This algorithm allows us to compute $n$ opening proofs for a KZG commitment:
- In $O((n+d) \log (n+d))$ group operations if $\{\xi_i\}$ are $n$-th roots of unity.
- In $O(n \log^2 n + d \log d)$ group operations otherwise.
We will cover the first scenario.
An implementation using the [arkworks](https://github.com/arkworks-rs/) library can be found here:
- [Keaki FK PR](https://github.com/brech1/keaki/pull/14)
And another post on the topic:
- [Feist-Khovratovich technique for computing KZG proofs fast](https://alinush.github.io/2021/06/17/Feist-Khovratovich-technique-for-computing-KZG-proofs-fast.html)
## KZG Polynomial Commitment Scheme
The KZG commitment scheme allows for efficient commitment to a polynomial and proof generation for its evaluation at specific points.
For a detailed description of this scheme, I recommend reading:
- [Dankrad Feist's blog post](https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html)
It has four steps:
1. **Setup:** Establish parameters.
2. **Commitment:** Commit to a polynomial.
3. **Open (Proof):** Generate a proof for the polynomial's evaluation.
4. **Verify:** Validate the proof for the committed polynomial. We won't go over this step in this doc.
### Setup
Let $\mathbb{F}$ be a field and let $\mathbb{G}$ be a group with a designated element $g$, called a generator.
- We denote $[a]=a \cdot g$ for integer $a$.
For polynomials of degree up to $d$, a trusted third party selects a secret $s$ and then constructs $d+1$ elements of $\mathbb{G}$:
$$
[1], [s],\left[s^{2}\right], \ldots,\left[s^{d}\right]
$$
```rust
pub fn setup(secret: E::ScalarField, max_d: usize) -> Self {
    // g1_pow[i] = [s^i] = s^i * g, for i = 0..=max_d
    let mut g1_pow = Vec::with_capacity(max_d + 1);
    for i in 0..=max_d {
        g1_pow.push(g1_gen::<E>().mul(secret.pow([i as u64])));
    }
    Self { g1_pow }
}
```
### Commitment
For a polynomial $f(X)$ of degree $d$:
$$
f(X) = \sum_{0 \leq i \leq d} f_i X^i \in \mathbb{F}[X]
$$
- Where $f_{i}$ are the coefficients of polynomial $f(X)$
The commitment to this polynomial is:
$$
C_f = \sum_{0 \leq i \leq d} f_i [s^i]
$$
```rust
pub fn commit(&self, p: &[E::ScalarField]) -> Result<E::G1, KZGError> {
    // commitment = sum(p[i] * g1_pow[i]) = sum(f_i * [s^i])
    let mut commitment = E::G1::zero();
    for (i, &coeff) in p.iter().enumerate() {
        commitment += self.g1_pow[i] * coeff;
    }
    Ok(commitment)
}
```
We use the pre-calculated elements from the setup, so the committer never needs to know $s$ itself.
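Since $[\cdot]$ is linear, the commitment is just the evaluation of $f$ at the secret point $s$, carried out in the group:
$$
C_f = \sum_{0 \leq i \leq d} f_i [s^i] = \left[\sum_{0 \leq i \leq d} f_i s^i\right] = [f(s)]
$$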
### Proof
To prove that $f(y) = z$:
$$
\pi[f(y) = z] = C_{T_y}
$$
Our proof is a commitment to $T_y$, which we call the **quotient** polynomial.
$$
T_{y}(X)=\frac{f(X)-z}{X-y}
$$
```rust
pub fn open(
    &self,
    p: &[E::ScalarField],
    point: &E::ScalarField,
) -> Result<E::G1, KZGError> {
    let value = evaluate_polynomial::<E>(p, point);
    // p(x) - p(point)
    let numerator = subtract_polynomials::<E>(p, &[value]);
    // x - point
    let denominator = [-*point, E::ScalarField::ONE];
    let quotient = divide_polynomials::<E>(&numerator, &denominator)?;
    // Generate the proof by committing to the quotient polynomial
    self.commit(&quotient)
}
```
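To tie the three steps together, here is a hedged usage sketch; the wrapper name `KZG` and the choice of the `Bn254` curve are illustrative assumptions, not necessarily the exact API of the linked PR:
```rust
use ark_bn254::{Bn254, Fr};
use ark_std::UniformRand;

// Trusted setup for polynomials of degree up to 3.
let mut rng = ark_std::test_rng();
let secret = Fr::rand(&mut rng);
let kzg = KZG::<Bn254>::setup(secret, 3);

// f(X) = 1 + 2X + 3X^2 + 4X^3
let f = vec![Fr::from(1u64), Fr::from(2u64), Fr::from(3u64), Fr::from(4u64)];
let commitment = kzg.commit(&f).unwrap();

// Proof that f(5) = 586.
let point = Fr::from(5u64);
let proof = kzg.open(&f, &point).unwrap();
```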
### Quotient Polynomial
The quotient polynomial:
$$
T_{y}(X)=\frac{f(X)-z}{X-y}
$$
Can be expressed as:
$$
T_{y}(X) = \sum_{0 \leq i \leq d-1} t_{i} X^{i}
$$
Where:
- Since $X - y$ has degree $1$ and $f(X) - z$ has degree $d$, the quotient $T_{y}$ has degree $d-1$ and its leading coefficient is:
$$
t_{d-1} = f_{d}
$$
- And we get a recursive relation for the coefficients from the polynomial division:
$$
t_{j} = f_{j+1} + y \cdot t_{j+1}
$$
Expanding on this expression we get:
$$
\begin{aligned}
T_{y}(X) = & \ f_d X^{d-1} + \\
& \ (f_{d-1} + y f_d) X^{d-2} + \\
& \ (f_{d-2} + y f_{d-1} + y^2 f_d) X^{d-3} + \\
& \ (f_{d-3} + y f_{d-2} + y^2 f_{d-1} + y^3 f_d) X^{d-4} + \\
& \ \cdots + \\
& \ (f_1 + y f_2 + y^2 f_3 + \cdots + y^{d-1} f_d) X^0
\end{aligned}
$$
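As a quick sanity check on this recurrence, here is a minimal sketch (generic over arkworks' `ark_ff::Field`; the helper name is illustrative) that builds the quotient coefficients directly instead of running a generic polynomial division:
```rust
use ark_ff::Field;

/// Coefficients t_0, ..., t_{d-1} of T_y(X) = (f(X) - f(y)) / (X - y),
/// using t_{d-1} = f_d and t_j = f_{j+1} + y * t_{j+1}.
/// `f` holds f_0, ..., f_d in ascending order.
fn quotient_coeffs<F: Field>(f: &[F], y: F) -> Vec<F> {
    let d = f.len().saturating_sub(1);
    if d == 0 {
        return Vec::new();
    }
    // Start from the leading coefficient t_{d-1} = f_d and work downwards.
    let mut t = Vec::with_capacity(d);
    let mut acc = f[d];
    t.push(acc);
    for j in (0..d - 1).rev() {
        acc = f[j + 1] + y * acc; // t_j = f_{j+1} + y * t_{j+1}
        t.push(acc);
    }
    // Reverse so that t[j] = t_j.
    t.reverse();
    t
}
```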
## Calculating multiple proofs
We can use the last expression to write the proof at an evaluation point $\xi_k$ by replacing:
- $X$ by our secret $s$
- $y$ by our evaluation point $\xi_k$
$$
\begin{aligned}
\pi\left[f\left(\xi_{k}\right)=z_{k}\right]=C_{T_{\xi_{k}}}= & \ f_{d}\left[s^{d-1}\right] \\
& +\left(f_{d-1} + \xi_{k} f_{d}\right)\left[s^{d-2}\right] \\
& +\left(f_{d-2} + \xi_{k} f_{d-1} + \xi_{k}^{2} f_{d}\right)\left[s^{d-3}\right] \\
& +\left(f_{d-3} + \xi_{k} f_{d-2} + \xi_{k}^{2} f_{d-1} + \xi_{k}^{3} f_{d}\right)\left[s^{d-4}\right] \\
& +\cdots \\
& +\left(f_{1} + \xi_{k} f_{2} + \xi_{k}^{2} f_{3} + \cdots + \xi_{k}^{d-1} f_{d}\right)
\end{aligned}
$$
Regrouping by powers of $\xi_{k}$:
$$
\begin{aligned}
C_{T_{\xi_{k}}}=
& \left(f_{d}\left[s^{d-1}\right]+f_{d-1}\left[s^{d-2}\right]+\cdots+f_{2}[s]+f_{1}\right) \\
& +\left(f_{d}\left[s^{d-2}\right]+f_{d-1}\left[s^{d-3}\right]+\cdots+f_{3}[s]+f_{2}\right) \xi_{k} \\
& +\left(f_{d}\left[s^{d-3}\right]+f_{d-1}\left[s^{d-4}\right]+\cdots+f_{4}[s]+f_{3}\right) \xi_{k}^{2} \\
& +\left(f_{d}\left[s^{d-4}\right]+f_{d-1}\left[s^{d-5}\right]+\cdots+f_{5}[s]+f_{4}\right) \xi_{k}^{3} \\
& +\cdots \\
& +\left(f_{d}[s]+f_{d-1}\right) \xi_{k}^{d-2}+f_{d} \xi_{k}^{d-1}
\end{aligned}
$$
We can then express $C_{T_{\xi_{k}}}$ as the evaluation of a polynomial at $\xi_{k}$:
$$
C_{T_{\xi_{k}}}=h_{1}+h_{2} \xi_{k}+h_{3} \xi_{k}^{2}+\cdots+h_{d} \xi_{k}^{d-1}
$$
Where the coefficients $h_i$ are:
$$
h_{i}=\left(f_{d}\left[s^{d-i}\right]+f_{d-1}\left[s^{d-i-1}\right]+\cdots+f_{i+1}[s]+f_{i}\right)
$$
The proofs $\pi\left[f\left(\xi_{k}\right)=z_{k}\right]$ are therefore **evaluations** of a single polynomial with group coefficients $h_{i}$ at the points $\xi_{1}, \xi_{2}, \ldots, \xi_{n}$:
$$
\mathbf{C}_{T}=\left[C_{T_{\xi_{1}}}, C_{T_{\xi_{2}}}, \ldots, C_{T_{\xi_{n}}}\right]
$$
## Computing $h(X)$
Let's define a polynomial $h(X)$ made of the $h_i$ coefficients:
$$
h(X)=h_{1}+h_{2} X+\ldots+h_{d} X^{d-1}
$$
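Computing all the $h_i$ straight from this formula costs $O(d^2)$ group operations, which is exactly what the rest of the post avoids. A minimal sketch of that naive computation (generic over arkworks' `Pairing`; the helper name and the `g1_pow` argument are illustrative, and the trailing $f_i$ term is read as $f_i \, [s^0]$):
```rust
use ark_ec::pairing::Pairing;

/// Naive O(d^2) computation of h_1, ..., h_d, where
/// h_i = f_d [s^{d-i}] + f_{d-1} [s^{d-i-1}] + ... + f_{i+1} [s] + f_i [s^0].
/// `f` holds f_0, ..., f_d and `g1_pow[k]` holds [s^k].
fn naive_h<E: Pairing>(f: &[E::ScalarField], g1_pow: &[E::G1]) -> Vec<E::G1> {
    let d = f.len() - 1;
    (1..=d)
        .map(|i| (i..=d).map(|j| g1_pow[j - i] * f[j]).sum::<E::G1>())
        .collect()
}
```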
We can express the computation of these coefficients as a matrix-vector product $\mathbf{h} = \mathbf{A} \cdot \mathbf{s}$:
$$
\left[\begin{array}{c}
h_{1} \\
h_{2} \\
h_{3} \\
\vdots \\
h_{d-1} \\
h_{d}
\end{array}\right]=\left[\begin{array}{cccccc}
f_{d} & f_{d-1} & f_{d-2} & f_{d-3} & \cdots & f_{1} \\
0 & f_{d} & f_{d-1} & f_{d-2} & \cdots & f_{2} \\
0 & 0 & f_{d} & f_{d-1} & \cdots & f_{3} \\
& & \ddots & & & \\
0 & 0 & 0 & 0 & \cdots & f_{d-1} \\
0 & 0 & 0 & 0 & \cdots & f_{d}
\end{array}\right] \cdot\left[\begin{array}{c}
{\left[s^{d-1}\right]} \\
{\left[s^{d-2}\right]} \\
{\left[s^{d-3}\right]} \\
\vdots \\
{[s]} \\
{[1]}
\end{array}\right]
$$
The matrix $\mathbf{A}$ is a [Toeplitz](https://en.wikipedia.org/wiki/Toeplitz_matrix) matrix, which means that the entries on each diagonal (from top left to bottom right) are constant.
### Circulant Matrix
The Toeplitz structure is useful because we can embed the matrix into a [circulant](https://en.wikipedia.org/wiki/Circulant_matrix) matrix, in which each row is a cyclic shift of the previous one.
$$
\mathbf{B}=\left[\begin{array}{cccccc}
b_{n-1} & b_{n-2} & b_{n-3} & b_{n-4} & \cdots & b_{0} \\
b_{0} & b_{n-1} & b_{n-2} & b_{n-3} & \cdots & b_{1} \\
b_{1} & b_{0} & b_{n-1} & b_{n-2} & \cdots & b_{2} \\
& & \cdots & & & \\
b_{n-3} & b_{n-4} & b_{n-5} & b_{n-6} & \cdots & b_{n-2} \\
b_{n-2} & b_{n-3} & b_{n-4} & b_{n-5} & \cdots & b_{n-1}
\end{array}\right]
$$
Circulant matrices are cool because you can express a **matrix-vector** product as a **polynomial multiplication**.
Given a matrix-vector product $\mathbf{a} = \mathbf{B} \cdot \mathbf{c}$ with $\mathbf{c} = \left(c_{0}, c_{1}, \ldots, c_{n-1}\right)$, associate a polynomial to each vector (for $\mathbf{B}$, use the entries $b_{0}, \ldots, b_{n-1}$ that define it):
$$
a(X)=\sum_{i} a_{i} X^{i}
$$
$$
b(X)=\sum_{i} b_{i} X^{i}
$$
$$
c(X)=\sum_{i} c_{i} X^{i}
$$
The coefficients of the resulting vector $\mathbf{a}$ are:
$$
a_{i}=\sum_{j+k \equiv i-1 \ (\bmod\ n)} b_{j} c_{k}
$$
which is exactly the coefficient-wise statement of:
$$
a(X) \equiv X \cdot b(X) \cdot c(X) \quad\left(\bmod\ X^{n}-1\right)
$$
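As a quick check, take $n = 2$, so the first row of $\mathbf{B}$ is $(b_1, b_0)$ and the second is $(b_0, b_1)$:
$$
\mathbf{B} \cdot \mathbf{c}=\left[\begin{array}{cc}
b_{1} & b_{0} \\
b_{0} & b_{1}
\end{array}\right]\left[\begin{array}{c}
c_{0} \\
c_{1}
\end{array}\right]=\left[\begin{array}{c}
b_{1} c_{0}+b_{0} c_{1} \\
b_{0} c_{0}+b_{1} c_{1}
\end{array}\right]
$$
while
$$
X \cdot b(X) \cdot c(X) \equiv\left(b_{0} c_{1}+b_{1} c_{0}\right)+\left(b_{0} c_{0}+b_{1} c_{1}\right) X \quad\left(\bmod\ X^{2}-1\right)
$$
so the two coefficient vectors agree.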
## Discrete Fourier Transform (DFT)
But why do we even want a polynomial multiplication? In coefficient representation, this operation takes $O(n^2)$ multiplications:
```rust
pub fn multiply_polynomials<E: Pairing>(
    p: &[E::ScalarField],
    q: &[E::ScalarField],
) -> Vec<E::ScalarField> {
    // result[i + j] accumulates p[i] * q[j] for every pair of coefficients
    let mut result = vec![E::ScalarField::zero(); p.len() + q.len() - 1];
    for (i, &coeff_a) in p.iter().enumerate() {
        for (j, &coeff_b) in q.iter().enumerate() {
            result[i + j] += coeff_a * coeff_b;
        }
    }
    result
}
```
Well, we can use the DFT to convert the polynomials into their point-value form, making polynomial multiplication significantly faster. DFT for vectors in $\mathbb{F}^{n}$ is defined as
$$
\operatorname{DFT}_{n}\left(a_{0}, a_{1}, \ldots, a_{n-1}\right)=\left(b_{0}, b_{1}, \ldots, b_{n-1}\right)
$$
Where:
$$
b_i = \sum_{j=0}^{n-1} a_j \omega^{ij}
$$
If we think of the vector $a$ as a polynomial $a(X)$, then:
$$
a(X)=\sum_{j} a_{j} X^{j}
$$
The DFT gives us the evaluations of this polynomial at $\omega^{0}, \omega^{1}, \ldots, \omega^{n-1}$, where $\omega$ is a primitive $n$-th root of unity, i.e. an element satisfying $\omega^{n}=1$ (and $\omega^{k} \neq 1$ for $0 < k < n$).
In this point-value form, polynomial multiplication is done pointwise, and the inverse DFT (iDFT) recovers the coefficients of the product, reducing the complexity from $O(n^2)$ to $O(n \log n)$.
We will use the Fast Fourier Transform (FFT) to perform the DFT. I won't dive more into this topic, but you can check the resources list.
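For concreteness, here is a minimal sketch of FFT-based multiplication using arkworks' `Radix2EvaluationDomain`, the same domain type used in the implementation below (the function name is illustrative):
```rust
use ark_ff::FftField;
use ark_poly::{EvaluationDomain, Radix2EvaluationDomain};

/// Multiply two coefficient vectors by evaluating them with an FFT,
/// multiplying pointwise, and interpolating back with an inverse FFT.
fn fft_multiply<F: FftField>(p: &[F], q: &[F]) -> Vec<F> {
    // The product has p.len() + q.len() - 1 coefficients; the domain
    // rounds this up to the next power of two.
    let domain = Radix2EvaluationDomain::<F>::new(p.len() + q.len() - 1).unwrap();
    // Coefficient form -> evaluations at the roots of unity (zero-padded).
    let p_evals = domain.fft(p);
    let q_evals = domain.fft(q);
    // Pointwise product in evaluation form.
    let product_evals: Vec<F> = p_evals
        .iter()
        .zip(q_evals.iter())
        .map(|(a, b)| *a * *b)
        .collect();
    // Evaluations -> coefficients of p * q (with trailing zeros up to the
    // domain size).
    domain.ifft(&product_evals)
}
```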
## Toeplitz Matrix to Circulant Matrix
Let's go back to our matrix. We can pad it to size $2 n \times 2 n$ to embed it in a circulant matrix:
$$
\mathbf{A}^{\prime}=\left[\begin{array}{cccccccccc}
f_{n-1} & f_{n-2} & f_{n-3} & f_{n-4} & \cdots & f_{0} & 0 & 0 & \cdots & 0 \\
0 & f_{n-1} & f_{n-2} & f_{n-3} & \cdots & f_{1} & f_{0} & 0 & \cdots & 0 \\
0 & 0 & f_{n-1} & f_{n-2} & \cdots & f_{2} & f_{1} & f_{0} & \cdots & 0 \\
& & \vdots & & & & & & & \\
0 & 0 & 0 & 0 & \cdots & f_{n-2} & f_{n-3} & f_{n-4} & \cdots & 0 \\
0 & 0 & 0 & 0 & \cdots & f_{n-1} & f_{n-2} & f_{n-3} & \cdots & 0 \\
0 & 0 & 0 & 0 & \cdots & 0 & f_{n-1} & f_{n-2} & \cdots & f_{0} \\
f_{0} & 0 & 0 & 0 & \cdots & 0 & 0 & f_{n-1} & \cdots & f_{1} \\
f_{1} & f_{0} & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & f_{2} \\
& & \vdots & & & & & & & \\
f_{n-2} & f_{n-3} & f_{n-4} & f_{n-5} & \cdots & 0 & 0 & 0 & \cdots & f_{n-1}
\end{array}\right]
$$
We'll also need to pad the vector $\mathbf{s}$ with $d$ neutral (identity) elements:
$$
\mathbf{s}^{\prime}=\left[\begin{array}{c}
{\left[s^{d-1}\right]} \\
{\left[s^{d-2}\right]} \\
{\left[s^{d-3}\right]} \\
\vdots \\
{[s]} \\
{[1]} \\
0 \\
\vdots \\
0
\end{array}\right]
$$
Our result vector will now have length $2d$:
$$
\mathbf{A}^{\prime} \cdot \mathbf{s}^{\prime}=\mathbf{h}^{\prime}=\left[\begin{array}{c}
h_{0} \\
h_{1} \\
h_{2} \\
\vdots \\
h_{d-2} \\
h_{d-1} \\
h_{d} \\
\vdots \\
h_{2d-1}
\end{array}\right]
$$
## Implementation
We now have all the pieces and we can start with the implementation.
```rust
pub fn set_open(&self, p: &[E::ScalarField]) -> Result<Vec<E::G1>, KZGError> {
    // Number of coefficients of p
    let d = p.len();
    // FFT domain of size 2d
    let domain_2d =
        Radix2EvaluationDomain::<<E as Pairing>::ScalarField>::new(2 * d).unwrap();
    ...
```
#### Create $\mathbf{s}^\prime$ polynomial
$$
\mathbf{s}^\prime = (\left[s^{d-1}\right], \left[s^{d-2}\right], \left[s^{d-3}\right], \cdots, [s], [1], \underbrace{0, 0, \ldots, 0}_{d \text{ neutral elements }})
$$
```rust
// ([s^{d-1}], [s^{d-2}], ..., [s], [1])
let mut s = Vec::new();
for i in (0..d).rev() {
    s.push(self.g1_pow[i]);
}
// Pad with d identity elements
for _ in 0..d {
    s.push(E::G1::zero());
}
```
#### Create $\mathbf{a}^\prime$ polynomial from $\mathbf{A}^\prime$ matrix
$$
\mathbf{a}^\prime = (\underbrace{0, 0, \ldots, 0}_{d \text{ zeroes }}, f_{1}, f_{2}, \ldots, f_{d})
$$
```rust
// d leading zeros, followed by the coefficients of p
let mut a = Vec::new();
for _ in 0..d {
    a.push(E::ScalarField::zero());
}
for &coeff in p {
    a.push(coeff);
}
```
#### Apply FFT to both polynomials
- Compute the FFT of $\mathbf{s}^\prime$ polynomial:
$$
\widehat{\mathbf{s}^\prime} = \text{FFT}_{2d}(\mathbf{s}^\prime)
$$
```rust
let hat_s = domain_2d.fft(&s);
```
- Compute the FFT of $\mathbf{a}^\prime$ polynomial:
$$
\widehat{\mathbf{a}^\prime} = \text{FFT}_{2d}(\mathbf{a}^\prime )
$$
```rust
let hat_a = domain_2d.fft(&a);
```
#### Multiply Pointwise
In evaluation (point-value) form, the polynomial multiplication becomes a simple pointwise product of the transformed vectors.
$$
\widehat{\mathbf{h}^\prime} = \widehat{\mathbf{a}^\prime} \circ \widehat{\mathbf{s}^\prime}
$$
```rust
let mut hat_h: Vec<E::G1> = Vec::with_capacity(2 * d);
for i in 0..2 * d {
    hat_h.push(hat_s[i].mul(hat_a[i]));
}
```
#### Apply the Inverse FFT
$$
\mathbf{h}^\prime = \text{iFFT}_{2d}(\widehat{\mathbf{h}^\prime})
$$
```rust
let h_prime = domain_2d.ifft(&hat_h);
```
#### Extract the Resulting Vector $h$
Take the first $d$ elements of $\mathbf{h}^\prime$ (indexed from $0$ here, matching the code):
$$
h = [h_0, h_1, h_2, \dots, h_{d-1}]
$$
```rust
let h = h_prime[0..p.len()].to_vec();
```
And that's it: we now have the coefficients needed to evaluate our $C_{T_{\xi_{k}}}$ polynomial
$$
C_{T_{\xi_{k}}}=h_{1}+h_{2} \xi_{k}+h_{3} \xi_{k}^{2}+\cdots+h_{d} \xi_{k}^{d-1}
$$
With our current code structure, computing the $n = d$ proofs at the $d$-th roots of unity is as simple as creating a new evaluation domain of size $d$ and performing one more FFT:
```rust
let domain = Radix2EvaluationDomain::<<E as Pairing>::ScalarField>::new(d).unwrap();
let ct = domain.fft(&h);
```
The resulting array `ct` is $\mathbf{C}_{T}$, the set of calculated proofs.
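As a closing sanity check, here is a hedged test-style sketch. It reuses the `kzg` and `f` bindings from the usage example earlier (note that `f` has 4 coefficients, so both the $2d$ and $d$ radix-2 domains have exactly the intended sizes), assumes `set_open` returns the array $\mathbf{C}_T$, and relies on `domain.elements()` yielding the roots of unity in the same order as the FFT output:
```rust
use ark_poly::{EvaluationDomain, Radix2EvaluationDomain};

// Each amortized proof should match the single-point proof at omega^i.
let proofs = kzg.set_open(&f).unwrap();
let domain = Radix2EvaluationDomain::<Fr>::new(f.len()).unwrap();
for (i, xi) in domain.elements().enumerate() {
    assert_eq!(proofs[i], kzg.open(&f, &xi).unwrap());
}
```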
## Resources
- [Fast Amortized KZG Proofs](https://eprint.iacr.org/2023/033)
- [Divide & Conquer: FFT](https://www.youtube.com/watch?v=iTMn0Kt18tg)
- [The Fast Fourier Transform (FFT): Most Ingenious Algorithm Ever?](https://www.youtube.com/watch?v=h7apO7q16V0)
- [Fourier Transforms and the Fast Fourier Transform (FFT) Algorithm](https://www.cs.cmu.edu/afs/andrew/scs/cs/15-463/2001/pub/www/notes/fourier/fourier.pdf)