# Brakedown: The Linear-Time, Field-Agnostic SNARK
Zero-Knowledge Proofs (ZKPs) have long been bottlenecked by **prover time**. Historically, generating a SNARK for a statement of size $N$ meant relying on Fast Fourier Transforms (FFTs), costing $O(N \log N)$ time, or on multi-exponentiations that require specific elliptic-curve groups.
**Brakedown** shatters this barrier: it is the first implemented SNARK with a strictly **$O(N)$-time prover** for Rank-1 Constraint Systems (R1CS). It is transparent, plausibly post-quantum secure, and completely **field-agnostic**: it operates over any finite field $\mathbb{F}$ of sufficient size ($|\mathbb{F}| = 2^{\Theta(\lambda)}$).
---
## 1. Arithmetization: The Spartan Polynomial IOP
Brakedown’s foundation is a linear-time Polynomial Interactive Oracle Proof (IOP) adapted from Spartan. We start by formalizing the computation as an R1CS instance.
### Definition: R1CS and Multilinear Extensions
An R1CS instance is defined as $\mathbb{X} = (\mathbb{F}, A, B, C, M, N, \text{io})$, where $A, B, C \in \mathbb{F}^{M \times M}$ are public coefficient matrices, each with at most $N$ non-zero entries. Let $W$ be the purported witness, and $Z = (W, 1, \text{io})$ be the full state vector.
The IOP represents these discrete matrices and vectors as low-degree polynomials over a Boolean hypercube using **Multilinear Extensions (MLEs)**. For a vector $Z$ of length $M$, its MLE is the unique multilinear polynomial $\widetilde{Z}$ in $s = \log M$ variables that agrees with $Z$ on the hypercube:
$$\widetilde{Z}(x_1, \ldots, x_{\log M}) = \sum_{k \in \{0,1\}^{\log M}} Z_k \cdot \widetilde{eq}(x, k)$$
where $\widetilde{eq}(x, e) = \prod_{i=1}^s (x_i e_i + (1-x_i)(1-e_i))$ is the equality polynomial.
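To make the definitions concrete, here is a toy sketch (a small prime field stands in for the cryptographically large $\mathbb{F}$; this is illustrative code, not from the paper) evaluating an MLE directly from the formula above:

```python
P = 101  # small illustrative prime; Brakedown needs |F| = 2^Theta(lambda)

def eq(x, e):
    """eq~(x, e) = prod_i (x_i*e_i + (1-x_i)*(1-e_i)) mod P."""
    out = 1
    for xi, ei in zip(x, e):
        out = out * (xi * ei + (1 - xi) * (1 - ei)) % P
    return out

def mle_eval(Z, x):
    """Evaluate Z~(x) = sum_k Z_k * eq~(x, k) over the Boolean hypercube."""
    s = len(x)
    total = 0
    for k in range(len(Z)):
        bits = [(k >> t) & 1 for t in range(s)]  # k as a Boolean point
        total = (total + Z[k] * eq(x, bits)) % P
    return total

Z = [3, 1, 4, 1]                 # M = 4, so s = 2
assert mle_eval(Z, [0, 0]) == 3  # the MLE agrees with Z on the hypercube
assert mle_eval(Z, [1, 1]) == 1
```

On the hypercube the MLE reproduces $Z$ exactly; at non-Boolean points it gives the unique multilinear interpolation. A real prover computes all $M$ values of $\widetilde{eq}(x, \cdot)$ in $O(M)$ total time via the standard tensor recursion rather than this simple double loop.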
### The Core Sum-Check Identity
Checking that $Z$ satisfies the R1CS constraints $(A \cdot Z) \circ (B \cdot Z) = C \cdot Z$ (where $\circ$ denotes the entrywise product) is equivalent, up to soundness error $O(\log M / |\mathbb{F}|)$, to checking that the following polynomial identity holds for a random challenge $\tau \in \mathbb{F}^s$ chosen by the verifier:
$$0 \overset{?}{=} \sum_{x \in \{0,1\}^s} \widetilde{eq}(\tau, x) \cdot \left[ \left( \sum_{y \in \{0,1\}^s} \widetilde{A}(x, y) \cdot \widetilde{Z}(y) \right) \cdot \left( \sum_{y \in \{0,1\}^s} \widetilde{B}(x, y) \cdot \widetilde{Z}(y) \right) - \sum_{y \in \{0,1\}^s} \widetilde{C}(x, y) \cdot \widetilde{Z}(y) \right]$$
The prover uses the **Sum-Check Protocol** to reduce this massive sum to a single evaluation check at random points $r_x, r_y \in \mathbb{F}^s$. By the end of the IOP, the verifier only needs to evaluate $\widetilde{Z}(r_y)$. This forces the prover to cryptographically commit to $Z$ using a Polynomial Commitment Scheme.
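To make the reduction concrete, here is a minimal honest-prover sum-check over a toy field, with the multilinear polynomial given by its evaluation table (an illustrative sketch, not Brakedown's implementation):

```python
import random

P = 101  # toy prime; a real deployment needs a cryptographically large field

def sumcheck(table, challenges):
    """Run sum-check on a multilinear f given by its hypercube evaluation
    table, using the provided verifier challenges; returns the transcript."""
    claim = sum(table) % P
    transcript = []
    for r in challenges:
        half = len(table) // 2
        lo, hi = table[:half], table[half:]      # f(0, x') and f(1, x')
        g0, g1 = sum(lo) % P, sum(hi) % P        # prover's degree-1 message
        assert (g0 + g1) % P == claim            # verifier's round check
        transcript.append((g0, g1))
        table = [((1 - r) * a + r * b) % P for a, b in zip(lo, hi)]
        claim = ((1 - r) * g0 + r * g1) % P      # g(r) becomes the new claim
    # The folded table now holds the single value f(r_1, ..., r_s); the
    # verifier would check this final claim with one oracle query to f.
    assert table[0] == claim
    return transcript

f = [3, 1, 4, 1, 5, 9, 2, 6]  # s = 3 variables; the claimed sum is 31
sumcheck(f, [random.randrange(P) for _ in range(3)])
```

The table halves each round, so the honest prover's total work is linear in the size of the sum, which is exactly the property the Spartan IOP exploits.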
---
## 2. The Brakedown Polynomial Commitment Scheme (PCS)
Brakedown relies on a matrix-based Polynomial Commitment Scheme for a multilinear polynomial $g$ with $n = m^2$ coefficients. We view its coefficients in the Lagrange basis as an $m \times m$ matrix $u$, with rows $u_i \in \mathbb{F}^m$. Evaluating $g(r)$ is equivalent to a tensor product inner product:
$$g(r) = \langle (q_1 \otimes q_2), u \rangle$$
where $q_1, q_2 \in \mathbb{F}^m$ are computable from the evaluation point $r$.
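The tensor factorization can be sanity-checked numerically. The sketch below (assumed little-endian variable ordering and a toy field; not the paper's code) confirms that $\langle q_1 \otimes q_2, u \rangle$ matches a direct MLE-style evaluation of the flattened coefficient matrix:

```python
P = 101
m = 4  # n = m^2 = 16 coefficients, log m = 2 variables per half

def eq_vector(r):
    """q in F^m with q_k = eq~(bits(k), r), built by the tensor recursion."""
    q = [1]
    for ri in r:
        q = [x * (1 - ri) % P for x in q] + [x * ri % P for x in q]
    return q

def eq(x, e):
    out = 1
    for xi, ei in zip(x, e):
        out = out * (xi * ei + (1 - xi) * (1 - ei)) % P
    return out

bits = lambda k, s: [(k >> t) & 1 for t in range(s)]  # little-endian bits

u = [[(3 * i + j + 1) % P for j in range(m)] for i in range(m)]  # toy coeffs
r1, r2 = [5, 7], [2, 9]
q1, q2 = eq_vector(r1), eq_vector(r2)

# Tensor-product inner product: g(r) = <q1 (x) q2, u> = q1^T u q2
via_tensor = sum(q1[i] * q2[j] * u[i][j] for i in range(m) for j in range(m)) % P
# Direct evaluation over all n = 16 hypercube points gives the same value
via_mle = sum(u[i][j] * eq(bits(i, 2) + bits(j, 2), r1 + r2)
              for i in range(m) for j in range(m)) % P
assert via_tensor == via_mle
```

Note that `eq_vector` computes each of $q_1, q_2$ in $O(m) = O(\sqrt{n})$ field operations, which is what keeps the verifier's work sublinear.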
Let $\text{Enc}: \mathbb{F}^m \to \mathbb{F}^N$ be the encoding function of a linear code with constant rate $\rho \in (0, 1]$, where $N = \rho^{-1} m$ is the codeword length. (This $N$ is local to the PCS and should not be confused with the R1CS sparsity parameter above.)
### The PCS Protocol
**1. Commit Phase:**
The prover encodes each row of $u$ to produce an $m \times N$ matrix $\hat{u}$ whose rows are $\hat{u}_i = \text{Enc}(u_i)$. The prover then Merkle-commits to the *columns* of $\hat{u}$ and sends the root to the verifier $\mathcal{V}$.
**2. Testing Phase (Proximity Check):**
To ensure $\hat{u}$ is actually composed of valid codewords:
* $\mathcal{V}$ sends a random challenge vector $r \in \mathbb{F}^m$.
* $\mathcal{P}$ replies with $u' = \sum_{i=1}^m r_i \cdot u_i \in \mathbb{F}^m$.
* $\mathcal{V}$ selects a random set of column indices $Q \subseteq [N]$ of size $\ell = \Theta(\lambda)$.
* For each $j \in Q$, $\mathcal{P}$ opens the $j$-th column of $\hat{u}$. $\mathcal{V}$ checks consistency:
$$\text{Enc}(u')_j \overset{?}{=} \sum_{i=1}^m r_i \cdot \hat{u}_{i, j}$$
**3. Evaluation Phase:**
To prove $g(r_{eval}) = v$:
* Calculate $q_1, q_2 \in \mathbb{F}^m$ such that $g(r_{eval}) = \langle (q_1 \otimes q_2), u \rangle$.
* $\mathcal{P}$ sends $u'' = \sum_{i=1}^m q_{1,i} \cdot u_i \in \mathbb{F}^m$.
* $\mathcal{V}$ checks $\langle u'', q_2 \rangle \overset{?}{=} v$.
* $\mathcal{V}$ checks the consistency of $u''$ against a fresh set of randomly opened columns $Q'$ from $\hat{u}$, identically to the Test Phase.
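Both the testing and evaluation checks hinge on the linearity of $\text{Enc}$: encoding a random combination of rows equals the same combination of the encoded rows. A toy sketch (an arbitrary generator matrix stands in for the real code, which must additionally have constant relative distance):

```python
import random

P = 101
m, N = 3, 6  # toy dimensions: rate rho = m / N = 1/2

# An arbitrary public generator matrix; this sketch only demonstrates the
# linearity identity, not the distance property a secure code needs.
G = [[random.randrange(P) for _ in range(N)] for _ in range(m)]

def enc(x):
    """Enc(x) = x * G over F_P; any linear map satisfies the check below."""
    return [sum(x[i] * G[i][j] for i in range(m)) % P for j in range(N)]

u = [[random.randrange(P) for _ in range(m)] for _ in range(m)]  # m x m coeffs
u_hat = [enc(row) for row in u]                # rows the prover Merkle-commits

r = [random.randrange(P) for _ in range(m)]    # verifier's random challenge
u_prime = [sum(r[i] * u[i][k] for i in range(m)) % P for k in range(m)]

# The testing-phase column check, here over every column j (the verifier
# only samples l = Theta(lambda) of them):
for j in range(N):
    assert enc(u_prime)[j] == sum(r[i] * u_hat[i][j] for i in range(m)) % P
```

The evaluation phase runs the identical check with $q_1$ in place of the random $r$.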
---
## 3. The $O(N)$ Code: Linear-Time Encoding
For the PCS to be linear-time, the code $\text{Enc}$ must encode messages in $O(N)$ time, ruling out FFT-based Reed-Solomon codes. Brakedown introduces a novel, randomized recursive code using sparse matrices.
### Algorithm: $\text{Enc}_n: \mathbb{F}^n \to \mathbb{F}^{rn}$
Let $\alpha, \beta > 0$ be constants with $\alpha < 1$ ($\beta$ governs the code's target distance, below). Let $A^{(n)} \in \mathbb{F}^{n \times \alpha n}$ and $B^{(n)} \in \mathbb{F}^{\alpha r n \times (r-1-r\alpha)n}$ be sparse matrices sampled once during public setup, with $c_n$ and $d_n$ non-zero entries per row, respectively.
To encode a message $x \in \mathbb{F}^n$:
1. **Compress**: Compute $y = x \cdot A^{(n)} \in \mathbb{F}^{\alpha n}$.
2. **Recurse**: Recursively encode $y$ to get $z = \text{Enc}_{\alpha n}(y) \in \mathbb{F}^{\alpha r n}$.
3. **Expand**: Compute $v = z \cdot B^{(n)} \in \mathbb{F}^{(r-1-r\alpha)n}$.
4. **Concatenate**: Output the systematic codeword $w = (x, z, v)^T \in \mathbb{F}^{rn}$.
Because $A^{(n)}$ and $B^{(n)}$ are highly sparse (concretely, $c_n \approx 6$ and $d_n \approx 33$), each matrix-vector product costs $O(n)$ field operations, so encoding is strictly linear-time. The probability that this randomized construction fails to achieve minimum relative distance $\delta = \beta/r$ is at most $2^{-100}$.
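The recursion can be sketched as follows (toy parameters $\alpha = 1/4$, $r = 2$, and fresh dense random matrices standing in for the fixed sparse $A^{(n)}, B^{(n)}$ sampled at setup; the real distance analysis is what dictates the sparsities $c_n, d_n$):

```python
import random

P = 101

def rand_matrix(rows, cols):
    return [[random.randrange(P) for _ in range(cols)] for _ in range(rows)]

def mat_vec(x, M_):
    return [sum(x[i] * M_[i][j] for i in range(len(x))) % P
            for j in range(len(M_[0]))]

def enc(x):
    """Toy Enc_n: F^n -> F^{2n} with alpha = 1/4, r = 2."""
    n = len(x)
    if n <= 4:                                   # base case: any code of length r*n
        return mat_vec(x, rand_matrix(n, 2 * n))
    y = mat_vec(x, rand_matrix(n, n // 4))       # 1. compress: x * A^(n)
    z = enc(y)                                   # 2. recurse: |z| = r*alpha*n = n/2
    v = mat_vec(z, rand_matrix(len(z), n // 2))  # 3. expand: z * B^(n)
    return x + z + v                             # 4. systematic: |w| = 2n = r*n

w = enc(list(range(16)))
assert len(w) == 32                  # |Enc_n(x)| = r*n with r = 2
assert w[:16] == list(range(16))     # systematic: the message is a prefix
```

Because the message length shrinks geometrically at each level, the total work is a geometric series summing to $O(n)$.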
---
## 4. Holography: Dense to Sparse via Memory Checking
If we naively applied the PCS to the MLEs of the R1CS coefficient matrices $\widetilde{A}, \widetilde{B}, \widetilde{C}$, the prover would need $O(M^2)$ time. Since these matrices are sparse ($N \ll M^2$ non-zero entries), Brakedown instead achieves **holography** via offline memory checking.
A sparse polynomial $D(r_x, r_y)$ can be represented by three dense vectors of length $N$: $\text{row}, \text{col}$, and $\text{val}$. Its evaluation is given by:
$$D(r_x, r_y) = \sum_{k \in \{0,1\}^{\log N}} \text{val}(k) \cdot \widetilde{eq}(\mathbf{b}^{-1}(\text{row}(k)), r_x) \cdot \widetilde{eq}(\mathbf{b}^{-1}(\text{col}(k)), r_y)$$
where $\mathbf{b}^{-1}(\cdot)$ maps an index in $[M]$ to its $(\log M)$-bit Boolean representation.
To prove this sum without requiring the verifier to touch $O(N)$ data, the prover commits to $\text{row}, \text{col},$ and $\text{val}$. To ensure $\text{row}(k)$ accurately maps to the correct hypercube coordinates $\widetilde{eq}(i, r_x)$, Brakedown forces the verifier and prover to conceptually interact with an untrusted memory of size $M$.
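Before the memory-checking machinery, a small sketch (toy dimensions, assumed little-endian bit order) confirming that the $O(N)$-term sum above agrees with the dense $O(M^2)$ evaluation:

```python
P = 101

def eq(x, e):
    out = 1
    for xi, ei in zip(x, e):
        out = out * (xi * ei + (1 - xi) * (1 - ei)) % P
    return out

bits = lambda k, s: [(k >> t) & 1 for t in range(s)]  # b^{-1}: index -> bits

# Toy 4x4 matrix D (M = 4) with N = 3 non-zero entries
row, col, val = [0, 1, 3], [2, 1, 0], [7, 5, 9]
rx, ry = [3, 8], [6, 2]

# O(N)-term evaluation from the dense row/col/val vectors
sparse_eval = sum(val[k] * eq(bits(row[k], 2), rx) * eq(bits(col[k], 2), ry)
                  for k in range(3)) % P

# Reference: the dense O(M^2) evaluation of the same matrix
dense = [[0] * 4 for _ in range(4)]
for k in range(3):
    dense[row[k]][col[k]] = val[k]
dense_eval = sum(dense[i][j] * eq(bits(i, 2), rx) * eq(bits(j, 2), ry)
                 for i in range(4) for j in range(4)) % P
assert sparse_eval == dense_eval
```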
### The Memory Checking Invariant
We define three multisets of (address, value, timestamp) tuples to track memory access, where $E_{rx}(k)$ denotes the purported value $\widetilde{eq}(\mathbf{b}^{-1}(\text{row}(k)), r_x)$ committed by the prover:
* **Write Set ($WS$)**: Initial state of memory plus all subsequent writes.
$WS = \{(i, \widetilde{eq}(i, r_x), 0): i \in [M]\} \cup \{(\text{row}(k), E_{rx}(k), \text{write}_{\text{row}}(k)): k \in [N]\}$
* **Read Set ($RS$)**: The values read from memory at each step.
$RS = \{(\text{row}(k), E_{rx}(k), \text{read}_{\text{row}}(k)): k \in [N]\}$
* **Final State ($S$)**: The memory state after all operations.
$S = \{(i, \widetilde{eq}(i, r_x), \text{final}_{\text{row}}(i)): i \in [M]\}$
The core invariant is that memory was handled honestly if and only if **$WS = RS \cup S$** as multisets.
The IOP verifies this multiset equality by compressing the sets into elements of $\mathbb{F}$ using a randomized fingerprint. For random challenges $\tau, \gamma \in \mathbb{F}$, we define:
$$\mathcal{H}_{\tau, \gamma}(A) = \prod_{(a,v,t) \in A} (a \cdot \gamma^2 + v \cdot \gamma + t - \tau)$$
Using a grand-product Sum-Check argument, the prover demonstrates that:
$$\mathcal{H}_{\tau, \gamma}(WS) = \mathcal{H}_{\tau, \gamma}(RS) \cdot \mathcal{H}_{\tau, \gamma}(S)$$
This beautiful mechanism bounds the prover's work strictly to $O(N)$ (the number of non-zero entries), completely avoiding the $O(M^2)$ trap.
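The invariant and the fingerprint identity can be checked on a toy read/write trace (an illustrative sketch with honestly computed timestamps; in the protocol the prover commits to the timestamp vectors and the IOP enforces their consistency):

```python
from functools import reduce

P = 2**61 - 1                 # toy Mersenne prime; soundness needs a large field
tau, gamma = 123456, 987654   # verifier's random fingerprint challenges

def H(tuples):
    """H_{tau,gamma}(A) = prod_{(a,v,t) in A} (a*gamma^2 + v*gamma + t - tau)."""
    return reduce(lambda acc, e:
                  acc * (e[0] * gamma**2 + e[1] * gamma + e[2] - tau) % P,
                  tuples, 1)

M_, V = 4, [10, 20, 30, 40]   # memory of size M; V[i] stands in for eq~(i, r_x)
reads = [2, 0, 2, 1, 2]       # the addresses row(k) accessed by the prover

ts = [0] * M_                 # per-cell timestamps (honest bookkeeping)
WS = [(i, V[i], 0) for i in range(M_)]     # initial writes at timestamp 0
RS = []
for a in reads:
    RS.append((a, V[a], ts[a]))            # read with the current timestamp
    ts[a] += 1
    WS.append((a, V[a], ts[a]))            # write back with an incremented one
S = [(i, V[i], ts[i]) for i in range(M_)]  # final memory state

assert sorted(WS) == sorted(RS + S)        # WS = RS ∪ S as multisets
assert H(WS) == H(RS) * H(S) % P           # the grand-product identity
```

Each product has $O(M + N)$ factors, so the prover can evaluate both sides of the identity with a linear-time grand-product argument.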
---
## 5. Conclusion: Brakedown vs. Shockwave
The paper formalizes these constructs into two distinct SNARKs:
1. **Brakedown**: Uses the sparse recursive code ($\text{Enc}$). Its prover runs in strictly $O(N)$ time and it is fully field-agnostic (no FFTs). Proof sizes scale as $O_\lambda(\sqrt{N})$.
2. **Shockwave**: Replaces $\text{Enc}$ with a standard Reed-Solomon code. It abandons strict $O(N)$ prover time ($O(N \log N)$ due to FFTs) and requires FFT-friendly fields, but achieves significantly shorter proofs while still drastically outperforming prior post-quantum SNARKs.
*This is the note from the Succinct Proofs Reading Group organized by Mark. I would like to thank Mathias, Anders, and Megan for their insights during the discussion, and Yizhou and Aoxuan for explaining Brakedown and related techniques to me.*