# [CS25] On Reed–Solomon Proximity Gaps Conjectures
{%preview https://eprint.iacr.org/2025/2046 %}
[TOC]
The Crites-Stewart paper fundamentally reshapes the security landscape for hash-based succinct non-interactive arguments of knowledge (SNARKs) by disproving three widely-assumed conjectures underpinning modern zero-knowledge proof systems. **The most critical finding establishes that proximity gaps for Reed-Solomon codes fail beyond the list-decoding capacity bound**, contradicting conjectures from Ben-Sasson et al. (J. ACM 2023), WHIR (2024), and DEEP-FRI (ITCS 2020) that these properties extend to the information-theoretic capacity bound. This result impacts nearly two dozen zero-knowledge virtual machines (zkVMs) tracked by EthProofs, including deployed systems from StarkWare, Polygon, Risc Zero, zkSync, and others, all of which assume these conjectures in their security analyses.
The work delivers two major contributions that reshape theoretical understanding and practical deployment. First, the authors prove that correlated agreement, mutual correlated agreement, and list-decodability conjectures up to capacity (where proximity parameter $\delta < 1-\rho$ for code rate $\rho$) are false, constructing explicit counterexamples using probabilistic arguments with Cantelli's inequality. Second, they establish a fundamental reduction showing that correlated agreement with sufficiently small error probability directly implies list decoding of Reed-Solomon codes, connecting two previously separate research areas. This connection demonstrates that any breakthrough on correlated agreement conjectures would automatically yield corresponding advances in classical list-decoding theory, a problem that has resisted solution for over six decades.
The quantitative impact on deployed systems is surprisingly modest despite the theoretical significance. The paper proves that conjectures must be modified from the capacity bound $\delta < 1-\rho$ to the list-decoding capacity bound $H_q(\delta) < 1-\rho$, where $H_q$ denotes the q-ary entropy function. For cryptographically-sized fields with $\log_2(q) \geq 31$ used in practice, this shift reduces the achievable proximity parameter by only about 0.032 in absolute terms (roughly 3 percent of the error budget), translating to roughly 10-15 percent increases in proof sizes for most systems. The authors propose minimally modified conjectures up to the list-decoding capacity bound that remain plausible targets for future research, providing a sound foundation for next-generation SNARK implementations.
This work originates from the authors' concurrent CRYPTO 2025 attack on threshold Schnorr signatures, exploiting the deep connection between Shamir secret sharing and Reed-Solomon codes first observed by McEliece and Sarwate in 1981. The Ethereum Foundation's recently announced "Millennium Prize" offering $1 million for resolving these proximity gaps conjectures underscores their practical importance for blockchain scalability and post-quantum cryptography.
## Historical Context and the Millennium Prize Challenge
:::spoiler
### The evolution of proximity gaps research
The study of proximity gaps for error-correcting codes emerged from the intersection of coding theory and computational complexity theory in the late 20th century. Reed-Solomon codes, introduced by Irving Reed and Gustave Solomon in 1960, remained primarily theoretical curiosities for decades until modern applications in cryptography revealed their fundamental importance for succinct proof systems. The breakthrough came with the introduction of the FRI (Fast Reed-Solomon Interactive Oracle Proofs of Proximity) protocol by Ben-Sasson, Bentov, Horesh, and Riabzev at ICALP 2018, which demonstrated that interactive oracle proofs could achieve remarkable efficiency by exploiting algebraic structure in Reed-Solomon codes.
The FRI protocol's security analysis required understanding when affine subspaces of codewords exhibit proximity gaps—the property that each subspace is almost entirely close to the code or almost entirely far from it, with no intermediate cases. Ben-Sasson, Carmon, Ishai, Kopparty, and Saraf established in their seminal J. ACM 2023 paper that proximity gaps provably exist up to the Johnson bound (proximity parameter $\delta < 1-\sqrt{\rho}$) for Reed-Solomon codes of rate $\rho$. Their Conjecture 8.4 optimistically proposed that these gaps extend all the way to the Shannon capacity bound ($\delta < 1-\rho$), enabling maximally efficient proof systems.
Subsequent protocols including DEEP-FRI (ITCS 2020), STIR (CRYPTO 2024), and WHIR (2024) built upon these conjectured proximity gaps, pushing toward ever-smaller proof sizes and faster verification times. DEEP-FRI introduced out-of-domain sampling to strengthen soundness guarantees, formulating Conjecture 2.3 on list-decodability up to capacity. WHIR proposed an even stronger mutual correlated agreement conjecture (Conjecture 4.12) enabling ultra-fast verification in approximately 100-400 microseconds. Almost all deployed zkVM implementations operate in parameter regimes that assume one or more of these capacity-achieving conjectures.
:::
The Ethereum Foundation's Proximity Prize initiative emerged from recognition that these conjectures represent critical unproven assumptions in the foundation of blockchain scalability infrastructure. With Ethereum's long-term vision depending on zero-knowledge rollups processing millions of transactions per second, the mathematical rigor of underlying proof systems becomes paramount. The prize offers $1 million to researchers who either prove or disprove the proximity gaps conjectures, with distinguished judges Dan Boneh from Stanford University and Giacomo Fenzi from EPFL evaluating submissions. Crites and Stewart's work provides the first major resolution by disproving the capacity-achieving variants while proposing refined conjectures that remain viable targets.

## Technical Foundations: Reed-Solomon Codes and Proximity Testing
Understanding the Crites-Stewart results requires establishing the mathematical framework for Reed-Solomon codes and their role in modern proof systems. A Reed-Solomon code $\text{RS}(\mathbb{F}, D, k)$ over finite field $\mathbb{F}$ consists of evaluations on domain $D \subseteq \mathbb{F}$ of all polynomials in $\mathbb{F}[x]$ having degree at most $k-1$. The code has blocklength $n = |D|$, dimension $k$ (forming a k-dimensional subspace of $\mathbb{F}^n$), and rate $\rho = k/n$ representing the information density. Each codeword can be viewed as a function $u: D \to \mathbb{F}$ or equivalently as a vector $u \in \mathbb{F}^n$ by enumerating $D = \{x_1, \ldots, x_n\}$. The minimum distance between distinct codewords equals $n - k + 1$, conferring strong error-detection and error-correction capabilities.
The Hamming distance $\Delta(u, v)$ between codewords measures the number of coordinate positions where they differ: $\Delta(u, v) = |\{i \in [n]: u_i \neq v_i\}|$. The distance from a word to a code is $\Delta(u, C) = \min_{v \in C} \Delta(u, v)$. List decoding generalizes unique decoding by finding all codewords within a specified distance threshold. Formally, code $C \subseteq \mathbb{F}_q^n$ is $(\delta, L)$-list decodable if for every word $u \in \mathbb{F}_q^n$, at most $L$ codewords lie within fractional Hamming distance $\delta$: $|\{v \in C: \Delta(u, v) \leq \delta \cdot n\}| \leq L$. Smaller list sizes indicate stronger uniqueness guarantees, with $L = 1$ corresponding to unique decoding.
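Because the definition is purely combinatorial, it can be sanity-checked by brute force on a toy code. The Python sketch below does exactly that for $\text{RS}(\mathbb{F}_7, D, 2)$ with $D = \{1, \ldots, 5\}$; the tiny parameters are chosen so exhaustive search is feasible and are not taken from the paper.
```python
# Brute-force check of (delta, L)-list decodability on a toy Reed-Solomon
# code; parameters are illustrative, chosen small enough for exhaustion.
from itertools import product

q, k = 7, 2
D = list(range(1, 6))                     # n = 5, rate rho = k/n = 0.4
n = len(D)

def encode(coeffs):
    """Evaluations on D of the polynomial with the given coefficients."""
    return tuple(sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
                 for x in D)

code = {encode(c) for c in product(range(q), repeat=k)}   # all q^k codewords

def list_size(u, f):
    """|{v in C : Delta(u, v) <= f}| for a word u."""
    return sum(1 for v in code
               if sum(a != b for a, b in zip(u, v)) <= f)

f = 2                                     # fractional radius delta = f/n = 0.4
L = max(list_size(u, f) for u in product(range(q), repeat=n))
print(f"RS(F_7, D, 2) is ({f}/{n}, {L})-list decodable")  # worst-case list size
```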
:::warning
**Critical Distinction: Capacity versus List-Decoding Capacity**
The Shannon capacity bound for error correction establishes that decoding from $\delta$-fraction errors is information-theoretically possible when rate satisfies $\rho < 1 - \delta$. However, Elias proved in 1957 that efficient list decoding with polynomial-sized lists requires the stronger condition $\rho \leq 1 - H_q(\delta)$, where $H_q(\delta) = \delta \log_q(q-1) - \delta \log_q(\delta) - (1-\delta)\log_q(1-\delta)$ denotes the q-ary entropy function. The list-decoding capacity bound $H_q(\delta)$ strictly exceeds $\delta$, creating a gap between what is information-theoretically possible and what is efficiently achievable. The Crites-Stewart paper proves this gap is exactly where capacity-achieving proximity gaps conjectures fail.
:::
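To see the size of this gap concretely, the short sketch below evaluates $H_q(\delta)$ and $H_q(\delta) - \delta$ for an assumed field size $q = 2^{31}$, matching the cryptographic regime discussed later.
```python
# Numeric illustration of the capacity versus list-decoding capacity gap,
# using the q-ary entropy formula above; q = 2^31 is an assumed field size.
import math

def Hq(delta: float, q: int) -> float:
    """q-ary entropy H_q(delta)."""
    lg = lambda x: math.log(x, q)
    return delta * lg(q - 1) - delta * lg(delta) - (1 - delta) * lg(1 - delta)

q = 2**31
for delta in (0.25, 0.44, 0.50):
    print(f"delta = {delta:.2f}: H_q(delta) = {Hq(delta, q):.4f}, "
          f"gap = {Hq(delta, q) - delta:.4f}")
print("upper bound on the gap, 1/log2(q) =", round(1 / math.log2(q), 4))  # ~0.032
```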
Proximity gaps formalize the binary property that affine subspaces are almost entirely close to or almost entirely far from the code. A collection $\mathcal{S}$ displays a $(\delta, \epsilon)$-proximity gap with respect to property $P$ if for every $S \in \mathcal{S}$, either at least a $(1-\epsilon)$-fraction of its members are $\delta$-close to $P$, or at most an $\epsilon$-fraction are. No intermediate fractions occur, creating a sharp threshold. For Reed-Solomon codes, the relevant affine subspaces are lines of the form $\{u^{(0)} + \lambda u^{(1)}: \lambda \in \mathbb{F}_q\}$, as these arise naturally in the folding steps of FRI-based protocols.
Ben-Sasson et al. introduced correlated agreement as the key technical tool for establishing proximity gaps. Words $u_0, \ldots, u_\ell \in \mathbb{F}_q^n$ have $\delta$-correlated agreement with code $C$ if there exist subdomain $D' \subseteq D$ and codewords $v_0, \ldots, v_\ell \in C$ satisfying two conditions: density ($|D'|/|D| \geq 1-\delta$) and agreement ($u_i$ and $v_i$ agree on all of $D'$ for each $i$). The critical property is that all words agree with their respective codewords on the same large subdomain $D'$, enforcing correlated structure rather than independent agreements on different subdomains.
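The definition is easy to operationalize as a verifier of a claimed witness. The following minimal sketch (the names are illustrative, not the paper's) checks the density and agreement conditions; note that the same subdomain $D'$ is used for every pair $(u_i, v_i)$, which is exactly what distinguishes correlated agreement from independent per-word agreement.
```python
# Minimal checker for the two correlated agreement conditions; it verifies
# a *claimed* witness (D', v_0..v_l) rather than searching for one.

def is_correlated_agreement_witness(words, codewords, D_prime, delta):
    """Return True iff (D', v_0..v_l) witnesses delta-correlated agreement."""
    n = len(words[0])
    # Density condition: |D'|/|D| >= 1 - delta.
    if len(D_prime) < (1 - delta) * n:
        return False
    # Agreement condition: each u_i matches its v_i on ALL of the SAME D'.
    return all(u[j] == v[j]
               for u, v in zip(words, codewords)
               for j in D_prime)
```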

## Theorem 1: Proximity Gap Failure Beyond List-Decoding Capacity
### The probabilistic construction using Cantelli's inequality
The paper's first main result establishes that proximity gaps fail beyond the list-decoding capacity bound through an elegant probabilistic argument. Theorem 1 states that for $f < n-k$ and any word $u^{(0)}$ satisfying $\Delta(u^{(0)}, \text{RS}(\mathbb{Z}_q, D, k)) > f$, there exists $u^{(1)}$ such that at least $\frac{\binom{n}{f}}{\binom{n}{f} + q^{n-f-k}\exp(fn/q)} \cdot (q-1)$ values of $\lambda \in \mathbb{Z}_q$ satisfy $\Delta(u^{(0)} + \lambda u^{(1)}, \text{RS}(\mathbb{Z}_q, D, k)) \leq f$. This bound demonstrates that a substantial fraction of points on the line from $u^{(0)}$ through $u^{(1)}$ lie close to the code, violating the proximity gap property that demands either almost all or almost none should be close.
The proof proceeds through Lemma 1, analyzing the probability that a uniformly random word $u \in \mathbb{Z}_q^n$ satisfies $\Delta(u, \text{RS}(\mathbb{Z}_q, D, k)) \leq f$. The authors introduce indicator random variables $X_F$ for each potential disagreement set $F$ of size $f$, where $X_F = 1$ if and only if there exists a polynomial $p$ with $\deg p \leq k-1$ agreeing with $u$ on all coordinates $[n] \setminus F$. The sum $X = \sum_F X_F$ over all $\binom{n}{f}$ disagreement sets counts how many ways the random word can be within distance $f$ of the code. The key insight is that $X > 0$ if and only if the distance condition holds.
Computing the expectation yields $E[X] = \binom{n}{f} q^{-(n-f-k)}$, since each indicator has probability $q^{-(n-f-k)}$ of equaling one: a uniformly random word on a domain of size $n-f$ lies in a Reed-Solomon code of dimension $k$ with probability $q^k/q^{n-f}$. The variance computation requires analyzing the covariance structure between pairs of indicators $X_F$ and $X_{F'}$. When the intersection size $|F \cap F'|$ exceeds the threshold $2f - (n-k)$, the disagreement sets overlap sufficiently that polynomial constraints from one set partially determine the other, creating positive covariance. The authors establish the variance bound $\text{Var}[X] \leq E[X] \exp(fn/q)$ by carefully summing contributions from all pairs and applying the identity $\sum_{\ell=0}^\infty \frac{f^\ell n^\ell}{q^\ell \ell!} = \exp(fn/q)$.
Applying the one-sided Chebyshev (Cantelli) inequality to bound the probability that $X$ drops all the way to zero gives the key estimate: $\Pr[X > 0] \geq \frac{E[X]^2}{\text{Var}[X] + E[X]^2} \geq \frac{E[X]}{\exp(fn/q) + E[X]} = \frac{\binom{n}{f}}{\binom{n}{f} + q^{n-f-k}\exp(fn/q)}$. This lower bound quantifies how likely a random word is to lie within distance $f$ of the code. The construction completes by considering a line from a deep hole $u^{(0)}$ (a word at maximum possible distance $n-k$ from the code, specifically taking $u^{(0)}_i = 1/x_i$ with $0 \notin D$) through a uniformly random word $u^{(1)}$. For any fixed $\lambda \neq 0$, the point $u^{(0)} + \lambda u^{(1)}$ is itself uniformly distributed over the randomness of $u^{(1)}$, inheriting the lower bound on proximity probability. By linearity of expectation over the $q-1$ nonzero values of $\lambda$, at least the claimed number of points must lie within distance $f$.
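The resulting bound is easy to evaluate numerically. The sketch below computes the guaranteed fraction of close $\lambda$ values in log-space to avoid overflow; the parameters $n = 256$, $q = 2^{31}$ are illustrative choices satisfying the constraints of Corollary 1 below, not values from the paper.
```python
# Evaluating the Theorem 1 lower bound on the fraction of close points,
# computed via logarithms to avoid overflow; n, q are illustrative.
import math

def close_fraction_lower_bound(n: int, k: int, f: int, q: int) -> float:
    """binom(n,f) / (binom(n,f) + q^(n-f-k) * exp(f*n/q)), via logs."""
    log_binom = math.lgamma(n + 1) - math.lgamma(f + 1) - math.lgamma(n - f + 1)
    log_rest = (n - f - k) * math.log(q) + f * n / q
    return 1.0 / (1.0 + math.exp(min(log_rest - log_binom, 700.0)))

n, q = 256, 2**31        # n even, q >= 2n, n >= 2 + 8*log2(q)
f = n // 2
k = math.floor((n / 2) * (1 - 1 / n - 1 / math.log2(q)))
frac = close_fraction_lower_bound(n, k, f, q)
print(f"k = {k}, guaranteed close fraction >= {frac:.6f}")  # >= 1/2 per Corollary 1
```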
### Explicit counterexample parameters from Corollary 1
Corollary 1 instantiates the abstract bound with concrete parameters that contradict Conjecture 8.4. Given $n, q$ with $q \geq 2n$, $n \geq 2 + 8\log_2(q)$, and $n$ even, setting proximity parameter $f = n/2$ and dimension $k = \lfloor(n/2)(1 - 1/n - 1/\log_2(q))\rfloor$ ensures at least $(q-1)/2$ points on the constructed line lie within distance $f$ of the code. The proof verifies that these parameters satisfy the conditions of Theorem 1, explicitly taking $u^{(0)}$ as evaluation of $x^k$ (which lies in $\text{RS}(\mathbb{Z}_q, D, k+1)$ but not in $\text{RS}(\mathbb{Z}_q, D, k)$, guaranteeing $\Delta(u^{(0)}, \text{RS}(\mathbb{Z}_q, D, k)) \geq f$ by minimum distance separation).
The verification that these parameters yield the claimed bound reduces to showing $\binom{n}{f} \geq q^{n-f-k}\exp(fn/q)$. Using the standard lower bound on the central binomial coefficient $\binom{n}{n/2} \geq \frac{2^n}{\sqrt{2\pi n}}$ with $f = n/2$, the condition becomes $2^n \geq q^{n-f-k}\exp(fn/q)\sqrt{2\pi n}$. Taking logarithms base 2 and substituting the parameter choices, the requirement simplifies to verifying $n \geq (n-f-k)\log_2(q) + \frac{fn}{q\ln(2)} + \frac{1}{2}\log_2(2\pi n)$. The authors show that under constraints $q \geq 2n$ and $n \geq 2 + 8\log_2(q)$, this inequality holds with room to spare, establishing the counterexample's validity.
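This logarithmic form of the condition is straightforward to check numerically; the sketch below does so for the same illustrative parameters ($n = 256$, $q = 2^{31}$) used earlier.
```python
# Checking the logarithmic form of the Corollary 1 condition,
#   n >= (n - f - k) * log2(q) + f*n/(q*ln 2) + (1/2) * log2(2*pi*n),
# for illustrative parameters satisfying q >= 2n and n >= 2 + 8*log2(q).
import math

n, q = 256, 2**31
f = n // 2
k = math.floor((n / 2) * (1 - 1 / n - 1 / math.log2(q)))

rhs = ((n - f - k) * math.log2(q)
       + f * n / (q * math.log(2))
       + 0.5 * math.log2(2 * math.pi * n))
print(f"{n} >= {rhs:.2f}: {n >= rhs}")    # holds with room to spare
```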
:::danger
**Security Implications for Deployed Systems**
The explicit parameters from Corollary 1 demonstrate that for any constants $c_1, c_2$ in Conjecture 8.4, choosing field size $q > 2 \cdot (2n)^{c_1+c_2}$ creates a contradiction. The conjecture predicts error probability $\epsilon \leq \frac{(2n)^{c_1+c_2}}{q}$, but Theorem 1 guarantees $\epsilon q \geq (q-1)/2$. This means deployed systems operating with parameters approaching capacity face a fundamental security gap—their soundness error is exponentially worse than conjectured. For typical zkVM parameters with $n \approx 2^{20}$ and $q \approx 2^{31}$, the gap between proven and conjectured security may span dozens of bits.
:::
### Mathematical formulation of key inequalities
The variance bound derivation illuminates the delicate balance in the probabilistic argument. For disagreement sets $F, F'$ with intersection size $|F \cap F'| > 2f - (n-k)$, the witnessing polynomials must agree on the more than $k$ coordinates outside $F \cup F'$ and are therefore identical, so the joint probability equals:
$$\Pr[X_F = 1 \land X_{F'} = 1] = q^{-(n-k-|F \cap F'|)}$$
This gives covariance:
$$\text{Cov}[X_F, X_{F'}] = q^{-(n-k-|F \cap F'|)} - q^{-2(n-f-k)}$$
Summing over all pairs $(F, F')$ in which $\ell$ coordinates of $F$ are swapped out to form $F'$ (so that $|F \cap F'| = f - \ell$) contributes at most:
$$\binom{n}{f}\binom{f}{\ell}\binom{n}{\ell}(q^{-(n-k-f+\ell)} - q^{-2(n-f-k)})$$
The total variance satisfies:
$$\text{Var}[X] \leq \binom{n}{f}q^{-(n-f-k)} + \sum_{\ell=1}^{n-k-f-1} \binom{n}{f}\binom{f}{\ell}\binom{n}{\ell}q^{-(n-k-f+\ell)}$$
Factoring out $E[X] = \binom{n}{f}q^{-(n-f-k)}$ yields:
$$\text{Var}[X] \leq E[X] \sum_{\ell=0}^{n-k-f-1} \binom{f}{\ell}\binom{n}{\ell}q^{-\ell} \leq E[X] \sum_{\ell=0}^\infty \frac{f^\ell n^\ell}{q^\ell \ell!} = E[X]\exp(fn/q)$$
This exponential bound on the variance relative to the expectation enables the Cantelli inequality to provide meaningful lower bounds on $\Pr[X > 0]$ whenever $E[X]$ is large compared to $\exp(fn/q)$.
## Theorem 2: Correlated Agreement Implies List Decoding
### Connecting proximity gaps to classical list-decoding theory
The paper's second major contribution establishes a fundamental reduction from correlated agreement to list decoding, demonstrating that these apparently distinct concepts are intimately connected. Theorem 2 states that if $\text{RS}(\mathbb{Z}_q, D, k)$ satisfies correlated agreement over lines with $f < n-k-1$ errors and error parameter $\epsilon < (q-n)/(kq)$, then $\text{RS}(\mathbb{Z}_q, D, k+1)$ is $(f/n, L)$-list decodable where $L = \lceil \frac{\epsilon q(q-n)}{q-n-k\epsilon q} \rceil$. For small error probability $\epsilon < (q-n)/(2kq)$, this simplifies to the elegant bound $L \leq 2\epsilon q$. The contrapositive provides the key insight: if the higher-rate code has large list size (exponentially many codewords within distance $f$), then correlated agreement for the lower-rate code must fail with correspondingly high error probability.
The reduction exploits the polynomial remainder theorem (Little Bézout's theorem) to construct a line in $\mathbb{Z}_q^n$ where many points lie close to $\text{RS}(\mathbb{Z}_q, D, k)$ without exhibiting correlated agreement. Given word $u$ with $L$ distinct nearby codewords $v^{(1)}, \ldots, v^{(L)} \in \text{RS}(\mathbb{Z}_q, D, k+1)$ each within distance $f$, the construction proceeds by selecting random $a \in \mathbb{Z}_q \setminus D$ and defining $u^{(0)}$ with coordinates $u_i/(x_i - a)$ and $u^{(1)}$ with coordinates $-1/(x_i - a)$. The point $u^{(1)}$ serves as a deep hole for $\text{RS}(\mathbb{Z}_q, D, k)$, lying at the maximum possible distance $n-k$ from the code (verified by observing that multiplying $u^{(1)}$ coordinatewise by the evaluations of $x-a$ yields the constant vector $-1$, so a codeword of $\text{RS}(\mathbb{Z}_q, D, k)$ agreeing with $u^{(1)}$ on more than $k$ points would contradict the minimum distance of $\text{RS}(\mathbb{Z}_q, D, k+1)$).
Each codeword $v^{(j)}$ evaluates polynomial $p^{(j)}(x)$ of degree at most $k$. Applying the polynomial remainder theorem gives $p^{(j)}(x) = (x-a)p^{*(j)}(x) + p^{(j)}(a)$ where $p^{*(j)}(x)$ has degree at most $k-1$. Dividing by $x-a$ yields $\frac{p^{(j)}(x)}{x-a} = \frac{p^{(j)}(x) - p^{(j)}(a)}{x-a} + \frac{p^{(j)}(a)}{x-a}$, where the first term $\frac{p^{(j)}(x) - p^{(j)}(a)}{x-a}$ evaluates to a degree-$(k-1)$ polynomial. Multiplying codeword $v^{(j)}$ coordinatewise by evaluations of $1/(x-a)$ produces $v'^{(j)}$ within distance $f$ of $u^{(0)}$. Adding $p^{(j)}(a) u^{(1)}$ gives $v'^{(j)} + p^{(j)}(a)u^{(1)} \in \text{RS}(\mathbb{Z}_q, D, k)$, demonstrating that $u^{(0)} + p^{(j)}(a)u^{(1)}$ lies within distance $f$ of $\text{RS}(\mathbb{Z}_q, D, k)$.
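The whole division trick can be verified mechanically over a small prime field. The sketch below uses toy parameters ($q = 97$, $n = 10$, $\deg p = 4$, none from the paper) and checks that $u^{(0)} + p(a)u^{(1)}$ coincides with the evaluation of the degree-$(k-1)$ quotient $(p(x) - p(a))/(x-a)$, hence lies in $\text{RS}(\mathbb{Z}_q, D, k)$.
```python
# Verifying the division trick over a small prime field (toy parameters):
# u0_i = p(x_i)/(x_i - a), u1_i = -1/(x_i - a), and u0 + p(a)*u1 should be
# the evaluation of the degree-(k-1) quotient (p(x) - p(a))/(x - a).
q = 97
D = list(range(1, 11))                    # n = 10 evaluation points
a = 50                                    # a in Z_q \ D
p = [3, 1, 4, 1, 5]                       # coefficients low-to-high, deg p = k = 4

def ev(coeffs, x):
    return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

inv = lambda t: pow(t % q, q - 2, q)      # Fermat inversion (q prime)

u0 = [ev(p, x) * inv(x - a) % q for x in D]
u1 = [-inv(x - a) % q for x in D]
line_point = [(c0 + ev(p, a) * c1) % q for c0, c1 in zip(u0, u1)]

# Synthetic division of p(x) by (x - a): Horner's rule gives the quotient
# coefficients highest-degree first, with p(a) left over as the remainder.
quot_hi, acc = [], 0
for c in reversed(p):
    acc = (acc * a + c) % q
    quot_hi.append(acc)
remainder = quot_hi.pop()
quot = list(reversed(quot_hi))            # degree k-1 quotient, low-to-high

assert remainder == ev(p, a)
assert line_point == [ev(quot, x) for x in D]
print("u0 + p(a)*u1 is the degree-(k-1) quotient: a codeword of RS(Z_q, D, k)")
```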
### The role of Schwartz-Zippel lemma and Cauchy-Schwarz inequality
The reduction's power derives from showing that evaluation points $p^{(j)}(a)$ take many distinct values, yielding many different points on the line $\{u^{(0)} + \lambda u^{(1)}: \lambda \in \mathbb{Z}_q\}$ that lie near the code. The Schwartz-Zippel lemma provides the crucial estimate: for distinct degree-$k$ polynomials $p^{(j)}(x)$ and $p^{(j')}(x)$, the probability over random $a \in \mathbb{Z}_q \setminus D$ that $p^{(j)}(a) = p^{(j')}(a)$ is at most $k/(q-n)$, since $p^{(j)}(x) - p^{(j')}(x)$ is a nonzero polynomial of degree at most $k$ having at most $k$ roots.
Computing the expected collision probability yields:
$$\Pr[p^{(j)}(a) = p^{(j')}(a): j, j' \xleftarrow{\$} [L], a \xleftarrow{\$} \mathbb{Z}_q \setminus D] \leq \frac{1}{L} + \left(1 - \frac{1}{L}\right)\frac{k}{q-n} = \frac{q-n+(L-1)k}{L(q-n)}$$
where the $1/L$ term accounts for $j = j'$ (guaranteed collision) and the remaining probability mass distributes over distinct polynomial pairs. Since this represents expectation over random $a$, some choice of $a$ achieves at most this collision probability.
The Cauchy-Schwarz inequality converts collision probability into lower bound on the number of distinct values. Let random variable $Y = p^{(j)}(a)$ for $j \xleftarrow{\$} [L]$ taking values in set $S = \{p^{(j)}(a): 1 \leq j \leq L\}$. The collision probability equals $\sum_{y \in S} \Pr[Y = y]^2$. Applying Cauchy-Schwarz to vectors of probabilities and all-ones vector:
$$1 = \left(\sum_{y \in S} \Pr[Y = y]\right)^2 \leq \left(\sum_{y \in S} \Pr[Y = y]^2\right) \cdot |S| \leq \frac{q-n+(L-1)k}{L(q-n)} \cdot |S|$$
Rearranging gives the lower bound:
$$|S| \geq \frac{(L-1)(q-n)}{q-n+(L-1)k}$$
This establishes that at least $E = \frac{(L-1)(q-n)}{q-n+(L-1)k}$ points on the constructed line lie within distance $f$ of $\text{RS}(\mathbb{Z}_q, D, k)$. If this exceeds $\epsilon q$ (the threshold for correlated agreement failure), the theorem is proven.
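Plugging SNARK-scale numbers into these quantities makes the contrapositive concrete. In the sketch below, $q \approx 2^{31}$, $n \approx 2^{20}$, $k = n/2$, and $\epsilon = 1/(4k)$ are assumed for illustration; it computes the Theorem 2 list-size bound and confirms that one additional codeword would already force more than $\epsilon q$ close points on the line.
```python
# Theorem 2 quantities with assumed SNARK-scale parameters (illustrative
# only; none of these values are taken from the paper itself).
import math

q, n, k = 2**31, 2**20, 2**19

def E(L: int) -> float:
    """Lower bound on distinct close points on the line for list size L."""
    return (L - 1) * (q - n) / (q - n + (L - 1) * k)

eps = 1 / (4 * k)                                   # safely below (q-n)/(2kq)
L_bound = math.ceil(eps * q * (q - n) / (q - n - k * eps * q))
print(f"L <= {L_bound} (and 2*eps*q = {2 * eps * q:.0f})")
print(f"E({L_bound + 1}) = {E(L_bound + 1):.1f} > eps*q = {eps * q:.0f}")
```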

### Implications for list-decoding conjectures with small error probability
Theorem 2's contrapositive reveals profound implications: any progress on correlated agreement conjectures with error probability $\epsilon < 1/k$ automatically yields corresponding breakthroughs in list-decoding theory. The paper emphasizes that "any future results on our correlated agreement conjectures with small enough error probability would imply similar results in classical list decoding." Since classical list-decoding capacity questions have resisted resolution for over sixty years despite intensive research by the coding theory community, this connection suggests modified correlated agreement conjectures (Our Conjectures 1-3) may be similarly difficult to resolve.
The paper notes that correlated agreement results with $\epsilon > 1/k$ remain useful for practical applications. Indeed, the only known results for small fields and large dimensions—parameters like $q \approx 2^{32}$ and $k \approx 2^{20}$ used in deployed SNARKs—fall into this regime where Theorem 2 provides no constraints. However, conjectures proposing $\epsilon = F(n, k, f)/q$ for polynomial function $F$ would imply similar polynomial-list-size conjectures for list decoding when fields satisfy $q > 2kF(n, k, f)$. This establishes that optimistic correlated agreement conjectures (predicting very small error probability) necessarily imply optimistic list-decoding conjectures, inheriting their difficulty.
The reduction also clarifies why the community should pursue "large error probabilities for (mutual) correlated agreement, to avoid implying list-decodability." Rather than seeking strongest possible correlated agreement statements (smallest $\epsilon$), research should characterize what error probabilities are achievable and how they scale with field size and code parameters. This represents a conceptual shift from minimizing error to understanding the fundamental trade-offs between error probability, field size, proximity parameter, and code rate.
## Modified Conjectures: Up to List-Decoding Capacity
### Our Conjecture 1: List-decodability with entropy bound
The paper proposes minimally modified conjectures replacing capacity bounds with list-decoding capacity bounds throughout. Our Conjecture 1 states that Conjecture 2.3 from DEEP-FRI holds when the proximity condition $\delta \leq 1-\rho-\eta$ is replaced by the stronger requirement $H_q(\delta) \leq 1-\rho-\eta$. This modification aligns the conjecture with Elias's list-decoding capacity theorem, ensuring parameters remain in the regime where polynomial-sized lists are information-theoretically possible. The conjectured list size $(n/\eta)^{C_\rho}$ for constant $C_\rho$ depending only on rate remains unchanged, as does the overall structure of the claim.
The quantitative impact proves remarkably modest for practical field sizes. Claim 1 establishes that $\delta < H_q(\delta) \leq \delta + 1/\log_2(q)$ for all $0 < \delta \leq 1 - 1/q$, with the maximum gap $\max_{0 < \delta \leq 1-1/q} (H_q(\delta) - \delta)$ lying in the tight interval $[(1-1/q)/\log_2(q), 1/\log_2(q)]$. For fields used in SNARKs satisfying $\log_2(q) \gtrsim 31$, this translates to at most $1/31 \approx 0.032$ reduction in achievable proximity parameter—merely 3.2 percent of the error budget. The paper observes: "Fields used in SNARKs have $\log_2(q) \gtrsim 31$, so in practice this is a small difference."
However, the practical impact manifests more subtly through discrete parameter choices. While $1/\log_2(q)$ appears small asymptotically, SNARKs typically use blocklengths $n \gg \log_2(q)$ (often $n \approx 2^{20}$ versus $\log_2(q) \approx 31$), so "the list-decoding capacity bound should rule out the unmodified conjecture for some integral values of $k, f$." Specific parameter combinations that appeared safe under capacity-achieving assumptions may violate list-decoding capacity constraints, forcing implementers to reduce proximity parameters slightly or increase field sizes to restore security margins.
### Our Conjecture 2: Proximity gaps and correlated agreement
Our Conjecture 2 modifies the correlated agreement up-to-capacity conjecture by replacing $\delta \leq 1-\rho-\eta$ with $H_q(\delta) \leq 1 - \rho - 1/n - \eta$. The additional $1/n$ term provides a technical margin ensuring that when parameters satisfy the modified constraint, the probabilistic bound from Lemma 1 remains controlled. Specifically, under assumption $\rho \leq 1 - H_q(\delta) - 1/n$, the probability that a random word lies within distance $f = \delta n$ of the code satisfies (via the standard ball-volume bound $q^{H_q(\delta)n}$ applied to each of the $q^{\rho n}$ codewords):
$$\Pr[\text{random word within distance } f] \leq q^{(H_q(\delta) - 1 + \rho)n} \leq q^{(-1/n)n} = 1/q$$
This implies that, in expectation, only about one point on a random line through a deep hole lies close to the code: the expected number of close points is $(q-1)p \leq 1$, where $p \leq 1/q$ denotes the probability from Lemma 1. This is sufficiently small that proximity gaps plausibly hold with polynomial error parameters, consistent with the original conjecture's spirit.
The practical distance reduction from replacing $\delta < 1-\rho$ with $\delta < 1-H_q(\delta)$ amounts to $1/\log_2(q) + 1/n$. For typical zkVM parameters with $n \approx 2^{20}$ and $\log_2(q) \approx 31$, this equals approximately $0.032 + 10^{-6} \approx 0.032$, confirming that the list-decoding capacity modification dominates the $1/n$ term. Systems operating with $\rho = 1/2$ could previously conjecture proximity gaps up to $\delta \approx 0.48$ under capacity assumptions; the modified conjecture restricts this to approximately $\delta \approx 0.44$ for $q = 2^{31}$, requiring roughly 10 percent additional queries to maintain 128-bit security.
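The query-count arithmetic behind that estimate can be reproduced with a standard heuristic: if each query catches a cheating prover except with probability $1-\delta$, then roughly $128/\log_2(1/(1-\delta))$ queries give 128-bit security. This model is an assumption for illustration, not the paper's exact soundness accounting, but it lands in the same ballpark as the roughly 10 percent figure above.
```python
# Back-of-the-envelope query counts for 128-bit security under the assumed
# per-query soundness model described above (a heuristic, not the paper's).
import math

def queries(delta: float, bits: int = 128) -> int:
    return math.ceil(bits / math.log2(1 / (1 - delta)))

before, after = queries(0.48), queries(0.44)
print(f"{before} -> {after} queries (+{100 * (after - before) / before:.0f}%)")
```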
:::success
**Modified Conjectures Represent Best Current Understanding**
The paper's proposed conjectures (Our Conjectures 1, 2, 3) represent the current best standing conjectures for proximity gaps of Reed-Solomon codes. They avoid the provably impossible regime beyond list-decoding capacity while remaining optimistic enough to support efficient proof systems. The Ethereum Foundation's Proximity Prize now effectively targets these refined conjectures, as proving or disproving them would constitute major progress. The refinement from capacity to list-decoding capacity demonstrates scientific maturity—acknowledging theoretical limits while preserving practical utility.
:::
### Our Conjecture 3: Mutual correlated agreement refinement
Our Conjecture 3 extends the modification to mutual correlated agreement (the strongest variant) by replacing $0 < \delta < 1-\rho-\eta$ with $0 < H_q(\delta) < 1-1/n-\rho-\eta$ in Conjecture 4.12 from WHIR. Mutual correlated agreement requires that when a random linear combination of functions lies close to the code, not only do the original functions exhibit correlated agreement (having a common large subdomain where they agree with codewords), but additionally the agreement domains must be identical for all functions simultaneously. This represents a strengthening of standard correlated agreement, providing tighter soundness bounds for protocols like WHIR that exploit this additional structure.
Since mutual correlated agreement is stronger than standard correlated agreement, it naturally inherits the list-decoding capacity constraints. If standard correlated agreement fails beyond list-decoding capacity (as proven), mutual correlated agreement certainly fails there as well. The additional $1/n$ margin in the modified conjecture provides technical headroom, ensuring parameters remain safely within the regime where polynomial error bounds plausibly hold. The practical impact on WHIR and similar protocols mirrors that for FRI and DEEP-FRI: proximity parameters must be reduced by approximately 3 percent, translating to 5-10 percent increases in query complexity and proof sizes for aggressive parameter choices.
The paper notes that mutual correlated agreement "up to the Johnson bound" ($\delta < 1-\sqrt{\rho}$) is "likely attainable" and would serve as a "fail-safe" providing unconditional security guarantees. However, "relying on the Johnson bound would result in significantly increased proof sizes and verification times"—roughly 40-50 percent overhead compared to list-decoding-capacity-optimal parameters. This creates a three-tier framework for system designers: conservative Johnson-bound parameters (proven secure, larger proofs), moderate list-decoding-capacity parameters (conjectured secure, small additional cost), or aggressive near-capacity parameters (now known insecure).

## Comparison Tables: Performance Metrics and System Characteristics
### Table 1: Bound Comparisons for Reed-Solomon Codes
| Bound Type | Formula | Regime | List Size | Proven or Conjectured | Practical Overhead |
|:-----------|:--------|:-------|:----------|:----------------------|:-------------------|
| Unique Decoding | $\delta < \frac{1-\rho}{2}$ | Most conservative | $L = 1$ | **Proven** (classical) | Baseline |
| Johnson Bound | $\delta < 1 - \sqrt{\rho}$ | Conservative | $L = O(1/\eta^2)$ | **Proven** (Ben-Sasson et al. 2023 for proximity gaps) | +40-50% queries |
| List-Decoding Capacity | $H_q(\delta) < 1-\rho$ | Moderate | $L = O(1/\eta)$ conjectured | **Conjectured** (Our Conjectures 1-3) | +10-15% queries |
| Shannon Capacity | $\delta < 1-\rho$ | Aggressive | Exponential $L \geq q^{\Omega(\eta n)}$ | **DISPROVEN** (this paper) | Insecure |
**Note:** $\eta$ represents the margin below the respective bound. For $\rho = 1/2$ and $q = 2^{31}$: Johnson bound allows $\delta \approx 0.29$, list-decoding capacity allows $\delta \approx 0.44$, capacity would allow $\delta \approx 0.47$.
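The $\delta$ values in this note can be reproduced by solving $H_q(\delta) = 1 - \rho - \eta$ numerically; the bisection sketch below assumes a margin $\eta = 0.03$ chosen to match the "capacity would allow $\delta \approx 0.47$" entry.
```python
# Reproducing the note's delta estimates by bisecting H_q(delta) = 1-rho-eta
# for rho = 1/2, q = 2^31; eta = 0.03 is an assumed margin for illustration.
import math

def Hq(d: float, q: int) -> float:
    lg = lambda x: math.log(x, q)
    return d * lg(q - 1) - d * lg(d) - (1 - d) * lg(1 - d)

def solve_delta(rho: float, q: int, eta: float = 0.03) -> float:
    target, lo, hi = 1 - rho - eta, 1e-9, 1 - 1 / q
    for _ in range(100):                  # H_q is increasing on (0, 1 - 1/q)
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if Hq(mid, q) < target else (lo, mid)
    return lo

rho, q = 0.5, 2**31
print(f"Johnson:                delta < {1 - math.sqrt(rho):.3f}")
print(f"list-decoding capacity: delta ~ {solve_delta(rho, q):.3f}")   # ~0.44
print(f"capacity:               delta ~ {1 - rho - 0.03:.3f}")
```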
### Table 2: Error Parameters by Regime and Protocol
| Protocol | Original Regime | Error Parameter $\epsilon$ | Field Size Requirement | Modified Regime | New $\epsilon$ | Overhead |
|:---------|:---------------|:---------------------------|:----------------------|:----------------|:---------------|:---------|
| FRI (ICALP'18) | $\delta < 1-\rho$ | $\frac{(\ell-1) \cdot 2^m}{\rho \cdot \|\mathbb{F}\|}$ | $q \geq O(n)$ | $H_q(\delta) < 1-\rho$ | Same formula, reduced $\delta$ | +10-15% |
| DEEP-FRI (ITCS'20) | $\delta \leq 1-\rho-\eta$ | $\left(\frac{n}{\eta}\right)^{C_\rho}$ | $q \geq O(n)$ | $H_q(\delta) \leq 1-\rho-\eta$ | Same, tighter constraint | +10-15% |
| STIR (CRYPTO'24) | $\delta < 1-\sqrt{\rho}$ | $\frac{(\ell-1)^2 \cdot 2^{2m}}{\|\mathbb{F}\| \cdot (2\min\{\eta, \sqrt{\rho}/20\})^7}$ | $q \geq \Omega(n^2)$ | **No change** | **No change** | Minimal |
| WHIR (2024) | $\delta < 1-\rho$ | $\frac{(\ell-1)^{c_2} \cdot d^{c_2}}{\eta^{c_1} \cdot \rho^{c_1+c_2} \cdot \|\mathbb{F}\|}$ | $q \geq O(n)$ | $H_q(\delta) < 1-\rho$ | Same formula, reduced $\delta$ | +5-10% |
**Key insight:** STIR remains unaffected because it operates within the Johnson bound regime, which was already proven secure. Other protocols require parameter adjustments.
### Table 3: Concrete Parameter Impact for 128-bit Security
| System Implementation | Before (capacity) | After (list-decoding capacity) | Security Status |
|:----------------------|:------------------|:-------------------------------|:----------------|
| StarkWare Cairo (typical config) | Rate $\rho = 1/4$, $\delta = 0.72$, 96 queries | Rate $\rho = 1/4$, $\delta \approx 0.68$, ~106 queries | Requires update |
| Polygon Miden VM | Rate $\rho = 1/2$, $\delta = 0.48$, 128 queries | Rate $\rho = 1/2$, $\delta \approx 0.44$, ~140 queries | Requires update |
| Risc Zero (aggressive params) | Rate $\rho = 1/2$, $\delta = 0.47$, 120 queries | Rate $\rho = 1/2$, $\delta \approx 0.43$, ~135 queries | Requires update |
| STIR-based system | Rate $\rho = 1/4$, $\delta = 0.50$ (Johnson), 80 queries | **No change required** | Already secure |
| WHIR (proposed params) | Rate $\rho = 1/2$, $\delta = 0.48$, 145 queries | Rate $\rho = 1/2$, $\delta \approx 0.44$, ~160 queries | Can fix pre-deployment |
**Proof size impact:** Approximately 10-15% increase for systems targeting capacity, translating from roughly 30-60 KB to 33-68 KB for typical configurations.
## Applications and Implications for Zero-Knowledge Systems
### Hash-based SNARKs: FRI, DEEP-FRI, STIR, and WHIR
The proximity gaps conjectures underpin security analyses for the entire family of FRI-based transparent SNARKs that have revolutionized zero-knowledge proof systems since 2018. FRI (Fast Reed-Solomon Interactive Oracle Proofs of Proximity) introduced by Ben-Sasson et al. achieves logarithmic verifier complexity for polynomial commitment schemes through recursive folding, enabling transparent (no trusted setup) post-quantum secure proofs. Each folding round takes two claimed low-degree polynomials $f$ and $g$, combines them via random challenge $z$ as $h_z = f + z \cdot g$, and recursively tests whether $h_z$ has degree at most some threshold. The soundness argument critically depends on showing that if many random linear combinations $h_z$ lie close to Reed-Solomon codewords, then $f$ and $g$ themselves must lie close—precisely the correlated agreement property.
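The completeness direction of this folding step (a random combination of codewords is again a codeword) is elementary and can be demonstrated on a toy field; the converse, soundness, is precisely where correlated agreement is needed. The sketch below uses illustrative parameters ($q = 97$, $n = 16$, $k = 4$), not those of any deployed system.
```python
# A toy FRI-style fold over a small prime field: h_z = f + z*g for a random
# challenge z; folding two codewords yields a codeword (completeness side).
import random

q, k = 97, 4
D = list(range(1, 17))                                   # n = 16, rate 1/4

def encode(coeffs):
    return [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q for x in D]

def degree_on_D(word):
    """Interpolate on D via Newton's divided differences; return the degree."""
    inv = lambda t: pow(t % q, q - 2, q)
    c = list(word)
    for j in range(1, len(D)):
        for i in range(len(D) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) * inv(D[i] - D[i - j]) % q
    return max((i for i, ci in enumerate(c) if ci), default=-1)

f_word = encode([1, 2, 3, 4])            # two codewords of RS(Z_q, D, k)
g_word = encode([5, 6, 7, 8])
z = random.randrange(1, q)               # verifier's random challenge
h_word = [(a + z * b) % q for a, b in zip(f_word, g_word)]
assert degree_on_D(h_word) <= k - 1      # the fold is again a codeword
print(f"fold with z = {z} stays in RS(Z_q, D, {k})")
```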
DEEP-FRI strengthened the FRI protocol by introducing out-of-domain sampling, querying polynomial evaluations at points outside the original evaluation domain. This technique improves soundness bounds and reduces query complexity by exploiting that rational functions $p(x)/(x-a)$ for $a$ outside the domain have predictable degree-reduction properties. The DEEP-FRI security analysis relies on list-decodability conjectures (Conjecture 2.3) asserting that Reed-Solomon codes remain efficiently list-decodable with polynomial-sized lists up to capacity. The Crites-Stewart paper's observation that this contradicts Elias's classical result demonstrates that DEEP-FRI's original conjecture was overly optimistic, though the protocol remains sound with modified parameters respecting list-decoding capacity bounds.
STIR (Shift To Improve Rate) by Arnon-Chiesa-Fenzi-Yogev achieves state-of-the-art query complexity among FRI variants through an innovative rate-improvement technique. Rather than halving the rate at each folding round (as standard FRI does), STIR uses a shift operation that improves rate by factor $2^{1-k}$ per iteration, where $k$ is a protocol parameter. This allows STIR to achieve query complexity $O(\lambda + (\lambda/k)\log(m/k))$ compared to FRI's $O(\lambda \log(m))$, where $\lambda$ denotes security parameter and $m$ represents degree. Crucially, STIR's analysis remains entirely within the Johnson bound regime ($\delta < 1-\sqrt{\rho}$) where proximity gaps are proven rather than conjectured. Consequently, the Crites-Stewart results do not affect STIR's core security claims, though the paper's capacity-conjecture disproof rules out potential future optimizations that might have pushed STIR parameters further.
WHIR represents the cutting edge of FRI-based protocols, achieving verification times of approximately 100-400 microseconds through aggressive use of mutual correlated agreement conjectures. The protocol combines constrained Reed-Solomon codes, query-reduction techniques, and batching optimizations to minimize verifier operations. WHIR's Conjecture 4.12 on mutual correlated agreement up to capacity (the strongest variant of proximity gaps conjectures) directly falls to the Crites-Stewart disproof. However, WHIR remains in preprint status and can incorporate the modified conjecture (Our Conjecture 3) before deployment. The practical impact requires increasing queries by approximately 5-10 percent, raising proof sizes from roughly 120 KB to 128 KB for 128-bit security with degree $m = 20$—a modest cost that preserves WHIR's verification speed advantage over alternatives.
:::spoiler
### zkVMs relying on FRI-based proofs
Zero-knowledge virtual machines have emerged as the killer application for FRI-based SNARKs, enabling general-purpose verifiable computation by proving correct execution of arbitrary programs compiled to simple instruction sets. Nearly two dozen zkVMs tracked by EthProofs, including production systems processing millions of dollars in transactions daily, assume proximity gaps conjectures in their security analyses. **Cairo** by StarkWare powers StarkNet and StarkEx, using a STARK-friendly CPU architecture optimized for algebraic intermediate representations. Cairo programs compile to constraints over the Goldilocks field ($q = 2^{64} - 2^{32} + 1$) or binary extension fields, with FRI-based batching enabling efficient proof generation for complex computations like Ethereum Virtual Machine execution or blockchain state transitions.
**Risc Zero** implements a RISC-V based zkVM using FRI over binary extension fields, targeting software developers familiar with existing toolchains. The system generates proofs for arbitrary Rust or C++ programs by compiling to RISC-V instruction traces, converting to Reed-Solomon encoded execution polynomials, and applying FRI proximity testing. Risc Zero's aggressive parameter choices targeting minimal proof sizes likely push close to capacity, making it particularly sensitive to the Crites-Stewart results. Production deployments may require reanalysis and parameter adjustments to ensure security margins respect list-decoding capacity bounds.
**Polygon Miden VM** uses a stack-based architecture inspired by Forth and Bitcoin Script, optimized for recursive proof composition. The system generates proofs of correct state transitions for Polygon's AggLayer, enabling composable layer-2 solutions. Miden VM's security analysis explicitly references the Ben-Sasson et al. proximity gaps paper, assuming conjectures hold up to capacity for claimed 128-bit security with specified query counts. The Crites-Stewart disproof necessitates either increasing queries (larger proofs) or reducing proximity parameters (accepting slightly weaker soundness per query), with typical overhead estimates suggesting 10-15 percent larger proof sizes.
**SP1** (Succinct Labs) represents a newer generation of high-performance zkVMs targeting 100x speed improvements over previous systems through aggressive compiler optimizations and hardware acceleration. Like other FRI-based zkVMs, SP1's soundness analysis relies on proximity gaps conjectures. The paper's results affect SP1 similarly to other systems: parameter choices must respect list-decoding capacity rather than capacity, likely requiring modest increases in query counts or field sizes.
:::
### Post-quantum cryptography and the bounded distance decoding question
Hash-based SNARKs using FRI have been prominently marketed as post-quantum secure proof systems because they avoid pairing-based cryptography vulnerable to Shor's algorithm, relying instead on cryptographic hash functions believed quantum-resistant. The Crites-Stewart paper raises a critical unresolved question threatening this narrative: "It is not clear to what extent quantum attacks on bounded distance decoding have been considered. This remains an important open question in practice, as hash-based SNARKs are being proposed as post-quantum solutions on Ethereum and beyond." The distinction between list decoding (finding all nearby codewords) and bounded distance decoding (finding any single nearby codeword if one exists) proves crucial for post-quantum security analysis.
The paper's Theorem 1 demonstrates existence of lines with many points close to Reed-Solomon codes beyond list-decoding capacity, but finding these close codewords computationally may remain hard. For a malicious prover attempting to fool a verifier in a SNARK protocol, the attack requires committing to a random-looking word (to hide an incorrect computation), then responding to verifier challenges by producing nearby Reed-Solomon codewords on demand. This reduces to solving bounded distance decoding instances on cryptographically-sized fields. Classical hardness results provide some confidence: Guruswami-Vardy proved in 2005 that maximum-likelihood decoding of Reed-Solomon codes (a variant of bounded distance decoding) is NP-hard for certain parameter regimes corresponding to $n-k-f = 1$, and Gandikota-Ghazi-Grigorescu extended this to slightly lower $f$ values in 2018, additionally showing hardness based on discrete logarithm in extension fields for attack regime $\binom{n}{f} \geq q^{n-k-f}$.
However, these hardness results fail to cover the parameter regimes used in practical SNARK deployments. The Guruswami-Vardy reduction requires field sizes exponentially large in the blocklength, far from the cryptographically small prime or extension fields with $q \geq 2^{31}$ used in SNARKs. The Gandikota-Ghazi-Grigorescu result for general fields requires either that the domain equals the entire field (necessitating $n = q$, but SNARKs use $n \ll q$ for efficiency) or that $q$ is exponentially large relative to $n$, which contradicts typical SNARK parameters with $q \approx 2^{31}$ to $2^{64}$ and $n \approx 2^{20}$. Most critically, no existing work characterizes quantum algorithms for bounded distance decoding in these parameter regimes. Quantum computers could potentially exploit algebraic structure in Reed-Solomon codes more effectively than classical algorithms, perhaps through variants of Shor's algorithm adapted to polynomial-finding problems or through Grover-accelerated search over candidate polynomials.
The paper explicitly warns: "None of these results apply to the fields typically used in SNARKs, i.e., cryptographically small fields." This leaves open whether practical quantum attacks exist against deployed FRI-based proof systems. If quantum algorithms can efficiently solve bounded distance decoding for Reed-Solomon codes with $q \approx 2^{31}$ and $n \approx 2^{20}$, the claimed post-quantum security of hash-based SNARKs would be compromised. Conservative system designers might respond by increasing field sizes (using $q \approx 2^{64}$ or $2^{128}$ rather than $2^{31}$), reducing proximity parameters to stay well within Johnson bound regimes where no attack vectors are known, or developing rigorous quantum hardness analyses for bounded distance decoding in practical parameter ranges.

## Future Research Directions
The paper concludes by proposing four critical research directions for the cryptographic community, each addressing different aspects of the proximity gaps landscape revealed by their results. **Direction 1** calls for pursuing quantum attacks on bounded distance decoding, particularly in parameter regimes used by deployed SNARKs. This represents the highest-priority open problem for post-quantum security of FRI-based systems. Concrete research questions include developing quantum algorithms that exploit algebraic structure in Reed-Solomon codes, characterizing which parameters admit efficient quantum decoding versus which remain hard, and establishing quantum lower bounds through information-theoretic or oracle-separation arguments. Success would either provide quantum attack algorithms forcing immediate parameter changes in deployed systems, or yield confidence that quantum hardness assumptions underlying post-quantum SNARK claims are justified.
**Direction 2** advocates proving or disproving the modified conjectures (Our Conjectures 1-3) proposed by the paper up to list-decoding capacity. The authors note this will be "challenging," as Theorem 2 establishes that small-error-probability correlated agreement implies list-decoding breakthroughs—a problem that has resisted solution for over sixty years despite intensive effort by coding theorists. The fundamental connection demonstrated by the reduction suggests that proximity gaps conjectures inherit the intrinsic difficulty of classical list-decoding capacity questions. Nonetheless, techniques from algebraic geometry, additive combinatorics, or sum-product phenomena might yield progress. Alternatively, counterexamples to the modified conjectures (perhaps using techniques beyond the Crites-Stewart probabilistic construction) would necessitate further refinement and force a more conservative security stance for SNARK deployments.
**Direction 3** suggests pursuing large error probabilities for correlated agreement rather than seeking smallest possible error. Theorem 2 shows that correlated agreement with error probability $\epsilon < 1/k$ implies list-decoding results, so "the community should be looking for large error probabilities for (mutual) correlated agreement, to avoid implying list-decodability." This represents a conceptual shift from minimizing error (seeking strongest security statements) to characterizing achievable error probabilities and understanding how they scale with field size, code dimension, and proximity parameters. Research in this direction would focus on explicit constructions showing error probability cannot be made arbitrarily small, analyzing the fundamental trade-offs between different parameters, and developing practical soundness analyses that tolerate larger error while maintaining adequate security for applications. This direction aligns with engineering pragmatism—understanding what is achievable rather than pursuing theoretically optimal but potentially impossible goals.
**Direction 4** advocates proving mutual correlated agreement up to the Johnson bound as a "fail-safe" providing unconditional security guarantees without reliance on unproven conjectures. The paper notes this is "likely attainable" given recent progress by Garreta-Mohnblatt-Wagner (2025) on simplified round-by-round soundness proofs for FRI based on mutual correlated agreement. Achieving this would eliminate all conjecture-dependence from FRI security analyses, though at the cost of "significantly increased proof sizes and verification times"—approximately 40-50 percent overhead compared to list-decoding-capacity-optimal parameters. From a practical standpoint, this provides a conservative fallback position ensuring deployed systems can be rigorously proven secure even if modified conjectures eventually fail, trading efficiency for certainty. High-value applications requiring maximum assurance (financial systems, critical infrastructure) might adopt Johnson-bound parameters immediately, while lower-stakes applications tolerate conjectured security with smaller proofs.
:::spoiler
### Open questions in coding theory and cryptography
Beyond the four explicit directions, the Crites-Stewart work surfaces numerous deeper questions at the intersection of coding theory and cryptography. The gap between provable Johnson-bound results ($\delta < 1-\sqrt{\rho}$) and conjectured list-decoding-capacity results ($H_q(\delta) < 1-\rho$) represents fundamental uncertainty about Reed-Solomon code structure. Are there fundamental barriers (information-theoretic, algebraic-geometric, or combinatorial) preventing proximity gaps from extending beyond Johnson bound? Or do appropriate techniques simply await discovery, potentially through new proof strategies exploiting symmetry, tensor structure, or sum-product phenomena?
The Crites-Stewart reduction (Theorem 2) connecting correlated agreement to list decoding hints at deeper unifying principles. Are there other fundamental connections between seemingly distinct coding-theoretic problems waiting to be discovered? Could techniques from other areas—algebraic topology, representation theory, or arithmetic geometry—shed light on these questions? The paper's probabilistic construction using deep holes and Cantelli's inequality represents a relatively elementary argument yielding powerful conclusions; might more sophisticated probabilistic methods or explicit constructions reveal additional structure?
The relationship between conjecture difficulty and code rate $\rho$ remains poorly understood. All known results show easier regimes at lower rates (where minimum distance is large) and harder regimes at higher rates (approaching capacity). Is there a phase transition in proof technique difficulty or conjecture truth as rate varies? Do techniques that work for small rates fundamentally fail at high rates, or is this merely an artifact of current proof technology? Understanding rate-dependence could guide practical parameter selection and research prioritization.
:::
## Conclusion and Broader Impact
The Crites-Stewart paper fundamentally reshapes the mathematical foundations of modern zero-knowledge proof systems by demonstrating that widely-assumed proximity gaps conjectures for Reed-Solomon codes fail beyond the list-decoding capacity bound. This result forces a reckoning with the reality that optimistic capacity-achieving conjectures from Ben-Sasson et al., DEEP-FRI, and WHIR are provably false, though the quantitative impact on practical systems remains surprisingly modest—requiring only 3 percent reductions in proximity parameters corresponding to 10-15 percent increases in proof sizes for most deployments. The work's second major contribution establishes an elegant reduction showing correlated agreement with small error probability implies classical list-decoding breakthroughs, connecting these research areas and explaining why further progress will be challenging.
The proposed modified conjectures (Our Conjectures 1-3) up to list-decoding capacity represent the current best understanding of what is achievable for proximity gaps, providing sound targets for the Ethereum Foundation's million-dollar Proximity Prize while supporting efficient proof system implementations. The distinction between capacity ($\delta < 1-\rho$), list-decoding capacity ($H_q(\delta) < 1-\rho$), and Johnson bound ($\delta < 1-\sqrt{\rho}$) now provides a clear three-tier framework for system designers: aggressive parameters assuming modified conjectures (10-15 percent overhead versus previous capacity assumptions), moderate parameters pushing toward Johnson bound (20-30 percent overhead), or conservative proven-secure parameters staying well within Johnson bound (40-50 percent overhead).
The most critical open question centers on post-quantum security—whether quantum algorithms can efficiently solve bounded distance decoding for Reed-Solomon codes in practical parameter regimes used by deployed SNARKs. The paper explicitly raises this as an "important open question in practice" that threatens the post-quantum security narrative surrounding hash-based transparent proof systems. Until this question receives rigorous analysis, claims of quantum-resistant SNARKs rest on uncertain foundations. The four proposed research directions provide a roadmap for the community: quantum attacks on bounded distance decoding (highest priority for post-quantum security), proving or disproving modified conjectures (connecting to classical list-decoding theory), characterizing achievable error probabilities (engineering pragmatism), and proving mutual correlated agreement up to Johnson bound (fail-safe for critical applications).
The connection to threshold Schnorr signatures through Shamir secret sharing and Reed-Solomon codes demonstrates how fundamental coding-theoretic questions permeate modern cryptography across seemingly disparate applications. As blockchain systems scale to global adoption with zero-knowledge rollups processing millions of transactions, the mathematical rigor of underlying primitives becomes paramount. The Crites-Stewart work ensures that the next generation of SNARK deployments will rest on firmer theoretical ground—even if that ground is slightly more conservative than previously hoped. The gap between engineering optimism and mathematical reality has been quantified at roughly 3 percent of the error budget, a modest price for honest security foundations supporting trillion-dollar financial infrastructure.