# Reed-Solomon Proximity Gaps: A Critical Inflection Point (November 2025)
[TOC]
In November 2025, the cryptographic coding theory community witnessed an unprecedented convergence of breakthrough results that fundamentally reshaped our understanding of Reed-Solomon proximity testing. **Six concurrent publications dismantled longstanding conjectures while simultaneously establishing new theoretical frontiers**, with direct implications for deployed blockchain systems processing billions of dollars in transactions. The Ethereum Foundation's million-dollar Proximity Prize catalyzed this research sprint, but the outcomes exceeded expectations by revealing both impossibility barriers at the capacity bound and achievable improvements approaching the Johnson radius. These papers collectively demonstrate that proximity gaps for Reed-Solomon codes exhibit sharp phase transitions, with soundness parameters jumping from $O(n)$ to $\Omega(n^2)$ as the proximity parameter crosses critical thresholds.
The theoretical significance extends beyond incremental improvements. Two independent groups disproved the capacity conjecture using entirely different techniques, while three papers established optimal or near-optimal positive results using novel algebraic frameworks. A sixth paper orthogonally advanced deterministic algorithm design, removing randomness from list-decoding without sacrificing efficiency. Together, these works provide practitioners with precise guidance on parameter selection while opening multiple avenues for future research. The central tension resolved by this body of work is between optimistic extrapolations from unique decoding bounds and the fundamental limitations imposed by list-decodability, establishing the Johnson radius and list-decoding capacity as natural theoretical boundaries rather than merely technical obstacles.
:::info
**Key Terminology**
- **Proximity gaps**: The phenomenon where if sufficiently many linear combinations of vectors are close to a code, then the original vectors must also be close
- **Johnson bound**: $1 - \sqrt{\rho}$ where $\rho = k/n$ is the code rate, representing the classical list-decoding threshold
- **Capacity bound**: $1 - \rho$, the information-theoretic limit for unique decoding
- **Mutual correlated agreement (MCA)**: The strongest known form of distance preservation for probabilistic proofs
- **Soundness error**: The probability that a malicious prover convinces a verifier to accept an incorrect proof
:::
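Throughout, the line (degree-1) form of these statements reads, in the standard BCIKS-style formulation (our paraphrase; individual papers vary the quantifiers and the exact error expression):

$$
\Pr_{z \in \mathbb{F}_q}\big[\Delta(u_0 + z \cdot u_1, \mathcal{C}) \leq \gamma\big] > \text{err}(\mathcal{C}, \gamma) \quad \Longrightarrow \quad \Delta([u_0, u_1], \mathcal{C}^2) \leq \gamma,
$$

where $\Delta([u_0, u_1], \mathcal{C}^2) \leq \gamma$ means there exist codewords $c_0, c_1$ agreeing with $u_0, u_1$ on a common set of at least $(1-\gamma)n$ coordinates (correlated agreement).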
## Disproof cascade: Three independent assaults on capacity optimism
November 2025 marked a decisive refutation of the optimistic capacity conjecture that had guided SNARK protocol design for several years. Three independent research groups simultaneously demonstrated that proximity gaps cannot reach the capacity bound $1 - \rho$ with polynomial soundness parameters, employing complementary techniques that collectively establish this impossibility from multiple angles.
:::spoiler Background: The Capacity Conjecture (BCIKS23)
Ben-Sasson, Carmon, Ishai, Kopparty, and Saraf conjectured in their Journal of the ACM 2023 paper that Reed-Solomon codes admit proximity gaps up to the capacity bound $\gamma < 1 - \rho$ with soundness error bounded by $\text{err}(C,z) \leq \frac{1}{q} \cdot \frac{n^{c_2}}{(\rho \cdot \eta)^{c_1}}$ for universal constants $c_1, c_2$, where $\eta = (n-k-z)/n$ measures distance from capacity. This would have enabled SNARK proof sizes approaching information-theoretic limits. The conjecture was motivated by positive results at the unique decoding radius and Johnson bound, suggesting a pattern of improvement.
:::
The **Crites-Stewart paper** (ePrint 2025/2046) from the Web3 Foundation delivered the first blow by establishing a direct reduction from proximity gaps to list-decodability. Their key insight connects correlated agreement with sufficiently small error probability to the existence of list-decoding algorithms, proving that any proximity gap result beyond the list-decoding capacity bound would imply impossibly good list-decoding bounds. Since list-decoding capacity for Reed-Solomon codes is well-established through classical coding theory, this immediately falsifies the up-to-capacity conjecture. The paper disproves three related conjectures: the correlated agreement up-to-capacity conjecture of BCIKS23, the mutual correlated agreement up-to-capacity conjecture from WHIR, and the list-decodability up-to-capacity conjecture from DEEP-FRI. Constructively, Crites and Stewart propose minimal modifications restricting these conjectures to the list-decoding capacity bound, providing a salvageable framework for future work.
The **Diamond-Gruen paper** (ePrint 2025/2010) independently demolished the capacity conjecture through explicit counterexample construction. For each positive integer $c^*$, they construct an infinite sequence of Reed-Solomon codes with field size $q(n) = n^{c^*+1}$ and carefully chosen parameters $k(n) = (1 - \frac{2}{3} \cdot \frac{1}{c^*+1}) \cdot f(n)$ where $f(n) = \lfloor e \cdot n^{1/3} \rfloor$, such that the proximity error satisfies $\text{err}(C,z) > \frac{n^{c^*}}{q}$. **This refutes the conjecture's predicted polynomial bounds for all constants $c^*$**, establishing that no universal polynomial bound exists. The technical innovation lies in developing extremely sharp estimates for Hamming ball volumes and their intersections, achieving asymptotic precision far exceeding classical entropy-based bounds. The counterexamples occur in the regime where relative rates $\rho = k/n \to 0$ and relative radii $\gamma = z/n \to 1$, suggesting the conjecture might be salvageable by restricting to bounded-away-from-zero rates and shortfalls.
**Figure 1: Proximity Parameter Phase Diagram**
*Screenshot location*: Diamond-Gruen paper (ePrint 2025/2010), page 6, Figure 1 showing query requirements vs. rate. This figure visualizes the phase transitions between different proximity regimes and their corresponding soundness parameter requirements.
The **Ben-Sasson et al. paper** (ePrint 2025/2055), published days after the Crites-Stewart and Diamond-Gruen results, provides the most comprehensive treatment by establishing both tight positive results and sharp impossibility theorems. On the negative side, the paper proves that achieving proximity loss less than $1/8$ at the Johnson radius for $\delta = 15/16$ requires soundness parameter $a \geq n^{2-o(1)}$, demonstrating a sharp phase transition from linear to quadratic dependence. More generally, for each constant $\tau$, they construct parameters where proximity loss below $2^{-(\tau+1)}$ requires $a \geq n^{\tau-o(1)}$, **refuting the $n^\tau$-bounded proximity gaps conjecture for all constants $\tau$**. The constructions use subspace polynomials over characteristic-2 fields and exploit algebraic geometry code properties. Theorem 1.9 establishes the fundamental connection between proximity gaps and list-decodability: any proximity gap beyond the list-decoding radius plus $2/n$ fails with exponential counterexamples, providing an alternative proof of impossibility.
**Figure 2: Counterexample Construction Schematic**
*Screenshot location*: Diamond-Gruen paper, page 16, Figure 2 illustrating the ball intersection structure used in the combinatorial proof. This diagram shows how Hamming balls intersect to create the counterexample distributions.
:::warning
**Critical Implication for Deployed Systems**
Many deployed SNARK systems (StarkWare, Polygon Zero, RiscZero, SP1, and others) use FRI-based proximity testing with parameters potentially affected by these impossibility results. **Systems operating with proximity parameters near the capacity bound may have weaker security than their original analyses claimed.** The Diamond-Gruen paper provides concrete security estimates showing that approximately 103 queries are necessary for 100-bit security at rate $\rho = 1/2$ rather than the 100 queries predicted by the capacity conjecture—a 3% degradation that compounds across protocol iterations.
:::
The convergence of three independent disproofs using fundamentally different techniques—reduction to classical coding theory, explicit combinatorial construction, and algebraic polynomial techniques—provides overwhelming evidence that the capacity barrier is genuine. The collective message is unambiguous: proximity gaps for Reed-Solomon codes cannot be pushed beyond the list-decoding capacity bound with polynomial soundness parameters, establishing this as a fundamental limit rather than a technical obstacle.
## Positive breakthroughs: Achieving near-optimal bounds at the Johnson radius
While demolishing optimistic conjectures, the November 2025 papers simultaneously established dramatically improved positive results approaching the Johnson bound, achieving soundness parameters reduced from $O(n^2)$ to $O(n)$ in critical parameter regimes. These improvements have immediate practical impact, enabling smaller field sizes and tighter security analyses for deployed systems.
The **Ben-Sasson et al. paper** (ePrint 2025/2055) provides the strongest positive results through novel algebraic techniques. For the unique decoding radius $\gamma < \delta/2$, the paper proves that soundness parameter $a = O(1)$ suffices for constant proximity loss when $\delta/2 - \gamma$ is constant, representing a factor-$n$ improvement over prior work. Specifically, Corollary 1.4 shows that for $\delta \geq 3\sqrt{2/n}$ and $\gamma \in [\delta/3, \delta/2 - 3/(\delta n)]$, if $a > \gamma \cdot n + 1$, then zero proximity loss is achieved: $\Delta([u_0, u_1], \mathcal{C}^2) \leq \gamma$. For the Johnson radius regime $\gamma < 1 - \sqrt{\rho} - \eta$, Theorem 1.5 establishes soundness parameter $a = O_\rho(n/\eta^5)$ compared to the previous $O(n^2/\eta^7)$ from BCIKS20. **This represents more than an order-of-magnitude improvement in the soundness-to-field-size ratio**, directly reducing proof sizes for practical systems.
The technical innovation enabling these improvements involves dramatically oversizing error-locator polynomials in both Berlekamp-Welch and Guruswami-Sudan decoding algorithms. In the Berlekamp-Welch analysis, the standard approach uses slack $h = 0$, but the paper introduces $h = n - k - 2e - 1$, creating massive overparameterization. This additional slack enables solutions with exponentially better $Z$-degree bounds through a polynomial analogue of Siegel's Lemma. For the Guruswami-Sudan approach, the key insight is that the linear system from BCIKS20 was already sufficiently under-determined to enable solutions with $Z$-degree $O(1)$ rather than $O(n)$ when analyzed explicitly over $\mathbb{F}_q$ rather than the rational function field $\mathbb{F}_q(Z)$. The paper requires multiplicity parameter $m \geq \sqrt{\rho}/(2\eta)$ for radius $\gamma = 1 - \sqrt{\rho} - \eta$, achieving explicit constants with leading term $1/(48(1-\delta)^{3/2})$.
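As a quick way to see where this zero-loss regime applies, the following sketch transcribes the quoted Corollary 1.4 conditions directly (the function and example are ours, hypothetical helpers; the paper's formal statement is authoritative):

```python
import math

def corollary_1_4_zero_loss(n, delta, gamma, a):
    """True iff the Corollary 1.4 conditions quoted above hold, under
    which zero proximity loss is claimed (our transcription, not the
    paper's code)."""
    return (delta >= 3 * math.sqrt(2 / n)                    # distance floor
            and delta / 3 <= gamma <= delta / 2 - 3 / (delta * n)
            and a > gamma * n + 1)                           # soundness parameter

# Example: n = 2**16, delta = 0.5, gamma = 0.24 passes the range checks
# and needs a > 0.24 * 2**16 + 1, i.e. a >= 15730.
```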
**Table 1: Soundness Parameter Improvements Across All Papers**
| Radius Range | BCIKS20 (Prior) | Ben-Sasson 2055 | Goyal-Guruswami 2054 | Bordage 2051 | Improvement Factor |
|--------------|-----------------|-----------------|----------------------|--------------|-------------------|
| $\gamma < \delta/2$ | $a > n$ | $a = O(1)$ | N/A | $\text{err} = O(m^7 n^2/\rho^{3/2})/q$ | $\Theta(n)$ |
| $\gamma < 1-\sqrt{\rho}-\eta$ | $O(n^2/\eta^7)$ | $O(n/\eta^5)$ | $a \cdot q \gtrsim n/\eta$ | N/A | $\Theta(n/\eta^2)$ |
| $\gamma < 1-\rho-\eta$ | Impossible | Impossible | $\text{err} \approx (n/\eta)/q$ (random RS) | N/A | New regime |
*Source*: Compiled from Ben-Sasson et al. Table 1 (page 7), Goyal-Guruswami Theorems 1.2-1.3, Bordage et al. Theorem 1.1.
:::success
**Practical Impact: M31 Field Example**
For the Mersenne prime field $\mathbb{F}_{2^{31}-1}$ commonly used in STARK systems, working over the extension $\mathbb{F}_{q^4}$ with $q \approx 2^{31}$ yields field size $\approx 2^{124}$. With code distance $\delta \approx 0.508$, proximity gaps at $\gamma \approx 0.5$ (within 0.008 of $\delta$) can be achieved with soundness parameter $a = q$ and proximity loss $\varepsilon^* \approx 0.004$. This enables 128-bit security with practical field sizes, directly applicable to production systems.
:::
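The extension-field arithmetic in this example can be checked in a few lines (standard arithmetic, nothing paper-specific):

```python
import math

q = 2**31 - 1                  # Mersenne prime M31
ext_bits = 4 * math.log2(q)    # size of F_{q^4} in bits
print(round(ext_bits, 2))      # 124.0, the ~2^124 field size quoted above
```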
The **Goyal-Guruswami paper** (ePrint 2025/2054) achieves optimal proximity gaps approaching $1 - R - \eta$ for any constant $\eta > 0$ through a fundamentally different approach using the subspace-design property of folded Reed-Solomon codes and univariate multiplicity codes. Their key innovation is the concept of **curve decodability**, abstracting the essential property needed for mutual correlated agreement. A code is $(l, \delta, a, b)$ curve-decodable if, whenever the curve $\sum_{j=0}^l u_j \alpha^j$ through vectors $u_0, \ldots, u_l$ is close to the code for at least $a$ field elements $\alpha$, there exist codewords $c_0, \ldots, c_l$ whose agreement sets with $u_0, \ldots, u_l$ coincide on at least $b$ points. Theorem 3.3 establishes that curve decodability with parameters $(l, \delta, a, t)$ implies $(l, \delta, a/|\mathbb{F}_q|, l/t)$ mutual correlated agreement, providing a clean reduction.
**Figure 3: Curve Decodability Framework Diagram**
*Generate as LaTeX diagram*: This flowchart illustrates the key theorem chain from subspace-design property through curve decodability to mutual correlated agreement.
```latex
\begin{tikzpicture}[node distance=2cm, auto]
\node[draw, rectangle, align=center] (SD) {Subspace-Design\\Property};
\node[draw, rectangle, align=center, below of=SD] (PR) {Oblivious\\Pruning Algorithm};
\node[draw, rectangle, align=center, below of=PR] (CD) {Curve\\Decodability};
\node[draw, rectangle, align=center, right of=CD, xshift=3cm] (LD) {List-Decodability\\Capacity};
\node[draw, rectangle, align=center, below of=CD, xshift=1.5cm] (MCA) {Mutual Correlated\\Agreement};
\node[draw, rectangle, align=center, below of=MCA] (PG) {Proximity\\Gaps};
\draw[->] (SD) -- (PR) node[midway, right] {Thm 4.4};
\draw[->] (PR) -- (CD) node[midway, right] {Thm 4.7};
\draw[->] (CD) -- (MCA) node[midway, left] {Thm 3.3};
\draw[->] (LD) -- (MCA) node[midway, above] {Thm 3.4};
\draw[->] (MCA) -- (PG) node[midway, right] {Definition};
\end{tikzpicture}
```
The curve decodability framework enables leveraging recent breakthroughs on list-decoding capacity by Srivastava (SODA 2025) and Chen-Zhang (STOC 2025), which established optimal $O(1/\varepsilon)$ list sizes for decoding to radius $1-R-\varepsilon$. The connection is made through a novel oblivious pruning algorithm (Algorithm 2) that pins coordinate positions without fixing codeword values, enabling shared randomness across entire curves. For folded Reed-Solomon codes with folding parameter $s > 4t^2$, Corollary 4.10 achieves $(1-R-2/t, \frac{nt+O(t^3)}{|\mathbb{F}|}, 0)$ perfect line mutual correlated agreement. **The field size requirement is only linear in block length**, with folding parameter $s \gtrsim 1/\eta^2$ depending only on the proximity slack, compared to quadratic field-size requirements in prior work, making constructions significantly more practical.
**Figure 4: Oblivious Pruning Algorithm Structure**
*Screenshot location*: Goyal-Guruswami paper (ePrint 2025/2054), pages 17-18, Algorithm 2 showing the pruning procedure that enables curve decodability.
For random Reed-Solomon codes, the Goyal-Guruswami paper establishes proximity gaps to $1-R-\eta$ using the local coordinate-wise linear (LCL) properties framework. They introduce V-decodability as a local property fitting the LCL framework, prove equivalence between V-decodability and curve-decodability (with parameter loss), and apply threshold theorems to transfer results from subspace-design codes to random codes. Theorem 1.3 shows that random Reed-Solomon codes achieve $(1-R-\eta, \text{err})$ proximity gaps with error on the order of $(n/\eta + 1/\eta^5)/q$ with high probability. This demonstrates that while deterministically constructed Reed-Solomon codes face barriers beyond the Johnson radius, **random evaluation domains enable approaching capacity**.
The **Bordage et al. paper** (ePrint 2025/2051) from EPFL complements these results by providing a complete characterization of polynomial generators. The main theorem proves that **all polynomial generators guarantee mutual correlated agreement for every linear code**, capturing all previously studied generators as special cases and achieving better error bounds. For Reed-Solomon codes specifically, the paper proves that all polynomial generators satisfy MCA up to the Johnson bound $1 - (1+\frac{1}{2m}) \cdot \sqrt{\rho}$, resolving an open question from Arnon-Chiesa-Fenzi-Yogev (Eurocrypt 2025). The technical approach involves developing an MCA toolbox with composition operations—sub-generators, linear transformations, tensor products, and matrix-vector composition—enabling modular analysis of complex generators.
**Figure 5: MCA Toolbox Composition Rules**
*Generate as LaTeX diagram*: Visual representation of the four composition operations and their error accumulation properties.
```latex
\begin{tikzpicture}[scale=0.8]
% Sub-generator
\node[anchor=north] at (0,0) {Sub-Generator};
\draw[thick] (0,-0.5) -- (0,-2);
\draw[thick] (-0.5,-1.25) -- (0.5,-1.25);
\node[right] at (0.6,-1.25) {$\varepsilon_{MCA}$ unchanged};
% Linear transformation
\node[anchor=north] at (4,0) {Linear Transform};
\draw[thick, ->] (3.5,-1) -- (4.5,-1.5);
\node[right] at (4.6,-1.5) {$\varepsilon_{MCA}$ unchanged};
% Tensor product
\node[anchor=north] at (8,0) {Tensor Product};
\draw[thick] (7.5,-0.5) grid (8.5,-1.5);
\node[right] at (8.6,-1) {$\varepsilon_1 + \varepsilon_2$};
% Matrix-vector composition
\node[anchor=north] at (12,0) {Matrix-Vector};
\draw[thick] (11.5,-0.5) rectangle (12.5,-1.5);
\draw[thick, ->] (12,-1.5) -- (12,-2);
\node[right] at (12.6,-1.75) {$m \cdot \varepsilon_{MCA}$};
\end{tikzpicture}
```
## Comparative analysis: Methodologies and parameter regimes
The methodological diversity across these papers reflects fundamentally different approaches to proximity gap analysis. The **algebraic decoding school** (Ben-Sasson et al.) treats Berlekamp-Welch and Guruswami-Sudan as analytical tools rather than algorithms, working over rational function fields $\mathbb{F}_q(Z)$ and analyzing $Z$-degrees of solutions to interpolation constraints. The breakthrough involves recognizing that under-determined linear systems admit solutions with dramatically lower degrees when slack is added, formalized through a polynomial Siegel's Lemma. This approach achieves tightest constants and works directly with standard Reed-Solomon codes over arbitrary fields.
The **coding-theoretic school** (Goyal-Guruswami) abstracts essential properties through the curve decodability framework, decoupling proximity gap analysis from specific decoding algorithms. This enables leveraging the full power of recent list-decoding capacity breakthroughs without reproving them in the proximity testing context. The approach naturally extends to subspace-design codes beyond Reed-Solomon, including folded Reed-Solomon codes and multiplicity codes. **The local properties framework** connects to threshold theorems, automatically transferring results to random codes. The pruning algorithm innovation—making it oblivious to the codeword being decoded—enables running the algorithm with shared randomness across entire algebraic curves, a key insight enabling mutual correlated agreement proofs.
The **generator-theoretic school** (Bordage et al.) focuses on characterizing which generators preserve distance with mutual correlated agreement. Rather than analyzing specific codes, this approach studies how the choice of generator affects distance preservation properties. The MCA toolbox provides composition rules: if generators $G$ and $G'$ have MCA, then their tensor product $G \otimes G'$ has MCA with errors adding linearly. Every polynomial generator decomposes as a tensor product of univariate powers generators followed by a linear transformation, enabling reduction to the base case. For Reed-Solomon codes, the paper exploits the connection between MCA and list-decoding commutativity, applying Johnson bound results to establish MCA up to list-decoding capacity.
**Table 2: Methodological Comparison and Parameter Regimes**
| Paper | Proximity Radius | Soundness Parameter | Field Size | Technique Category |
|-------|-----------------|---------------------|------------|-------------------|
| Ben-Sasson 2055 (pos) | $\gamma < 1-\sqrt{\rho}-\eta$ | $a = O(n/\eta^5)$ | $q > n$ | Algebraic decoding |
| Goyal-Guruswami 2054 | $\gamma < 1-\rho-\eta$ | $a \cdot q \gtrsim n/\eta$ | $s \gtrsim 1/\eta^2$ (FRS) | Subspace designs |
| Bordage 2051 | $\gamma < 1-(1+\frac{1}{2m})\sqrt{\rho}$ | $O(m^7 n^2/\rho^{3/2})/q$ | Arbitrary $q$ | MCA composition |
| Ben-Sasson 2055 (neg) | $\gamma = 1-4\lambda_\tau$ | $a \geq n^{\tau-o(1)}$ req'd | Char 2 fields | Subspace polynomials |
| Diamond-Gruen 2010 | $\gamma \to 1$ | $a > n^{c^*}/q$ fails | $q = n^{c^*+1}$ | Combinatorial balls |
| Crites-Stewart 2046 | Beyond LD capacity | Impossible | N/A | Reduction to LD |
:::spoiler Technical Comparison: Unique Decoding Regime
In the unique decoding regime $\gamma < \delta/2$, the three positive papers achieve:
**Ben-Sasson et al.:** For $\gamma \in [\delta/3, \delta/2 - 1/n]$ and $a \geq \max\{(\delta/\gamma - 1) \cdot \frac{1}{\delta - 2\gamma}, 1 + \gamma/\varepsilon^*\}$, proximity loss is $(1 + 1/(a-1)) \cdot \gamma - \gamma$. For $a > \gamma n + 1$, zero proximity loss when $\delta \geq 3\sqrt{2/n}$.
**Goyal-Guruswami:** Subspace-design codes satisfy $(l, 1-\tau(r)-\varepsilon, a, \frac{\varepsilon}{r+\varepsilon} \cdot a)$ curve decodability for arbitrary positive integers $r, l, a$ and $\varepsilon \geq (l+1)/r$. Combined with list-decodability, achieves MCA with error $(a/q) \cdot (l/t)$ where $t$ relates to agreement threshold.
**Bordage et al.:** For polynomial generator with $d_i = \max_j \{\deg_{X_i}(P_j)\}$, MCA error is $n \cdot \gamma \cdot (\sum_i d_i/|S_i|)$ when $\gamma \leq \delta_C/(\max_i d_i + 2)$. For the affine space generator, this gives $\frac{n \cdot \delta_C}{3|\mathbb{F}|}$, a factor-3 improvement over prior work.
All three approaches converge on $O(n)$ soundness parameter in this regime with different constants and structure assumptions.
:::
The **impossibility techniques** diverge even more dramatically. Diamond-Gruen's combinatorial approach constructs explicit bad distributions over $\mathbb{F}_q^n$ where proximity testing fails, using novel sharp estimates for Hamming ball volumes derived from binomial distribution analysis. Their Theorem 3.10 provides volume estimates with explicit error terms $O(1/z, z/q^2, n/(q(n-z)), (n-z)^2z/n^2)$, enabling precise asymptotic analysis in regimes where $n-z$ and $z$ grow at different polynomial rates. The counterexample families have $k = o(n)$ and $h = n-k-z = o(n)$, suggesting the conjecture fails specifically when approaching degeneracy.
Ben-Sasson et al.'s algebraic approach constructs counterexamples using subspace polynomials over characteristic-2 fields. For $\lambda_\tau = 2^{-(\tau+2)}$, $\delta = 1-\lambda_\tau$, and $\gamma = 1-4\lambda_\tau$, they exhibit functions $f,g: D \to \mathbb{F}_q$ where $|\{z \in \mathbb{F}_q \mid \Delta(f+zg,\mathcal{C}) \leq \gamma\}| \geq n^{\tau(1-\epsilon)}$ yet $\Delta([f,g], \mathcal{C}^2) \geq 2/3 \delta + 1/3 \gamma$. **This establishes that the $n^\tau$-bounded conjecture fails for all constants $\tau$**, not just large $\tau$. The construction exploits multiplicative group structure and subspace polynomial evaluation properties specific to characteristic-2 fields, though the authors conjecture extensions to constant characteristic.
Crites-Stewart's reduction approach proves impossibility without constructing explicit counterexamples. By establishing that correlated agreement with small error implies list-decodability, they transfer impossibility from the well-understood list-decoding domain. The reduction is information-theoretic and works over arbitrary fields, providing the most general impossibility result. The key lemma shows that if proximity gaps hold beyond list-decoding capacity with soundness parameter $a$, then an efficient list-decoding algorithm exists with list size $O(a)$, contradicting known list-decoding bounds.
## Performance implications and security recommendations
The November 2025 results necessitate immediate re-evaluation of security parameters for deployed SNARK systems. The Diamond-Gruen paper provides the most directly actionable guidance through their **random words cutoff**:
$$\eta_{\text{cutoff}} \approx \frac{\log_2(e/\rho) \cdot \rho}{\log_2(q)}$$
For typical STARK parameters with rate $\rho = 1/2$ and field size $\log_2(q) = 128$, this yields $\eta \approx 0.0043$, meaning the proximity parameter should satisfy $\gamma < 1 - \rho - 0.0043 \approx 0.4957$ rather than approaching the capacity bound $0.5$. **Operating within 0.4% of capacity is provably unsafe**; the random words analysis shows approximately 103 queries are necessary for 100-bit security rather than the optimistic 100 queries predicted by the capacity conjecture.
:::danger
**Immediate Action Items for STARK Developers**
1. **Audit current parameter choices**: Check if proximity parameters $\gamma$ exceed $1 - \sqrt{\rho}$ (Johnson bound). Systems operating between Johnson bound and capacity are in unproven territory with potential security degradation.
2. **Conservative parameter selection**: Use proven bounds from Ben-Sasson et al. 2055 Theorem 1.5: for $\gamma < 1-\sqrt{\rho}-\eta$, ensure $a > \frac{2(m+1/2)^5 + 3(m+1/2)\gamma\rho}{3\rho^{3/2}} \cdot n$ where $m = \max(\lceil\sqrt{\rho}/(2\eta)\rceil, 3)$; a checker sketch follows this box.
3. **Field size reassessment**: With improved bounds, field size $q = \Omega(n/\beta)$ suffices rather than $\Omega(n^2/\beta)$, potentially enabling smaller fields. However, ensure $a \cdot (\text{err}/q) < 2^{-\lambda}$ for security parameter $\lambda$.
4. **Rate restrictions**: Consider using codes with rate bounded below by constant $\rho_0 > 0$ to avoid the degeneracy regime where counterexamples exist.
5. **Random evaluation domains**: For applications tolerating randomized setup, random Reed-Solomon codes (Goyal-Guruswami) achieve proximity gaps to $1-\rho-\eta$, significantly better than deterministic constructions.
:::
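To make item 2 concrete, here is the checker sketch referenced in that item, evaluating the quoted lower bound on $a$ (variable names and the rounding of $m$ are our reading of the formula; verify against the paper before relying on it):

```python
import math

def min_soundness_param(n, rho, eta):
    """Lower bound on the soundness parameter `a` for
    gamma < 1 - sqrt(rho) - eta, transcribing the Theorem 1.5-style
    formula quoted in item 2 (hypothetical helper)."""
    gamma = 1 - math.sqrt(rho) - eta
    m = max(math.ceil(math.sqrt(rho) / (2 * eta)), 3)
    mh = m + 0.5
    return (2 * mh**5 + 3 * mh * gamma * rho) / (3 * rho**1.5) * n

# Example: n = 2**20, rho = 1/2, eta = 0.05 gives m = 8 and a ≈ 8.8e10,
# showing how steeply the m^5 term penalizes small proximity slack.
```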
The performance metrics reveal **sharp phase transitions** in soundness parameter requirements:
$$
a(\gamma) = \begin{cases}
O(1) & \gamma < \delta/2 - \Omega(1) \\
O(n/\eta^5) & \gamma < 1-\sqrt{\rho}-\eta \\
\Omega(n^2/\eta^7) & \gamma \approx 1-\sqrt{\rho} \\
\Omega(n^\tau) & \gamma > 1-\sqrt{\rho} \text{ (required for small loss)}
\end{cases}
$$
This nonlinearity has practical implications. **Moving from $\gamma = 0.26$ to $\gamma = 0.28$ at rate $\rho = 0.5$** (Johnson bound $\approx 0.293$) leaves the soundness requirement in the same $O(n)$ regime, but **crossing $\gamma \approx 0.293$ dramatically increases it**. The Goyal-Guruswami results suggest this transition is fundamental, tied to the shift from unique decoding to list decoding regimes.
**Table 3: Concrete Parameter Selection for 100-bit Query Soundness (128-bit Field)**
| Rate $\rho$ | Safe Proximity $\gamma$ | Queries (Conservative) | Field Size $\log_2(q)$ | Regime |
|------------|------------------------|----------------------|----------------------|--------|
| 1/2 | 0.25 | 241 | 128 | Unique decoding |
| 1/2 | 0.29 (Johnson) | 200 | 128 | Johnson bound |
| 1/4 | 0.45 | 147 | 128 | Below Johnson |
| 1/4 | 0.50 (Johnson) | 100 | 128 | Johnson bound |
| 1/8 | 0.60 | 120 | 128 | Below Johnson |
| 1/8 | 0.65 (Johnson) | 67 | 128 | Johnson bound |
*Source*: Derived from Ben-Sasson et al. 2055 Theorem 1.5 and Diamond-Gruen random words analysis.
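Table 3's query counts can be roughly reproduced with the standard per-query heuristic that a $\gamma$-far word survives $s$ queries with probability $(1-\gamma)^s$; this back-of-envelope model is ours for orientation, not the papers' exact soundness expressions (which add lower-order terms, and some rows carry extra conservative slack):

```python
import math

def queries_needed(gamma, security_bits):
    """Queries s with (1 - gamma)^s <= 2^-security_bits, the standard
    FRI query-phase heuristic (our simplification)."""
    return math.ceil(security_bits / -math.log2(1 - gamma))

print(queries_needed(0.25, 100))  # 241, matching Table 3's first row
print(queries_needed(0.50, 100))  # 100, matching the rho = 1/4 Johnson row
```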
The **folded Reed-Solomon advantage** (Goyal-Guruswami Corollary 4.10) becomes apparent: with folding parameter $s > 4t^2$, perfect line MCA is achieved with error $\frac{nt + O(t^3)}{|\mathbb{F}|}$ at proximity $1-R-2/t$. For $t = 50$ (yielding $\gamma \approx 1-R-0.04$), $n = 2^{20}$, and $|\mathbb{F}| = 2^{64}$, the error is approximately $2^{20} \cdot 50 / 2^{64} \approx 2^{-38}$, providing 38 bits of security per query. With $s = 10000$ (requiring field extension dimension 10000), this approaches capacity while maintaining provable security, though implementation complexity increases substantially.
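The arithmetic in this paragraph checks out in a few lines (we take the constant hidden in the $O(t^3)$ term to be 1 for illustration):

```python
import math

n, t, field_bits = 2**20, 50, 64
err = (n * t + t**3) / 2**field_bits   # (nt + O(t^3)) / |F|, O-constant ~ 1
print(round(-math.log2(err), 1))       # 38.4, i.e. the ~2^-38 quoted above
```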
**Figure 6: Security Parameter Phase Transitions**
*Generate as LaTeX plot*: Visualizing the relationship between proximity parameter $\gamma$ and required soundness parameter $a$ across different regimes.
```latex
\begin{tikzpicture}
\begin{axis}[
xlabel={Proximity Parameter $\gamma$},
ylabel={$\log_2(a)$ (Soundness)},
width=12cm,
height=8cm,
xmin=0, xmax=0.8,
ymin=0, ymax=12,
grid=major,
legend pos=north west
]
% Unique decoding (constant)
\addplot[blue, thick, domain=0:0.25] {1};
\addlegendentry{Unique decoding: $O(1)$}
% Johnson radius (linear)
\addplot[green, thick, domain=0.25:0.5] {6 + 8*(x-0.25)};
\addlegendentry{Johnson radius: $O(n)$}
% Near Johnson (quadratic)
\addplot[orange, thick, domain=0.5:0.6] {8 + 20*(x-0.5)};
\addlegendentry{Near Johnson: $O(n^2)$}
% Beyond Johnson (impossible)
\addplot[red, thick, dashed, domain=0.6:0.8] {10};
\addlegendentry{Beyond Johnson: Impossible}
% Critical thresholds
\draw[black, dashed] (axis cs:0.25,0) -- (axis cs:0.25,12);
\node[above] at (axis cs:0.25,11) {$\delta/2$};
\draw[black, dashed] (axis cs:0.5,0) -- (axis cs:0.5,12);
\node[above] at (axis cs:0.5,11) {Johnson};
\end{axis}
\end{tikzpicture}
```
## Deterministic algorithms: Orthogonal progress on computational efficiency
The **Chatterjee-Harsha-Kumar paper** (ECCC TR25-170) advances a completely orthogonal dimension by removing randomness from Reed-Solomon list-decoding algorithms. While the proximity gap papers focus on information-theoretic soundness parameters, the TIFR work addresses computational efficiency and determinism. The achievement is striking: **deterministic list-decoding from agreement $\sqrt{(k-1)n}$ (Johnson bound) in time $\text{poly}(n, \log |\mathbb{F}|)$** for arbitrary finite fields, removing dependence on field characteristic that plagued all previous deterministic algorithms.
The technical innovation centers on deterministic factorization of the Guruswami-Sudan interpolation polynomial $Q(X,Y)$. General deterministic factorization over finite fields remains open even for univariate degree-2 polynomials without unproven number-theoretic conjectures, making this achievement remarkable. The key insight is that **the received word provides additional structural information** constraining the factorization problem sufficiently to enable deterministic solution. The algorithm uses Hensel lifting with a crucial modification: rather than requiring randomized univariate factorization as preprocessing (the standard approach), it exploits multiplicity conditions at interpolation points to lift factorizations deterministically. Newton iteration (Lemma 2.1) performs the lifting when $Q^{(0,1)}(\alpha_j, \beta_j) \neq 0$, with Gauss' lemma ensuring irreducible factors are monic in $Y$.
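The Newton-lifting mechanism can be illustrated in miniature with its classical univariate analogue over $\mathbb{Z}/p^{2^k}$ (this sketch is our illustration of the lifting step, not the paper's bivariate algorithm; function names are hypothetical):

```python
def newton_lift(f, df, r, p, rounds):
    """Lift a simple root r of f modulo p to a root modulo p^(2^rounds)
    by Newton iteration, the same precision-doubling mechanism as the
    lifting lemma discussed above. Requires df(r) to be a unit mod p,
    the analogue of Q^(0,1)(alpha_j, beta_j) != 0."""
    mod = p
    for _ in range(rounds):
        mod *= mod
        r = (r - f(r) * pow(df(r), -1, mod)) % mod  # Newton step
    return r, mod

# Lift the root 1 of x^2 - 7 (mod 3) to a square root of 7 mod 3^8 = 6561.
r, mod = newton_lift(lambda x: x * x - 7, lambda x: 2 * x, 1, 3, 3)
assert (r * r - 7) % mod == 0
```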
The **derandomization has direct cryptographic implications**. Proximity testing in FRI-based SNARKs involves list-decoding as a subroutine during soundness analysis. Deterministic algorithms provide predictable runtime without worst-case randomness failures (critical for blockchain validity proofs with strict time bounds), hardware efficiency through better circuit implementations for verifiable computation, simplicity without high-quality randomness sources, and easier formal verification in proof assistants.
:::info
**Connection to Proximity Testing**
The relationship between list-decoding and proximity testing operates at two levels. **Information-theoretically**, the Crites-Stewart reduction shows proximity gaps beyond list-decoding capacity imply impossible list-decoding algorithms. **Computationally**, efficient proximity testing requires efficient list-decoding: the soundness analysis constructs a contradiction by extracting false witnesses through list-decoding when a malicious prover succeeds. The Chatterjee et al. result ensures this extraction can be done deterministically, strengthening knowledge soundness arguments.
:::
However, the deterministic algorithm achieves $\text{poly}(n, \log |\mathbb{F}|)$ time but not the $\tilde{O}(n)$ near-linear time of randomized algorithms like Alekhnovich [2005]. The natural open question is whether near-linear deterministic list-decoding exists. Given that even $O(n \log n)$ deterministic factorization for univariate polynomials over finite fields is open, this appears highly challenging. The Chatterjee et al. result may represent the state-of-the-art deterministic complexity for the foreseeable future, with the tradeoff between determinism and near-linear time remaining a fundamental tension.
The paper also raises questions about extending beyond the Johnson bound. Can deterministic algorithms reach list-decoding capacity for codes like folded Reed-Solomon (Guruswami-Rudra 2008) or multiplicity codes (Guruswami-Wang 2013)? The Hensel lifting framework depends critically on the structure of interpolation constraints, which differs substantially for these advanced codes. The recent list-size breakthroughs (Srivastava SODA 2025, Chen-Zhang STOC 2025) establishing optimal $O(1/\varepsilon)$ list sizes use probabilistic arguments. Derandomizing these capacity-achieving decoders would require new techniques beyond the current Hensel lifting approach.
## Research group dynamics and concurrent development
The chronology of submissions reveals fascinating research dynamics. **Five of the six papers appeared within a five-day window** (November 5-9, 2025), with the Diamond-Gruen counterexample paper preceding them by several days, suggesting independent concurrent work rather than sequential responses. However, the Ben-Sasson et al. paper (submitted November 6, approved November 9) explicitly responds to concurrent negative results, indicating rapid adaptation.
**Figure 7: Research Timeline and Group Network**
*Generate as LaTeX diagram*: Network visualization showing research groups, submission dates, and collaboration patterns.
```latex
\begin{tikzpicture}[
node distance=1.5cm,
group/.style={circle, draw, minimum size=1cm, align=center},
positive/.style={fill=green!20},
negative/.style={fill=red!20},
algorithm/.style={fill=blue!20}
]
% Research groups
\node[group, positive] (EPFL) at (0,4) {EPFL\\2051};
\node[group, positive] (MIT) at (3,4) {MIT/\\Berkeley\\2054};
\node[group, positive] (SW) at (0,2) {StarkWare\\2055+};
\node[group, negative] (WEB3) at (3,2) {Web3\\2046};
\node[group, negative] (IND) at (0,0) {Independent\\2010};
\node[group, algorithm] (TIFR) at (3,0) {TIFR\\170};
% Connections (prior work / citations)
\draw[->] (SW) -- (EPFL) node[midway, right] {STIR};
\draw[->] (MIT) -- (SW) node[midway, above] {List-decode};
\draw[<->, dashed] (SW) -- (WEB3) node[midway, above] {Concurrent};
\draw[<->, dashed] (IND) -- (SW) node[midway, right] {Concurrent};
% Timeline
\node[anchor=west] at (-2, -1.5) {Timeline:};
\draw[thick, ->] (-2,-2) -- (6,-2);
\node[below] at (0,-2) {Nov 1};
\node[below] at (3,-2) {Nov 6};
\node[below] at (5,-2) {Nov 9};
\end{tikzpicture}
```
The **research group landscape** reflects both collaboration and competition. The **EPFL Cryptography Group** (Bordage, Chiesa, Guan, Manzur) focuses on systematic characterization of generators and the MCA framework, with Chiesa co-authoring STIR and WHIR protocols directly affected by proximity gap results. The **MIT/Berkeley Coding Theory** group (Goyal, Guruswami) brings deep coding theory expertise, connecting proximity testing to classical results on subspace designs and list-decoding capacity. The **StarkWare/Toronto** collaboration (Ben-Sasson, Carmon, Häböck, Kopparty, Saraf) represents industry-academic partnership with direct stakes in deployed systems processing billions in blockchain transactions. The **Web3 Foundation** team (Crites, Stewart) focuses on theoretical foundations with emphasis on clean reductions. The **Independent Researchers** (Diamond, Gruen) bring fresh perspectives unconstrained by established agendas. The **TIFR Mumbai** group (Chatterjee, Harsha, Kumar) works orthogonally on derandomization, representing India's growing research ecosystem.
:::spoiler Timeline and Interaction Patterns
- **October 28, 2025**: Diamond-Gruen submit counterexample paper
- **November 5, 2025**: Crites-Stewart submit disproof via reduction
- **November 6, 2025**: Ben-Sasson et al. submit comprehensive analysis
- **November 6, 2025**: Bordage et al. submit generator characterization
- **November 6, 2025**: Goyal-Guruswami submit optimal proximity gaps
- **November 7, 2025**: Chatterjee et al. submit deterministic algorithm

The clustering suggests parallel independent work possibly catalyzed by the **Ethereum Foundation Proximity Prize** announcement. Ben-Sasson et al. explicitly reference concurrent negative results in their paper, indicating some information exchange during the submission process. The rapid convergence on impossibility from three independent directions (reduction, combinatorial construction, algebraic construction) strongly suggests the result was "ready to fall"—the mathematical structure was sufficiently constrained that multiple approaches converged on the same truth simultaneously.
:::
The group dynamics also reflect **institutional incentives**. The StarkWare-affiliated paper provides both impossibility results (limiting competitors' optimistic claims) and tight positive bounds (supporting their parameter choices). The EPFL paper's focus on generators that work for any code provides robustness against code-specific attacks. The Web3 Foundation paper's reduction approach provides theoretical foundations for any proximity-based protocol, supporting their ecosystem broadly. The independent researchers' concrete security recommendations without institutional affiliation add credibility to conservative parameter choices.
## Sum-check protocols and broader SNARK architecture implications
While the six papers focus on proximity testing for Reed-Solomon codes, the implications extend throughout the SNARK stack through multiple connection points. **Sum-check protocols**, interactive proofs for arithmetic circuit satisfiability, interact with proximity testing in the broader proof system architecture.
Modern SNARKs follow a standard pipeline: arithmetic circuit → polynomial commitment → interactive oracle proof → Fiat-Shamir compilation. The proximity gaps results affect the polynomial commitment and IOP layers, with cascading effects. Consider the **FRI protocol** (Ben-Sasson et al. ICALP 2018): the prover commits to a degree-$d$ polynomial $f$ over domain $D$ with $|D| = n$ via Reed-Solomon encoding. To convince the verifier that $\deg(f) < d$, FRI recursively folds the polynomial: $f(x) = f_0(x^2) + x \cdot f_1(x^2)$, with the prover committing to the random combination $f_0 + \alpha f_1$ on a half-size domain for a verifier challenge $\alpha$. **Each folding round requires proximity testing** to ensure consistency. The soundness parameter $a$ for proximity testing directly determines the number of queries required per round, with security amplifying across $\log(n)$ rounds.
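To make the folding step concrete, here is a minimal single-round sketch over a toy prime field (illustrative only: real deployments use FFT-friendly fields, Merkle commitments, and $\log n$ rounds; all names here are ours):

```python
P = 97  # toy prime; F_97 has a multiplicative subgroup of order 8

def fri_fold(f_evals, domain, alpha):
    """One FRI folding round: from evaluations of f on `domain` (arranged
    so domain[i + half] == -domain[i] mod P), produce evaluations of
    f'(y) = f0(y) + alpha * f1(y) on the squared, half-size domain,
    where f(x) = f0(x^2) + x * f1(x^2)."""
    half = len(domain) // 2
    inv2 = pow(2, P - 2, P)
    out = []
    for i in range(half):
        x, fx, f_neg = domain[i], f_evals[i], f_evals[i + half]
        f0 = (fx + f_neg) * inv2 % P                     # even part f0(x^2)
        f1 = (fx - f_neg) * inv2 * pow(x, P - 2, P) % P  # odd part f1(x^2)
        out.append((f0 + alpha * f1) % P)
    return out, [x * x % P for x in domain[:half]]

# Toy run: f(x) = 5 + 2x + 7x^2 + x^3 on the order-8 subgroup of F_97.
omega = next(w for w in range(2, P) if pow(w, 8, P) == 1 and pow(w, 4, P) != 1)
domain = [pow(omega, i, P) for i in range(8)]   # omega^4 = -1, as required
coeffs = [5, 2, 7, 1]
evals = [sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P for x in domain]
folded, half_domain = fri_fold(evals, domain, alpha=11)
# `folded` holds evaluations of (5 + 7y) + 11*(2 + y) = 27 + 18y on half_domain.
```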
The **DEEP-FRI extension** (Ben-Sasson et al. ITCS 2020) improves soundness by testing quotient polynomials: rather than directly testing proximity of $f$ to degree-$d$ polynomials, test whether $(f(x) - f(z))/(x-z)$ has degree $d-1$ for random $z$. This quotienting provides additional randomness, improving soundness. However, the analysis relies on the conjecture that proximity testing works up to capacity, which is now refuted. **Systems using DEEP-FRI need to re-analyze soundness** using the proven Johnson bound results from the November 2025 papers rather than conjectured capacity bounds.
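The quotienting itself is a pointwise operation on the committed evaluations, e.g. (a toy-field sketch with hypothetical names; real systems batch many such quotients):

```python
def deep_quotient(f_evals, domain, z, f_z, p=97):
    """Evaluations of the DEEP quotient (f(x) - f(z)) / (x - z) on
    `domain`, for an out-of-domain point z and claimed value f_z.
    If the claim f_z is false or deg f is too large, the quotient is
    far from low-degree, which subsequent proximity rounds detect."""
    return [(fx - f_z) * pow((x - z) % p, p - 2, p) % p
            for fx, x in zip(f_evals, domain)]
```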
The **STIR protocol** (Arnon et al. CRYPTO 2024) uses shifted Reed-Solomon codes to improve rate: rather than encoding $f$ of degree $d$ over domain of size $n > d$, encode over domain of size $n \approx d$ (rate approaching 1) by testing proximity of $f(x) - f(\omega x)$ where $\omega$ is a shift. This enables working at higher rates, reducing proof sizes. However, STIR's soundness analysis assumes mutual correlated agreement up to the Johnson bound. The Bordage et al. result confirming MCA for all polynomial generators up to Johnson bound provides rigorous justification for STIR's parameter choices, while the impossibility results show that pushing STIR beyond Johnson bound is provably impossible.
:::warning
**WHIR Protocol Status**
The **WHIR protocol** (Arnon et al. 2024/1586) explicitly conjectured mutual correlated agreement up to capacity to achieve super-fast verification. The Crites-Stewart paper disproves this conjecture specifically. WHIR's claimed performance metrics—verification time $O(\log^2 n)$ and proof size $O(\sqrt{n})$—assumed proximity testing at radius $1-\rho-\eta$ with soundness error $O(1/n)$. **With the capacity conjecture refuted, WHIR either needs to:**
1. Accept larger soundness error ($O(n)/q$ rather than $O(1)/q$), requiring larger field size
2. Reduce the proximity parameter to Johnson bound, increasing proof size
3. Increase the number of queries, increasing verification time
This represents a fundamental limitation on achievable verification complexity, not merely a parameter tuning issue.
:::
The connection to **sum-check protocols** operates at the constraint system level. Modern SNARKs encode computation as low-degree polynomials satisfying constraints. The **Spartan protocol** (Setty 2020) uses sum-check to verify R1CS constraint satisfaction: given sparse polynomials $A, B, C$ encoding circuit wiring and a witness polynomial $W$, verify $\sum_{x \in \{0,1\}^m} \mathrm{eq}(\tau, x) \cdot \big(\bar{A}(x) \cdot \bar{B}(x) - \bar{C}(x)\big) = 0$, where $\bar{A}(x) = \sum_y A(x,y) \cdot W(y)$ and $\bar{B}, \bar{C}$ are defined analogously. The prover runs a sum-check protocol, with the verifier reducing the claim to evaluation queries on $A$, $B$, $C$, and $W$ at random points. **These evaluations must be verified via polynomial commitments**, bringing us back to proximity testing.
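A minimal sum-check over the Boolean hypercube looks as follows; this is our generic sketch of the protocol's mechanics (multilinear case, honest prover, local randomness standing in for Fiat-Shamir), not Spartan's implementation:

```python
import random

P = 2**31 - 1  # toy prime modulus

def fix_first_var(table, r):
    """Partially evaluate a multilinear evaluation table at x_1 = r.
    table[i] = g(bits of i), least-significant bit = x_1."""
    return [(table[2*i] + r * (table[2*i + 1] - table[2*i])) % P
            for i in range(len(table) // 2)]

def sumcheck(table):
    """Sum-check for S = sum of g over {0,1}^m, with g multilinear and
    given by its evaluation table (length 2^m). Returns the round
    transcript, the challenges, and g(r_1, ..., r_m) for the verifier's
    final oracle check."""
    m = len(table).bit_length() - 1
    claim = sum(table) % P
    transcript, challenges = [], []
    for _ in range(m):
        half = len(table) // 2
        s0 = sum(table[2*i] for i in range(half)) % P      # round poly s(0)
        s1 = sum(table[2*i + 1] for i in range(half)) % P  # round poly s(1)
        assert (s0 + s1) % P == claim                      # verifier's check
        r = random.randrange(P)                            # challenge
        claim = (s0 + r * (s1 - s0)) % P                   # s(r): next claim
        transcript.append((s0, s1)); challenges.append(r)
        table = fix_first_var(table, r)
    return transcript, challenges, table[0]  # table[0] == g(challenges)

g = [random.randrange(P) for _ in range(8)]  # g on {0,1}^3
_, rs, g_at_r = sumcheck(g)                  # verifier would query g at rs
```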
The **Lasso lookup argument** (Setty et al. CCS 2023) uses sum-check for memory lookups in zkVMs, with polynomial commitments to massive lookup tables. The Basefold polynomial commitment (Zeilberger et al. CRYPTO 2024) provides $O(\log^2 n)$ proof size using Reed-Solomon proximity testing with folding. **The proximity gap results directly affect Basefold soundness**, as each folding round requires proximity testing. The November 2025 results suggest Basefold's parameters are conservative (operating well below Johnson bound), validating current deployments but limiting optimization potential.
The **multivariate sum-check** in protocols like HyperPlonk (Chen et al. 2022) and Gemini (Bootle et al. 2022) uses tensor product structure. The Bordage et al. result on tensor products of generators is directly applicable: if generators $G$ and $G'$ have MCA with errors $\varepsilon_G$ and $\varepsilon_{G'}$, then $G \otimes G'$ has MCA with error $\varepsilon_G + \varepsilon_{G'}$. This provides rigorous justification for multivariate protocols' soundness analysis. However, the tensor product error accumulation means that high-dimensional multivariate protocols accumulate soundness error linearly with dimension, potentially requiring larger field sizes than anticipated.
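A quick illustration of this linear accumulation (the additivity rule is from the paper as quoted; the numbers are our toy arithmetic):

```python
import math

def tensor_mca_security_bits(per_factor_error, dims):
    """Security bits of a d-fold tensor product under the linear rule
    eps_total = dims * per_factor_error."""
    return -math.log2(per_factor_error * dims)

print(round(tensor_mca_security_bits(2**-100, 20), 1))  # 95.7: ~4.3 bits lost
```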
**Figure 8: SNARK Architecture Dependency Graph**
*Generate as LaTeX diagram*: Showing how proximity gaps affect multiple layers of SNARK systems.
```latex
\begin{tikzpicture}[
node distance=1.5cm,
box/.style={rectangle, draw, minimum width=3cm, minimum height=0.8cm, align=center}
]
% Layers
\node[box] (circuit) {Arithmetic Circuit};
\node[box, below of=circuit] (polycom) {Polynomial\\Commitment};
\node[box, below of=polycom] (iop) {Interactive Oracle\\Proof (IOP)};
\node[box, below of=iop] (fri) {FRI/DEEP-FRI\\Proximity Testing};
\node[box, below of=fri] (rs) {Reed-Solomon\\Proximity Gaps};
% Side protocols
\node[box, right of=polycom, xshift=3cm] (stir) {STIR};
\node[box, right of=iop, xshift=3cm] (whir) {WHIR};
\node[box, right of=circuit, xshift=3cm] (sumcheck) {Sum-Check\\Protocol};
% Dependencies
\draw[->] (circuit) -- (polycom);
\draw[->] (polycom) -- (iop);
\draw[->] (iop) -- (fri);
\draw[->] (fri) -- (rs);
\draw[->] (stir) -- (polycom);
\draw[->] (whir) -- (iop);
\draw[->] (sumcheck) -- (polycom);
% Impact annotations
\node[right of=rs, xshift=4cm, text width=3cm, align=left] {
\textbf{Nov 2025 Impact:}\\
- Johnson bound limit\\
- $O(n)$ soundness\\
- 3\% query increase
};
\end{tikzpicture}
```
## Open questions and future research directions
The November 2025 breakthrough simultaneously closed long-standing questions and opened new research frontiers. The most pressing open problems span theoretical impossibility results, algorithmic improvements, and practical deployments:
**Theoretical Boundaries:**
1. **Exact List-Decoding Capacity Characterization**: The Goyal-Guruswami paper achieves proximity gaps to $1-R-\eta$ for random Reed-Solomon codes, while the Crites-Stewart reduction shows impossibility beyond list-decoding capacity. For standard Reed-Solomon codes with deterministic evaluation domains, what is the **exact achievable proximity parameter**? Is it precisely the Johnson bound, or does a small gap exist? The Ben-Sasson et al. negative results suggest phase transitions at specific thresholds, but whether these are sharp or approximate remains open.
2. **Rate-Shortfall Tradeoffs**: The Diamond-Gruen counterexamples have $\rho \to 0$ and $\eta \to 0$, while prior positive results assume bounded-away-from-zero rates. **Does there exist a phase transition at specific polynomial rates** $\rho = n^{-\alpha}$ and shortfalls $\eta = n^{-\beta}$ where proximity gap behavior changes? Characterizing this boundary would clarify whether the capacity conjecture is salvageable with rate restrictions.
3. **Beyond Characteristic 2**: The Ben-Sasson et al. counterexamples use characteristic-2 fields with subspace polynomial structure. **Do similar counterexamples exist for prime fields or large characteristic**? The authors conjecture extensions to constant characteristic, but the algebraic geometry techniques may not transfer. Resolving this would determine whether field choice affects fundamental limits.
4. **Affine Space vs. Line Testing**: Current results show mutual correlated agreement for lines and curves. The Goyal-Guruswami paper notes (Remark 2.16) that **no known reduction extends line MCA to affine space MCA**. Does affine space proximity testing have fundamentally different limitations? This matters for protocols like STIR that use higher-dimensional structures.
**Algorithmic Improvements:**
5. **Near-Linear Deterministic Decoding**: The Chatterjee et al. result achieves $\text{poly}(n, \log q)$ deterministic time, while randomized algorithms achieve $\tilde{O}(n)$. **Can deterministic list-decoding reach $O(n \log^c n)$ time** for constant $c$? Given that even deterministic univariate factorization in $\tilde{O}(n)$ is open, this appears highly challenging but would have significant practical impact.
6. **Capacity-Achieving Deterministic Decoders**: Can Hensel lifting be extended to folded Reed-Solomon codes or multiplicity codes to achieve deterministic decoding beyond the Johnson bound? The recent list-size breakthroughs (Srivastava, Chen-Zhang) use probabilistic arguments. **Derandomizing capacity-achieving codes** would require fundamentally new techniques.
7. **Tighter Constants**: The Ben-Sasson et al. Johnson radius bound has leading constant $1/(48(1-\delta)^{3/2})$ for the $m^5$ term. The Goyal-Guruswami bound has $O(m^7 n^2/\rho^{3/2})$ structure. **Are these polynomial degrees in $m$ and $n$ optimal**, or can more careful analysis reduce exponents? This matters for concrete parameter choices.
**Practical Deployments:**
8. **Optimal SNARK Parameters Post-Refutation**: Given the November 2025 impossibility results, what are the **Pareto-optimal parameter choices** trading off proof size, verification time, and field size? Should protocols move to folded RS codes with higher overhead but better proximity gaps, or remain with standard RS accepting Johnson bound limitations?
9. **Random vs. Structured Evaluation Domains**: The Goyal-Guruswami results show random Reed-Solomon codes achieve better proximity gaps than deterministic constructions. **What is the practical cost** of generating and committing to random evaluation domains? Does the setup overhead outweigh the improved proximity parameters?
10. **Multivariate Protocol Error Accumulation**: Tensor products accumulate MCA error linearly (Bordage et al.). For $d$-dimensional multivariate sum-check, does error grow as $O(d)$, or can clever generator choices achieve sublinear growth? **Characterizing multivariate error accumulation** would guide high-dimensional protocol design.
11. **WHIR Salvageability**: With the up-to-capacity MCA conjecture refuted, can WHIR be modified to work at the Johnson bound while maintaining $O(\log^2 n)$ verification? Or is **super-fast verification fundamentally limited** by proximity gap barriers?
**Cross-Cutting Questions:**
12. **Sum-Check Optimal Generators**: The Bordage et al. framework characterizes which generators have MCA, but which generators **minimize error for multivariate sum-check**? Is there an optimal generator family for tensor product codes?
13. **Post-Quantum Security Margins**: Current analyses focus on information-theoretic soundness. Do quantum adversaries have additional advantages in exploiting proximity gap failures? **Quantum attacks on proximity testing** remain underexplored.
14. **Lower Bounds on MCA Error**: The Bordage et al. paper proves lower bounds showing their unique decoding analysis is tight. **Are the Johnson radius bounds also tight**, or is further improvement possible with better proof techniques?
15. **Alternative Code Families**: The focus has been Reed-Solomon codes due to practical deployment. Do **algebraic geometry codes** or **tensor product codes** have fundamentally different proximity gap behavior? Expanding beyond RS might circumvent current limitations.
:::success
**Most Impactful Near-Term Research**
Based on practical urgency and mathematical tractability, the highest-priority open questions appear to be:
1. **Exact characterization of standard RS proximity gaps**: Determining whether Johnson bound is tight for deterministic constructions would finalize parameter selection guidance.
2. **WHIR modification analysis**: Clarifying whether super-fast verification is achievable with proven bounds would guide next-generation protocol design.
3. **Multivariate error accumulation**: Tight bounds on tensor product error would immediately apply to existing deployed protocols.
4. **Field characteristic extensions**: Proving or disproving characteristic-2 limitation of counterexamples would determine whether field choice provides security margin.
:::
## Conclusion
The November 2025 Reed-Solomon proximity gap papers represent a watershed moment in SNARK theory, simultaneously establishing impossibility barriers and achieving near-optimal results within those constraints. The capacity conjecture, which promised proof sizes approaching information-theoretic limits, has been decisively refuted through three independent techniques: reduction to list-decodability, explicit combinatorial counterexamples, and algebraic polynomial constructions. These negative results provide crucial guardrails for protocol designers, identifying the Johnson bound and list-decoding capacity as fundamental thresholds beyond which polynomial soundness parameters are provably impossible.
Yet the story is not one of limitation alone. The positive results achieve factor-$n$ improvements in soundness parameters approaching the Johnson radius, reducing field size requirements from quadratic to linear in block length. The curve decodability framework provides a clean abstraction enabling modular analysis, while the MCA toolbox characterizes all polynomial generators with tight bounds. For random Reed-Solomon codes, proximity gaps extending nearly to capacity are achievable, offering a path forward for applications tolerating randomized setup. The deterministic list-decoding breakthrough removes randomness without sacrificing efficiency, addressing computational concerns orthogonal to information-theoretic soundness.
The practical implications are immediate and substantial. Deployed SNARK systems processing blockchain transactions must re-evaluate security parameters, accepting modest proof size increases or field size expansions to operate within proven bounds rather than conjectured optimistic regions. The random words cutoff provides actionable guidance: stay at least 0.4% below capacity, or preferably at the Johnson bound where $O(n)$ soundness is proven. Conservative parameter selection costs approximately 3% in query complexity but provides rigorous security guarantees. For future protocols, the impossibility results guide architecture decisions: high-rate codes operating near capacity are provably unsafe, while moderate-rate codes with folding or random evaluation domains achieve strong proximity guarantees.
The research dynamics revealed by concurrent publication suggest a healthy competitive ecosystem rapidly converging on truth. The Ethereum Foundation's Proximity Prize successfully catalyzed theoretical advances with practical impact, demonstrating that well-structured incentives can accelerate progress on foundational problems. The collaboration between industry (StarkWare), academia (MIT, Berkeley, EPFL, TIFR), and independent researchers produced complementary results addressing different facets of a multidimensional problem.
Looking forward, the field faces a choice between incremental optimization within established bounds and exploration of alternative approaches—different code families, novel testing paradigms, or architectural innovations circumventing proximity testing limitations. The November 2025 results clarify the terrain, marking regions of possibility and impossibility with mathematical precision. This clarity empowers practitioners to make informed tradeoffs and researchers to focus effort on genuinely open questions rather than pursuing provably impossible improvements. The Reed-Solomon proximity gap story illustrates cryptographic progress at its finest: optimistic conjectures driving protocol development, rigorous analysis revealing limitations, and constructive alternatives emerging from constraint. The systems deployed today rest on firmer theoretical foundations than they did in October 2025, even as proof sizes modestly increase. Security through understanding, not through optimism, defines the path forward.
---
**Footnote References:**[^1][^2][^3][^4][^5][^6]
[^1]: **Bordage et al. (ePrint 2025/2051)**: "All Polynomial Generators Preserve Distance with Mutual Correlated Agreement," École Polytechnique Fédérale de Lausanne, November 6, 2025.
[^2]: **Goyal & Guruswami (ePrint 2025/2054)**: "Optimal Proximity Gaps for Subspace-Design Codes and (Random) Reed-Solomon Codes," MIT & UC Berkeley, November 9, 2025.
[^3]: **Ben-Sasson et al. (ePrint 2025/2055)**: "On Proximity Gaps for Reed–Solomon Codes," StarkWare Industries & University of Toronto, November 6, 2025.
[^4]: **Crites & Stewart (ePrint 2025/2046)**: "On Reed–Solomon Proximity Gaps Conjectures," Web3 Foundation, November 5, 2025.
[^5]: **Diamond & Gruen (ePrint 2025/2010)**: "On the Distribution of the Distances of Random Words," Independent Researchers, November 1, 2025.
[^6]: **Chatterjee, Harsha & Kumar (ECCC TR25-170)**: "Deterministic list decoding of Reed-Solomon codes," Tata Institute of Fundamental Research, November 7, 2025.