[TOC]

# TensorSwitch and the Reed-Solomon Proximity Gap Revolution

The November 2025 cryptographic research landscape witnessed a convergence of breakthrough results that fundamentally reshaped our understanding of Reed-Solomon code capabilities and their applications in zero-knowledge proof systems. The IACR ePrint paper 2025/2065, titled "TensorSwitch: Nearly Optimal Polynomial Commitments from Tensor Codes" by Bünz, Fenzi, Rothblum, and Wang, emerged simultaneously with three landmark papers that resolved longstanding conjectures about Reed-Solomon proximity gaps. These concurrent developments represent the culmination of decades of research in algebraic coding theory while simultaneously exposing critical security assumptions in deployed blockchain infrastructure processing billions of dollars in daily transaction volume. The intersection of these theoretical advances with practical cryptographic systems creates both immediate challenges for production deployments and unprecedented opportunities for next-generation proof system optimization.

:::danger
**Critical Security Implication**: The capacity conjecture disproof by Crites-Stewart on November 5, 2025 revealed that deployed STARK systems relying on conjectured proximity gap parameters may have security margins 20-60 bits lower than claimed. Systems using code rates approaching capacity without Johnson bound limitations require immediate parameter audits and potential security upgrades. The window between theoretical disproof and practical exploitation creates an urgent timeline for infrastructure updates across major Layer 2 blockchain networks.
:::

## TensorSwitch: polynomial commitment innovation at the intersection of codes and proofs

The TensorSwitch construction represents a fundamental advance in polynomial commitment scheme design, achieving near-optimal efficiency across multiple performance dimensions simultaneously. Published to IACR ePrint on November 8, 2025 and approved November 12, 2025, the paper introduces a hash-based polynomial commitment scheme for multilinear polynomials that leverages tensor code structures to overcome the prover time and proof size bottlenecks that have constrained practical deployment of transparent proof systems. The work was conducted primarily at Succinct Labs and the Simons Institute. The authors frame their result as an interactive oracle proof of proximity that can be compiled to a cryptographic commitment through standard Merkle tree and Fiat-Shamir transformation techniques, and the construction works with any linear code possessing suitable rate, list-decoding, and correlated agreement properties.

The core mathematical framework establishes that for a size-$n$ polynomial over a sufficiently large field with security parameter $\lambda$, TensorSwitch achieves commitment time dominated by $(\tau/\rho^2 + \tau/\rho + 3) \cdot n$ field multiplications, opening time requiring only $6n$ field multiplications, query complexity of $\frac{1}{-\log(1-\delta^2)} \cdot \lambda$, verification time $O(\lambda \log n)$, and a proof containing $O(\log \log n)$ oracles with total size $(\lambda n)^{0.5 + o(1)}$. The dependency on code parameters $\rho$ (rate), $\delta$ (list-decoding and correlated agreement threshold), and $\tau$ (encoding time coefficient) reveals the delicate balance between code-theoretic properties and cryptographic efficiency that defines modern proof system design.
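To make these cost formulas concrete, the sketch below evaluates them numerically. Two assumptions are mine rather than the paper's: the logarithm in the query-complexity expression is taken base 2, and the encoding coefficient is $\tau = \log_2 n$ for FFT-based Reed-Solomon encoding; both are inferred from the concrete figures quoted in the next paragraph.

```python
import math

def queries(delta: float, lam: int) -> float:
    """Query complexity lambda / (-log2(1 - delta^2)) from the stated bound."""
    return lam / -math.log2(1.0 - delta**2)

def commit_cost_per_coeff(tau: float, rho: float) -> float:
    """Commitment cost coefficient tau/rho^2 + tau/rho + 3 (mults per coefficient)."""
    return tau / rho**2 + tau / rho + 3

lam, n, rho = 128, 2**24, 1 / 2
tau = math.log2(n)  # assumed encoding coefficient for FFT-friendly RS codes

# delta = 1 - rho = 0.5 reproduces the paper's quoted 2.41*lambda queries for
# rate-1/2 RS codes, i.e. that figure corresponds to capacity-level delta.
print(queries(0.5, lam) / lam)           # ~2.41
print(queries(0.19, lam) / lam)          # ~18.9, matching the 19*lambda RAA figure
print(commit_cost_per_coeff(tau, rho))   # 6*log2(n) + 3 = 147 for n = 2^24

# Johnson-bound-limited delta = 1 - sqrt(rho), the provable regime for
# general RS codes after the November 2025 results:
print(queries(1 - math.sqrt(rho), lam) / lam)   # ~7.7, roughly 3x more queries
```

The jump from $2.41\lambda$ to roughly $7.7\lambda$ queries when $\delta$ drops from capacity to the Johnson bound previews the central tension that the November 2025 proximity gap results resolve.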
According to the authors' published concrete efficiency analysis, the practical implementation achieves commit time of approximately one encoding operation plus three field multiplications per coefficient, with prover time requiring roughly six field multiplications per coefficient, without necessitating commitments to large extension field messages; this represents a significant practical advantage over schemes requiring extension field arithmetic throughout the proving process. For concrete instantiations with Reed-Solomon codes at rate $1/2$, the scheme achieves query complexity of merely $2.41\lambda$ with commitment time dominated by $(6 \log n + 3)n$ field multiplications. Alternative constructions using RAA codes with rate $1/4$ and distance $0.19$ demonstrate query complexity of $19\lambda$ with commitment requiring $42n$ field additions plus $3n$ field multiplications. These parameters reveal the fundamental tradeoff between code rate and proximity testing efficiency that the November 2025 proximity gap results directly address. Because FFT-friendly fields yield an encoding coefficient $\tau = O(\log n)$, commitment time grows only quasilinearly in block length, a critical practical advantage over schemes requiring quadratic or higher complexity in preliminary encoding phases. The authors emphasize that the query complexity achieved is optimal for tensor code constructions, representing a fundamental efficiency limit for this architectural approach.

:::info
**Technical Innovation and Flexibility**: TensorSwitch's tensor product code architecture enables decomposition of multilinear polynomial evaluations into structured components that can be encoded and tested efficiently. The scheme demonstrates remarkable flexibility, being both field-agnostic (working over arbitrary fields without FFT structure requirements) and code-agnostic (applicable to any linear code satisfying the correlated agreement properties). This generality positions TensorSwitch as a universal framework for polynomial commitments rather than a specialized construction limited to specific algebraic settings.
:::

### Advanced applications: PCD, distributed proving, and sparse data

Beyond basic polynomial commitment functionality, TensorSwitch enables sophisticated applications that extend the reach of zero-knowledge proof systems. The scheme yields Proof-Carrying Data (PCD) constructions matching the theoretical efficiency of the WARP protocol, the state of the art in recursive proof composition. Critically, TensorSwitch achieves a sublinear accumulator, meaning that the state required to accumulate multiple proofs grows slower than linearly with the number of accumulated proofs. This property makes the scheme particularly suitable for distributed proving architectures where multiple provers contribute to different portions of a computation and their results must be efficiently combined.

The structure of tensor codes provides natural support for committing to sparse data, where the committed polynomial has only a small number of nonzero coefficients relative to its degree bound. Traditional polynomial commitment schemes incur costs proportional to the maximum degree regardless of sparsity, while TensorSwitch's tensor decomposition allows targeted encoding of populated coefficient regions.
This capability proves especially valuable for applications like verifiable database queries where the witness data exhibits natural sparsity patterns, and for machine learning verification where neural network weight matrices contain structured zero regions that can be exploited for efficiency gains.

### Formalized notions: interactive oracle commitments and code switching

The theoretical development of TensorSwitch introduces two conceptual frameworks that may prove valuable beyond the immediate application to polynomial commitments. The formalization of interactive oracle commitment schemes provides a clean abstraction for reasoning about commitment protocols where the committer and verifier engage in multiple rounds of interaction, with the verifier making queries to oracle-accessible data structures. This model generalizes both traditional non-interactive commitments and interactive oracle proofs, creating a unified framework for analyzing protocols that combine commitment and proof-of-proximity phases.

The right-extension of a linear code captures how "code switchable" the code is, formalizing the property that enables efficient transitions between different encoded representations. For a linear code $C$, the right-extension characterizes the space of linear transformations that preserve the code structure while potentially changing the encoding basis. Codes with favorable right-extension properties support efficient switching operations where a codeword in one representation can be converted to a related codeword in a different representation without full decoding and re-encoding. The tensor product structure of TensorSwitch naturally exhibits strong right-extension properties, enabling the protocol to "switch" between different tensor factorizations during proof generation while maintaining commitment binding and soundness guarantees.

The scheme's security properties derive from working in the random oracle model with collision-resistant hash functions, providing plausible post-quantum security without requiring pairing-based cryptography or trusted setup ceremonies. The binding property follows from Merkle commitment binding, while soundness depends critically on the correlated agreement properties of the underlying code up to the proximity threshold $\delta$. Knowledge soundness likely employs straightline extraction techniques where an extractor can recover the committed polynomial from a successful prover's interaction transcript through efficient decoding of the underlying code. The completeness property holds without error when valid parameters satisfying the code distance and agreement requirements are employed.

## November 2025: the capacity conjecture falls and rises transformed

The week of November 5-6, 2025 witnessed an extraordinary confluence of results that resolved fundamental questions about Reed-Solomon code proximity gaps while simultaneously refining our understanding of achievable bounds. On November 5, Elizabeth Crites and Alistair Stewart published IACR ePrint 2025/2046 titled "On Reed-Solomon Proximity Gaps Conjectures," delivering the surprising result that three major capacity conjectures were false.
Specifically, they disproved the correlated agreement up-to-capacity conjecture of Ben-Sasson, Carmon, Ishai, Kopparty, and Saraf from their Journal of the ACM 2023 paper and the mutual correlated agreement up-to-capacity conjecture underlying the WHIR protocol, and demonstrated that the failure of list-decodability up-to-capacity conjectures associated with DEEP-FRI follows from existing impossibility results in the list-decoding literature.

The technical heart of the Crites-Stewart disproof establishes a reduction showing that correlated agreement with sufficiently small error probability necessarily implies list-decoding capability for Reed-Solomon codes, thereby bounding proximity gaps by the list-decoding radius rather than the capacity bound. For Reed-Solomon codes of rate $\rho$, polynomial-size output lists are guaranteed only up to the Johnson radius $\tau_J = 1 - \sqrt{\rho}$, falling short of the capacity bound $1-\rho$ that had been conjectured achievable. This gap between $1-\sqrt{\rho}$ and $1-\rho$ represents the fundamental barrier that general Reed-Solomon codes cannot cross without extraordinary list size explosion. The authors propose minimal modifications to the capacity conjectures replacing the capacity bound with the Johnson bound, providing a natural corrected framework that aligns theory with the newly understood limitations.

One day later on November 6, two complementary positive results emerged that demonstrated the capacity conjecture was not entirely misguided but rather required more careful code construction. Rohan Goyal and Venkatesan Guruswami published IACR ePrint 2025/2054 (also ECCC TR25-166) titled "Optimal Proximity Gaps for Subspace-Design Codes and (Random) Reed-Solomon Codes," proving that folded Reed-Solomon codes, univariate multiplicity codes, and random Reed-Solomon codes with random evaluation points achieve proximity gaps approaching the capacity bound $1-\rho$. This remarkable result, which had been posed as a challenge with an informal "million dollar prize" mentioned in the literature, shows that appropriate code variants can indeed reach the theoretical limit that general Reed-Solomon codes cannot attain.

The Goyal-Guruswami methodology introduces a clean abstraction called "line/curve decodability" that avoids heavy algebraic machinery while establishing stronger mutual correlated agreement properties. Critically, their construction requires only linear field size in the block length rather than the quadratic field size that previous proximity gap results demanded, removing a significant practical barrier to deployment. The approach works by showing that codes with the subspace-design property naturally exhibit the distance-preserving characteristics needed for proximity gaps, with random Reed-Solomon codes satisfying these requirements with high probability. The practical consequences for SNARK efficiency are substantial, as the authors explicitly note, providing a pathway to achieve the capacity-level parameters that the Crites-Stewart impossibility results show are unattainable for general constructions.

Concurrent with Goyal-Guruswami, Eli Ben-Sasson, Dan Carmon, Ulrich Haböck, Swastik Kopparty, and Shubhangi Saraf published IACR ePrint 2025/2055 (also ECCC TR25-169) titled "On Proximity Gaps for Reed-Solomon Codes," providing refined positive bounds and crucial impossibility results that clarify the theoretical landscape.
Their positive results show that for the unique decoding radius $\delta/2$, arbitrarily small proximity loss $\epsilon^* > 0$ can be achieved with only $O(1)$ exceptional points (improved from $O(n)$), while for the Johnson radius $J(\delta)$, zero proximity loss becomes possible with $O(n)$ exceptional points (improved from $O(n^2)$). These improvements significantly reduce soundness error in STARK systems by tightening the analysis of the FRI proximity testing protocol.

The negative results from Ben-Sasson et al. establish fundamental barriers showing that proximity gaps at or beyond the Johnson radius with small proximity loss require at least $\Omega(n^{1.99})$ exceptional points, while for all constants $\tau$, proximity gaps at radius $\delta - \Omega(1)$ with small loss need $n^\tau$ exceptions. Most significantly, they prove that improved proximity gap results beyond the Johnson radius are impossible without corresponding improvements in list-decoding radius bounds, formally establishing the deep connection between these two coding-theoretic properties. This result explains why the capacity conjecture required correction and provides a roadmap for future progress: advances in list-decoding algorithms are a prerequisite for proximity gap improvements beyond current limits.

:::success
**Synthesis of November 2025 Results**: The three concurrent papers form a coherent picture: Crites-Stewart establish that general Reed-Solomon codes cannot achieve capacity-level proximity gaps due to list-decoding limitations at the Johnson bound; Goyal-Guruswami prove that special Reed-Solomon code variants (folded, random) with subspace-design properties CAN achieve capacity; Ben-Sasson et al. refine intermediate bounds while proving that the list-decoding barrier is fundamental. The capacity conjecture was simultaneously too strong (for general codes) and achievable (for carefully designed variants).
:::

## Mathematical foundations: from polynomials to proximity

:::spoiler Historical Context: The Journey to Proximity Gaps
Reed-Solomon codes emerged from Irving Reed and Gustave Solomon's 1960 paper "Polynomial Codes over Certain Finite Fields," introducing a construction that achieved the Singleton bound and thus an optimal rate-distance tradeoff. For codes specified as $\text{RS}(n,k)$ with $n$ total symbols and $k$ information symbols, the minimum distance $d = n - k + 1$ is the maximum possible for any code with these parameters, establishing RS codes as Maximum Distance Separable (MDS) codes. The construction evaluates a degree-$(k-1)$ message polynomial $p(X) = \sum_{i=0}^{k-1} a_i X^i$ at $n$ distinct field points to generate codewords $(p(\alpha_1), p(\alpha_2), \ldots, p(\alpha_n))$.

The unique decoding barrier at $\lfloor (d-1)/2 \rfloor$ errors constrained Reed-Solomon applications for decades until Peter Elias's 1950s concept of list decoding suggested outputting small lists of candidates rather than unique answers. Madhu Sudan's breakthrough 1997 algorithm achieved the first non-trivial list decoding beyond the unique radius, correcting up to $n - \sqrt{2kn}$ errors through a two-phase interpolation-factorization approach. Venkatesan Guruswami and Madhu Sudan's 1999 improvement extended this to the Johnson bound using multiplicities in polynomial interpolation, proving that Reed-Solomon codes can be list decoded up to a $1-\sqrt{\rho}$ fraction of errors in polynomial time, where $\rho$ denotes the code rate.
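To ground these definitions, here is a minimal sketch of Reed-Solomon encoding and the decoding radii just discussed; the toy field $\mathbb{F}_{97}$, the parameters $(n, k) = (12, 4)$, and the message are arbitrary illustrative choices, not values from any of the cited papers.

```python
# Toy Reed-Solomon encoder over F_97 plus the decoding radii of interest.
# Real deployments use FFT-friendly fields and structured evaluation domains.
p = 97
n, k = 12, 4                        # RS(n, k): rate 1/4, distance d = n - k + 1 = 9

def rs_encode(msg: list[int], points: list[int]) -> list[int]:
    """Evaluate the degree-(k-1) message polynomial at n distinct points."""
    return [sum(a * pow(x, i, p) for i, a in enumerate(msg)) % p for x in points]

codeword = rs_encode([5, 1, 0, 7], list(range(1, n + 1)))
rho = k / n
print(f"codeword               : {codeword}")
print(f"unique decoding radius : {(1 - rho) / 2:.3f}")   # ~delta/2, ignoring the 1/n term
print(f"Johnson radius         : {1 - rho**0.5:.3f}")    # 1 - sqrt(rho)
print(f"capacity               : {1 - rho:.3f}")         # 1 - rho
```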
The Johnson bound emerges from geometric considerations: for codes with minimum distance $d$ and block length $n$, the radius $\tau_J = n(1 - \sqrt{1 - d/n})$ represents the threshold below which Hamming balls contain at most polynomial numbers of codewords. Beyond this radius, list sizes can grow super-polynomially, making efficient decoding algorithmically intractable. For Reed-Solomon codes with rate $\rho = k/n$, this bound becomes $\tau_J = 1 - \sqrt{\rho}$ in relative terms, strictly less than the list-decoding capacity $1-\rho$ that represents the information-theoretic limit for adversarial channels.

Proximity testing asks whether a received word is close to any codeword, a weaker question than decoding but sufficient for many cryptographic applications. Ben-Sasson, Carmon, Ishai, Kopparty, and Saraf's 2020 work (published in JACM 2023) proved that collections of affine spaces display proximity gaps for Reed-Solomon codes: every affine space either has all members close to the code or only a vanishingly small fraction close, with no intermediate case. This property enables efficient probabilistic testing through random sampling rather than exhaustive verification, forming the foundation of the FRI protocol used in STARK proof systems.
:::

The formal mathematical framework for proximity gaps requires precise definitions of code properties and agreement parameters. Consider a Reed-Solomon code $C = \text{RS}[\mathbb{F}_q, D, k]$ defined over a finite field $\mathbb{F}_q$ with evaluation domain $D$ of size $n = |D|$ and dimension $k$, possessing rate $\rho = k/n$ and relative minimum distance $\delta = 1 - \rho$. For a proximity parameter $\theta$ with $0 < \theta < \delta$, a collection of sets $\mathcal{S}$ displays a $(\theta, \epsilon)$-proximity gap if for every set $S \in \mathcal{S}$ one of two cases holds: either every function $f \in S$ satisfies $\Delta(f, C) \leq \theta$, where $\Delta(f, C)$ denotes the minimum relative Hamming distance from $f: D \to \mathbb{F}_q$ to any codeword, or at most an $\epsilon$ fraction of the elements of $S$ are within relative distance $\theta$ of $C$; no intermediate case can occur.

The correlated agreement property strengthens this by considering random linear combinations. For a family of functions $\{f_1, \ldots, f_m\}$ and random coefficients $\alpha_1, \ldots, \alpha_m \in \mathbb{F}_q$, correlated agreement up to proximity $\theta$ with error $\epsilon$ requires that if the random linear combination $\sum_{i=1}^m \alpha_i f_i$ is $\theta$-close to the code with probability exceeding $\epsilon$ over the choice of coefficients, then there exists a common agreement set of density at least $1 - \theta$ on which all functions simultaneously agree with codewords. The mutual correlated agreement variant used in WHIR considers multiple independent tuples of functions and requires consistent agreement sets across all tuples, providing even stronger testing guarantees.

The Ben-Sasson et al. 2020 main theorem establishes proximity gaps for affine spaces $A \subseteq \mathbb{F}_q^D$ up to the Johnson bound. For the unique decoding regime $\theta < \delta/2$, the error parameter achieves near-optimal $\epsilon_U = n/q$ over fields of size linear in the block length. For the list decoding regime $\delta/2 < \theta < 1-\sqrt{\rho}$, the error parameter becomes $\epsilon_J = O(1/(\eta\rho)^{O(1)} \cdot n^2/q)$, requiring quadratic field size for constant-rate codes, where $\theta = 1 - \sqrt{\rho} - \eta$ and $\eta$ controls the gap below the Johnson bound.
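Before turning to the proof technique, the two regimes of this theorem can be checked numerically. The sketch below sets every hidden constant and exponent in the $O(\cdot)$ expressions to 1, so it understates the true error bounds; the parameter values are illustrative, not taken from any deployed system.

```python
import math

def proximity_gap_error(theta: float, rho: float, n: int, q: float, eta: float) -> float:
    """Error bound from the affine-space proximity gap theorem, with all
    hidden constants and exponents set to 1 for illustration."""
    delta = 1 - rho
    if theta < delta / 2:
        return n / q                             # unique decoding regime: eps_U = n/q
    if theta < 1 - math.sqrt(rho) - eta:
        return (1 / (eta * rho)) * n**2 / q      # list decoding regime: eps_J ~ n^2/q
    raise ValueError("at or beyond the Johnson radius: no general RS proximity gap")

n, rho, eta = 2**24, 2**-3, 0.01
# Unique decoding over a 64-bit field: error ~ 2^-40.
print(math.log2(proximity_gap_error(0.30, rho, n, 2.0**64, eta)))
# The list decoding regime needs the n^2/q term small, pushing 64-bit base
# fields toward degree-3 extensions (q ~ 2^192): error ~ 2^-134.
print(math.log2(proximity_gap_error(0.60, rho, n, 2.0**192, eta)))
```

The $n^2/q$ scaling in the list-decoding regime is exactly what drives the field-extension choices discussed in the implementation sections below.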
The proof analyzes classical Berlekamp-Welch and Guruswami-Sudan decoding algorithms over the rational function field $\mathbb{F}_q(Z)$, extending Arora-Sudan low-degree test techniques to handle parameterized affine spaces.

The Crites-Stewart reduction proves that correlated agreement with error probability $\epsilon$ up to proximity $\theta$ implies list-decoding capability with list size at most $1/\epsilon$ up to the same radius $\theta$. Since list decoding of Reed-Solomon codes reaches the Johnson bound with polynomial lists but requires exponential lists beyond it, any proximity gap conjecture claiming to reach capacity $1-\rho$ necessarily fails. The formal statement shows that for any collection displaying $(\theta, \epsilon)$-correlated agreement, the underlying code must be $(n(1-\theta), 1/\epsilon)$-list-decodable, creating a direct connection between these previously separate properties and resolving the natural open problem of understanding their relationship.

The Goyal-Guruswami positive results introduce the line-decodability property as a key abstraction. A code possesses the line-decodability property with parameters $(t, \ell)$ if for any $t$ arbitrary codewords and any line (one-dimensional affine subspace), the restrictions of these codewords to the line have relative distance at least $(1-\ell)$ from each other. Codes with the subspace-design property automatically satisfy line-decodability with appropriate parameters. For folded Reed-Solomon codes constructed by bundling $m$ consecutive evaluations into super-symbols in $\mathbb{F}_q^m$, the line-decodability property enables proximity gap analysis approaching capacity $1-\rho$ rather than merely the Johnson bound $1-\sqrt{\rho}$.

Random Reed-Solomon codes constructed by selecting evaluation points uniformly at random from a large field similarly achieve optimal proximity gaps with high probability. The key insight involves showing that random structure prevents pathological configurations where multiple codewords could concentrate agreement in overlapping regions, thereby preserving the distance-separation properties needed for effective proximity testing. The field size requirement reduces from quadratic to linear through careful analysis of collision probabilities and local code properties, making these constructions significantly more practical for cryptographic applications where field arithmetic cost scales with field size.

## Implications for SNARK and STARK security architecture

The Fast Reed-Solomon Interactive Oracle Proof of Proximity (FRI) protocol serves as the central low-degree testing component in STARK proof systems, enabling efficient verification that a committed polynomial has bounded degree. Proposed by Ben-Sasson, Bentov, Horesh, and Riabzev at ICALP 2018, FRI transforms the problem of checking whether a function is close to a Reed-Solomon codeword into a recursive proximity testing procedure requiring only logarithmic rounds and polylogarithmic query complexity. The protocol operates by having the prover commit to intermediate folded polynomials through Merkle trees, with the verifier providing random field elements for split-and-fold operations that reduce the polynomial degree geometrically until reaching a constant that can be verified directly.
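The split-and-fold recursion can be sketched in a few lines. The snippet below works in the coefficient view over a hypothetical small field and folds by a factor of 2 per round; production systems instead fold committed evaluations over structured domains with Merkle commitments, often by larger factors (RISC Zero, discussed below, folds by 16).

```python
# One FRI-style split-and-fold round: write f(X) = f_e(X^2) + X*f_o(X^2)
# and fold to f'(Y) = f_e(Y) + alpha*f_o(Y), halving the degree.
p = 2**31 - 1  # toy prime field; deployed systems use FFT-friendly fields

def fold(coeffs: list[int], alpha: int) -> list[int]:
    evens, odds = coeffs[0::2], coeffs[1::2]    # even/odd coefficient halves
    return [(e + alpha * o) % p for e, o in zip(evens, odds)]

f = list(range(1, 17))       # 16 coefficients: a degree-15 polynomial
rounds = 0
while len(f) > 1:            # log2(16) = 4 rounds until a constant remains
    f = fold(f, alpha=123456789)   # alpha is a random verifier challenge in FRI
    rounds += 1
print(rounds, f)             # 4 rounds, one remaining constant coefficient
```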
In deployed STARK systems like StarkWare's StarkDEX and StarkNet processing billions of dollars in transactions, typical parameters employ blowup factors $\beta$ ranging from $4$ to $16$, meaning the evaluation domain size exceeds the trace domain by this multiplicative factor to achieve desired code rates $\rho = 1/\beta$. Security targets of $96$ to $128$ bits require careful parameter selection balancing query complexity, proof size, and soundness error. Each FRI query contributes approximately $\log_2(\beta)$ bits of security, and proof-of-work grinding typically provides $20$ to $30$ additional bits. For $80$-bit provable security with blowup factor $4$, systems require approximately $29$ to $37$ queries, resulting in proof sizes of $100$ to $300$ kilobytes depending on field extensions and optimization choices.

The distinction between conjectured and provable security parameters creates substantial practical tension. Production systems frequently employ the assumption that proximity gap error parameters satisfy $\epsilon \leq 1/(\eta \cdot \rho)^{c_1} \cdot (N \cdot n)^{c_2}/|\mathbb{F}|$ with constants $c_1 = c_2 = 1$ over prime fields larger than the block length, an assumption that enables proof sizes approximately $50\%$ smaller than provably secure alternatives. The November 2025 Crites-Stewart disproof demonstrates that capacity-level proximity gaps with such tight parameters do not hold for general Reed-Solomon codes, revealing that deployed systems relying on these conjectures may have actual security margins $20$ to $63$ bits lower than claimed, as analyzed by Block et al. in their 2024 paper "On the Concrete Security of Non-interactive FRI."

The Polygon zkEVM implementation employs the PIL-STARK framework with degree-3 extensions of 64-bit base fields to achieve $100$ to $128$-bit security targets while maintaining practical proof generation times. The February 2024 Circle STARK collaboration between Polygon and StarkWare aims to improve proving performance through the Plonky3 framework with a focus on small fields. However, the December 2023 discovery by Verichains of a proof forgery vulnerability related to field incompatibility between STARK components over $\mathbb{F}_{p^3}$ and SNARK verifiers over $\mathbb{F}_q$ highlights the critical importance of rigorous parameter analysis. The vulnerability involved Merkle root computation weaknesses that allowed manipulation of verification checks, demonstrating that theoretical security properties must be carefully preserved through all implementation layers.

RISC Zero's zkVM explicitly adopts the original FRI protocol based on proximity gap results rather than DEEP-FRI variants, with documentation noting that "the Proximity Gaps for Reed-Solomon Codes paper shows that the original FRI protocol offers the same soundness results as DEEP-FRI at less computational complexity." Their parameter selection employs commit rounds reducing the degree by a factor of $16$, continuing until the degree reaches $255$ or below, with query rounds determined via Fiat-Shamir hashing. The system targets practical proving times for RISC-V program execution verification while maintaining adequate security margins, though the capacity conjecture disproof necessitates re-evaluation of whether deployed parameters achieve their intended security levels.
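The query counts quoted above follow from a simple accounting rule. The sketch below implements that rule of thumb for conjectured (capacity-level) soundness; it is a heuristic consistent with the figures in this section, not a substitute for a full soundness analysis.

```python
import math

def queries_needed(target_bits: int, blowup: int, grinding_bits: int) -> int:
    """Queries so that s*log2(blowup) + grinding reaches the target:
    each query contributes ~log2(blowup) bits under conjectured soundness."""
    return math.ceil((target_bits - grinding_bits) / math.log2(blowup))

# 80-bit target, blowup 4, 22 grinding bits -> 29 queries, the low end of
# the 29-37 range cited above; less grinding pushes toward the high end.
for grind in (22, 10):
    print(grind, queries_needed(80, 4, grind))   # 29 and 35 queries
```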
:::warning
**Security Parameter Adjustment Guidelines**: Systems currently deployed with proximity parameters assuming capacity-level gaps (approaching $1-\rho$) should immediately audit their actual security margin. Conservative provable security for a $128$-bit target requires: a field with degree-3 extension providing $192$ total bits, code rate $\rho = 2^{-5}$ (blowup $32$), query complexity $s \approx 57$ based on Johnson bound analysis, and optional grinding providing $0$ to $10$ additional bits. This produces proofs of approximately $211$ kilobytes with guaranteed soundness. Alternatively, systems accepting conjectured security with safety margins should use rate $\rho = 2^{-10}$ (blowup $1024$), queries $s = 14$ to $24$, and grinding of $19$ to $28$ bits, resulting in $58$ to $99$ kilobyte proofs with a $19+$ bit margin if the capacity conjecture breaks.
:::

The Goyal-Guruswami results offer an optimization pathway for future systems. Folded Reed-Solomon codes with appropriate folding parameter $m$ achieve proximity gaps up to capacity $1-\rho$ while requiring only linear field size, removing the quadratic field size bottleneck that made capacity-approaching parameters impractical. Random Reed-Solomon codes with random evaluation points similarly achieve optimal proximity gaps, though their non-constructive nature currently prevents direct deployment. The line-decodability abstraction suggests that identifying explicit code families with this property could enable new STARK constructions achieving provably secure capacity-level parameters without relying on unproven conjectures.

The TensorSwitch polynomial commitment scheme directly depends on the correlated agreement properties of underlying codes, requiring "list-decoding and correlated agreement up to $\delta$" as specified in the abstract. The November 2025 proximity gap results immediately constrain the achievable $\delta$ values for general Reed-Solomon instantiations to the Johnson bound $1-\sqrt{\rho}$ rather than capacity $1-\rho$. This affects the query complexity formula $\frac{1}{-\log(1-\delta^2)} \cdot \lambda$, where smaller $\delta$ values increase the required number of queries, directly impacting proof size. Systems implementing TensorSwitch with Reed-Solomon codes must use Johnson-bound-limited proximity parameters unless employing the special folded or random constructions proven capacity-achieving by Goyal-Guruswami. The field-agnostic and code-agnostic properties of TensorSwitch provide implementation flexibility, allowing systems to select code families optimized for their specific efficiency requirements while maintaining the unified theoretical framework for security analysis.

## Comparative methodology analysis and related cryptographic constructions

The Diamond-Gruen work from IACR ePrint 2024/1351 (published in CIC 2025) provides complementary infrastructure by proving that interleaved codes inherit proximity gaps from their constituent codes within the unique decoding radius. For linear codes with proximity gaps up to radius $\delta/2$, their tensor products and interleavings preserve this property, extending the tensor-based proximity gap results of Diamond-Posen. This enables compositional reasoning about complex code constructions built from simpler components, which is relevant for TensorSwitch's tensor code approach where understanding proximity properties of tensor products directly impacts security analysis.
The right-extension formalism introduced in TensorSwitch provides the mathematical framework for analyzing exactly these kinds of code composition operations.

The STIR protocol from Arnon, Chiesa, Fenzi, and Yogev (CRYPTO 2024 Best Paper) and its successor WHIR represent the state of the art in Reed-Solomon proximity testing prior to November 2025. STIR achieved significant query complexity reduction through improved analysis of correlated agreement, while WHIR pushed efficiency further using mutual correlated agreement properties. Notably, Giacomo Fenzi (one of the TensorSwitch authors) also contributed to these protocols, bringing deep expertise in proximity testing to the TensorSwitch design. The Crites-Stewart disproof of the mutual correlated agreement up-to-capacity conjecture specifically impacts WHIR's security claims, requiring parameter adjustments to ensure soundness guarantees remain valid. The corrected conjectures proposed by Crites-Stewart, with the Johnson bound in place of the capacity bound, provide a foundation for an updated WHIR analysis that maintains provable security.

BaseFold from Zeilberger, Chen, and Fisch offers an alternative approach using foldable codes that work over arbitrary fields without requiring FFT-friendly structure. With prover complexity $O(n \log n)$ and verifier complexity $O(\log^2 n)$, BaseFold provides field-agnostic polynomial commitments, trading some efficiency for broader applicability. The comparison with TensorSwitch reveals different design philosophies: BaseFold prioritizes generality and provable security with clear complexity bounds, while TensorSwitch targets near-optimal constants across multiple metrics simultaneously by leveraging specific code properties. The TensorSwitch authors explicitly note that their scheme also achieves field-agnostic operation, positioning it as combining BaseFold's generality with improved concrete efficiency through the tensor code structure. The choice between these systems depends on application requirements regarding specific performance bottlenecks versus overall efficiency optimization.

The KZG polynomial commitment scheme based on bilinear pairings offers constant-size proofs and constant-time verification but requires a trusted setup ceremony and relies on discrete logarithm assumptions vulnerable to quantum attack. Hash-based schemes like TensorSwitch, FRI, and BaseFold avoid trusted setup and provide plausible post-quantum security, but generate larger proofs and require more queries. The November 2025 proximity gap results primarily impact hash-based schemes since they fundamentally depend on Reed-Solomon proximity testing, while pairing-based schemes remain unaffected by these coding-theoretic developments. However, the post-quantum security advantage of transparent schemes becomes increasingly critical as quantum computing capabilities advance, making the efficiency improvements enabled by better proximity gap understanding strategically important for the cryptographic ecosystem.
| **Polynomial Commitment Scheme** | **Proof Size** | **Prover Time** | **Verifier Time** | **Trust Assumptions** | **Quantum Resistance** |
|----------------------------------|----------------|-----------------|-------------------|-----------------------|------------------------|
| KZG (pairing-based) | $O(1)$ | $O(n)$ | $O(1)$ | Trusted setup required | Vulnerable (DLog) |
| FRI (Reed-Solomon IOP) | $O(\lambda \log n)$ | $O(n \log n)$ | $O(\log^2 n)$ | Random oracle only | Plausibly resistant |
| BaseFold | $O(\lambda \log^2 n)$ | $O(n \log n)$ | $O(\log^2 n)$ | Random oracle only | Plausibly resistant |
| TensorSwitch (RS codes) | $(\lambda n)^{0.5+o(1)}$ | $O(n \log n)$ | $O(\lambda \log n)$ | Random oracle only | Plausibly resistant |
| TensorSwitch (RAA codes) | $(\lambda n)^{0.5+o(1)}$ | $O(n)$ | $O(\lambda \log n)$ | Random oracle only | Plausibly resistant |

The table reveals TensorSwitch's distinctive position achieving sublinear proof size $(\lambda n)^{0.5+o(1)}$ compared to FRI's $O(\lambda \log n)$, representing substantial savings for large polynomials. The prover time varies from $O(n)$ field additions for RAA codes to $O(n \log n)$ field multiplications for Reed-Solomon codes via FFT encoding, competitive with existing schemes. The concrete efficiency claims from the authors indicate that practical implementations achieve commit time of roughly one encoding plus $3n$ multiplications, with prover time requiring only $6n$ multiplications while avoiding commitments to large extension field messages. The verification time $O(\lambda \log n)$ matches or improves upon alternatives, while the $O(\log \log n)$ oracle count scales as a logarithm of a logarithm, becoming practically constant for reasonable parameter ranges.

## Technical depth: theorems, proofs, and cryptographic implications

:::spoiler Detailed Theorem Statements and Proof Techniques
**Theorem (Ben-Sasson et al. Proximity Gaps, JACM 2023)**: Let $\mathbb{F}$ be a finite field with $|\mathbb{F}| = q$, let $D \subset \mathbb{F}$ be an evaluation domain of size $n$, and let $k \leq n$ be the dimension. For the Reed-Solomon code $\text{RS}[\mathbb{F}, D, k]$ with rate $\rho = k/n$ and relative minimum distance $\delta = 1-\rho$, consider the collection $\mathcal{A}$ of all affine subspaces of $\mathbb{F}^D$. For the unique decoding regime with proximity parameter $\theta = (1-\rho)/2 - \eta$, the collection $\mathcal{A}$ displays a $(\theta, \epsilon_U)$-proximity gap where $\epsilon_U = n/q$ over fields of size $q \geq \text{poly}(n)$. For the list decoding regime with $\theta = 1 - \sqrt{\rho} - \eta$, the collection displays a $(\theta, \epsilon_J)$-proximity gap where $\epsilon_J = O(1/(\eta\rho)^{O(1)} \cdot n^2/q)$ over fields of size $q = \Omega(n^2)$ for constant rate.

The proof technique analyzes decoder output over rational function fields. For an affine subspace parameterized by $Z$, functions on the subspace become univariate rational functions in $\mathbb{F}(Z)$. The Berlekamp-Welch algorithm or Guruswami-Sudan algorithm applied over this extended field either successfully decodes all functions to a common agreement codeword or fails for most random linear combinations, with success/failure probabilities determined by algebraic error term analysis. The key insight involves showing that decoder success for random combinations implies structured agreement that must arise from actual proximity to a codeword.
**Theorem (Crites-Stewart, IACR 2025/2046)**: Let $C$ be a Reed-Solomon code with rate $\rho$ over field $\mathbb{F}_q$. If a collection $\mathcal{S}$ displays $(\theta, \epsilon)$-correlated agreement for $C$ with $\theta > 1-\sqrt{\rho}$ (beyond the Johnson bound), then the code $C$ must be $(n(1-\theta), 1/\epsilon)$-list-decodable at the same radius. Since Reed-Solomon codes have list sizes growing exponentially beyond the Johnson bound $1-\sqrt{\rho}$, any proximity gap with polynomial error probability $\epsilon$ (corresponding to polynomial list size $1/\epsilon$) cannot exceed the Johnson threshold.

The proof constructs an explicit list-decoding strategy from correlated agreement testers. Given a received word and the ability to test correlated agreement, the reduction generates candidate codewords by sampling from the agreement structure and validates them through consistency checking. The number of distinct candidates corresponds to the list size, while the error probability in correlated agreement testing translates to list-decoding failure probability. This establishes list-decoding capability as a necessary condition for proximity gaps, resolving the relationship between these properties.

**Theorem (Goyal-Guruswami, IACR 2025/2054)**: Folded Reed-Solomon codes of rate $\rho$ with folding parameter $m$ achieve proximity gaps up to radius $\theta = 1 - \rho - \eta$ (approaching capacity) with error parameter $\epsilon = \text{poly}(1/\eta)$ over fields of size $q = O(n)$ (linear in the block length). Similarly, Reed-Solomon codes with evaluation points chosen uniformly at random from a large field achieve capacity-approaching proximity gaps with high probability.

The proof introduces line-decodability: codes where restrictions to one-dimensional affine subspaces (lines) maintain large relative distance between distinct codewords. Folded RS codes inherit this property from the underlying univariate structure combined with extension field arithmetic. For proximity testing, the line-decodability property ensures that projections onto random lines preserve distance, enabling a reduction from high-dimensional proximity to univariate proximity that can be analyzed via classical bounds. Random Reed-Solomon codes satisfy line-decodability with high probability through union bound arguments over possible line configurations and codeword pairs.

**Theorem (Ben-Sasson et al., IACR 2025/2055 - Impossibility)**: For Reed-Solomon codes with relative distance $\delta$ and block length $n$, any proximity gap at radius $\theta \geq J(\delta)$ (at or beyond the Johnson bound) with proximity loss $\epsilon^* \leq 1/\text{poly}(n)$ requires at least $\Omega(n^{1.99})$ exceptional points. More generally, for any constant $\tau$ and radius $\theta \geq \delta - \Omega(1)$, achieving proximity loss $\epsilon^* \leq 1/\text{poly}(n)$ requires $n^\tau$ exceptional points.

The proof establishes that improved proximity gaps beyond the Johnson radius necessarily imply improved list-decoding radius bounds, which are known to be impossible without fundamental breakthroughs in algebraic list-decoding. The argument proceeds by contradiction: assuming proximity gap parameters better than specified, the reduction from previous work constructs a list decoder exceeding known list size lower bounds. Since these lower bounds are tight barring major advances, the proximity gap assumptions must be false. This creates a formal barrier linking proximity testing limitations to list-decoding complexity.
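A toy calculation makes the quantitative content of the Crites-Stewart reduction visible. The numbers below are hypothetical; the point is that the conjectured error form yields a polynomially bounded implied list size, which cannot coexist with super-polynomial list growth beyond the Johnson radius.

```python
import math

# Conjectured error form eps ~ n^2/q over a poly(n)-size field gives an
# implied list-size bound 1/eps that is polynomial in n; the reduction
# requires this bound to hold beyond the Johnson radius, where list sizes
# provably grow super-polynomially, hence the contradiction.
n = 2**20
for c in (3, 4):
    q = float(n) ** c
    eps = n**2 / q
    print(f"q = n^{c}: eps = {eps:.1e}, implied list bound 1/eps = {1/eps:.1e}")
```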
:::

The practical security implications for deployed STARK systems require careful analysis of soundness error as a function of protocol parameters. The FRI soundness error for $r$ rounds and $s$ query repetitions satisfies $\epsilon_{\text{sound}} \leq (1-\delta+\epsilon^*)^s + r \cdot \epsilon_{\text{proximity}}$, where $\delta$ represents the relative distance of the code, $\epsilon^*$ captures proximity loss, and $\epsilon_{\text{proximity}}$ bounds the per-round proximity gap error. The November 2025 results affect both $\epsilon^*$ (improved by Ben-Sasson et al. to $O(n)$ exceptional points instead of $O(n^2)$) and the achievable $\delta$ values (limited by Crites-Stewart to the Johnson bound for general codes, but extended by Goyal-Guruswami to capacity for special codes).

For concrete parameter selection following the Block et al. 2024 methodology, systems targeting $\lambda$-bit security must ensure $\epsilon_{\text{sound}} \leq 2^{-\lambda}$. Splitting the error budget between the commit phase and the query phase with $\epsilon_C \leq 2^{-(\lambda+1)}$ and $\epsilon_Q \leq 2^{-(\lambda+1)}$ allows independent optimization. The proximity parameter $m$ must satisfy $m \geq 3$ for list-decoding regime guarantees, yielding proximity threshold $\theta = 1 - \sqrt{\rho}(1 + 1/(2m))$. Query counts must satisfy $(\sqrt{\rho})^s \cdot (1 + 1/(2m))^s \leq 2^{-(\lambda+1)}$, while commit rounds follow from the recursive proximity testing analysis.

The field size requirements create a critical practical tradeoff. Small fields like the Goldilocks prime $p = 2^{64} - 2^{32} + 1$ enable fast arithmetic but suffer from proximity gap error terms scaling as $\epsilon \sim n^2/q$ in the list-decoding regime, potentially requiring degree-3 extensions to $192$-bit effective fields. Large prime fields exceeding $2^{128}$ achieve tighter proximity bounds with $\epsilon \sim n/q$ in the unique decoding regime, but arithmetic operations cost more in software implementations. The optimal choice depends on whether proving time (favoring small fields) or verification time (favoring tight security bounds) dominates the performance profile for the target application.

## Synthesis: security recommendations and forward research directions

The convergence of the TensorSwitch polynomial commitment innovation with the November 2025 proximity gap breakthroughs creates a watershed moment requiring both immediate practical responses and strategic research prioritization. Deployed STARK systems currently processing blockchain transactions must audit their parameters against the corrected Johnson-bound-limited proximity gap guarantees rather than capacity conjectures. Systems designed with $20$ to $30$-bit security margins remain acceptable risks for many applications, but high-value financial infrastructure should consider transitioning to provably secure parameters even at the cost of $2$x to $3$x proof size increases. The alternative of adopting folded Reed-Solomon or random Reed-Solomon constructions proven capacity-achieving by Goyal-Guruswami provides an optimization path, though practical implementation requires additional engineering effort to handle the modified code structures efficiently. The TensorSwitch paper's appearance concurrent with the proximity gap breakthroughs was particularly fortuitous, as the scheme's explicit dependence on "list-decoding and correlated agreement up to $\delta$" makes parameter selection immediately benefit from the clarified theoretical landscape.
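The query-count arithmetic from the guidelines above can be reproduced directly. The sketch assumes only the per-query factor $\sqrt{\rho}\,(1 + 1/(2m))$ and the $2^{-(\lambda+1)}$ query-phase budget stated earlier; the full Block et al. analysis contains additional terms omitted here.

```python
import math

def provable_queries(lam: int, rho: float, m: int) -> int:
    """Smallest s with (sqrt(rho) * (1 + 1/(2*m)))^s <= 2^-(lam+1)."""
    bits_per_query = -math.log2(math.sqrt(rho) * (1 + 1 / (2 * m)))
    return math.ceil((lam + 1) / bits_per_query)

# Conservative 128-bit parameters from the earlier guidelines:
print(provable_queries(128, 2**-5, m=3))   # 57, matching s ~ 57 above
print(provable_queries(128, 2**-6, m=3))   # 47 at rate 2^-6
```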
Systems implementing TensorSwitch should instantiate with folded Reed-Solomon codes where practical to achieve the near-optimal $(\lambda n)^{0.5+o(1)}$ proof size at capacity-level proximity parameters, or alternatively employ standard Reed-Solomon codes at Johnson-bound-limited parameters, accepting the increased query complexity of $\frac{1}{-\log(1-\delta^2)} \cdot \lambda$ where $\delta \leq 1-\sqrt{\rho}$ rather than approaching $1-\rho$. The field-agnostic and code-agnostic flexibility of TensorSwitch enables experimentation with different code families to optimize the concrete efficiency versus security guarantee tradeoff for specific deployment contexts.

The distributed proving and PCD applications enabled by TensorSwitch's sublinear accumulator property represent particularly compelling use cases where the November 2025 results provide immediate practical value. Distributed proving systems require multiple provers to contribute partial results that a coordinator accumulates into a final proof, with the accumulator size determining communication overhead and coordinator workload. TensorSwitch's sublinear accumulation means that combining $k$ proofs requires accumulator state growing as $o(k)$, enabling practical distributed proving at scales previously infeasible. The capacity-achieving folded Reed-Solomon constructions allow these systems to operate at optimal code rates while maintaining provable security, maximizing the practical efficiency gains from distribution.

:::info
**Implementation Recommendations for Production Systems**

New deployments should begin with provably secure parameters using standard Reed-Solomon codes at Johnson bound proximity, field extensions providing $192$ total bits, code rates $\rho = 2^{-5}$ to $2^{-6}$, and query complexity derived from soundness error budgets. Systems should implement parameter versioning allowing future upgrades to folded Reed-Solomon constructions once optimized implementations become available. Existing systems should add explicit documentation of security assumptions, implement $20+$ bit safety margins through increased grinding or additional queries, and establish monitoring for cryptanalytic progress that might further refine understood limits.

Critical research directions include developing efficient encoders and proximity testers for folded Reed-Solomon codes achieving the Goyal-Guruswami capacity-level proximity gaps, derandomizing the random Reed-Solomon construction to obtain explicit evaluation point selection achieving optimal properties, extending line-decodability analysis to identify broader code families with capacity-approaching proximity testing, investigating whether improved list-decoding algorithms beyond the Johnson bound could enable proximity gaps exceeding current limits, and formalizing the right-extension property for additional code families to expand TensorSwitch's code-agnostic applicability.
:::

The interaction between code rate regimes and security properties creates a rich optimization space. Constant-rate codes with $\rho = \Theta(1)$ provide better prover efficiency through smaller evaluation domains but require careful proximity parameter selection to maintain adequate distance. Vanishing-rate codes approaching $\rho \to 0$ trivially achieve large distance but sacrifice encoding efficiency.
The sweet spot for most STARK applications lies in the range $\rho = 1/8$ to $\rho = 1/32$, where blowup factors of $8$x to $32$x balance proof size against proving time, with the November 2025 results providing tight bounds on achievable proximity testing efficiency throughout this range. TensorSwitch's concrete efficiency of roughly $6n$ multiplications for prover time positions it as particularly competitive in this parameter regime.

The broader implications for zero-knowledge proof system design extend beyond parameter selection to architectural choices. The distinction between interactive oracle proofs compiled via Fiat-Shamir and natively non-interactive constructions becomes less critical as IOP analysis matures, with TensorSwitch's interactive oracle commitment formalization providing a unified framework spanning both approaches. The choice between multilinear polynomial commitments (like TensorSwitch) and univariate commitments (like original FRI) depends on the structure of the computations being proven, with multilinear approaches naturally matching boolean circuit representations while univariate methods align with algebraic intermediate representations. The November 2025 results apply uniformly across these architectural variations since both rely on underlying Reed-Solomon proximity testing.

Table: Comparison of Security Properties Across November 2025 Breakthrough Papers

| **Paper** | **Main Positive Result** | **Main Negative Result** | **Proof Technique** | **Implications** |
|-----------|-------------------------|-------------------------|---------------------|------------------|
| Crites-Stewart 2025/2046 | Reduced proximity gaps to list-decoding | Disproved capacity conjectures for general RS codes | Reduction showing correlated agreement implies list-decodability | Deployed systems require parameter audits |
| Goyal-Guruswami 2025/2054 | Folded/random RS codes achieve capacity proximity gaps | N/A (purely positive) | Line-decodability abstraction with subspace-design property | Provides optimization path for future systems |
| Ben-Sasson et al. 2025/2055 | Improved proximity loss to $O(n)$ exceptions at Johnson bound | Proximity beyond Johnson requires $\Omega(n^{1.99})$ exceptions | Refined Berlekamp-Welch and Guruswami-Sudan analysis | Explains fundamental barrier and guides future research |

The economic implications for blockchain Layer 2 scaling solutions prove substantial. Ethereum L1 verification costs for STARK proofs range from $200{,}000$ to $500{,}000$ gas per proof, with proof size directly impacting calldata costs. A $33\%$ reduction in proof size from improved proximity gap analysis translates to approximately $200{,}000$ gas savings per proof. At current Ethereum prices and gas costs, high-throughput systems generating $100{,}000$ proofs daily could save approximately $\$3$ million annually through tighter security analysis enabling smaller proofs. The Goyal-Guruswami capacity-achieving constructions could potentially double these savings if efficiently implementable, creating strong economic incentives for continued research and engineering optimization. TensorSwitch's sublinear proof size of $(\lambda n)^{0.5+o(1)}$ compared to FRI's $O(\lambda \log n)$ represents even larger potential savings for applications with sufficiently large polynomials.
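As a sanity check on the annual-savings figure, the arithmetic is reproduced below. The gas price and ETH price are hypothetical inputs chosen to illustrate the claim, not values taken from the source; actual savings scale linearly in both.

```python
# Back-of-envelope check of the ~$3M/year figure quoted above.
GAS_SAVED_PER_PROOF = 200_000
PROOFS_PER_DAY = 100_000
GWEI = 1e-9  # ETH per gwei

def annual_savings_usd(gas_price_gwei: float, eth_usd: float) -> float:
    eth_per_day = GAS_SAVED_PER_PROOF * PROOFS_PER_DAY * gas_price_gwei * GWEI
    return eth_per_day * 365 * eth_usd

# A gas price of ~0.14 gwei at a hypothetical $3,000/ETH reproduces roughly
# $3M/year; congested periods with higher base fees would yield far more.
print(f"${annual_savings_usd(0.14, 3_000):,.0f}")   # ~$3.1M
```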
## Conclusion: the transformed landscape of algebraic proof systems

The November 2025 confluence of the TensorSwitch polynomial commitment scheme with three breakthrough papers resolving Reed-Solomon proximity gap conjectures represents a fundamental phase transition in our understanding of algebraic proof system capabilities and limitations. The simultaneous disproof of capacity conjectures for general codes, proof of capacity achievement for special constructions, and refined analysis of fundamental barriers creates a complete picture that had been absent throughout the previous half-decade of STARK development. The transformation from uncertain conjectured parameters to provable bounds, with known tight constructions achieving optimal tradeoffs, removes a major source of security ambiguity while providing clear engineering guidance for next-generation implementations.

The reduction from proximity gaps to list-decoding established by Crites-Stewart resolves a natural theoretical question while explaining the barrier that the capacity conjectures encountered. The connection between these properties had been suspected but not formally established, and the proof technique creates a template for analyzing relationships between other code properties relevant to cryptographic applications. The line-decodability abstraction introduced by Goyal-Guruswami similarly provides a unifying framework for understanding proximity gaps across different code families, suggesting that systematic investigation of this property could identify additional capacity-achieving constructions beyond folded and random Reed-Solomon codes. The right-extension formalism from TensorSwitch complements these theoretical tools by providing a mathematical framework for analyzing code composition and switching operations.

The immediate priority for production systems involves parameter audits ensuring claimed security levels remain valid under the corrected Johnson-bound-limited analysis for general Reed-Solomon codes. Systems with adequate safety margins can continue operating while planning transitions to provably secure parameters or optimized folded Reed-Solomon implementations. High-value applications should accelerate deployment of provable security parameters, accepting the practical efficiency costs, while research-stage systems can more aggressively pursue cutting-edge constructions leveraging the Goyal-Guruswami capacity-achieving codes. The diversity of deployment contexts, from blockchain infrastructure to secure computation to verifiable machine learning, creates varied risk-efficiency tradeoffs that the November 2025 results help navigate with unprecedented clarity.

The TensorSwitch contribution extends beyond its specific polynomial commitment construction to demonstrate how tensor product structures can achieve near-optimal complexity bounds across multiple performance dimensions simultaneously. The $(\lambda n)^{0.5+o(1)}$ proof size represents a qualitative improvement over previous schemes, while the $O(\log \log n)$ oracle complexity shows that careful protocol design can achieve polylogarithmic costs in critical parameters. The concrete efficiency claims of roughly one encoding plus $3n$ multiplications for commitment and $6n$ multiplications for proving, without extension field commitments, suggest that practical implementations can approach these theoretical bounds.
The requirement for codes with combined list-decoding and correlated agreement properties creates natural synergy with the November 2025 proximity gap advances, as these properties are precisely what the breakthrough papers characterize and optimize.

Looking forward, the field faces clear research priorities: efficient implementations of folded Reed-Solomon proximity testing achieving capacity-level parameters, derandomization of random Reed-Solomon constructions for explicit capacity-achieving codes, investigation of line-decodability for broader code families, exploration of whether fundamentally new list-decoding algorithms could eventually surpass Johnson bound limitations, formalization and application of the right-extension property to additional code families, and development of distributed proving systems leveraging TensorSwitch's sublinear accumulator for practical scalability. The transformation from conjectured to proven proximity gap bounds removes uncertainty while revealing the true optimization landscape, enabling systematic engineering efforts to achieve theoretically optimal performance in practical deployments. The November 2025 breakthroughs thus represent not an endpoint but the foundation for a new generation of algebraic proof systems with provable security guarantees, optimal efficiency tradeoffs, and expanded application domains ranging from blockchain scaling to verifiable AI inference.

---

**Figure Placements and Visualization Instructions**

Figure 1: Generate matplotlib visualization of the proximity gap parameter space showing three regions: unique decoding regime ($0$ to $\delta/2$) with error $\epsilon_U = n/q$, list-decoding regime ($\delta/2$ to Johnson bound $1-\sqrt{\rho}$) with error $\epsilon_J = O(n^2/q)$, and capacity regime (Johnson bound to $1-\rho$) marked as "impossible for general RS codes" but "achievable for folded RS" based on the November 2025 results.

Figure 2: Screenshot from page 1 of source paper 2025/2065 showing the TensorSwitch main theorem statement with complexity bounds for commitment time, opening time, query complexity, and proof size as functions of code parameters $\rho$, $\delta$, $\tau$.

Figure 3: Generate comparison chart showing proof size versus security level for different polynomial commitment schemes: KZG (constant line), FRI (logarithmic), BaseFold (log-squared), TensorSwitch (square-root), with November 2025 parameter regimes annotated. Include concrete efficiency annotation showing TensorSwitch's $3n$ commitment and $6n$ prover multiplication counts.

Figure 4: Timeline diagram of the November 2025 breakthrough papers: November 5 (Crites-Stewart disproof), November 6 (Goyal-Guruswami capacity achievement, Ben-Sasson et al. refined bounds), November 8 (TensorSwitch publication), showing concurrent development and interconnections. Include annotation that the TensorSwitch work was conducted at Succinct Labs and the Simons Institute.

Figure 5: Generate matplotlib heatmap showing soundness error as a function of code rate $\rho$ and proximity parameter $\delta$ for the FRI protocol, with the Johnson bound curve and capacity bound curve marked, and regions colored by achievability based on the November 2025 results.

Figure 6: Screenshot from Ben-Sasson et al. 2025/2055 showing the improved proximity gap theorem statement with $O(n)$ exceptional points for the Johnson radius compared to the previous $O(n^2)$ bound.
Figure 7: Generate decision tree diagram for parameter selection: starts with security target $\lambda$, branches on code family choice (general RS vs folded RS vs random RS), shows different achievable proximity thresholds and resulting query complexity formulas. Include a TensorSwitch-specific branch showing field-agnostic and code-agnostic flexibility.

Figure 8: Screenshot from Goyal-Guruswami 2025/2054 showing the line-decodability definition and main theorem achieving capacity-level proximity gaps with linear field size requirement.

Figure 9: Generate architectural diagram showing TensorSwitch's applications: polynomial commitment at center, with arrows pointing to PCD/recursion (noting WARP-matching efficiency), distributed proving (noting the sublinear accumulator), and sparse data commitment use cases.

---

**Acknowledgments**

Special thanks to the authors of TensorSwitch (Benedikt Bünz, Giacomo Fenzi, Ron Rothblum, and Wenhao Wang) for their public communications clarifying the concrete efficiency and application scope of their construction. The work was conducted primarily at Succinct Labs and the Simons Institute, exemplifying the productive intersection of industry and academic cryptographic research.