# Spartan

[Spartan](https://eprint.iacr.org/2019/550.pdf) is a sum-check-based zkSNARK with an extremely efficient prover. It also features several unique properties that are particularly relevant for client-side proving in applications such as identity and other use cases where zero-knowledge is essential. Here are some highlights:

- Spartan can be instantiated with any multilinear polynomial commitment scheme (e.g., Binius, HyperKZG, Samaritan, WHIR, Mercury, BaseFold, PST13, Dory, Hyrax). Depending on the polynomial commitment scheme used, one can achieve different properties (e.g., small fields or big fields, post-quantum or pre-quantum security, transparent or universal setup, hash-based or curve-based, binary fields or prime fields).
- Spartan is flexible with respect to arithmetization: it can support R1CS, Plonkish, AIR, and their generalization CCS. The Spartan protocol itself internally uses lookup arguments, so one can additionally prove lookup constraints with Spartan.
- The prover's work naturally splits into a witness-dependent part and a witness-independent part (a significant chunk, up to 90%, of the prover's work is incurred in the witness-independent part). The latter part can be offloaded to any untrusted entity without violating zero-knowledge. Note that such a clean decomposition between the witness-dependent and witness-independent parts is not featured by other popular zkSNARKs (e.g., Groth16, Plonk, HyperPlonk, Honk).
- The witness-dependent work of the Spartan prover has been shown to be MPC-friendly by more recent works, allowing the whole Spartan prover to be delegated.
- For uniform constraint systems, Spartan's prover can be optimized further by eliminating the witness-independent work of the prover, which constitutes about 90% of the prover's work.

Despite several papers describing Spartan and follow-up works, there is a lack of understanding of the Spartan protocol, even among experts. The primary goal of this document is to provide a gentle introduction to the Spartan protocol. While we aim for precision, we may occasionally abuse notation for the sake of clarity and accessibility.

# Adoption and use-cases

Spartan has been adopted in various settings (proving machine executions aka zkVMs, stand-alone proof systems, coSNARKs, folding schemes, etc.). Due to its lightweight prover and its unique ability to offload most of the prover's work to an untrusted entity *without* sacrificing zero-knowledge (which we detail below), World is planning to use Spartan as its proving scheme (see [this tweet](https://x.com/DCbuild3r/status/1886368751479644578) for details). Other [ongoing efforts](https://github.com/AntoineF4C5/Nova/tree/delegated-spartan) are also exploring how to leverage Spartan's delegated proving properties.

Spartan is also used as a proof system to compress proofs produced by folding schemes, after many folding steps have been performed (see [Nova](https://eprint.iacr.org/2021/370.pdf) and [MicroNova](https://eprint.iacr.org/2024/2099.pdf) for details). Subsequently, an early-stopping Spartan led to the creation of a more efficient folding scheme, called [HyperNova](https://eprint.iacr.org/2023/573.pdf). An early-stopping HyperNova, called [NeutronNova](https://eprint.iacr.org/2024/1606.pdf), was also recently introduced. This is currently the most efficient folding scheme, especially in the group setting. Prior to NeutronNova, folding schemes had one or more downsides.
NeutronNova obtains the best properties of those prior schemes while addressing their downsides (see [this thread](https://x.com/srinathtv/status/1910787412365890043) for details). Additionally, HyperNova has been adapted to the lattice setting to build post-quantum secure folding schemes, including [LatticeFold](https://eprint.iacr.org/2024/257.pdf), [LatticeFold+](https://eprint.iacr.org/2025/247.pdf), and [Neo](https://eprint.iacr.org/2025/294.pdf). These schemes have garnered significant interest for powering next-generation zkVMs.

Spartan powers all components of Jolt, a state-of-the-art zkVM, with Spartan's [original implementation](https://github.com/Microsoft/Spartan) providing a starting point for Jolt's implementation. Specifically, Jolt uses Spartan to prove R1CS constraints. Additionally, an internal component of Spartan, called Spark, was refactored into a lookup argument for gigantic structured tables (see [Lasso](https://eprint.iacr.org/2023/1216.pdf) for details). This component is used to prove instruction execution. Furthermore, a predecessor of Spark, called [Spice](https://eprint.iacr.org/2018/907.pdf), is used to prove read-write memory operations and register operations.

Due to its lightweight prover and its flexibility with commitment schemes and fields, Spartan excels at proving that someone holds a valid identity document (e.g., passport, JWT, driver's license). Indeed, Spartan is used by [several projects](https://eprint.iacr.org/2025/538.pdf) to prove the validity of ECDSA signatures and hash functions. See also the earlier [Spartan-ecdsa](https://github.com/personaelabs/spartan-ecdsa) project and an even earlier [standardization effort](https://github.com/decentralized-identity/crypto-wg/blob/main/work_items/spartan_zkSNARK_signatures.md).

## Unique properties of Spartan

A unique property of the Spartan protocol is that it naturally splits into two parts:

1. The core protocol, which involves the witness.
2. The Spark protocol, which proves evaluations of the sparse polynomials encoding the constraints/circuit.

For the purpose of zero-knowledge, only the first part needs to be run by the prover holding the secret witness. The second part can be outsourced to any untrusted entity. Note that the untrusted entity can prove a batch of claims about the same sparse polynomial much more efficiently than proving them one by one: the untrusted party first performs an $n$-to-$1$ reduction and then runs the Spark protocol only once (we will unpack details of this in another post). The untrusted entity can also produce a single proof establishing the correctness of multiple Spartan proofs using another proof system (e.g., MicroSpartan, Groth16).

Note that this type of decomposition of the prover's work into a witness-dependent part and a witness-independent part is not offered by other proof systems such as Plonk or Groth16. HyperPlonk adopts Spartan's techniques but closely follows Plonk's blueprint, so it also does not have this feature.

Spartan's witness-dependent core protocol involves committing to the witness, two sum-check invocations, and proving an evaluation of the witness polynomial at a single random point. This is extremely lightweight compared to many other zkSNARK protocols (e.g., Plonk, Groth16). [Recent work](https://eprint.iacr.org/2025/296.pdf) demonstrates that Spartan's core protocol is MPC-friendly, allowing the creation of coSNARKs powered by Spartan.
Since the Spark protocol does not need to be run via the MPC protocol, the whole Spartan protocol can be used in the coSNARK setting.

### Uniform constraint systems

Spartan can be optimized further, reducing reliance on the Spark protocol, if the constraint system being proven is structured (e.g., repeated copies of the same sub-constraint system). Instead of offloading the Spark prover, which is responsible for proving evaluations of the sparse polynomials describing the constraint system, to an untrusted entity, the verifier can evaluate the required sparse polynomials on its own. For some constraint systems (e.g., uniform constraints like AIR), the cost to evaluate the required sparse polynomials is logarithmic in the size of the circuit, so the Spark prover can be omitted entirely for uniform circuits. The [SuperSpartan paper](https://eprint.iacr.org/2023/552.pdf) describes how to quickly evaluate these sparse polynomials by leveraging uniform structure.

Another example is [Phalanx](https://eprint.iacr.org/2021/1263.pdf), which considers the case of "R1CS that encodes an iterated sequential computation". It introduces three sequential sum-checks (instead of Spartan's two sum-checks), where the outermost sum-check is used to fold $n$ copies of the same R1CS into a single copy. Then, the core Spartan protocol is applied to a single, small copy of the R1CS. Jolt also applies Spartan to prove an iterated R1CS, but each iteration is tiny (fewer than 100 constraints). For this, Jolt uses two sequential sum-check invocations and optimizes the second sum-check to leverage the repeated structure (see this [link](https://github.com/a16z/jolt/pull/530) for details).

## Arithmetization

For simplicity, this document focuses on R1CS. Let the constraint system be defined over a finite field $\mathbb{F}$, with:

- Three sparse matrices $A, B, C \in \mathbb{F}^{m \times m}$.

An instance-witness pair is defined with:

- A public input/output vector $x \in \mathbb{F}^{n_x}$.
- A witness vector $w \in \mathbb{F}^{n_w}$.

Without loss of generality, $n_w = n_x + 1$ (we use this below). Define:

$$
z := (w, 1, x) \in \mathbb{F}^m, \quad \text{with } m = n_w + 1 + n_x.
$$

A valid assignment satisfies:

$$
Az \circ Bz = Cz,
$$

where $\circ$ denotes element-wise multiplication.

## Preprocessing / Computation commitment / Holography

Each matrix $M \in \{A, B, C\}$ is stored in sparse format. Let $N$ be the number of non-zero entries in $M$. Encode:

- $\text{row}_M \in [m]^N$: row indices of non-zero entries,
- $\text{col}_M \in [m]^N$: column indices,
- $\text{val}_M \in \mathbb{F}^N$: corresponding values.

This representation enables efficient sparse matrix-vector products and serves as the foundation for Spartan's internal **Spark** protocol.
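To make the arithmetization and the sparse encoding concrete, here is a minimal sketch in Python (the toy prime `P`, the toy instance, and all function names are illustrative and not taken from any Spartan implementation). It builds $z = (w, 1, x)$ and checks $Az \circ Bz = Cz$ directly from the $(\text{row}, \text{col}, \text{val})$ triples:

```python
P = 2**61 - 1  # toy prime field; real deployments use the field of the chosen
               # polynomial commitment scheme

def sparse_mat_vec(entries, z, m):
    """Multiply a sparse matrix, given as (row, col, val) triples, by the vector z."""
    out = [0] * m
    for row, col, val in entries:
        out[row] = (out[row] + val * z[col]) % P
    return out

def r1cs_is_satisfied(A, B, C, x, w):
    """Check Az o Bz = Cz for z = (w, 1, x)."""
    z = [wi % P for wi in w] + [1] + [xi % P for xi in x]
    m = len(z)  # m = n_w + 1 + n_x
    a = sparse_mat_vec(A, z, m)
    b = sparse_mat_vec(B, z, m)
    c = sparse_mat_vec(C, z, m)
    return all((ai * bi - ci) % P == 0 for ai, bi, ci in zip(a, b, c))

# Toy instance: one constraint enforcing w[0] * w[0] = x[0].
# With n_x = 1 and n_w = n_x + 1 = 2 (the second witness entry is padding),
# z = (w[0], w[1], 1, x[0]) has indices 0, 1, 2, 3.
A = [(0, 0, 1)]   # row 0 of A selects w[0]
B = [(0, 0, 1)]   # row 0 of B selects w[0]
C = [(0, 3, 1)]   # row 0 of C selects x[0]
assert r1cs_is_satisfied(A, B, C, x=[49], w=[7, 0])
```

A real prover never materializes $A$, $B$, $C$ densely; the sparse triples are exactly what the preprocessing step commits to and what Spark later reasons about.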
### Spark, Lookup Tables, and Auxiliary Material

Spartan internally invokes **Spark**, which relies on **lookup arguments** to check the correctness of sparse polynomial evaluations (e.g., matrix-vector products). These arguments treat $\text{row}_M$ and $\text{col}_M$ as address queries into structured lookup tables.

To enable the lookup argument, additional auxiliary material is included during preprocessing. The exact material depends on the choice of lookup protocol:

- **Classic Spartan** (as described in the original paper) uses a variant of **offline memory checking** from Blum et al. (as used in Spice), where the auxiliary material consists of timestamps/counters.
- **Alternative lookup protocols** can also be used:
  - **LogUp**: a lookup argument that is less efficient for the prover; its auxiliary material is similar to Spice's. It is less efficient because it requires the prover to send oracles during the lookup protocol, whereas the protocols used in the original Spartan and in Shout (introduced next) do not. [LogSpartan](https://eprint.iacr.org/2025/419.pdf) and [DFS](https://eprint.iacr.org/2025/296.pdf) are both Spartan variants that use LogUp in Spark.
  - **Shout**: a new lookup argument designed for better prover efficiency. A version of Spartan using Shout as the lookup argument was recently described as Spartan++ and analyzed in [this paper](https://eprint.iacr.org/2025/105.pdf).

These options provide flexibility in instantiating Spartan with different trade-offs in prover time, verifier complexity, and proof size.

## Proving protocol

We describe the protocol in the **polynomial IOP model**, where the prover sends its polynomials as messages and the verifier is allowed to request an evaluation of any such polynomial at any point in the domain. The verifier can also query any polynomial from the preprocessing step.

Such polynomial IOPs are compiled into **succinct arguments** by using a **polynomial commitment scheme**: instead of sending a polynomial, the prover sends a succinct commitment to it. When the verifier queries the polynomial, the prover sends an evaluation along with a proof that the evaluation is consistent with the earlier commitment. This compilation leads to the Spartan **succinct interactive argument**, which can then be turned into a **zkSNARK** using the Fiat-Shamir transformation and a zero-knowledge transformation.

At its core, Spartan relies on the sum-check protocol. The sum-check protocol is an interactive proof that allows the prover and the verifier to reduce the task of checking whether $T = \sum_{x \in [m]} g(x)$, where $g$ is a polynomial over variables $x$, to the task of checking whether $e_g = g(r_x)$, where $e_g$ and $r_x$ are chosen over the course of the sum-check protocol. In essence, the sum-check protocol reduces checking a sum of evaluations of $g$ to a single evaluation of $g$ at a random point.
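To give a feel for this reduction, here is a minimal sketch of the sum-check protocol for the simplest case where $g$ is multilinear and is given by its table of evaluations over $\{0, 1\}^s$. (In Spartan's sum-checks, $g$ has degree up to 3 per variable, so each round message is a slightly larger univariate polynomial, but the structure is identical.) The toy prime and names are illustrative, and the prover and verifier are merged into one procedure for brevity:

```python
import random

P = 2**61 - 1  # toy prime field; illustrative only

def sumcheck_multilinear(evals):
    """Sum-check for a multilinear g given by its 2^s evaluations over {0,1}^s.
    Prover and verifier are merged here; the verifier's steps are marked."""
    s = len(evals).bit_length() - 1
    assert len(evals) == 1 << s
    table = [e % P for e in evals]
    claim = sum(table) % P                    # T: the sum the prover claims
    r = []
    for _ in range(s):
        half = len(table) // 2
        # Prover: the round polynomial is linear, so its values at X = 0 and
        # X = 1 determine it completely.
        g0 = sum(table[:half]) % P            # current variable fixed to 0
        g1 = sum(table[half:]) % P            # current variable fixed to 1
        assert (g0 + g1) % P == claim         # Verifier: consistency check
        r_i = random.randrange(P)             # Verifier: random challenge
        r.append(r_i)
        # Both: fold the table by fixing the current variable to r_i,
        # and update the running claim to g_i(r_i).
        table = [(table[j] + r_i * (table[half + j] - table[j])) % P
                 for j in range(half)]
        claim = (g0 + r_i * (g1 - g0)) % P
    # Verifier: a single oracle query to g at the random point r.
    assert table[0] == claim
    return claim, r                           # e_g and r_x in the text's notation

# Example: the claimed sum of the multilinear extension of (3, 1, 4, 1) is 9.
e_g, r_x = sumcheck_multilinear([3, 1, 4, 1])
```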
### (1) The core protocol (witness dependent)

We now describe Spartan's core protocol. This protocol was originally described in Section 5.1 of the [Spartan paper](https://eprint.iacr.org/2019/550.pdf). Additional descriptions of the core protocol can be found in Section 3 of [Brakedown](https://eprint.iacr.org/2021/1043.pdf) and Figure 2 of [SuperSpartan](https://eprint.iacr.org/2023/552.pdf).

Let $s = \log{m}$. When we write $[m]$, we mean the set of bit strings that represent the numbers from $0$ to $m - 1$, i.e., $\{0, 1\}^{s}$. For simplicity, we will assume a natural mapping from vectors to multilinear extension polynomials.

1. **The prover sends** the purported witness polynomial $w$. (As noted above, when transformed to a SNARK, this will be a commitment to the polynomial.)

2. **The verifier picks** $\tau \in \mathbb{F}^s$ and sends it to the prover.

3. Let $a = Az$, $b = Bz$, $c = Cz$ (i.e., $a, b, c$ are computed with matrix-vector products). The prover and the verifier run the **sum-check protocol** to check if:
   $$
   0 = \sum_{x \in [m]} g(x), \quad \text{where } g(x) = eq(\tau, x) \cdot (a(x) \cdot b(x) - c(x)).
   $$
   The sum-check protocol reduces this to checking if:
   $$
   e_g = g(r_x),
   $$
   where $e_g$ and $r_x$ are chosen over the course of the protocol.

4. **The prover sends** $v_A, v_B, v_C$, which are purported to equal $a(r_x), b(r_x), c(r_x)$.

5. **The verifier rejects if the following does not hold**:
   $$
   e_g = eq(\tau, r_x) \cdot (v_A \cdot v_B - v_C).
   $$
   The verifier can compute $eq(\tau, r_x)$ on its own in logarithmic time.

6. **The prover now needs to establish the correctness of** $v_A$, $v_B$, and $v_C$. The verifier picks a random value $\rho \in \mathbb{F}$ and sends it to the prover.

7. **The verifier and the prover run the second sum-check** to check if the following holds:
   $$
   T = \sum_{y \in [m]} h(y),
   $$
   where:
   $$
   T = v_A + \rho \cdot v_B + \rho^2 \cdot v_C,
   $$
   $$
   h(y) = \left(A(r_x, y) + \rho \cdot B(r_x, y) + \rho^2 \cdot C(r_x, y)\right) \cdot z(y).
   $$
   This reduces to checking if $e_h = h(r_y)$, where $e_h$ and $r_y$ are chosen over the course of the sum-check protocol.

8. **To facilitate the above check, the prover sends**:
   - $e_w = w(r_y[1..])$, where $r_y[1..]$ refers to the slice of the vector $r_y$ that excludes the first entry.
   - $e_A = A(r_x, r_y)$
   - $e_B = B(r_x, r_y)$
   - $e_C = C(r_x, r_y)$

9. **The verifier computes**:
   $$
   e_z = (1 - r_y[0]) \cdot e_w + r_y[0] \cdot (1, x)(r_y[1..]),
   $$
   and checks:
   $$
   e_h = (e_A + \rho \cdot e_B + \rho^2 \cdot e_C) \cdot e_z.
   $$
   If the check fails, the verifier **rejects**.

10. **Since the verifier does not have oracle access to** $A, B, C$ — only their sparse representations — we are left with checking the correctness of $e_A, e_B, e_C$. This is done by invoking the **Spark** protocol for each of them. (Note that in the compiled zkSNARK, $e_w$ is checked by having the prover send a proof that the evaluation is consistent with the commitment sent in the first step.)

### (2) The Spark protocol (witness independent)

We now describe the Spark protocol, which lets us complete the last step of the core protocol. This protocol was first described in Section 7.2.2 of the [Spartan paper](https://eprint.iacr.org/2019/550.pdf). Additional descriptions can be found in Section 6 of [Brakedown](https://eprint.iacr.org/2021/1043.pdf) and Section 4 of [Lasso](https://eprint.iacr.org/2023/1216.pdf#page=15.55). We will focus on describing the case of one sparse polynomial evaluation, but the protocol naturally generalizes to prove a batch of evaluation claims at once.

For simplicity, we focus on encoding the sparse matrix with two "address" vectors ($\text{row}$ and $\text{col}$). As noted in the Lasso paper, one can generalize this to any number of vectors. For example, one can further split each of these into two vectors, resulting in a total of four vectors. In this particular example, the main benefit is that the size of the tables on which the lookups are made goes down from linear in the number of constraints to its square root. Lev Soukhanov observes that in that setting, one can commit to the oracles in the first step of Spark much more efficiently than when only two address vectors are used.

Each matrix $M \in \{A, B, C\}$ is represented using:

- $\text{row}_M \in [m]^N$
- $\text{col}_M \in [m]^N$
- $\text{val}_M \in \mathbb{F}^N$

Recall that the evaluation of the multilinear extension of $M$ at $(r_x, r_y)$ can be written as:

$$
M(r_x, r_y) = \sum_{z \in [N]} eq(r_x, \text{row}_M[z]) \cdot eq(r_y, \text{col}_M[z]) \cdot \text{val}_M[z].
$$

Let us construct two structured tables, $T_1, T_2 \in \mathbb{F}^m$, with entries as follows (see the sketch after this list for how they are used):

- $T_1(i) = eq(r_x, i)$ for all $i \in [m]$
- $T_2(i) = eq(r_y, i)$ for all $i \in [m]$
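As a concrete illustration (again a minimal sketch with a toy prime and illustrative names, not any particular implementation), each table can be built with $O(m)$ field operations, after which $M(r_x, r_y)$ is a single pass over the $N$ non-zero entries:

```python
P = 2**61 - 1  # toy prime field; illustrative only

def eq_table(r):
    """All m = 2^s values eq(r, i) for i in {0,1}^s, built in O(m) field ops.
    The first entry of r corresponds to the most significant bit of i."""
    table = [1]
    for r_j in r:
        table = [v for t in table
                   for v in ((t * (1 - r_j)) % P, (t * r_j) % P)]
    return table

def sparse_eval(rows, cols, vals, r_x, r_y):
    """Evaluate M(r_x, r_y) from the sparse (row, col, val) representation."""
    T1 = eq_table(r_x)  # T1[i] = eq(r_x, i)
    T2 = eq_table(r_y)  # T2[i] = eq(r_y, i)
    # E1(z) = T1[row[z]] and E2(z) = T2[col[z]] are exactly the oracles the
    # Spark prover sends in step 1 below; here we only need their product
    # with val[z], summed over the N non-zero entries.
    return sum(T1[i] * T2[j] * v for i, j, v in zip(rows, cols, vals)) % P

# Sanity check: at Boolean points, the evaluation recovers a matrix entry.
rows, cols, vals = [0, 2], [1, 3], [5, 7]   # a 4x4 matrix with two non-zero entries
assert sparse_eval(rows, cols, vals, [0, 0], [0, 1]) == 5   # M[0][1]
assert sparse_eval(rows, cols, vals, [1, 0], [1, 1]) == 7   # M[2][3]
```

Spark exists because a succinct verifier cannot afford this $O(m + N)$ computation itself: the prover performs it and then proves, via the steps below, that the claimed oracles $E_1, E_2$ were indeed derived from $T_1, T_2$ at the committed addresses.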
The prover and the verifier check $e_M = M(r_x, r_y)$ for each $M \in \{A, B, C\}$ as follows.

1. **The prover sends oracles, constructed by making lookups into the tables:**
   - $E_1(z) = T_1[\text{row}_M[z]]$
   - $E_2(z) = T_2[\text{col}_M[z]]$

   for all $z \in [N]$.

2. **They run a sum-check** to verify:
   $$
   e_M = \sum_{z \in [N]} f(z), \quad \text{where } f(z) = E_1(z) \cdot E_2(z) \cdot \text{val}_M(z).
   $$
   This reduces to checking if:
   $$
   e_f = f(r_z),
   $$
   where $e_f$ and $r_z$ are chosen over the course of the sum-check protocol.

3. **The prover sends:**
   - $v_1 = E_1(r_z)$
   - $v_2 = E_2(r_z)$
   - $v_3 = \text{val}_M(r_z)$

4. **The verifier checks:**
   $$
   e_f = v_1 \cdot v_2 \cdot v_3,
   $$
   and rejects if this fails.

5. **The verifier now checks the correctness of** $E_1$ and $E_2$ by using a **lookup argument**. Note that the addresses into the lookup tables come from the preprocessing phase: $\text{row}_M$ and $\text{col}_M$. Most lookup arguments support batch proving, so the correctness of both $E_1$ and $E_2$ can be proven at once.

## Additional resources

* [Spartan](https://eprint.iacr.org/2019/550.pdf) describes the Spartan protocol, including Spark.
* [Brakedown](https://eprint.iacr.org/2021/1043.pdf) provides a description of Spartan as a polynomial IOP.
* [SuperSpartan](https://eprint.iacr.org/2023/552.pdf) shows that Spartan supports CCS, which generalizes Plonkish, R1CS, and AIR.
* [Lasso](https://eprint.iacr.org/2023/1216.pdf) generalizes Spark.
* [MicroSpartan](https://eprint.iacr.org/2024/2099.pdf) describes a variant of Spartan with minimal proof sizes; it internally uses LogUp.
* [Shout](https://eprint.iacr.org/2025/105.pdf) describes the Shout lookup argument, as well as variants of Spartan called SpeedySpartan and Spartan++.
* [Thaler's book](https://people.cs.georgetown.edu/jthaler/ProofsArgsAndZK.html) covers foundations, including details of the sum-check protocol, commitment schemes, Spartan, and Spark.