Spartan: Understanding Transparent SNARKs Without Trusted Setup

February 25, 2026

Introduction

Spartan is a transparent proof system that achieves something previously thought to require tradeoffs: no trusted setup (no toxic waste), general-purpose (works for any NP statement via R1CS), and sub-linear verification (verifier doesn’t read the entire statement). Zero-knowledge can be added on top.

Before Spartan (2019), you had to pick your poison:

  • Groth16: Fast verification, tiny proofs, but requires a trusted setup ceremony per circuit
  • Bulletproofs: Transparent, but verification scales linearly with circuit size
  • STARKs: Transparent and sub-linear verification, but large proofs and use a different constraint model (AIR, not R1CS)

Spartan breaks this tradeoff by combining two ingredients:

  1. Sum-check protocol applied directly to R1CS constraints
  2. Polynomial commitments to achieve succinctness

The result is the first transparent SNARK with sub-linear verification for arbitrary R1CS circuits.

This article explains how Spartan works. We’ll start with the basic protocol where the verifier runs in linear time, then show how polynomial commitments compress this into a SNARK with sub-linear verification. Zero-knowledge can be layered on top—we cover that briefly at the end. We treat the polynomial commitment scheme as a blackbox—Spartan works with any PCS that supports committing to multilinear polynomials and opening them at arbitrary points.

What you’ll need: Familiarity with R1CS, multilinear extensions (MLEs), and the sum-check protocol. We provide brief refreshers below, with links to deeper resources.

What we won’t cover: The SPARK protocol (Spartan’s technique for committing to sparse matrices) gets its own article. We also skip security proofs and implementation details—the paper and code are there for that.

Background

R1CS: The Language of Constraints

R1CS (Rank-1 Constraint System) is a standard format for representing computations as a system of constraints. Most ZK proof systems that work with general computations use R1CS as their intermediate representation.

An R1CS instance consists of three sparse matrices $A, B, C$ of size $m \times n$, where $m$ is the number of constraints and $n$ is the number of variables. A witness vector $z$ of length $n$ satisfies the R1CS if:

$$A \cdot z \odot B \cdot z = C \cdot z$$

where $\odot$ denotes element-wise (Hadamard) multiplication. In other words, for each constraint $i$:

$$(A[i] \cdot z) \times (B[i] \cdot z) = C[i] \cdot z$$

The witness vector $z$ has structure: $z = (1, x, w)$ where:

  • $1$: a constant term (always 1)
  • $x$: public inputs/outputs (known to both prover and verifier)
  • $w$: private witness (the secret the prover knows)

The matrices $A, B, C$ are public—they define the computation. The prover’s job is to convince the verifier that they know a $w$ such that the full $z$ satisfies all $m$ constraints.

Example: The constraint $z_1 \times z_2 = z_3$ is encoded as a single row where $A$ picks out $z_1$, $B$ picks out $z_2$, and $C$ picks out $z_3$.
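The check above can be run mechanically. Here is a minimal sketch over a toy prime field; the field size, matrices, and helper names are illustrative, not from the Spartan codebase:

```python
# Toy R1CS check: one constraint z1 * z2 = z3 over a small prime field.
P = 97  # illustrative prime modulus

# z = (1, x, w); here z = (1, z1, z2, z3) with z1 * z2 = z3
z = [1, 3, 5, 15]

# One constraint (row): A picks out z1, B picks out z2, C picks out z3
A = [[0, 1, 0, 0]]
B = [[0, 0, 1, 0]]
C = [[0, 0, 0, 1]]

def matvec(M, z):
    return [sum(m * v for m, v in zip(row, z)) % P for row in M]

def r1cs_satisfied(A, B, C, z):
    Az, Bz, Cz = matvec(A, z), matvec(B, z), matvec(C, z)
    # Element-wise: (Az)_i * (Bz)_i must equal (Cz)_i for every row i
    return all((a * b - c) % P == 0 for a, b, c in zip(Az, Bz, Cz))

print(r1cs_satisfied(A, B, C, z))  # True: 3 * 5 = 15
```

Changing any entry of `z` (say, $z_3 = 14$) breaks the constraint and the check fails.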

For a deeper dive into R1CS, see RareSkills’ R1CS explainer or Vitalik’s QAP article.

Multilinear Extensions: Turning Data into Polynomials

The key to Spartan (and many modern proof systems) is viewing data structures—vectors and matrices—as polynomials. This lets us use polynomial techniques like the sum-check protocol.

The idea: Any function $f: \{0,1\}^n \to \mathbb{F}$ (mapping $n$-bit binary strings to field elements) has a unique multilinear extension $\tilde{f}: \mathbb{F}^n \to \mathbb{F}$ that:

  1. Agrees with $f$ on all boolean inputs: $\tilde{f}(b) = f(b)$ for all $b \in \{0,1\}^n$
  2. Is multilinear: linear in each variable individually

Vectors as polynomials: A vector $v$ of length $2^n$ can be viewed as a function $f: \{0,1\}^n \to \mathbb{F}$ where $f(b) = v[\text{int}(b)]$ (interpreting the binary string as an index). Its MLE $\tilde{v}(x_1, \ldots, x_n)$ is a polynomial that “encodes” the entire vector.
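One way to evaluate such an MLE at an arbitrary field point is the eq-weighted sum over the boolean hypercube: $\tilde{v}(r) = \sum_b v[\text{int}(b)] \cdot \widetilde{eq}(r, b)$. A sketch with an illustrative field and vector (this is the definition made executable, not an optimized evaluator):

```python
# Evaluate the MLE of a length-2^n vector at an arbitrary point.
from itertools import product

P = 97  # illustrative prime modulus

def eq(r, b):
    # Product form of eq~(r, b): equals 1 iff r == b on boolean inputs
    out = 1
    for ri, bi in zip(r, b):
        out = out * ((ri * bi + (1 - ri) * (1 - bi)) % P) % P
    return out

def mle_eval(v, r):
    n = len(r)
    assert len(v) == 2 ** n
    total = 0
    for b in product([0, 1], repeat=n):
        idx = int(''.join(map(str, b)), 2)  # binary string as index
        total = (total + v[idx] * eq(r, b)) % P
    return total

v = [7, 1, 4, 9]                   # length 4 = 2^2
assert mle_eval(v, (0, 1)) == 1    # agrees with v on boolean points
print(mle_eval(v, (3, 5)))         # a non-boolean evaluation
```

Multilinearity means the value at any point is determined by the boolean ones: $\tilde{v}(x, y) = \tilde{v}(0, y) + x \cdot (\tilde{v}(1, y) - \tilde{v}(0, y))$.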

Matrices as polynomials: Similarly, an $m \times n$ matrix $M$ (with $m = 2^s$, $n = 2^t$) becomes a polynomial $\tilde{M}(x, y)$ in $s + t$ variables, where:

  • $x = (x_1, \ldots, x_s)$ indexes the row
  • $y = (y_1, \ldots, y_t)$ indexes the column
  • $\tilde{M}(x, y) = M[\text{int}(x), \text{int}(y)]$ for boolean inputs

Why this matters for Spartan: The R1CS matrices $A$, $B$, $C$ and witness vector $z$ all become polynomials. The constraint equation $Az \odot Bz = Cz$ becomes a polynomial identity that we can check using sum-check.

The eq polynomial: A building block you’ll see often is $\widetilde{eq}(\tau, x)$, which evaluates to 1 when $x = \tau$ (on boolean inputs) and 0 otherwise. It’s defined as:

$$\widetilde{eq}(\tau, x) = \prod_{i=1}^{n} \left( \tau_i x_i + (1 - \tau_i)(1 - x_i) \right)$$

This polynomial lets us “select” a specific point—crucial for combining multiple constraints into one.
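The selector property is easy to verify directly. A sketch over an illustrative toy field (the product formula above, made executable):

```python
# eq~(tau, x): a multilinear "selector" that is 1 exactly when x == tau
# on boolean inputs, and 0 on every other boolean point.
from itertools import product

P = 97  # illustrative prime modulus

def eq(tau, x):
    out = 1
    for t, xi in zip(tau, x):
        out = out * ((t * xi + (1 - t) * (1 - xi)) % P) % P
    return out

tau = (1, 0, 1)
for x in product([0, 1], repeat=3):
    assert eq(tau, x) == (1 if x == tau else 0)  # selects exactly tau

# At a random (non-boolean) tau, eq~(tau, x) assigns each boolean x a
# pseudorandom weight -- this is how the tau trick below combines
# many constraints into one random linear combination.
print(eq((3, 7, 2), (1, 0, 1)))
```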

For more on MLEs, see Chapter 3 of Thaler’s “Proofs, Arguments, and Zero-Knowledge”.

Sum-check Protocol: The Workhorse

The sum-check protocol is the engine that powers Spartan. It lets a prover convince a verifier about the sum of a polynomial over the boolean hypercube—without the verifier evaluating all $2^n$ points.

The claim: Given an $n$-variate polynomial $g(x_1, \ldots, x_n)$, the prover claims:

$$H = \sum_{b \in \{0,1\}^n} g(b_1, \ldots, b_n)$$

Naively, verifying this requires $2^n$ evaluations. Sum-check reduces it to a single evaluation at a random point.

The protocol (interactive, $n$ rounds):

  1. Round 0 (Setup):

    • Prover computes the sum $H = \sum_{b \in \{0,1\}^n} g(b)$
    • Prover sends claimed sum $H$ to verifier
  2. Round 1: Prover sends univariate polynomial $s_1(X_1) = \sum_{b_2, \ldots, b_n \in \{0,1\}} g(X_1, b_2, \ldots, b_n)$

    • Verifier checks: $s_1(0) + s_1(1) = H$
    • Verifier sends random challenge $r_1$
  3. Round 2: Prover sends $s_2(X_2) = \sum_{b_3, \ldots, b_n \in \{0,1\}} g(r_1, X_2, b_3, \ldots, b_n)$

    • Verifier checks: $s_2(0) + s_2(1) = s_1(r_1)$
    • Verifier sends random challenge $r_2$
  4. … continue for all $n$ variables …

  5. Final step: After round $n$, verifier has random point $r = (r_1, \ldots, r_n)$

    • Verifier checks: $s_n(r_n) = g(r_1, \ldots, r_n)$
    • This requires one evaluation of $g$ at the random point

Why it works: Each round “binds” one variable to a random value. A cheating prover would need to guess the verifier’s random challenges to produce consistent polynomials—by the Schwartz–Zippel lemma, this succeeds with only negligible probability.
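The rounds above can be sketched concretely for a *multilinear* $g$ represented by its table of evaluations over the hypercube. This is an illustrative simulation (both parties in one function, toy field, seeded challenges), not the Spartan implementation; a real protocol gets challenges from the verifier or Fiat-Shamir:

```python
# Sum-check for a multilinear g given as its evaluation table over {0,1}^n.
# Each round's degree-1 message s_i is sent as the pair (s_i(0), s_i(1)).
import random

P = 97  # illustrative prime modulus

def sumcheck(table, challenges):
    """Run all rounds; return the initial claim H and the round messages."""
    claim = sum(table) % P
    H = claim
    transcript = []
    for r in challenges:
        half = len(table) // 2
        lo, hi = table[:half], table[half:]   # x1 = 0 and x1 = 1 halves
        s0, s1 = sum(lo) % P, sum(hi) % P     # s_i(0), s_i(1)
        assert (s0 + s1) % P == claim         # verifier's round check
        transcript.append((s0, s1))
        # Bind x1 := r by folding: g(r, rest) = lo + r * (hi - lo)
        table = [(l + r * (h - l)) % P for l, h in zip(lo, hi)]
        claim = (s0 + r * (s1 - s0)) % P      # s_i(r) becomes next claim
    # One entry remains: g evaluated at the random point (r_1, ..., r_n)
    assert table[0] % P == claim              # final oracle check
    return H, transcript

rng = random.Random(0)
g = [rng.randrange(P) for _ in range(8)]      # g over {0,1}^3
rs = [rng.randrange(P) for _ in range(3)]     # stand-in for challenges
H, _ = sumcheck(g, rs)
print(H == sum(g) % P)  # True
```

The fold step is exactly multilinearity: fixing the first variable to $r$ turns a table of size $2^n$ into one of size $2^{n-1}$.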

Cost:

  • Prover: $O(2^n)$ work (must touch all terms)
  • Verifier: $O(n)$ field operations + one evaluation of $g$ at a random point
  • Communication: $O(n)$ field elements (for multilinear $g$, each round sends a degree-1 polynomial)

The catch: The verifier still needs to evaluate $g(r_1, \ldots, r_n)$ at the end. For some polynomials (like MLEs of sparse matrices), this can be expensive—this is where Spartan’s cleverness comes in.

For a complete treatment, see Chapter 4 of Thaler’s book.

The Core Idea: R1CS as a Polynomial Equation

Here’s Spartan’s key insight: R1CS satisfiability can be expressed as a polynomial equation, which we can verify using sum-check.

Recall the R1CS condition: for a valid witness $z$, we need:

$$(Az)_i \times (Bz)_i = (Cz)_i \quad \text{for all } i \in \{1, \ldots, m\}$$

Let’s rewrite this using MLEs. Define:

  • $\widetilde{A}(x, y)$, $\widetilde{B}(x, y)$, $\widetilde{C}(x, y)$: MLEs of matrices $A$, $B$, $C$
  • $\widetilde{z}(y)$: MLE of witness vector $z$

The matrix-vector product $(Az)_i$ becomes a sum over the MLE:

$$(Az)_i = \sum_{j} A_{i,j} \cdot z_j = \sum_{y \in \{0,1\}^{\log n}} \widetilde{A}(i, y) \cdot \widetilde{z}(y)$$

Now, R1CS satisfaction means: for every row $x \in \{0,1\}^{\log m}$:

$$\left(\sum_y \widetilde{A}(x, y) \cdot \widetilde{z}(y)\right) \times \left(\sum_y \widetilde{B}(x, y) \cdot \widetilde{z}(y)\right) = \sum_y \widetilde{C}(x, y) \cdot \widetilde{z}(y)$$

The problem: We have $m$ equations to check (one per constraint). Checking each individually is too expensive.

The $\tau$ trick: Instead of checking all $m$ equations, the verifier picks a random $\tau \in \mathbb{F}^{\log m}$ and we check a random linear combination:

$$\sum_{x \in \{0,1\}^{\log m}} \widetilde{eq}(\tau, x) \cdot \left[ \left(\sum_y \widetilde{A}(x, y) \cdot \widetilde{z}(y)\right) \cdot \left(\sum_y \widetilde{B}(x, y) \cdot \widetilde{z}(y)\right) - \sum_y \widetilde{C}(x, y) \cdot \widetilde{z}(y) \right] = 0$$

Here $\widetilde{eq}(\tau, x)$ acts as a “random weight” for each constraint. If the R1CS is satisfied, every term in the brackets is zero, so the sum is zero. If any constraint is violated, the sum is non-zero with high probability (by Schwartz–Zippel).

What we’ve achieved: Checking $m$ constraints reduces to checking that one sum equals zero—exactly what sum-check is designed for.

The polynomial inside the sum has degree at most 3 in each variable of $x$: one from $\widetilde{eq}$, and two from the product of $A_z(x)$ and $B_z(x)$ (each factor is multilinear in $x$). This is the polynomial we’ll apply sum-check to in Phase 1.
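On a toy instance we can watch the $\tau$ trick work: for a satisfied R1CS, the eq-weighted sum vanishes for *any* $\tau$, because every bracket is zero on boolean $x$. The instance, field, and $\tau$ below are illustrative:

```python
# Phase 1 sanity check on a toy 2-constraint R1CS.
P = 97  # illustrative prime modulus

def eq(tau, x):
    out = 1
    for t, xi in zip(tau, x):
        out = out * ((t * xi + (1 - t) * (1 - xi)) % P) % P
    return out

# Two constraints over z = (1, z1, z2, z3): z1*z2 = z3 and z3*1 = z3
z = [1, 3, 5, 15]
A = [[0, 1, 0, 0], [0, 0, 0, 1]]
B = [[0, 0, 1, 0], [1, 0, 0, 0]]
C = [[0, 0, 0, 1], [0, 0, 0, 1]]

mv = lambda M: [sum(m * v for m, v in zip(row, z)) % P for row in M]
Az, Bz, Cz = mv(A), mv(B), mv(C)

tau = (42,)  # the verifier's random challenge; any value works here
total = sum(eq(tau, (x,)) * (Az[x] * Bz[x] - Cz[x]) for x in range(2)) % P
print(total)  # 0 for a satisfied instance
```

With a broken witness (say $z_3 = 14$), the sum is non-zero for all but a negligible fraction of $\tau$ values.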

Phase 1: Checking All Constraints at Once

We now have a sum to verify:

$$\sum_{x \in \{0,1\}^s} \underbrace{\widetilde{eq}(\tau, x) \cdot \left[ A_z(x) \cdot B_z(x) - C_z(x) \right]}_{F(x)} = 0$$

where $s = \log m$ and we define:

  • $A_z(x) = \sum_{y \in \{0,1\}^t} \widetilde{A}(x, y) \cdot \widetilde{z}(y)$ — the MLE of the vector $Az$
  • $B_z(x)$ and $C_z(x)$ defined similarly
  • $t = \log n$, where $n$ is the witness length

This is exactly a sum-check claim. The prover claims the sum is 0, and we run sum-check on the polynomial $F(x)$.

What happens during Phase 1:

  1. Prover sends the claimed sum: $H = 0$
  2. Over $s$ rounds, prover and verifier run sum-check on $F(x)$
  3. Each round, verifier sends a random challenge $r_i$
  4. After $s$ rounds, all $x$ variables are bound to random values $r_x = (r_1, \ldots, r_s)$

At the end of Phase 1, the verifier needs to check:

$$F(r_x) = \widetilde{eq}(\tau, r_x) \cdot \left[ A_z(r_x) \cdot B_z(r_x) - C_z(r_x) \right]$$

The verifier can compute $\widetilde{eq}(\tau, r_x)$ easily (it’s a product of $s$ terms). But to compute $A_z(r_x)$, $B_z(r_x)$, and $C_z(r_x)$, they need:

$$A_z(r_x) = \sum_{y \in \{0,1\}^t} \widetilde{A}(r_x, y) \cdot \widetilde{z}(y)$$

This is another sum over the boolean hypercube—with $2^t$ terms!

The problem: Computing $A_z(r_x)$ directly requires $O(n)$ work (summing over all $y$). If the verifier does this for $A$, $B$, and $C$, we’re back to linear verification time.

The solution: We run another sum-check (Phase 2) to reduce this to a single evaluation.

Phase 2: Ensuring Consistency

At the end of Phase 1, the verifier needs $A_z(r_x)$, $B_z(r_x)$, and $C_z(r_x)$. The prover sends these as claims:

$$v_A = A_z(r_x), \quad v_B = B_z(r_x), \quad v_C = C_z(r_x)$$

But why should the verifier believe these values? We need to verify that:

$$v_A = \sum_{y \in \{0,1\}^t} \widetilde{A}(r_x, y) \cdot \widetilde{z}(y)$$

(and similarly for $v_B$ and $v_C$).

Combining three claims into one: Instead of running three separate sum-checks, the verifier sends random challenges $\rho_A, \rho_B, \rho_C$ and we verify a single combined claim:

$$\rho_A \cdot v_A + \rho_B \cdot v_B + \rho_C \cdot v_C = \sum_{y \in \{0,1\}^t} \left( \rho_A \cdot \widetilde{A}(r_x, y) + \rho_B \cdot \widetilde{B}(r_x, y) + \rho_C \cdot \widetilde{C}(r_x, y) \right) \cdot \widetilde{z}(y)$$

This is another sum-check! Let’s define:

$$G(y) = \left( \rho_A \cdot \widetilde{A}(r_x, y) + \rho_B \cdot \widetilde{B}(r_x, y) + \rho_C \cdot \widetilde{C}(r_x, y) \right) \cdot \widetilde{z}(y)$$
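The batching identity can be checked directly on a toy instance: when $v_A, v_B, v_C$ are honest inner products, the combined claim holds for any choice of $\rho$ coefficients. Rows, witness, and coefficients below are illustrative:

```python
# Phase 2 claim-batching: merge three inner-product claims into one via
# random coefficients rho_A, rho_B, rho_C.
P = 97  # illustrative prime modulus

# Stand-ins for the matrix rows "already bound" at r_x, plus witness z
Arow, Brow, Crow = [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]
z = [1, 3, 5, 15]

# Honest claimed values: v_A = <Arow, z>, etc.
vA = sum(a * v for a, v in zip(Arow, z)) % P
vB = sum(b * v for b, v in zip(Brow, z)) % P
vC = sum(c * v for c, v in zip(Crow, z)) % P

rhoA, rhoB, rhoC = 11, 29, 57  # the verifier's random coefficients

lhs = (rhoA * vA + rhoB * vB + rhoC * vC) % P
# Right side: the single batched sum the Phase 2 sum-check will verify
rhs = sum((rhoA * a + rhoB * b + rhoC * c) * v
          for a, b, c, v in zip(Arow, Brow, Crow, z)) % P
print(lhs == rhs)  # True: the combined claim is consistent
```

If the prover lies about even one of $v_A, v_B, v_C$, the batched equation fails for all but a negligible fraction of $\rho$ choices.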

What happens during Phase 2:

  1. Verifier sends random $\rho_A, \rho_B, \rho_C$ to combine the three claims
  2. Prover and verifier run sum-check on $G(y)$ for $t$ rounds
  3. Each round, verifier sends a random challenge $r'_j$
  4. After $t$ rounds, all $y$ variables are bound to $r_y = (r'_1, \ldots, r'_t)$

At the end of Phase 2, the verifier needs to check:

$$G(r_y) = \left( \rho_A \cdot \widetilde{A}(r_x, r_y) + \rho_B \cdot \widetilde{B}(r_x, r_y) + \rho_C \cdot \widetilde{C}(r_x, r_y) \right) \cdot \widetilde{z}(r_y)$$

What the verifier now needs:

  1. $\widetilde{A}(r_x, r_y)$, $\widetilde{B}(r_x, r_y)$, $\widetilde{C}(r_x, r_y)$ — evaluations of the R1CS matrices at the random point $(r_x, r_y)$
  2. $\widetilde{z}(r_y)$ — evaluation of the witness polynomial at the random point $r_y$

We’ve reduced everything to four point evaluations. This is the final check.

The Complete Protocol (Linear Verifier)

Let’s put the pieces together. Here’s the full Spartan protocol with linear-time verification:

Setup:

  • Public: R1CS matrices $A$, $B$, $C$ (size $m \times n$)
  • Public: input $x$
  • Prover’s private input: witness $w$
  • Prover constructs: $z = (1, x, w)$

Protocol:

| Step | Prover | Verifier |
|---|---|---|
| 1 |  | Sends random $\tau \in \mathbb{F}^{\log m}$ |
| 2 | Phase 1 sum-check: proves $\sum_x \widetilde{eq}(\tau, x) \cdot [A_z(x) \cdot B_z(x) - C_z(x)] = 0$ | Participates in sum-check, obtains $r_x$ |
| 3 | Sends claimed values $v_A, v_B, v_C$ |  |
| 4 |  | Sends random $\rho_A, \rho_B, \rho_C$ |
| 5 | Phase 2 sum-check: proves $\sum_y (\rho_A \widetilde{A} + \rho_B \widetilde{B} + \rho_C \widetilde{C})(r_x, y) \cdot \widetilde{z}(y) = \rho_A v_A + \rho_B v_B + \rho_C v_C$ | Participates in sum-check, obtains $r_y$ |
| 6 |  | Final check (see below) |

Final Check:

The verifier must verify $G(r_y)$ from Phase 2. This requires:

  1. Matrix evaluations: $\widetilde{A}(r_x, r_y)$, $\widetilde{B}(r_x, r_y)$, $\widetilde{C}(r_x, r_y)$
  2. Witness evaluation: $\widetilde{z}(r_y)$

For the witness: $z = (1, x, w)$ splits into a public part $(1, x)$ and a private part $w$. The verifier knows $(1, x)$, so they can compute the contribution from the public inputs themselves. For the private part $w$, the prover sends $\widetilde{w}(r_y)$ and proves it’s correct (using a polynomial commitment—more on this in the SNARK section).

For the matrices: the verifier computes $\widetilde{A}(r_x, r_y)$, $\widetilde{B}(r_x, r_y)$, $\widetilde{C}(r_x, r_y)$ directly. This takes $O(N)$ time, where $N$ is the number of non-zero entries in the matrices.

Making it non-interactive: The protocol above is interactive—the verifier sends random challenges. In practice, we use the Fiat-Shamir transform: replace verifier’s random challenges with hashes of the transcript so far. The Spartan implementation uses merlin for this. The result is a single proof message from prover to verifier.
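The Fiat-Shamir idea can be sketched in a few lines: each challenge is a hash of everything sent so far. This is a stand-in for the merlin transcript the Spartan implementation uses; the hash choice, labels, and encoding here are illustrative, not Spartan's actual format:

```python
# Minimal Fiat-Shamir sketch: derive "verifier" challenges by hashing
# the running transcript, so the prover cannot predict them before
# committing to each message.
import hashlib

P = 2**61 - 1  # illustrative prime modulus

def challenge(transcript: bytes) -> int:
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, 'big') % P

transcript = b"spartan-demo"
transcript += b"claimed-sum:0"       # prover's first message is absorbed
r1 = challenge(transcript)           # replaces the verifier's first challenge
transcript += f"round1:{r1}".encode()
r2 = challenge(transcript)           # depends on everything sent so far
print(r1, r2)
```

Because each challenge is determined by the transcript prefix, the whole interaction collapses into one proof message that any verifier can re-derive and check.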

Complexity:

|  | Prover | Verifier | Proof Size |
|---|---|---|---|
| Time | $O(N)$ | $O(N)$ | $O(\log m + \log n)$ field elements |

(where $N$ is the number of non-zero entries in the R1CS matrices)

The takeaway: The proof is succinct (logarithmic size), but verification is linear. The verifier must read the entire R1CS instance to compute those matrix evaluations.

This is not yet a SNARK (Succinct Non-interactive ARgument of Knowledge). The “succinct” in SNARK refers to sub-linear verification—which we don’t have yet.

Can we do better?

The Verifier’s Bottleneck

Yes. The bottleneck is clear: the verifier needs to compute $\widetilde{A}(r_x, r_y)$, $\widetilde{B}(r_x, r_y)$, and $\widetilde{C}(r_x, r_y)$.

For a sparse matrix with $N$ non-zero entries, evaluating its MLE at a random point takes $O(N)$ time. The verifier must touch every non-zero entry to compute the evaluation. There’s no shortcut—unless the verifier already knows the answer.
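To see why the cost is exactly $O(N)$: each non-zero entry $M[i][j]$ contributes $M[i,j] \cdot \widetilde{eq}(r_x, \text{bits}(i)) \cdot \widetilde{eq}(r_y, \text{bits}(j))$, and zero entries contribute nothing. A sketch with an illustrative field and matrix:

```python
# Evaluate the MLE of a sparse matrix at a random point (r_x, r_y) by
# iterating only over its non-zero entries.
P = 97  # illustrative prime modulus

def eq(r, b):
    out = 1
    for ri, bi in zip(r, b):
        out = out * ((ri * bi + (1 - ri) * (1 - bi)) % P) % P
    return out

def bits(i, n):
    # Big-endian n-bit expansion of index i
    return tuple((i >> (n - 1 - k)) & 1 for k in range(n))

def sparse_mle_eval(entries, rx, ry):
    # entries: list of (row, col, value) for non-zero cells only
    s, t = len(rx), len(ry)
    total = 0
    for i, j, v in entries:
        total = (total + v * eq(rx, bits(i, s)) * eq(ry, bits(j, t))) % P
    return total

# A 4x4 matrix with two non-zeros; the loop touches 2 entries, not 16
entries = [(0, 1, 5), (3, 2, 9)]
rx, ry = (13, 7), (21, 2)
print(sparse_mle_eval(entries, rx, ry))
```

At boolean points this reproduces the matrix entries exactly, which is the defining property of the MLE.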

The insight: What if the prover just tells the verifier what $\widetilde{A}(r_x, r_y)$, $\widetilde{B}(r_x, r_y)$, $\widetilde{C}(r_x, r_y)$ are?

The problem is trust. The prover could lie. We need a way for the prover to commit to the matrices beforehand, and then prove that the claimed evaluations are correct.

This is exactly what polynomial commitment schemes do.

Achieving Sub-linear Verification

The SNARK variant adds a preprocessing phase and uses polynomial commitments to eliminate the linear-time matrix evaluation.

Preprocessing (done once per circuit):

Before any proofs are generated, we commit to the R1CS matrices:

  1. Compute commitments: $C_A = \text{Commit}(\widetilde{A})$, $C_B = \text{Commit}(\widetilde{B})$, $C_C = \text{Commit}(\widetilde{C})$
  2. Publish these commitments as part of the public parameters

This preprocessing takes $O(N)$ time, where $N$ is the number of non-zero entries, but it’s done only once.

Modified Protocol:

The SNARK protocol is almost identical to the basic protocol, with one key difference at the end:

| Linear Verifier | Succinct Verifier (SNARK) |
|---|---|
| Verifier computes $\widetilde{A}(r_x, r_y)$ directly (takes $O(N)$ time) | Prover sends claimed value $e_A$ with a PCS proof that $e_A = \widetilde{A}(r_x, r_y)$; verifier checks the proof against commitment $C_A$ |

What the verifier needs from PCS:

The polynomial commitment scheme must support:

  • Commit: Given a polynomial $p$, output a commitment $C$
  • Open: Given a point $r$, prove that $p(r) = v$
  • Verify: Given commitment $C$, point $r$, claimed value $v$, and proof $\pi$, check correctness

The verification must be faster than evaluating the polynomial directly—otherwise there’s no point.

The result:

|  | Linear Verifier | Succinct Verifier |
|---|---|---|
| Matrix evals | $O(N)$ — compute directly | $O(\log N)$ — verify PCS proof |
| Witness eval | $O(n)$ — compute directly | $O(\log n)$ — verify PCS proof |
| Total | $O(N + n)$ | $O(\log m + \log n + \log N)$ |

The verifier now runs in polylogarithmic time in the circuit size. This is the “succinct” in SNARK.

Trade-off:

|  | Linear Verifier | Succinct Verifier (SNARK) |
|---|---|---|
| Preprocessing | None | $O(N)$ one-time |
| Prover time | $O(N)$ | $O(N)$ (includes PCS proofs) |
| Verifier time | $O(N)$ | $O(\log(m \cdot n \cdot N))$ |
| Proof size | $O(\log m + \log n)$ | $O(\log m + \log n + \log N)$ |

The linear-verifier variant is better when you have a small circuit or don’t want preprocessing. The SNARK is better when verification cost matters (e.g., on-chain verification).

A Note on Zero-Knowledge

Everything we’ve described so far is an argument of knowledge—the prover convinces the verifier that they know a valid witness. But it’s not yet zero-knowledge: the proof might leak information about the witness.

To make Spartan zero-knowledge, two modifications are needed:

1. Blind the witness commitment

When committing to the witness polynomial $\widetilde{z}$, add randomness:

  • Instead of $C_z = \text{Commit}(\widetilde{z})$
  • Use $C_z = \text{Commit}(\widetilde{z}; r)$ with a random blinding factor $r$

This ensures the commitment reveals nothing about $z$.

2. Blind the sum-check polynomials

During each round of sum-check, the prover sends a univariate polynomial. These polynomials depend on the witness and could leak information.

The fix: add random “masking” polynomials whose sum over the hypercube is zero, so they hide the prover’s round polynomials without changing the claimed sum. The prover also provides zero-knowledge proofs (using sigma protocols) that the masked values are consistent.

The Spartan paper and implementation use:

  • Pedersen commitments for hiding values
  • Sigma protocols (knowledge proofs, equality proofs, product proofs) for proving relationships between committed values

We won’t go deeper here—the key point is that zero-knowledge is achievable with standard techniques, adding only constant overhead to each sum-check round.

See Section 6 of the Spartan paper for the full zero-knowledge construction.

What We Didn’t Cover

SPARK: Committing to Sparse Matrices

We glossed over a critical detail: how do you efficiently commit to the sparse matrices $A$, $B$, $C$?

Standard polynomial commitments work on dense polynomials. But R1CS matrices are sparse—they have $O(n)$ non-zero entries in an $m \times n$ matrix. Committing naively would require $O(m \cdot n)$ work.

SPARK is Spartan’s solution. It’s a specialized commitment scheme for sparse multilinear polynomials that:

  • Commits in time $O(N)$, where $N$ is the number of non-zero entries
  • Opens at any point in time $O(N)$
  • Verification is $O(\log N)$

SPARK uses memory-checking techniques and product arguments. It deserves its own article—coming soon.

Polynomial Commitment Schemes

We treated PCS as a blackbox. Spartan can work with different PCS backends:

  • Hyrax-style (used in the original implementation): Based on Pedersen commitments and inner product arguments
  • KZG/Kate: Requires trusted setup but gives constant-size proofs
  • FRI-based: Transparent, used in STARKs

The choice of PCS affects proof size, verification time, and whether you need a trusted setup.

Security Analysis

We skipped all security proofs. The paper proves:

  • Completeness: Honest prover always convinces verifier
  • Soundness: Cheating prover fails with high probability (via Schwartz-Zippel)
  • Zero-knowledge: Proof reveals nothing beyond statement validity (via simulation)

Conclusion

Spartan achieves transparent SNARKs through a clean insight: view R1CS as a polynomial identity and verify it using sum-check.

The core ideas:

  1. Express R1CS satisfaction as $\sum_x \widetilde{eq}(\tau, x) \cdot [A_z(x) \cdot B_z(x) - C_z(x)] = 0$
  2. Apply sum-check twice: first to eliminate constraint variables, then to reduce to point evaluations
  3. Use polynomial commitments to make verification sub-linear

What makes Spartan significant:

  • First transparent SNARK with sub-linear verification for general R1CS
  • No trusted setup (unlike Groth16)
  • Works with any PCS (modular design)
  • Efficient prover (no FFTs required)

The techniques pioneered in Spartan—applying sum-check directly to R1CS, the two-phase structure—have influenced subsequent proof systems like HyperPlonk and others.

If you want to go deeper:

  • Read the paper for formal definitions and security proofs
  • Study the implementation to see how sum-check is realized in code
  • Look at r1csproof.rs for Phase 1 and Phase 2 implementation
  • Look at sumcheck.rs for the sum-check protocol details

References