
Grand product arguments

Let $\mathbb{F}$ be a finite field and consider a vector $f = (f_0, \dots, f_{n-1}) \in \mathbb{F}^n$, where $n = 2^v$. Given some kind of commitment $\mathrm{com}_f$ to $f$ and a value $y \in \mathbb{F}$, we want to convince a verifier that the product of the elements of $f$ is $y$:
$$\prod_i f_i = y.$$

Using AIR constraints

This problem can be solved using univariate polynomials and AIR constraints. This is the approach taken in Plonk[1].

Let $c_i = \prod_{j \le i} f_j$ be the vector of cumulative products of $f$. The equality $\prod_i f_i = y$ is equivalent to $c_{n-1} = y$. Therefore, a protocol to prove this equality is to commit to both $f$ and $c$ and check the following AIR constraints:

  • $c_0 = f_0$,
  • $c_{i+1} = c_i f_{i+1}$ for all $i < n - 1$,
  • $c_{n-1} = y$.
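As a concrete illustration, here is a minimal Python sketch of the cumulative-product column and the three constraints above. The prime field and the values of $f$ are illustrative assumptions (a toy modulus, not a cryptographic field), and of course a real protocol checks the constraints against committed polynomials rather than plain vectors:

```python
# Toy sketch (not a real proof system): build the cumulative-product
# column c for a vector f over a small prime field and check the
# three AIR constraints from the text.
P = 97  # small prime standing in for the field F

def cumulative_products(f, p=P):
    """c_i = f_0 * f_1 * ... * f_i (mod p)."""
    c, acc = [], 1
    for x in f:
        acc = acc * x % p
        c.append(acc)
    return c

def check_air(f, c, y, p=P):
    """Boundary constraint, transition constraint, and final value."""
    ok = c[0] == f[0] % p
    ok = ok and all(c[i + 1] == c[i] * f[i + 1] % p
                    for i in range(len(f) - 1))
    return ok and c[-1] == y % p

f = [3, 5, 2, 7]           # n = 4 = 2^2
y = 3 * 5 * 2 * 7 % P      # the claimed product: 210 mod 97 = 16
c = cumulative_products(f)
assert check_air(f, c, y)
```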

Using GKR

This is the approach of [Tha13, 5.3.1][2].
For $0 \le k \le v$, we consider the vector $g_k$ of size $2^k$ given by
$$g_{k,i} = \prod_{i \cdot 2^{v-k} \le j < (i+1) \cdot 2^{v-k}} f_j.$$
Note that $g_v = f$ and $g_0$ is the constant $y$. The vector $g_k$ can be seen as the $k$-th layer in the binary multiplication tree with leaves $f$.
These vectors satisfy the relations
$$g_{k,i} = g_{k+1,2i} \cdot g_{k+1,2i+1}.$$
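The layered tree can be sketched in Python as follows (a toy model over a small prime field; the modulus and leaf values are illustrative assumptions):

```python
# Toy model of the binary multiplication tree: g_v is f itself and
# g_0 holds the full product.
P = 97

def product_layers(f, p=P):
    """Return [g_0, ..., g_v] with g_{k,i} = g_{k+1,2i} * g_{k+1,2i+1}."""
    layers = [list(f)]
    while len(layers[0]) > 1:
        prev = layers[0]
        layers.insert(0, [prev[2 * i] * prev[2 * i + 1] % p
                          for i in range(len(prev) // 2)])
    return layers

f = [3, 5, 2, 7]
layers = product_layers(f)
assert layers[-1] == f                    # g_v = f
assert layers[0] == [3 * 5 * 2 * 7 % P]   # g_0 is the constant y
for k in range(len(layers) - 1):          # the layer relation
    for i, gki in enumerate(layers[k]):
        assert gki == layers[k + 1][2 * i] * layers[k + 1][2 * i + 1] % P
```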

Seeing $g_k$ as a function $\{0,1\}^k \to \mathbb{F}$ instead, the relation becomes
$$g_k(i) = g_{k+1}(i, 0) \cdot g_{k+1}(i, 1).$$
Therefore, the MLEs $\tilde{g}_k$ satisfy
$$\tilde{g}_k(i) = \tilde{g}_{k+1}(i, 0) \, \tilde{g}_{k+1}(i, 1),$$
and thus for any $z \in \mathbb{F}^k$ we have
$$\tilde{g}_k(z) = \sum_{i \in \{0,1\}^k} \widetilde{\mathrm{eq}}(z, i) \, \tilde{g}_{k+1}(i, 0) \, \tilde{g}_{k+1}(i, 1).$$
The evaluation of this sum can be proved using the sumcheck protocol. At the end of the protocol, the verifier needs to evaluate $\tilde{g}_{k+1}$ at $(r, 0)$ and $(r, 1)$ for a random $r \in \mathbb{F}^k$.

The trick to do so efficiently is to consider the degree one univariate polynomial $h(t) = \tilde{g}_{k+1}(r, t)$ for $t \in \mathbb{F}$. We have
$$h(0) = \tilde{g}_{k+1}(r, 0) \quad \text{and} \quad h(1) = \tilde{g}_{k+1}(r, 1),$$
therefore to send $h$ it is sufficient to send $\tilde{g}_{k+1}(r, 0)$ and $\tilde{g}_{k+1}(r, 1)$ (which are anyway required by the verifier). Then the verifier sends a random $u \in \mathbb{F}$ and uses the sumcheck protocol to check that $h(u) = \tilde{g}_{k+1}(r, u)$.

Therefore, proving an opening $\tilde{g}_k(z)$ can be reduced to a sumcheck argument and an opening $\tilde{g}_{k+1}(r, u)$.
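A small numeric sketch of why the two values determine $h$: since $h$ has degree one, $h(u) = (1 - u)h(0) + u \, h(1)$ for any $u$. The naive $\widetilde{\mathrm{eq}}$-based MLE evaluation below, the toy modulus, and the sample values are all illustrative assumptions:

```python
# Evaluate a multilinear extension naively and check that
# h(t) = g~(r, t) is determined by its values at t = 0 and t = 1.
P = 97

def eq_tilde(z, x, p=P):
    """eq~(z, x) = prod_j (z_j x_j + (1 - z_j)(1 - x_j)) mod p."""
    out = 1
    for zj, xj in zip(z, x):
        out = out * (zj * xj + (1 - zj) * (1 - xj)) % p
    return out

def mle_eval(values, z, p=P):
    """g~(z) = sum over the hypercube of eq~(z, i) * g(i)."""
    v = len(z)
    total = 0
    for idx, val in enumerate(values):
        bits = [(idx >> (v - 1 - j)) & 1 for j in range(v)]
        total = (total + eq_tilde(z, bits, p) * val) % p
    return total

g = [3, 5, 2, 7]           # a function {0,1}^2 -> F
r, u = 10, 42              # stand-ins for random field elements
h0 = mle_eval(g, [r, 0])   # h(0) = g~(r, 0)
h1 = mle_eval(g, [r, 1])   # h(1) = g~(r, 1)
# h has degree 1 in t, so h(u) = (1 - u) h(0) + u h(1)
assert mle_eval(g, [r, u]) == ((1 - u) * h0 + u * h1) % P
```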

The strategy to prove $\prod_i f_i = g_0 = y$ is then clear:

  • Reduce the evaluation proof $g_0 = y$ to a sumcheck argument and an evaluation $\tilde{g}_1(r_1)$.
  • For $k = 1, \dots, v - 1$, reduce the evaluation proof $\tilde{g}_k(r_k)$ to a sumcheck argument and an evaluation $\tilde{g}_{k+1}(r_{k+1})$.
  • The last evaluation $\tilde{g}_v(r_v) = \tilde{f}(r_v)$ is proved using the commitment to $f$.
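The chain of reductions can be sanity-checked numerically. The sketch below only verifies the algebraic identities that each sumcheck would prove (the interactive sumcheck itself is omitted), over a toy prime field and with arbitrary stand-in challenges:

```python
# Check, layer by layer, that the claim g~_k(z) equals the
# eq-weighted sum over g~_{k+1}, ending with a claim about f~.
P = 97

def eq_tilde(z, x, p=P):
    out = 1
    for zj, xj in zip(z, x):
        out = out * (zj * xj + (1 - zj) * (1 - xj)) % p
    return out

def mle_eval(values, z, p=P):
    v = len(z)
    total = 0
    for idx, val in enumerate(values):
        bits = [(idx >> (v - 1 - j)) & 1 for j in range(v)]
        total = (total + eq_tilde(z, bits, p) * val) % p
    return total

def product_layers(f, p=P):
    layers = [list(f)]
    while len(layers[0]) > 1:
        prev = layers[0]
        layers.insert(0, [prev[2 * i] * prev[2 * i + 1] % p
                          for i in range(len(prev) // 2)])
    return layers

f = [3, 5, 2, 7, 11, 13, 17, 19]   # n = 8, v = 3
layers = product_layers(f)
v = len(layers) - 1
z = []                             # grows by one challenge per layer
for k in range(v):
    claim = mle_eval(layers[k], z)
    total = 0
    for idx in range(2 ** k):
        i = [(idx >> (k - 1 - j)) & 1 for j in range(k)]
        total = (total + eq_tilde(z, i)
                 * mle_eval(layers[k + 1], i + [0])
                 * mle_eval(layers[k + 1], i + [1])) % P
    assert claim == total          # the sumcheck claim for layer k
    z = z + [7 * k + 5]            # stand-in for a verifier challenge
# the final claim is an evaluation of f~, proved via the commitment to f
final_claim = mle_eval(f, z)
```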

Costs

  • Prover: A sumcheck argument costing $O(2^k)$ for $k = 0, \dots, v-1$ and an evaluation proof for $f$, so in total $O(n)$ + prover cost of the evaluation proof. The committed polynomial is $f$, of size $n$.
  • Verifier: A sumcheck argument costing $O(k)$ for $k = 0, \dots, v-1$ and an evaluation proof for $f$, so in total $O(v^2)$ + verifier cost of the evaluation proof.
  • Proof size: A sumcheck argument of size $O(k)$ for $k = 0, \dots, v-1$ and an evaluation proof for $f$, so in total $O(v^2)$ + size of the evaluation proof.

Quarks

This is the approach of [SL20, Section 5][3].
Instead of using $v + 1$ different functions $\{g_k : \{0,1\}^k \to \mathbb{F}\}_{k=0}^{v}$, we can pack them into a single function $g : \{0,1\}^{v+1} \to \mathbb{F}$ given by
$$g(1^l, 0, x) = g_{v-l}(x) \quad \text{for } x \in \{0,1\}^{v-l},$$
and $g(1^{v+1}) = 0$. Note that $g(0, x) = g_v(x) = f(x)$ for $x \in \{0,1\}^v$ and $g(1^v, 0) = g_0 = y$. Furthermore, the relation between the adjacent functions $g_k$ and $g_{k+1}$ becomes
$$g(1, x) = g(x, 0) \cdot g(x, 1) \quad \text{for } x \in \{0,1\}^v.$$
Let $h(x) = g(1, x) - g(x, 0) \, g(x, 1)$ for $x \in \{0,1\}^v$; then $\tilde{h}$ is the zero polynomial, which we can check with the usual zero-check
$$0 = \tilde{h}(\tau) = \sum_{x \in \{0,1\}^v} \widetilde{\mathrm{eq}}(\tau, x) \left( g(1, x) - g(x, 0) \, g(x, 1) \right),$$
where $\tau$ is a random verifier challenge, and the sum is proved with a sumcheck argument.
Therefore, the product $\prod f = y$ can be proved by committing to $\tilde{g}$ and

  • using the sumcheck argument to prove $\tilde{h}(\tau) = 0$,
  • proving $g(0, x) = f(x)$ by checking $\tilde{g}(0, \gamma) = \tilde{f}(\gamma)$ for a random $\gamma$,
  • opening $\tilde{g}(1^v, 0) = \prod f = y$.
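The packing can be sketched concretely. Below, $g$ is stored as a dictionary keyed by bit tuples, and the packed relation $g(1, x) = g(x, 0)\,g(x, 1)$ is verified on the whole cube; the toy field and the leaf values are illustrative assumptions:

```python
# Sketch of the Quarks packing: one table g over {0,1}^(v+1) holding
# every layer of the multiplication tree, indexed as g(1^l, 0, x).
P = 97

def product_layers(f, p=P):
    layers = [list(f)]
    while len(layers[0]) > 1:
        prev = layers[0]
        layers.insert(0, [prev[2 * i] * prev[2 * i + 1] % p
                          for i in range(len(prev) // 2)])
    return layers  # layers[k] = g_k

def pack(f, p=P):
    """Build g: {0,1}^(v+1) -> F as a dict keyed by bit tuples."""
    layers = product_layers(f, p)
    v = len(layers) - 1
    g = {(1,) * (v + 1): 0}                 # g(1^(v+1)) = 0
    for l in range(v + 1):                  # g(1^l, 0, x) = g_{v-l}(x)
        k = v - l
        for i, val in enumerate(layers[k]):
            x = tuple((i >> (k - 1 - j)) & 1 for j in range(k))
            g[(1,) * l + (0,) + x] = val
    return g, v

f = [3, 5, 2, 7]
g, v = pack(f)
# the packed relation g(1, x) = g(x, 0) g(x, 1) on the whole cube
for idx in range(2 ** v):
    x = tuple((idx >> (v - 1 - j)) & 1 for j in range(v))
    assert g[(1,) + x] == g[x + (0,)] * g[x + (1,)] % P
assert g[(1,) * v + (0,)] == 3 * 5 * 2 * 7 % P   # g(1^v, 0) = y
```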

Costs

  • Prover: A sumcheck argument costing $O(n)$ and evaluation proofs for $g$ and $f$, so in total $O(n)$ + prover cost of the evaluation proofs. The committed polynomials are $f$, of size $n$, and $g$, of size $2n$.
  • Verifier: A sumcheck argument costing $O(v + 1)$ and evaluation proofs for $g$ and $f$, so in total $O(v)$ + verifier cost of the evaluation proofs.
  • Proof size: A sumcheck argument of size $O(v)$ and evaluation proofs for $g$ and $f$, so in total $O(v)$ + size of the evaluation proofs.

Note that compared to GKR, the verifier cost and proof size go from quadratic in $v$ to linear in $v$, at the cost of more and larger evaluation proofs.

Hybrid approach

This is the approach of [SL20, Section 6][3:1], also used in Lasso[4].
We can get the best of both approaches by combining them. Fix a number $1 \le \ell \le v$: we use the Quarks approach for the first $\ell + 1$ layers $\{g_k\}_{k=0}^{\ell}$ and the GKR approach for the last $v - \ell$ layers $\{g_k\}_{k=\ell+1}^{v}$. Concretely, this means that in the Quarks protocol, we instead have $g(0, x) = g_\ell(x)$ for all $x \in \{0,1\}^\ell$, which we check by evaluating at a random value $\gamma$: $\tilde{g}(0, \gamma) = \tilde{g}_\ell(\gamma)$.
The evaluation $\tilde{g}_\ell(\gamma)$ is computed using the GKR protocol, by reducing it to a sumcheck argument and an evaluation of $\tilde{g}_{\ell+1}$, and so on until an evaluation of $\tilde{g}_v = \tilde{f}$.
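The split of layers between the two sub-protocols can be sketched as follows (toy field, illustrative values, and an arbitrary choice $v = 4$, $\ell = 2$):

```python
# Hybrid split for v = 4, l = 2: the top l + 1 = 3 layers g_0..g_2
# go into the packed Quarks table g, the remaining v - l = 2 layers
# are handled by GKR reductions down to f.
P = 97

def product_layers(f, p=P):
    layers = [list(f)]
    while len(layers[0]) > 1:
        prev = layers[0]
        layers.insert(0, [prev[2 * i] * prev[2 * i + 1] % p
                          for i in range(len(prev) // 2)])
    return layers  # layers[k] has size 2^k

f = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53]
v, l = 4, 2
layers = product_layers(f)
quarks_layers = layers[: l + 1]   # g_0 .. g_l: committed inside g
gkr_layers = layers[l + 1:]       # g_{l+1} .. g_v: proved via sumchecks
# the packed table g has 2^(l+1) entries: one per layer element, plus a 0
assert sum(len(x) for x in quarks_layers) + 1 == 2 ** (l + 1)
# the GKR part bottoms out at f itself, opened via its commitment
assert gkr_layers[-1] == f
```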

Costs

  • Prover: $O(2^\ell)$ for Quarks + $O\left(\sum_{k=\ell+1}^{v} 2^k\right)$ for GKR, so $O(n)$ in total, plus commitments to $f$, of size $n$, and $g$, of size $2^{\ell+1}$.
  • Verifier: $O(\ell)$ for Quarks + $O\left(\sum_{k=\ell+1}^{v} k\right) = O((v - \ell)v)$ for GKR, so $O(\ell + (v - \ell)v)$ in total, plus cost of evaluation proofs.
  • Proof size: Similarly to the verifier costs, $O(\ell + (v - \ell)v)$ in total, plus size of evaluation proofs.

For example, setting $\ell = v - 3\log v$, the proof size is $O(v - 3\log v + 3v\log v) = O(3v\log v)$, which is better than the proof size of $O(v^2)$ when using only GKR, for large enough $v$. This comes at the cost of committing to $g$ of size $2^{v - 3\log v + 1} = 2n/v^3$, which is much smaller than $2n$, the corresponding size of $g$ when using only Quarks.
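As a quick arithmetic sanity check of these sizes, take logs base 2 and the illustrative choice $v = 32$ (so $\log v = 5$):

```python
# Check 2^(l + 1) = 2n / v^3 for l = v - 3 log v, with v = 32.
v = 32
log_v = 5                       # log2(32)
n = 2 ** v
l = v - 3 * log_v               # l = 17
hybrid_g_size = 2 ** (l + 1)    # 2^18 entries in the hybrid
quarks_g_size = 2 * n           # 2^33 entries with Quarks alone
assert hybrid_g_size == 2 * n // v ** 3
assert hybrid_g_size < quarks_g_size
```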


  1. https://eprint.iacr.org/2019/953

  2. https://eprint.iacr.org/2013/351

  3. https://eprint.iacr.org/2020/1275

  4. https://eprint.iacr.org/2023/1216