
# Galois FRI-Binius: Joseph Johnston's Idea

In this note, I will try to explicate an idea suggested to us by Joseph Johnston.

## Normal FRI-Binius & Notation

Throughout, $t(X_0, \ldots, X_{\ell-1}) \in K[X_0, \ldots, X_{\ell-1}]^{\preceq 1}$ is an $\ell$-variate multilinear over $K$, $L / K$ is a degree-$2^\kappa$ field extension with $K$-basis $(\beta_v)_{v \in \mathcal{B}_\kappa}$, and $t'(X_0, \ldots, X_{\ell - \kappa - 1}) \in L[X_0, \ldots, X_{\ell-\kappa-1}]^{\preceq 1}$ is the packed polynomial, defined on the cube by $t'(w) := \sum_{v \in \mathcal{B}_\kappa} t(v \,\|\, w) \cdot \beta_v$.

In normal FRI-Binius, the prover kicks off the sumcheck by sending the verifier
$$s_0 := \left( t(u_0, \ldots, u_{\kappa-1}, r_\kappa, \ldots, r_{\ell-1}) \right)_{u \in \mathcal{B}_\kappa}.$$
This information is enough for the verifier to compute $t(r)$; indeed, the verifier can just compute
$$\sum_{u \in \mathcal{B}_\kappa} s_{0,u} \cdot \widetilde{\mathrm{eq}}(u_0, \ldots, u_{\kappa-1}, r_0, \ldots, r_{\kappa-1})$$
and compare this quantity to the claimed evaluation $s$.
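This recombination step can be sanity-checked numerically. The following sketch works in the toy setting $K = \mathrm{GF}(2)$, $L = \mathrm{GF}(16)$, $\ell = 4$, $\kappa = 2$; the field, parameter sizes, and seeded randomness are illustrative choices, not taken from the note.

```python
# Toy check: recombining the prover's partial evaluations s_0 recovers t(r).
# Setting (illustrative): K = GF(2), L = GF(16), ell = 4, kappa = 2.
import random

MOD = 0b10011  # x^4 + x + 1, irreducible over GF(2)

def gmul(a, b):
    """Multiply two elements of GF(16) = GF(2)[x]/(x^4 + x + 1)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def bits(u, k):
    """Little-endian bit vector of the integer u, of length k."""
    return [(u >> i) & 1 for i in range(k)]

def eqt(r, w):
    """eq~(r, w); in characteristic 2 each factor r_i*w_i + (1-r_i)(1-w_i)
    simplifies to 1 + r_i + w_i, i.e. an XOR of the three terms."""
    acc = 1
    for ri, wi in zip(r, w):
        acc = gmul(acc, 1 ^ ri ^ wi)
    return acc

def mle(vals, r):
    """Evaluate the multilinear with hypercube values `vals` at the point r."""
    acc = 0
    for u in range(len(vals)):
        acc ^= gmul(vals[u], eqt(bits(u, len(r)), r))
    return acc

rng = random.Random(1)
t = [rng.randrange(2) for _ in range(16)]   # multilinear t over K, on B_4
r = [rng.randrange(16) for _ in range(4)]   # evaluation point in L^4

# Prover's message: s_0 = (t(u_0, u_1, r_2, r_3))_{u in B_2}.
s0 = [mle([t[u + 4 * w] for w in range(4)], r[2:]) for u in range(4)]

# Verifier recombines s_0 against eq~(u, (r_0, r_1)) and recovers t(r).
recombined = 0
for u in range(4):
    recombined ^= gmul(s0[u], eqt(bits(u, 2), r[:2]))

assert recombined == mle(t, r)
```

The identity holds because $\widetilde{\mathrm{eq}}$ factors coordinatewise, so the full $4$-variate evaluation splits into the two-stage combination above.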

The tricky observation there is that the quantity $s_0$ above is also nothing other than the column representation (with respect to the tensor algebra $A := L \otimes_K L$, with its two natural embeddings $\varphi_0, \varphi_1 : L \to A$) of
$$\varphi_1(t')\left( \varphi_0(r_\kappa), \ldots, \varphi_0(r_{\ell-1}) \right),$$
which itself equals
$$\sum_{v \in \mathcal{B}_{\ell - \kappa}} \varphi_1(t'(v)) \cdot \widetilde{\mathrm{eq}}\left( \varphi_0(r_\kappa), \ldots, \varphi_0(r_{\ell-1}), v_0, \ldots, v_{\ell-\kappa-1} \right).$$

By running a sumcheck on this above thing, over the algebra $A$, the prover and verifier can thus reduce the validity of $s_0$ to that of an evaluation of $t'$ at a further point sampled during the course of that sumcheck.

## Joseph's Idea

The first nontrivial claim is that the vector $s_0 := (t(u_0, \ldots, u_{\kappa-1}, r_\kappa, \ldots, r_{\ell-1}))_{u \in \mathcal{B}_\kappa}$ above (which, as we already saw, contains exactly the data required to compute $t(r_0, \ldots, r_{\ell-1})$) is itself in turn "equivalent", up to a matrix multiplication, to the further data
$$s' := \left( \varphi^{\{u\}}(t')(r_\kappa, \ldots, r_{\ell-1}) \right)_{u \in \mathcal{B}_\kappa}.$$
Here, we write $\varphi$ for the Frobenius, i.e., the generator of $\mathrm{Gal}(L / K)$, and $\{u\} := \sum_{i=0}^{\kappa-1} u_i \cdot 2^i$ for the integer whose bits are given by $u \in \mathcal{B}_\kappa$. In particular:

**Claim 1.** We have the matrix identity:

$$\begin{bmatrix} \varphi^0(t')(r_\kappa, \ldots, r_{\ell-1}) \\ \vdots \\ \varphi^{2^\kappa - 1}(t')(r_\kappa, \ldots, r_{\ell-1}) \end{bmatrix} = \begin{bmatrix} \varphi^{\{u\}}(\beta_v) \end{bmatrix}_{u, v \in \mathcal{B}_\kappa} \cdot \begin{bmatrix} t(0, \ldots, 0, r_\kappa, \ldots, r_{\ell-1}) \\ \vdots \\ t(1, \ldots, 1, r_\kappa, \ldots, r_{\ell-1}) \end{bmatrix}.$$
*Proof.* It's enough to fix arbitrary $(r_\kappa, \ldots, r_{\ell-1}) \in L^{\ell - \kappa}$ and $u \in \mathcal{B}_\kappa$ and prove that
$$\varphi^{\{u\}}(t')(r_\kappa, \ldots, r_{\ell-1}) \stackrel{?}{=} \sum_{v \in \mathcal{B}_\kappa} \varphi^{\{u\}}(\beta_v) \cdot t(v \,\|\, r_{[\kappa:]}).$$
To this end, writing $r_{[\kappa:]} := (r_\kappa, \ldots, r_{\ell-1})$, we carry out the following calculation:
$$\begin{aligned} \varphi^{\{u\}}(t')(r_\kappa, \ldots, r_{\ell-1}) &= \sum_{w \in \mathcal{B}_{\ell - \kappa}} \varphi^{\{u\}}(t'(w)) \cdot \widetilde{\mathrm{eq}}(r_{[\kappa:]}, w) \\ &= \sum_{w \in \mathcal{B}_{\ell - \kappa}} \varphi^{\{u\}}\!\left( \sum_{v \in \mathcal{B}_\kappa} t(v \,\|\, w) \cdot \beta_v \right) \cdot \widetilde{\mathrm{eq}}(r_{[\kappa:]}, w) \\ &= \sum_{w \in \mathcal{B}_{\ell - \kappa}} \left( \sum_{v \in \mathcal{B}_\kappa} t(v \,\|\, w) \cdot \varphi^{\{u\}}(\beta_v) \right) \cdot \widetilde{\mathrm{eq}}(r_{[\kappa:]}, w) \\ &= \sum_{v \in \mathcal{B}_\kappa} \varphi^{\{u\}}(\beta_v) \cdot \left( \sum_{w \in \mathcal{B}_{\ell - \kappa}} t(v \,\|\, w) \cdot \widetilde{\mathrm{eq}}(r_{[\kappa:]}, w) \right) \\ &= \sum_{v \in \mathcal{B}_\kappa} \varphi^{\{u\}}(\beta_v) \cdot t(v \,\|\, r_{[\kappa:]}), \end{aligned}$$

which is what we want. The second equality above is just the definition of $t'(X_0, \ldots, X_{\ell-\kappa-1})$. In the third, we distribute the Frobenius, and also use the fact that each $t(v \,\|\, w) \in K$ (and is hence fixed by the Frobenius). $\square$
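Claim 1 can be checked numerically. The sketch below again uses the illustrative setting $K = \mathrm{GF}(2)$, $L = \mathrm{GF}(16)$, $\ell = 4$, $\kappa = 2$, with the (assumed, illustrative) $K$-basis $\{1, x, x^2, x^3\}$ of $L$; none of these concrete choices come from the note.

```python
# Numerical check of Claim 1 in GF(16) over GF(2), with ell = 4, kappa = 2.
import random

MOD = 0b10011  # x^4 + x + 1

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def bits(u, k):
    return [(u >> i) & 1 for i in range(k)]

def eqt(r, w):
    acc = 1
    for ri, wi in zip(r, w):
        acc = gmul(acc, 1 ^ ri ^ wi)  # char-2 form of eq~'s factors
    return acc

def mle(vals, r):
    acc = 0
    for u in range(len(vals)):
        acc ^= gmul(vals[u], eqt(bits(u, len(r)), r))
    return acc

def phi_pow(a, n):
    """n-fold Frobenius a -> a^(2^n); Gal(GF(16)/GF(2)) is cyclic of order 4."""
    for _ in range(n % 4):
        a = gmul(a, a)
    return a

beta = [1, 2, 4, 8]  # illustrative K-basis of L: {1, x, x^2, x^3}
M = [[phi_pow(beta[v], u) for v in range(4)] for u in range(4)]

rng = random.Random(2)
t = [rng.randrange(2) for _ in range(16)]
# Packed polynomial: t'(w) = sum_v t(v || w) * beta_v.
tprime = [0] * 4
for w in range(4):
    for v in range(4):
        if t[v + 4 * w]:
            tprime[w] ^= beta[v]

r = [rng.randrange(16) for _ in range(2)]  # (r_2, r_3) in L^2

# Left side: phi^u(t') evaluated at r (apply phi^u to t''s hypercube values).
lhs = [mle([phi_pow(tw, u) for tw in tprime], r) for u in range(4)]

# Right side: M times the vector of partial evaluations t(v_0, v_1, r_2, r_3).
col = [mle([t[v + 4 * w] for w in range(4)], r) for v in range(4)]
rhs = []
for u in range(4):
    acc = 0
    for v in range(4):
        acc ^= gmul(M[u][v], col[v])
    rhs.append(acc)

assert lhs == rhs  # the matrix identity of Claim 1
```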

**Claim 2.** The matrix $M$, i.e., the matrix whose $(u, v)$-th entry is $\varphi^{\{u\}}(\beta_v)$, is invertible.

*Proof.* This nontrivial fact is exactly [LN97, Lem. 3.51]. $\square$
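Claim 2 can likewise be observed concretely: the sketch below builds $M$ for the same illustrative basis $\{1, x, x^2, x^3\}$ of $\mathrm{GF}(16)$ over $\mathrm{GF}(2)$ and confirms full rank by Gaussian elimination.

```python
# Rank check of the Moore-type matrix M[u][v] = phi^u(beta_v) over GF(16).
MOD = 0b10011  # x^4 + x + 1

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def gpow(a, n):
    r = 1
    while n:
        if n & 1:
            r = gmul(r, a)
        a = gmul(a, a)
        n >>= 1
    return r

def phi_pow(a, n):
    for _ in range(n % 4):  # Frobenius has order 4 here
        a = gmul(a, a)
    return a

beta = [1, 2, 4, 8]  # illustrative GF(2)-basis of GF(16)
M = [[phi_pow(beta[v], u) for v in range(4)] for u in range(4)]

# Gaussian elimination over GF(16); a^{-1} = a^14, since a^15 = 1.
A = [row[:] for row in M]
rank = 0
for c in range(4):
    piv = next((i for i in range(rank, 4) if A[i][c]), None)
    if piv is None:
        continue
    A[rank], A[piv] = A[piv], A[rank]
    inv = gpow(A[rank][c], 14)
    A[rank] = [gmul(inv, x) for x in A[rank]]
    for i in range(4):
        if i != rank and A[i][c]:
            f = A[i][c]
            A[i] = [x ^ gmul(f, y) for x, y in zip(A[i], A[rank])]
    rank += 1

assert rank == 4  # M is invertible, as Claim 2 asserts
```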

Now progressing onwards, we see moreover that the left-hand vector above can itself be made to relate to the evaluations of the single polynomial $t'(X_0, \ldots, X_{\ell-\kappa-1})$ at a number of different points. Indeed, we have:

$$\begin{bmatrix} \varphi^0\!\left( t'\!\left( \varphi^{-0}(r_\kappa), \ldots, \varphi^{-0}(r_{\ell-1}) \right) \right) \\ \vdots \\ \varphi^{2^\kappa - 1}\!\left( t'\!\left( \varphi^{-(2^\kappa - 1)}(r_\kappa), \ldots, \varphi^{-(2^\kappa - 1)}(r_{\ell-1}) \right) \right) \end{bmatrix} = \begin{bmatrix} \varphi^0(t')(r_\kappa, \ldots, r_{\ell-1}) \\ \vdots \\ \varphi^{2^\kappa - 1}(t')(r_\kappa, \ldots, r_{\ell-1}) \end{bmatrix}.$$

Moreover, given only the left-hand vector above, the verifier can locally "peel off" the $\varphi^{\{u\}}$s, in order to obtain the further vector:
$$\begin{bmatrix} t'\!\left( \varphi^{-0}(r_\kappa), \ldots, \varphi^{-0}(r_{\ell-1}) \right) \\ \vdots \\ t'\!\left( \varphi^{-(2^\kappa - 1)}(r_\kappa), \ldots, \varphi^{-(2^\kappa - 1)}(r_{\ell-1}) \right) \end{bmatrix}.$$

The nice thing about this is that it's just $t'(X_0, \ldots, X_{\ell-\kappa-1})$ evaluated at a number of different points, namely the componentwise Galois orbit of $(r_\kappa, \ldots, r_{\ell-1}) \in L^{\ell - \kappa}$. The idea now is that we're going to use [Fig. 3, RR21]-style batching on this set of claims. This will entail a single sumcheck which takes place entirely over $L$.

## Summary of Protocol

We will now sketch the end-to-end protocol.

1. In the commitment phase, the prover commits to the packed polynomial $t'(X_0, \ldots, X_{\ell-\kappa-1}) \in L[X_0, \ldots, X_{\ell-\kappa-1}]^{\preceq 1}$, just as it usually does.

In the eval phase, the parties have a joint evaluation point $(r_0, \ldots, r_{\ell-1}) \in L^\ell$ and a claimed evaluation $s \in L$. The prover has $t(X_0, \ldots, X_{\ell-1}) \in K[X_0, \ldots, X_{\ell-1}]^{\preceq 1}$.

1. Prover sends verifier $s_0$, which is claimed to be $(t(u_0, \ldots, u_{\kappa-1}, r_\kappa, \ldots, r_{\ell-1}))_{u \in \mathcal{B}_\kappa}$.
2. Verifier checks
   $$\sum_{u \in \mathcal{B}_\kappa} s_{0,u} \cdot \widetilde{\mathrm{eq}}(r_0, \ldots, r_{\kappa-1}, u_0, \ldots, u_{\kappa-1}) \stackrel{?}{=} s.$$
3. Verifier then sets:
   $$s' := M \cdot s_0,$$
   understood as an $L$-matrix multiplication. Verifier then moreover sets:
   $$\hat{s} := \left( \varphi^{-\{u\}}(s'_u) \right)_{u \in \mathcal{B}_\kappa};$$
   by this notation, we mean that the verifier applies $\varphi^{-\{u\}}$ to each entry of $s'$, i.e., with the exponent $\{u\}$ varying entrywise, in order to obtain $\hat{s}$.
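Steps 1 through 3 can be exercised end to end in the honest case: with an honest $s_0$, the peeled vector $\hat{s}$ should consist of $t'$'s values on the componentwise Galois orbit of the evaluation point's tail. The sketch below does this in the same illustrative toy setting as before ($K = \mathrm{GF}(2)$, $L = \mathrm{GF}(16)$, $\ell = 4$, $\kappa = 2$, basis $\{1, x, x^2, x^3\}$).

```python
# Honest-prover run of steps 1-3: s_0, then s' = M . s_0, then the peel.
import random

MOD = 0b10011

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def bits(u, k):
    return [(u >> i) & 1 for i in range(k)]

def eqt(r, w):
    acc = 1
    for ri, wi in zip(r, w):
        acc = gmul(acc, 1 ^ ri ^ wi)
    return acc

def mle(vals, r):
    acc = 0
    for u in range(len(vals)):
        acc ^= gmul(vals[u], eqt(bits(u, len(r)), r))
    return acc

def phi_pow(a, n):
    for _ in range(n % 4):  # Frobenius, order 4 in this toy Galois group
        a = gmul(a, a)
    return a

beta = [1, 2, 4, 8]
M = [[phi_pow(beta[v], u) for v in range(4)] for u in range(4)]

rng = random.Random(3)
t = [rng.randrange(2) for _ in range(16)]
tprime = [0] * 4
for w in range(4):
    for v in range(4):
        if t[v + 4 * w]:
            tprime[w] ^= beta[v]
rt = [rng.randrange(16) for _ in range(2)]  # tail (r_2, r_3) of the point

# Step 1 (honest): s_0 = (t(u_0, u_1, r_2, r_3))_u.
s0 = [mle([t[u + 4 * w] for w in range(4)], rt) for u in range(4)]

# Step 3a: s' := M . s_0.
sprime = []
for u in range(4):
    acc = 0
    for v in range(4):
        acc ^= gmul(M[u][v], s0[v])
    sprime.append(acc)

# Step 3b: peel off the Frobenius; phi^{-u} = phi^{4-u} by cyclicity.
s_hat = [phi_pow(sprime[u], (4 - u) % 4) for u in range(4)]

# s_hat should be t' on the componentwise Galois orbit of (r_2, r_3).
expected = [mle(tprime, [phi_pow(ri, (4 - u) % 4) for ri in rt])
            for u in range(4)]
assert s_hat == expected
```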

The prover and verifier now run an [RR21]-style batched sumcheck on the claims
$$\hat{s}_u \stackrel{?}{=} t'\!\left( \varphi^{-\{u\}}(r_\kappa), \ldots, \varphi^{-\{u\}}(r_{\ell-1}) \right),$$
for $u \in \mathcal{B}_\kappa$. Unwinding what this means, we have:

1. The verifier samples random batching challenges $r' := (r'_0, \ldots, r'_{\kappa-1}) \in L^\kappa$ and sends them to the prover.
2. The verifier defines the initial sumcheck claim $s'_0 := \sum_{u \in \mathcal{B}_\kappa} \hat{s}_u \cdot \widetilde{\mathrm{eq}}(r', u)$.
3. Meanwhile, the prover defines:
   $$h(X_0, \ldots, X_{\ell-\kappa-1}) := t'(X_0, \ldots, X_{\ell-\kappa-1}) \cdot \sum_{u \in \mathcal{B}_\kappa} \widetilde{\mathrm{eq}}(r', u) \cdot \widetilde{\mathrm{eq}}\!\left( X, \varphi^{-\{u\}}(r_{[\kappa:]}) \right);$$
   here, we slightly abuse notation by writing $\varphi^{-\{u\}}(r_{[\kappa:]}) := (\varphi^{-\{u\}}(r_\kappa), \ldots, \varphi^{-\{u\}}(r_{\ell-1}))$.
4. The prover and verifier now run a normal sumcheck over $L$ on the polynomial $h(X_0, \ldots, X_{\ell-\kappa-1})$, for $\ell - \kappa$ rounds. In parallel, they also FRI-fold.
5. At the very end, the verifier has a final sumcheck claim, which we'll call $s''$, as well as a final FRI oracle, which we'll call $c$. At this point, the verifier checks
   $$s'' \stackrel{?}{=} c \cdot \sum_{u \in \mathcal{B}_\kappa} \widetilde{\mathrm{eq}}(r', u) \cdot \widetilde{\mathrm{eq}}\!\left( r'', \varphi^{-\{u\}}(r_{[\kappa:]}) \right);$$
   here, $r'' := (r''_0, \ldots, r''_{\ell-\kappa-1})$ is the point sampled during the course of the sumcheck.
6. Finally, the verifier does FRI querying.

## Correctness

Assuming the truth of Claim 1, we argue correctness in the following way. If the prover is honest, then $s_0 = (t(u_0, \ldots, u_{\kappa-1}, r_\kappa, \ldots, r_{\ell-1}))_{u \in \mathcal{B}_\kappa}$ will hold; by Claim 1, the verifier will have
$$s' = \left( \varphi^{\{u\}}(t')(r_\kappa, \ldots, r_{\ell-1}) \right)_{u \in \mathcal{B}_\kappa} = \left( \varphi^{\{u\}}\!\left( t'\!\left( \varphi^{-\{u\}}(r_\kappa), \ldots, \varphi^{-\{u\}}(r_{\ell-1}) \right) \right) \right)_{u \in \mathcal{B}_\kappa}.$$
We see that the verifier will finally in turn have $\hat{s} = \left( t'\!\left( \varphi^{-\{u\}}(r_\kappa), \ldots, \varphi^{-\{u\}}(r_{\ell-1}) \right) \right)_{u \in \mathcal{B}_\kappa}$. The rest comes down to the correctness of Rothblum-style batching. In particular, you need to convince yourself that
$$\sum_{v \in \mathcal{B}_{\ell - \kappa}} h(v) = \sum_{u \in \mathcal{B}_\kappa} \widetilde{\mathrm{eq}}(r', u) \cdot t'\!\left( \varphi^{-\{u\}}(r_{[\kappa:]}) \right) = \sum_{u \in \mathcal{B}_\kappa} \widetilde{\mathrm{eq}}(r', u) \cdot \hat{s}_u = s'_0$$

will hold. The first equality involves interchanging a sum, and is essentially the "content" of [Fig. 3, RR21]'s completeness statement. The second equality holds assuming the prover was honest; this was just calculated above. The third equality is by definition on the verifier's part. Thus if the prover is honest, then its sumcheck claim will be true. The verifier's final step amounts to evaluating $h(r''_0, \ldots, r''_{\ell-\kappa-1})$; here, $c$ is used as a stand-in for $t'(r''_0, \ldots, r''_{\ell-\kappa-1})$.
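The first equality in the chain above (the sum-interchange at the heart of the batching's completeness) can also be verified numerically. The sketch below again works in the illustrative toy setting $L = \mathrm{GF}(16)$, $\kappa = 2$, $\ell = 4$, with seeded random data standing in for $t'$, the tail $(r_2, r_3)$, and the batching challenges $r'$.

```python
# Numerical check: sum_{v} h(v) equals the batched claim s'_0 (honest case).
import random

MOD = 0b10011

def gmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def bits(u, k):
    return [(u >> i) & 1 for i in range(k)]

def eqt(r, w):
    acc = 1
    for ri, wi in zip(r, w):
        acc = gmul(acc, 1 ^ ri ^ wi)
    return acc

def mle(vals, r):
    acc = 0
    for u in range(len(vals)):
        acc ^= gmul(vals[u], eqt(bits(u, len(r)), r))
    return acc

def phi_pow(a, n):
    for _ in range(n % 4):
        a = gmul(a, a)
    return a

rng = random.Random(4)
tprime = [rng.randrange(16) for _ in range(4)]  # packed polynomial on B_2
rt = [rng.randrange(16) for _ in range(2)]      # tail (r_2, r_3)
rp = [rng.randrange(16) for _ in range(2)]      # batching challenges r'

# Galois-orbit points phi^{-u}(r_tail), using phi^{-u} = phi^{4-u}.
pts = [[phi_pow(ri, (4 - u) % 4) for ri in rt] for u in range(4)]

# Honest claimed values s_hat_u = t'(phi^{-u}(r_tail)).
s_hat = [mle(tprime, pts[u]) for u in range(4)]

# Batched claim s'_0 := sum_u s_hat_u * eq~(r', u).
s0_batched = 0
for u in range(4):
    s0_batched ^= gmul(s_hat[u], eqt(rp, bits(u, 2)))

# Cube-sum of h(v) = t'(v) * sum_u eq~(r', u) * eq~(v, phi^{-u}(r_tail)).
h_sum = 0
for v in range(4):
    inner = 0
    for u in range(4):
        inner ^= gmul(eqt(rp, bits(u, 2)), eqt(bits(v, 2), pts[u]))
    h_sum ^= gmul(tprime[v], inner)

assert h_sum == s0_batched
```

Interchanging the two sums reduces the check to $\sum_v t'(v) \cdot \widetilde{\mathrm{eq}}(v, p_u) = t'(p_u)$, applied once per orbit point $p_u$.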

## Efficiency

The efficiency of the above protocol is quadratic for the verifier in $2^\kappa$. Indeed, you can see this already in the verifier's multiplication $s' := M \cdot s_0$. Actually, it is interesting to ask whether there is a kind of "Galois NTT", which would allow multiplication by $M$ to be carried out in $O(\kappa \cdot 2^\kappa)$ time.

A further question concerns the verifier's computation of $\hat{s} := \left( \varphi^{-\{u\}}(s'_u) \right)_{u \in \mathcal{B}_\kappa}$ above. It seems that the verifier needs to do "triangularly many" applications of the inverse Frobenius here; i.e., for each $u \in \mathcal{B}_\kappa$, the computation $\hat{s}_u := \varphi^{-\{u\}}(s'_u)$ will take $\{u\}$ inverse Frobeniuses. Up to the issue of forward versus inverse (which shouldn't matter, since we can just "invert the triangle", using cyclicity), this will again take quadratic total work in $2^\kappa$.

As far as proof cost goes: the total cost is a FRI run on something $(\ell - \kappa)$-variate over $L$, and then also a sumcheck on something $(\ell - \kappa)$-variate over $L$. So it seems that we are getting zero embedding overhead whatsoever (for the prover; the verifier still needs to do some work).

## Security

The fascinating fact is that Claim 2 is necessary for security. As a sketch, we need to argue that if $t(r_0, \ldots, r_{\ell-1}) \neq s$, then $\mathcal{V}$ will reject. First off, assuming that inequality (i.e., that the prover's claim is false), we immediately get that if $s_0 = (t(u_0, \ldots, u_{\kappa-1}, r_\kappa, \ldots, r_{\ell-1}))_{u \in \mathcal{B}_\kappa}$, then $\mathcal{V}$ will reject in its check 2. above. We may thus assume that $s_0 \neq (t(u_0, \ldots, u_{\kappa-1}, r_\kappa, \ldots, r_{\ell-1}))_{u \in \mathcal{B}_\kappa}$. In this case, assuming Claim 2, we have that $M$ is injective; we thus see further that
$$s' \neq \left( \varphi^{\{u\}}(t')(r_\kappa, \ldots, r_{\ell-1}) \right)_{u \in \mathcal{B}_\kappa} = \left( \varphi^{\{u\}}\!\left( t'\!\left( \varphi^{-\{u\}}(r_\kappa), \ldots, \varphi^{-\{u\}}(r_{\ell-1}) \right) \right) \right)_{u \in \mathcal{B}_\kappa}$$
will hold. Thus we finally get that $\hat{s} \neq \left( t'\!\left( \varphi^{-\{u\}}(r_\kappa), \ldots, \varphi^{-\{u\}}(r_{\ell-1}) \right) \right)_{u \in \mathcal{B}_\kappa}$ will hold, since everything Frobenius-related is invertible. This then comes down to the soundness of (tensor-style) Rothblum batching, and the usual arguments of FRI-Binius. Thus the soundness overhead of this protocol, over and above normal FRI-Binius, is that of the tensor-style batch, which is $O(\kappa / |L|)$.

## References

[LN97]: Lidl, Rudolf and Niederreiter, Harald. *Finite Fields*. 1997.

[RR21]: Ron-Zewi, Noga and Rothblum, Ron. *Local Proofs Approaching the Witness Length*. Journal of the ACM.