# Galois FRI-Binius: Joseph Johnston's Idea

In this note, I will try to explicate an idea suggested to us by Joseph Johnston.

### Normal FRI-Binius & Notation

In [normal FRI-Binius](https://eprint.iacr.org/2024/504), the prover kicks off the sumcheck by sending the verifier $s_0 := \left( t(u_0, \ldots , u_{\kappa - 1}, r_\kappa, \ldots , r_{\ell - 1} ) \right)_{u \in \mathcal{B}_\kappa}$. This information is enough for the verifier to compute $t(r)$; indeed, the verifier can just compute $\sum_{u \in \mathcal{B}_\kappa} s_{0, u} \cdot \widetilde{\texttt{eq}}(u_0, \ldots , u_{\kappa - 1}, r_0, \ldots , r_{\kappa - 1})$ and compare the result to the claimed evaluation $s$.

The tricky observation there is that the quantity $s_0$ above is also nothing other than the column representation of $\varphi_1(t')(\varphi_0(r_\kappa), \ldots , \varphi_0(r_{\ell - 1}))$, which itself equals \begin{equation*}\sum_{v \in \mathcal{B}_{\ell'}} \varphi_1\left( t'(v_0, \ldots , v_{\ell' - 1}) \right) \cdot \widetilde{\texttt{eq}}(\varphi_0(r_\kappa), \ldots , \varphi_0(r_{\ell - 1}), v_0, \ldots , v_{\ell' - 1}).\end{equation*} By running a sumcheck on this latter expression, over the algebra, the prover and verifier can thus reduce the validity of $s_0$ to that of $t'(r'_0, \ldots , r'_{\ell' - 1})$.
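As a quick sanity check of this kickoff (namely, that folding $s_0$ against $\widetilde{\texttt{eq}}$ over the first $\kappa$ coordinates recovers $t(r)$), here is a minimal Python sketch. It works over a small prime field purely for arithmetic convenience; FRI-Binius itself of course lives over binary (tower) fields. The parameters $\ell = 4$, $\kappa = 2$, and $p = 97$ are illustrative choices, not anything mandated by the protocol.

```python
# Toy check: t(r) == sum_u s_{0,u} * eq(u, r_0, ..., r_{kappa-1}),
# where s_{0,u} := t(u, r_kappa, ..., r_{ell-1}).
import random

P = 97  # toy prime field, for convenience only

def eq(a, b):
    # The multilinear eq polynomial, evaluated at the pair of points (a, b).
    out = 1
    for x, y in zip(a, b):
        out = out * (x * y + (1 - x) * (1 - y)) % P
    return out

def mle_eval(vals, r):
    # Evaluate the multilinear extension of vals (indexed by the cube, little-endian) at r.
    n = len(r)
    return sum(v * eq([(i >> k) & 1 for k in range(n)], r) for i, v in enumerate(vals)) % P

random.seed(1)
ELL, KAPPA = 4, 2
t_vals = [random.randrange(P) for _ in range(1 << ELL)]  # values of t on the cube B_ell
r = [random.randrange(P) for _ in range(ELL)]            # the evaluation point

# s_0: the partial evaluations t(u, r_kappa, ..., r_{ell-1}), for u in B_kappa.
s0 = [mle_eval(t_vals[u::1 << KAPPA], r[KAPPA:]) for u in range(1 << KAPPA)]

# The verifier's check: fold s_0 against eq(u, r_0, ..., r_{kappa-1}) and compare with t(r).
folded = sum(s0[u] * eq([(u >> k) & 1 for k in range(KAPPA)], r[:KAPPA])
             for u in range(1 << KAPPA)) % P
assert folded == mle_eval(t_vals, r)
print("s_0 folds back to t(r).")
```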
### Joseph's Idea

The first nontrivial claim is that the vector $s_0 := \left( t(u_0, \ldots , u_{\kappa - 1}, r_\kappa, \ldots , r_{\ell - 1} ) \right)_{u \in \mathcal{B}_\kappa}$ above—which, as we already saw, contains exactly the data required to compute $t(r_0, \ldots , r_{\ell - 1})$—is itself in turn "equivalent", up to multiplication by a fixed matrix, to the further data $s' := \left( \varphi^{\{u\}}(t')(r_\kappa, \ldots , r_{\ell - 1}) \right)_{u \in \mathcal{B}_\kappa}$. Here, we write $\varphi$ for the Frobenius, i.e., the generator of $\text{Gal}(L \mathbin{/} K)$; as usual, $\{u\} \in \{0, \ldots , 2^\kappa - 1\}$ denotes the integer corresponding to $u \in \mathcal{B}_\kappa$. In particular:

**Claim 1.** We have the matrix identity: \begin{equation*} \begin{bmatrix} \varphi^0(t')(r_\kappa, \ldots , r_{\ell - 1}) \\ \\ \vdots \\ \\ \varphi^{2^\kappa - 1}(t')(r_\kappa, \ldots , r_{\ell - 1}) \end{bmatrix} = \begin{bmatrix} & & \\ & & \\ \hphantom{\quad \;} & \varphi^{\{u\}}(\beta_v) \vphantom{\vdots} & \hphantom{\quad \;} \\ & & \\ & & \end{bmatrix} \cdot \begin{bmatrix} t(0, \ldots , 0, r_\kappa, \ldots , r_{\ell - 1}) \\ \\ \vdots \\ \\ t(1, \ldots , 1, r_\kappa, \ldots , r_{\ell - 1}) \end{bmatrix}. \end{equation*}

*Proof.* It's enough to fix arbitrary $(r_\kappa, \ldots , r_{\ell - 1}) \in L^{\ell'}$ and $u \in \mathcal{B}_\kappa$ and prove that $\varphi^{\{u\}}(t')(r_\kappa, \ldots , r_{\ell - 1}) \stackrel{?}= \sum_{v \in \mathcal{B}_{\kappa}} \varphi^{\{u\}}(\beta_v) \cdot t(v \mathbin{\Vert} r)$. To this end, we carry out the following calculation: \begin{align*} \varphi^{\{u\}}(t')(r_\kappa, \ldots , r_{\ell - 1}) &= \sum_{w \in \mathcal{B}_{\ell'}} \varphi^{\{u\}}(t'(w)) \cdot \widetilde{\texttt{eq}}(r_{[\kappa:]}, w) \\ &= \sum_{w \in \mathcal{B}_{\ell'}} \varphi^{\{u\}}\left( \sum_{v \in \mathcal{B}_\kappa} t(v \mathbin{\Vert} w) \cdot \beta_v \right) \cdot \widetilde{\texttt{eq}}(r_{[\kappa:]}, w) \\ &= \sum_{w \in \mathcal{B}_{\ell'}} \left( \sum_{v \in \mathcal{B}_\kappa} t(v \mathbin{\Vert} w) \cdot \varphi^{\{u\}}(\beta_v) \right) \cdot \widetilde{\texttt{eq}}(r_{[\kappa:]}, w) \\ &= \sum_{v \in \mathcal{B}_\kappa} \varphi^{\{u\}}(\beta_v) \cdot \left( \sum_{w \in \mathcal{B}_{\ell'}} t(v \mathbin{\Vert} w) \cdot \widetilde{\texttt{eq}}(r_{[\kappa:]}, w) \right) \\ &= \sum_{v \in \mathcal{B}_\kappa} \varphi^{\{u\}}(\beta_v) \cdot t(v \mathbin{\Vert} r), \end{align*} which is what we want. The second equality above is just the definition of $t'(X_0, \ldots , X_{\ell' - 1})$. In the third, we distribute the Frobenius, and also use the fact that each $t(v \mathbin{\Vert} w) \in K$ (and is hence fixed by the Frobenius). The fourth swaps the order of summation, and the fifth is just the multilinearity of $t$ in its last $\ell'$ variables.

**Claim 2.** The matrix $M$—i.e., whose $(u, v)^\text{th}$ entry is $\varphi^{\{u\}}(\beta_v)$—is invertible.

*Proof.* This nontrivial fact is exactly [LN97, Lem. 3.51].

Now progressing onwards, we see moreover that the left-hand vector above can itself be related to the evaluations of the _single_ polynomial $t'(X_0, \ldots , X_{\ell' - 1})$ at a number of different places. Indeed, we have: \begin{equation*} \begin{bmatrix} \varphi^{0}\left( t'(\varphi^{-0}(r_\kappa), \ldots , \varphi^{-0}(r_{\ell - 1})) \right) \\ \\ \vdots \\ \\ \varphi^{2^\kappa - 1} \left( t'(\varphi^{-(2^\kappa - 1)}(r_\kappa), \ldots , \varphi^{-(2^\kappa - 1)}(r_{\ell - 1})) \right) \end{bmatrix} = \begin{bmatrix} \varphi^0(t')(r_\kappa, \ldots , r_{\ell - 1}) \\ \\ \vdots \\ \\ \varphi^{2^\kappa - 1}(t')(r_\kappa, \ldots , r_{\ell - 1}) \end{bmatrix}. \end{equation*} (Entrywise, this is just the identity $\sigma(p)(x) = \sigma\left( p(\sigma^{-1}(x)) \right)$, valid for any automorphism $\sigma$ and any polynomial $p$ over $L$, applied with $\sigma := \varphi^{\{u\}}$.)

Moreover, given only the left-hand vector above, the verifier can locally "peel off" the $\varphi^{\{u\}}$s, in order to obtain the further vector: \begin{equation*} \begin{bmatrix} t'(\varphi^{-0}(r_\kappa), \ldots , \varphi^{-0}(r_{\ell - 1})) \\ \\ \vdots \\ \\ t'(\varphi^{-(2^\kappa - 1)}(r_\kappa), \ldots , \varphi^{-(2^\kappa - 1)}(r_{\ell - 1})) \end{bmatrix}. \end{equation*}

The nice thing about this is that it's just $t'(X_0, \ldots , X_{\ell' - 1})$ evaluated at a number of different points, namely the "componentwise Galois orbit" of $(r_\kappa, \ldots , r_{\ell - 1}) \in L^{\ell'}$. The idea now is that we're going to use [RR21, Fig. 3]-style batching on this set of claims. This will entail a _single_ sumcheck _which takes place entirely over $L$_.
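As a sanity check, here is a small Python sketch that verifies Claims 1 and 2 numerically in a toy setting: $K = \mathbb{F}_2$, $L = \mathbb{F}_{2^4}$ (so that $\kappa = 2$ and the Frobenius is just squaring), $\ell = 3$, and $\beta_v$ taken to be the polynomial basis $1, x, x^2, x^3$ with respect to the modulus $x^4 + x + 1$. All of these choices are purely illustrative; they are not the tower, basis, or parameters FRI-Binius actually uses.

```python
# Toy check of Claims 1 and 2: K = GF(2), L = GF(2^4) with modulus x^4 + x + 1,
# kappa = 2, ell = 3 (so ell' = 1), beta_v = the polynomial basis 1, x, x^2, x^3.
import random

MOD = 0b10011  # x^4 + x + 1

def mul(a, b):
    # Carryless multiplication in GF(2^4), reduced modulo x^4 + x + 1.
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return acc

def frob(a, i):
    # i-fold Frobenius a -> a^(2^i); it has order 4 on GF(2^4) over GF(2).
    for _ in range(i % 4):
        a = mul(a, a)
    return a

def inv(a):
    # Multiplicative inverse (a != 0): a^14 = a^2 * a^4 * a^8.
    a2 = mul(a, a); a4 = mul(a2, a2); a8 = mul(a4, a4)
    return mul(a2, mul(a4, a8))

def eq(a, b):
    # Multilinear eq polynomial; in characteristic 2, 1 - x is 1 ^ x.
    out = 1
    for x, y in zip(a, b):
        out = mul(out, mul(x, y) ^ mul(1 ^ x, 1 ^ y))
    return out

def mle_eval(vals, r):
    # Evaluate the multilinear extension of vals (indexed by the cube, little-endian) at r.
    n = len(r)
    out = 0
    for i, v in enumerate(vals):
        out ^= mul(v, eq([(i >> k) & 1 for k in range(n)], r))
    return out

def rank(mat):
    # Row rank over GF(2^4), by Gaussian elimination.
    mat = [row[:] for row in mat]
    r = 0
    for c in range(len(mat[0])):
        piv = next((i for i in range(r, len(mat)) if mat[i][c]), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        for i in range(len(mat)):
            if i != r and mat[i][c]:
                f = mul(mat[i][c], inv(mat[r][c]))
                mat[i] = [x ^ mul(f, y) for x, y in zip(mat[i], mat[r])]
        r += 1
    return r

KAPPA, ELL = 2, 3
ELL_PRIME = ELL - KAPPA
BETA = [1, 2, 4, 8]  # 1, x, x^2, x^3

random.seed(0)
t_vals = [random.randint(0, 1) for _ in range(1 << ELL)]  # t is K-valued on B_ell
tp_vals = [0] * (1 << ELL_PRIME)                          # packed t': t'(w) = sum_v t(v || w) * beta_v
for w in range(1 << ELL_PRIME):
    for v in range(1 << KAPPA):
        tp_vals[w] ^= mul(t_vals[v + (w << KAPPA)], BETA[v])

r_tail = [random.randrange(16) for _ in range(ELL_PRIME)]
# s_0: the partial evaluations t(v, r_kappa, ..., r_{ell-1}), for v in B_kappa.
s0 = [mle_eval(t_vals[v::1 << KAPPA], r_tail) for v in range(1 << KAPPA)]

# Claim 2: the matrix M, with (u, v) entry phi^{u}(beta_v), is invertible.
M = [[frob(BETA[v], u) for v in range(1 << KAPPA)] for u in range(1 << KAPPA)]
assert rank(M) == 1 << KAPPA

# Claim 1: phi^{u}(t')(r_tail) equals the u-th entry of M * s_0.
for u in range(1 << KAPPA):
    lhs = mle_eval([frob(x, u) for x in tp_vals], r_tail)  # phi^u applied to t' coefficient-wise
    rhs = 0
    for v in range(1 << KAPPA):
        rhs ^= mul(M[u][v], s0[v])
    assert lhs == rhs
print("Claims 1 and 2 hold on this toy instance.")
```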
### Summary of Protocol

We will now sketch the end-to-end protocol.

0. In the commitment phase, the prover commits to the packed polynomial $t'(X_0, \ldots , X_{\ell' - 1}) \in L[X_0, \ldots , X_{\ell' - 1}]^{\preceq 1}$ just like it usually does. In the eval phase, the parties have a joint evaluation point $(r_0, \ldots , r_{\ell - 1})$ and a claimed evaluation $s$. The prover has $t(X_0, \ldots , X_{\ell - 1}) \in K[X_0, \ldots , X_{\ell - 1}]^{\preceq 1}$.
1. The prover sends the verifier $s_0$, which is _claimed_ to be $\left( t(u_0, \ldots , u_{\kappa - 1}, r_\kappa, \ldots , r_{\ell - 1} ) \right)_{u \in \mathcal{B}_\kappa}$.
2. The verifier checks $\sum_{u \in \mathcal{B}_\kappa} s_{0, u} \cdot \widetilde{\texttt{eq}}(r_0, \ldots , r_{\kappa - 1}, u_0, \ldots , u_{\kappa - 1}) \stackrel{?}= s$.
3. The verifier then sets: \begin{equation*} \begin{bmatrix} \\ \\ s' \\ \\ \; \end{bmatrix} := \begin{bmatrix} & & \\ & & \\ \hphantom{\quad \;} & M & \hphantom{\quad \;} \\ & & \\ & & \end{bmatrix} \cdot \begin{bmatrix} \\ \\ s_0 \\ \\ \; \end{bmatrix}, \end{equation*} understood as a matrix multiplication over $L$. The verifier then moreover sets: \begin{equation*} \begin{bmatrix} \\ \\ \widehat{s} \\ \\ \; \end{bmatrix} := \begin{bmatrix} \\ \\ \varphi^{-\{u\}}\left( s' \right) \\ \\ \; \end{bmatrix}; \end{equation*} by this notation, we mean that the verifier applies $\varphi^{-\{u\}}$ to _each_ entry of $s'$—i.e., with the exponent $-\{u\}$ varying—in order to get $\widehat{s}$. The prover and verifier now run an [RR21]-style batched sumcheck on the claims $\widehat{s}_u \stackrel{?}= t'(\varphi^{-\{u\}}(r_\kappa), \ldots , \varphi^{-\{u\}}(r_{\ell - 1}))$, for $u \in \mathcal{B}_\kappa$. Unwinding what this means, we have:
4. The verifier samples random challenges $(r''_0, \ldots , r''_{\kappa - 1})$ and sends them to the prover.
5. The verifier defines the initial sumcheck claim $s^{*}_0 := \sum_{u \in \mathcal{B}_\kappa} \widehat{s}_u \cdot \widetilde{\texttt{eq}}(r'', u)$.
6. Meanwhile, the prover defines: \begin{equation*}h(X_0, \ldots, X_{\ell' - 1}) := t'(X_0, \ldots , X_{\ell' - 1}) \cdot \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot \widetilde{\texttt{eq}}(X, \varphi^{-\{u\}}(r_{[\kappa:]}));\end{equation*} here, we slightly abuse notation by writing $\varphi^{-\{u\}}(r_{[\kappa:]}) := \left( \varphi^{-\{u\}}(r_\kappa), \ldots , \varphi^{-\{u\}}(r_{\ell - 1}) \right)$.
7. The prover and verifier now run a _normal sumcheck over $L$_ on the polynomial $h(X_0, \ldots, X_{\ell' - 1})$, for $\ell'$ rounds. In parallel, they also FRI-fold.
8. At the very end, the verifier has a final sumcheck claim, which we'll call $s^*_{\ell'}$, as well as a final FRI oracle, which we'll call $c$. At this point, the verifier checks \begin{equation*} s^*_{\ell'} \stackrel{?}= c \cdot \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot \widetilde{\texttt{eq}}(r', \varphi^{-\{u\}}(r_{[\kappa:]})); \end{equation*} here, $r' := (r'_0, \ldots , r'_{\ell' - 1})$ is the point sampled during the course of the sumcheck.
9. Finally, the verifier does FRI querying.
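Continuing the toy $L = \mathbb{F}_{2^4}$ example from above, the sketch below walks through the verifier-side reductions in steps 3 through 6 for an honest prover: it computes $s' := M \cdot s_0$, peels off the Frobenius to obtain $\widehat{s}$, checks that $\widehat{s}$ really consists of the evaluations of $t'$ on the componentwise Galois orbit, and then checks that $\sum_{v \in \mathcal{B}_{\ell'}} h(v) = s^*_0$. The sumcheck rounds themselves and all of the FRI machinery are elided, since those parts are standard; the parameters are the same illustrative ones as before, and $r''$ is simply sampled at random.

```python
# Toy run of the verifier's local steps (3)-(6), plus the batched-sumcheck identity
# sum_v h(v) = s*_0, over L = GF(2^4), kappa = 2, ell = 3, with an honest prover.
import random

MOD = 0b10011  # x^4 + x + 1

def mul(a, b):
    # Carryless multiplication in GF(2^4), reduced modulo x^4 + x + 1.
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return acc

def frob(a, i):
    # i-fold Frobenius a -> a^(2^i); phi^{-i} is frob(., 4 - i), since the order is 4.
    for _ in range(i % 4):
        a = mul(a, a)
    return a

def eq(a, b):
    # Multilinear eq polynomial; in characteristic 2, 1 - x is 1 ^ x.
    out = 1
    for x, y in zip(a, b):
        out = mul(out, mul(x, y) ^ mul(1 ^ x, 1 ^ y))
    return out

def mle_eval(vals, r):
    # Evaluate the multilinear extension of vals (indexed by the cube, little-endian) at r.
    n = len(r)
    out = 0
    for i, v in enumerate(vals):
        out ^= mul(v, eq([(i >> k) & 1 for k in range(n)], r))
    return out

def bits(i, n):
    return [(i >> k) & 1 for k in range(n)]

KAPPA, ELL = 2, 3
ELL_PRIME = ELL - KAPPA
BETA = [1, 2, 4, 8]  # 1, x, x^2, x^3

random.seed(2)
t_vals = [random.randint(0, 1) for _ in range(1 << ELL)]
tp_vals = [0] * (1 << ELL_PRIME)
for w in range(1 << ELL_PRIME):
    for v in range(1 << KAPPA):
        tp_vals[w] ^= mul(t_vals[v + (w << KAPPA)], BETA[v])

r_tail = [random.randrange(16) for _ in range(ELL_PRIME)]
orbit = [[frob(x, (4 - u) % 4) for x in r_tail] for u in range(1 << KAPPA)]  # phi^{-u}(r_tail)

# Honest prover: s_0 = (t(v, r_tail))_v.  Verifier (step 3): s' := M * s_0, then peel off.
s0 = [mle_eval(t_vals[v::1 << KAPPA], r_tail) for v in range(1 << KAPPA)]
s_prime = [0] * (1 << KAPPA)
for u in range(1 << KAPPA):
    for v in range(1 << KAPPA):
        s_prime[u] ^= mul(frob(BETA[v], u), s0[v])
s_hat = [frob(s_prime[u], (4 - u) % 4) for u in range(1 << KAPPA)]

# These are exactly the claims fed to the batched sumcheck: s_hat[u] = t'(phi^{-u}(r_tail)).
for u in range(1 << KAPPA):
    assert s_hat[u] == mle_eval(tp_vals, orbit[u])

# Steps 4-6: s*_0 := sum_u eq(r'', u) * s_hat[u] equals sum_v h(v), where
# h(X) := t'(X) * sum_u eq(r'', u) * eq(X, phi^{-u}(r_tail)).
r2 = [random.randrange(16) for _ in range(KAPPA)]  # the challenge r''
s_star_0 = 0
for u in range(1 << KAPPA):
    s_star_0 ^= mul(eq(bits(u, KAPPA), r2), s_hat[u])

h_sum = 0
for v in range(1 << ELL_PRIME):
    weight = 0
    for u in range(1 << KAPPA):
        weight ^= mul(eq(bits(u, KAPPA), r2), eq(bits(v, ELL_PRIME), orbit[u]))
    h_sum ^= mul(tp_vals[v], weight)  # h(v) = t'(v) * weight
assert h_sum == s_star_0
print("Peel-off claims and the batched sumcheck claim both check out.")
```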
### Correctness

Assuming the truth of Claim 1, we argue correctness in the following way. If the prover is honest, then $s_0 = \left( t(u_0, \ldots , u_{\kappa - 1}, r_\kappa, \ldots , r_{\ell - 1} ) \right)_{u \in \mathcal{B}_\kappa}$ will hold; by Claim 1, the verifier will have $s' = \left( \varphi^{\{u\}}(t')(r_\kappa, \ldots , r_{\ell - 1}) \right)_{u \in \mathcal{B}_\kappa} = \left( \varphi^{\{u\}}\left( t'(\varphi^{-\{u\}}(r_\kappa), \ldots , \varphi^{-\{u\}}(r_{\ell - 1})) \right) \right)_{u \in \mathcal{B}_\kappa}$. We see that the verifier will finally in turn have $\widehat{s} = \left( t'(\varphi^{-\{u\}}(r_\kappa), \ldots , \varphi^{-\{u\}}(r_{\ell - 1})) \right)_{u \in \mathcal{B}_\kappa}$. The rest comes down to the correctness of Rothblum-style batching.

In particular, you need to convince yourself that \begin{equation*} \sum_{v \in \mathcal{B}_{\ell'}} h(v) = \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot t'(\varphi^{-\{u\}}(r_{[\kappa:]})) = \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot \widehat{s}_u = s^*_0 \end{equation*} will hold. The first equality involves interchanging the order of summation, and is essentially the "content" of [RR21, Fig. 3]'s completeness statement. The second equality holds assuming the prover was honest; this was just calculated above. The third equality is by definition on the verifier's part. Thus if the prover is honest, then its sumcheck claim will be true. The verifier's final step amounts to evaluating $h(r'_0, \ldots , r'_{\ell' - 1})$; here, $c$ is used as a stand-in for $t'(r'_0, \ldots , r'_{\ell' - 1})$.
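For the record, the first equality above is nothing more than the definition of $h$, an interchange of summation, and the multilinearity of $t'$: \begin{align*} \sum_{v \in \mathcal{B}_{\ell'}} h(v) &= \sum_{v \in \mathcal{B}_{\ell'}} t'(v) \cdot \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot \widetilde{\texttt{eq}}(v, \varphi^{-\{u\}}(r_{[\kappa:]})) \\ &= \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot \sum_{v \in \mathcal{B}_{\ell'}} t'(v) \cdot \widetilde{\texttt{eq}}(v, \varphi^{-\{u\}}(r_{[\kappa:]})) = \sum_{u \in \mathcal{B}_\kappa} \widetilde{\texttt{eq}}(r'', u) \cdot t'(\varphi^{-\{u\}}(r_{[\kappa:]})). \end{align*}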
### Efficiency

The verifier's work in the above protocol is quadratic in $2^\kappa$. Indeed, you can see this already in the verifier's multiplication $s' := \big[ M \big] \cdot s_0$. Actually, it is interesting to ask whether there is a kind of "Galois NTT", which would allow multiplication by $\big[ M \big]$ to be carried out in $O(\kappa \cdot 2^\kappa)$ time.

A further question concerns the verifier's computation of $\widehat{s} := \varphi^{-\{u\}}(s')$ above. It seems that the verifier needs to do "triangularly many" applications of the inverse Frobenius here; i.e., for each $u \in \mathcal{B}_\kappa$, the computation $\widehat{s}_u := \varphi^{-\{u\}}(s'_u)$ will take $\{u\}$ applications of the inverse Frobenius. Up to the issue of forward versus inverse (which shouldn't matter, since we can just "invert the triangle", using cyclicity), this will again take quadratic total work in $2^\kappa$.

As for proving cost: the total cost is a FRI run on something $\ell'$-variate over $L$, and then also a sumcheck on something $\ell'$-variate over $L$. So it seems that we are getting zero embedding overhead whatsoever (for the prover; the verifier needs to do some work).

### Security

The fascinating fact is that Claim 2 is necessary for security. As a sketch, we need to argue that if $t(r_0, \ldots , r_{\ell - 1}) \neq s$, then $\mathcal{V}$ will reject. First off, under that assumption (i.e., that the prover's claim is false), we immediately see that if $s_0 = \left( t(u_0, \ldots , u_{\kappa - 1}, r_\kappa, \ldots , r_{\ell - 1} ) \right)_{u \in \mathcal{B}_\kappa}$, then $\mathcal{V}$ will reject in check 2 above. We may thus assume that $s_0 \neq \left( t(u_0, \ldots , u_{\kappa - 1}, r_\kappa, \ldots , r_{\ell - 1} ) \right)_{u \in \mathcal{B}_\kappa}$. In this case, Claim 2 gives that $M$ is injective; it follows that $s' \neq \left( \varphi^{\{u\}}(t')(r_\kappa, \ldots , r_{\ell - 1}) \right)_{u \in \mathcal{B}_\kappa} = \left( \varphi^{\{u\}}\left( t'(\varphi^{-\{u\}}(r_\kappa), \ldots , \varphi^{-\{u\}}(r_{\ell - 1})) \right) \right)_{u \in \mathcal{B}_\kappa}$ will hold. Thus we finally get that $\widehat{s} \neq \left( t'(\varphi^{-\{u\}}(r_\kappa), \ldots , \varphi^{-\{u\}}(r_{\ell - 1})) \right)_{u \in \mathcal{B}_\kappa}$ will hold, since everything Frobenius-related is invertible. The rest then comes down to the soundness of (tensor-style) Rothblum batching, and the usual arguments of FRI-Binius. Thus the soundness overhead of this protocol over and above normal FRI-Binius is that of the tensor-style batch, which is $O\left( \frac{\kappa}{|L|} \right)$.

## References

[LN97]: Lidl, Rudolf and Niederreiter, Harald. *Finite Fields.* 1997.

[RR21]: Ron-Zewi, Noga and Rothblum, Ron. *Local Proofs Approaching the Witness Length.* Journal of the ACM.