# Multipoint, multipolynomial batched openings from inner-product arguments
$$
\newcommand{\angled}[1]{\langle {#1} \rangle}
\newcommand{\pows}[1]{\mathsf{pows}\left( {#1} \right)}
\newcommand{\F}{\mathbb{F}}
\newcommand{\G}{\mathbb{G}}
\newcommand{\what}{\widehat}
\newcommand{\commit}[1]{[ {#1} ]}
$$
This note explains how to use the inner-product argument from [Bulletproofs](https://eprint.iacr.org/2017/1066.pdf) as a multi-point, multi-polynomial commitment scheme.
Let $\mathbb{F}$ be a prime order field and $\mathbb{G}$ a group of order $\left| \mathbb{F} \right|$.
Let $\mathbf{G} \colon \mathbb{G}^{d+1}$ be a vector of random group elements.
Suppose we want to commit to polynomials $f_1, \dots, f_n$ of degree $d$ over $\mathbb{F}$. For the purposes of this document, we will identify polynomials with their coefficient vectors, so each $f_i$ is a vector of $\F$ elements of length $d + 1$.
Define
$$\pows{x} := (1, x, x^2, x^3, \dots, x^d)
$$
Note that $\angled{f_i, \pows{\alpha}} = f_i(\alpha)$.
Let $\commit{f_i} := \angled{f_i, \mathbf{G}}$. $\commit{f_i}$ will be the commitment to the polynomial $f_i$. Note at this point that the Bulletproofs IPA scheme would allow us to create polynomial evaluation proofs for $f_i(\alpha) = \beta$ by proving $\angled{{f_i}, \pows{\alpha}} = \beta$ against the commitment $\commit{f_i}$.
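To make this concrete, here is a minimal Python sketch of the commitment and of the evaluation-as-inner-product identity. The model is a toy chosen for this sketch: both $\F$ and the group are integers mod a prime $q$, so the commitment is of course neither hiding nor binding; a real instantiation would use an elliptic-curve group.

```python
import random

q = 2**61 - 1  # a Mersenne prime standing in for |F| (chosen for this sketch)
d = 4          # polynomial degree

def pows(x):
    """(1, x, x^2, ..., x^d) over F."""
    return [pow(x, k, q) for k in range(d + 1)]

def inner(a, b):
    """<a, b>: inner product of two equal-length vectors, mod q."""
    return sum(ai * bi for ai, bi in zip(a, b)) % q

def horner(coeffs, x):
    """Evaluate a coefficient vector at x by Horner's rule, for cross-checking."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

G = [random.randrange(q) for _ in range(d + 1)]  # toy stand-in for G : G^(d+1)

f = [random.randrange(q) for _ in range(d + 1)]  # coefficient vector of f
f_commit = inner(f, G)                           # [f] := <f, G>

alpha = random.randrange(q)
assert inner(f, pows(alpha)) == horner(f, alpha)  # <f, pows(alpha)> = f(alpha)
```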
Let $\alpha_1 \colon \F$ and suppose $f_i(\alpha_1) = \beta_{i, 1}$ for each $i$. We can batch together all the evaluation proofs $f_i(\alpha_1) = \beta_{i, 1}$ by picking a random $\xi$ and then running one proof that $(\sum_i \xi^i f_i) (\alpha_1) = \sum_i \xi^i \beta_{i, 1}$. This is possible because the polynomial commitments form an $\F$-vector space. This reduces many polynomials and one point to one polynomial and one point.
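Continuing the toy sketch above, the combination step looks like this (the variable names are mine, not from the note):

```python
# Batch n single-point claims f_i(alpha_1) = beta_{i,1} into one claim about
# F = sum_i xi^i f_i. Reuses q, d, G, inner, horner from the sketch above.
n = 3
fs = [[random.randrange(q) for _ in range(d + 1)] for _ in range(n)]
commits = [inner(fi, G) for fi in fs]            # [f_i]

alpha_1 = random.randrange(q)
betas = [horner(fi, alpha_1) for fi in fs]       # beta_{i,1} = f_i(alpha_1)

xi = random.randrange(q)                         # verifier's random challenge
# F = sum_i xi^i f_i, computed coefficient-wise (i runs from 1 to n)
F = [sum(pow(xi, i + 1, q) * fs[i][k] for i in range(n)) % q
     for k in range(d + 1)]

# Because commitments are F-linear, the verifier can derive [F] on its own:
assert inner(F, G) == sum(pow(xi, i + 1, q) * commits[i] for i in range(n)) % q
# One evaluation claim now stands in for all n of them:
assert horner(F, alpha_1) == sum(pow(xi, i + 1, q) * betas[i] for i in range(n)) % q
```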
Now let's explain how to go further: from many polynomials and many points to a single one-polynomial one-point proof. Suppose we want to prove evaluations of all the $f_i$ on some points $\alpha_1, \dots, \alpha_m$. That is, we want to prove $f_i(\alpha_j) = \beta_{i, j}$ for all appropriate $i, j$. Sample $\xi$ at random and let $F := \sum_i \xi^i f_i$.
Sample random $r \colon \F$. Let $F_j := \sum_i \xi^i \beta_{i, j}$. Note that $F_j$ is supposed to be equal to $F(\alpha_j)$. Use the inner product argument to prove that $\angled{F, \sum_j r^j \pows{\alpha_j}} = \sum_j r^j F_j$.
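This works because, by bilinearity of the inner product, $\angled{F, \sum_j r^j \pows{\alpha_j}} = \sum_j r^j \angled{F, \pows{\alpha_j}} = \sum_j r^j F(\alpha_j)$. Continuing the toy sketch, the folded claim looks like this:

```python
# Fold m multi-point claims about F into one inner-product claim.
# Reuses q, d, n, fs, F, xi, inner, horner from the sketches above.
m = 2
alphas = [random.randrange(q) for _ in range(m)]
beta = [[horner(fi, aj) for aj in alphas] for fi in fs]       # beta[i][j]
F_js = [sum(pow(xi, i + 1, q) * beta[i][j] for i in range(n)) % q
        for j in range(m)]                                    # F_j

r = random.randrange(q)   # verifier's second random challenge
# The combined evaluation vector sum_j r^j pows(alpha_j), entry-wise
combined = [sum(pow(r, j + 1, q) * pow(alphas[j], k, q) for j in range(m)) % q
            for k in range(d + 1)]

# The single claim handed to the Bulletproofs IPA against the commitment [F]:
assert inner(F, combined) == sum(pow(r, j + 1, q) * F_js[j] for j in range(m)) % q
```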
After this, the verifier should be convinced that $F(\alpha_j) = F_j$ for all $j$: if $F(\alpha_j) \neq F_j$ for some $j$, then $\sum_j r^j \left( F(\alpha_j) - F_j \right)$ is a nonzero polynomial in $r$ of degree at most $m$, so it vanishes at the random $r$ with probability at most $m / \left| \F \right|$. Then, from the security of the many-polynomial, single-point batching described above, the verifier should be convinced that $f_i(\alpha_j) = \beta_{i, j}$.
### On proof-size when applied to AHP-based SNARKs
A major drawback of this batching technique is that it requires providing an evaluation of every polynomial at every point, evaluations which many AHP-based SNARKs do not otherwise require. In Marlin, for example, the verifier only requires the evaluation of each polynomial at a single point.
One way to partially mitigate this inefficiency is as follows. In AHP-based SNARKs, the prover sends polynomials $f_{1,1}, \dots, f_{1,k_1}$, then the verifier sends a challenge point $\beta_1$. The prover then sends more polynomials $f_{2, 1}, \dots, f_{2,k_2}$, the verifier sends a challenge point $\beta_2$, and the protocol continues in this way.
Once the prover is done sending polynomials, for each round $i$ it needs to open the polynomials $f_{i, 1}, \dots, f_{i, k_i}$ at $\beta_i$. We would like to batch all these openings together as in the method above, while sending as few "cross-term" evaluations as possible. Ideally, we would only send $f_{i, j}(\beta_i)$ for all $i, j$.
This can be partially achieved by having the prover modify each $f_{i, j}$ so that $f_{i, j}(\beta_{i'}) = 0$ for $i' < i$: each $f_{i, j}$ then only needs to be opened at $\beta_{i'}$ for $i' \geq i$, since its evaluations at earlier points are known to be $0$. This cuts the number of evaluations required roughly in half (depending on the sizes of the rounds).
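The text above does not fix how the prover performs this modification; one natural way, assumed purely for this sketch, is to subtract from $f_{i, j}$ its Lagrange interpolant at the earlier challenge points:

```python
# Toy sketch: make f vanish at earlier challenge points by subtracting its
# Lagrange interpolant there (this mechanism is an assumption of the sketch).
# Reuses q, d, f, horner from the sketches above.
def poly_mul(a, b):
    """Product of two coefficient vectors, mod q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def interpolant(xs, ys):
    """Coefficients of the unique poly of degree < len(xs) with p(xs[k]) = ys[k]."""
    coeffs = [0] * len(xs)
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        basis, denom = [1], 1
        for j, xj in enumerate(xs):
            if j != k:
                basis = poly_mul(basis, [(-xj) % q, 1])  # multiply by (X - x_j)
                denom = denom * (xk - xj) % q
        scale = yk * pow(denom, -1, q) % q
        for idx, c in enumerate(basis):
            coeffs[idx] = (coeffs[idx] + scale * c) % q
    return coeffs

earlier = [random.randrange(q) for _ in range(2)]        # beta_{i'} for i' < i
low = interpolant(earlier, [horner(f, b) for b in earlier])
low += [0] * (len(f) - len(low))                         # pad up to degree d
f_mod = [(fc - lc) % q for fc, lc in zip(f, low)]        # f minus interpolant
assert all(horner(f_mod, b) == 0 for b in earlier)       # vanishes as required
```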