adr1anh

@adr1anh

Joined on Aug 27, 2021

  • In Jim Posen's note, the protocol ends with the verifier querying $$ v_{i,j} = M_i(j || r_0, \ldots, r_{\nu -k - 1} ) $$ for all $j\in B_k$. Using these, the verifier computes the evaluations $$ \hat{v}_i = \hat{M}_i(r_Y, r_0,\ldots,r_{\nu -k - 1}) = \sum_{j \in D} v_{i,j} \cdot L_{j,D}(r_Y), $$
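The interpolation step above is just the fact that a polynomial is determined by its values on the interpolation domain. As a sketch (assuming $D$ is the domain for the first block of variables and $L_{j,D}$ the associated Lagrange basis, which is how the note appears to use them):

$$
\hat{M}_i(Y, r_0,\ldots,r_{\nu - k - 1}) = \sum_{j \in D} \hat{M}_i(j, r_0,\ldots,r_{\nu - k - 1}) \cdot L_{j,D}(Y),
$$

so substituting $Y = r_Y$ together with the queried values $v_{i,j} = M_i(j || r_0,\ldots,r_{\nu - k - 1})$ recovers the verifier's formula for $\hat{v}_i$.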
  • High-level Traits We add a BatchedRelaxedR1CSSNARKTrait which generalizes the existing RelaxedR1CSSNARKTrait in src/traits/snark.rs. The main difference is that the setup, prove and verify methods take as input slices of R1CS shapes S, Relaxed R1CS instances U, and Relaxed R1CS witnesses W. We also implement a CompressedSNARK proof for the SuperNova RecursiveSNARK. The main differences with the original are
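A minimal sketch of what such a batched trait could look like. The type names below (R1CSShape, RelaxedR1CSInstance, RelaxedR1CSWitness, CommitmentKey) are placeholders standing in for the library's actual generic types, so treat this as an illustration of the slice-based signatures rather than the real definition in src/traits/snark.rs.

```rust
// Hypothetical placeholder types; the real trait is generic over the engine
// and uses the library's own R1CS and commitment types.
pub struct R1CSShape;
pub struct RelaxedR1CSInstance;
pub struct RelaxedR1CSWitness;
pub struct CommitmentKey;

pub trait BatchedRelaxedR1CSSNARKTrait: Sized {
    type ProverKey;
    type VerifierKey;
    type Error;

    /// Setup receives a slice of shapes, one per circuit in the batch.
    fn setup(
        ck: &CommitmentKey,
        shapes: &[R1CSShape],
    ) -> Result<(Self::ProverKey, Self::VerifierKey), Self::Error>;

    /// Prove satisfiability of all (shape, instance, witness) triples at once.
    fn prove(
        ck: &CommitmentKey,
        pk: &Self::ProverKey,
        shapes: &[R1CSShape],
        instances: &[RelaxedR1CSInstance],
        witnesses: &[RelaxedR1CSWitness],
    ) -> Result<Self, Self::Error>;

    /// Verify the batched proof against the slice of instances.
    fn verify(
        &self,
        vk: &Self::VerifierKey,
        instances: &[RelaxedR1CSInstance],
    ) -> Result<(), Self::Error>;
}
```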
  • Let's assume our R1CS shape has $2^n$ variables, $\ell$ public inputs (including the initial 1, or $u$), and $2^m$ constraints, where $\ell < 2^n$. Take $N = \max\{2^{n+1}, 2^m\}$ as the size of the public parameters. Our witness is the vector $W \in \mathbb{F}^{2^{n}}$, considered as a multilinear polynomial in $n$ variables $W(X_0,\ldots,X_{n-1})$. We define $P(X_0,\ldots,X_{n-1})$ as the MLE of the $\ell$ public inputs. From these, we construct the vector $Z \in \mathbb{F}^{2^{n+1}}$ whose MLE has $n+1$ variables $$ Z(X_0,X_1,\ldots,X_{n}) = (1-X_0)\cdot W(X_1,\ldots,X_{n}) + X_0\cdot P(X_1,\ldots,X_{n}) $$ Suppose $W,P$ are a valid solution, but the prover wants to use a malicious output $\tilde{P} \in \mathbb{F}^\ell$. We can craft a malicious $\tilde{W}$ as $$
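Restricting the first variable of the definition above makes the layout of $Z$ explicit:

$$
Z(0, X_1,\ldots,X_n) = W(X_1,\ldots,X_n), \qquad Z(1, X_1,\ldots,X_n) = P(X_1,\ldots,X_n),
$$

so the evaluation table of $Z$ over $\{0,1\}^{n+1}$ is simply the $2^n$ evaluations of $W$ followed by the $2^n$ evaluations of $P$.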
  • Lagrange Polynomial basics For $0 \leq i < 2^n$, define its bit representation as $(i_0,\ldots, i_{n-1})$ such that $$ i = \sum_{k=0}^{n-1} i_k \cdot 2^{n-k-1} $$ Note that for $n' < n$, if $i < 2^{n'}$ then its bit representation will look like $(0,\ldots,0,i_{n-n'},\ldots, i_{n-1})$, with $n-n'$ zeros in front. The Lagrange polynomials over $\{0,1\}^n$ are indexed by $0\leq i<2^n$, where $$ \begin{aligned}
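The note's own definition is cut off in this preview, so here is a small Rust sketch of the standard multilinear Lagrange basis over $\{0,1\}^n$, $L_i(x) = \prod_k \bigl(i_k x_k + (1-i_k)(1-x_k)\bigr)$, using the big-endian bit convention above; the field is mocked with modular u64 arithmetic purely for illustration.

```rust
// Toy modular arithmetic standing in for a real prime field (illustration only).
const MODULUS: u64 = 0xffff_ffff_0000_0001; // Goldilocks prime, used as a stand-in

fn add(a: u64, b: u64) -> u64 { ((a as u128 + b as u128) % MODULUS as u128) as u64 }
fn mul(a: u64, b: u64) -> u64 { ((a as u128 * b as u128) % MODULUS as u128) as u64 }
fn sub(a: u64, b: u64) -> u64 { add(a, MODULUS - (b % MODULUS)) }

/// Evaluate the Lagrange basis polynomial L_i over {0,1}^n at the point x,
/// reading the bits of i most-significant first: i = sum_k i_k * 2^(n-k-1).
fn lagrange_eval(i: u64, x: &[u64]) -> u64 {
    let n = x.len();
    assert!(i < (1u64 << n));
    let mut acc = 1u64;
    for (k, &xk) in x.iter().enumerate() {
        let i_k = (i >> (n - k - 1)) & 1; // k-th bit of i, big-endian as in the note
        // factor is x_k if the bit is 1, and (1 - x_k) if the bit is 0
        let factor = if i_k == 1 { xk } else { sub(1, xk) };
        acc = mul(acc, factor);
    }
    acc
}

fn main() {
    // Sanity check: L_i equals 1 at the hypercube point whose bits are those of i,
    // and 0 at every other hypercube point.
    let n = 3;
    for i in 0..(1u64 << n) {
        for j in 0..(1u64 << n) {
            let point: Vec<u64> = (0..n).map(|k| (j >> (n - k - 1)) & 1).collect();
            assert_eq!(lagrange_eval(i, &point), if i == j { 1 } else { 0 });
        }
    }
    println!("Lagrange basis sanity check passed");
}
```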
  • $$ \newcommand{\NP}{\mathsf{NP}} \newcommand{\Rel}{\mathcal{R}} \newcommand{\npx}{\mathbf{x}} \newcommand{\npw}{\mathbf{w}} \newcommand{\npX}{\mathbf{X}} \newcommand{\npW}{\mathbf{W}} \newcommand{\acc}{\mathsf{acc}} \newcommand{\bin}{\{0,1\}} \newcommand{\l}{\ell}
  • see updated link: https://hackmd.io/fiaKvd0bR1min9d_gp9ARA The original Halo paper explains how the Bulletproofs/IPA commitment scheme can be used to speed up recursive SNARK verification. Use IPA as PCS. Notice that IPA is a "split accumulation scheme" (cf. Proof-Carrying Data from Accumulation Schemes). Based on Sonic arithmetization, the precursor to PlonK. The hard part of recursive verification is computing a final MSM inside a circuit over the non-native field. Using curve cycles, create a SNARK over a curve whose base field corresponds to the MSM's curve. The circuit performing the MSM is really small, and doesn't do anything else.
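As a toy Rust sketch of the curve-cycle relationship this relies on (all type names are made up for illustration, not any real library's API): each curve's base field is the other's scalar field, so a circuit over the second curve natively handles the first curve's point arithmetic.

```rust
// Placeholder fields and curves; a real library would use an actual cycle
// (e.g. the Pasta curves) with full field and group implementations.
#[derive(Clone, Copy)]
struct Fp(u64); // base field of E1 = scalar field of E2
#[derive(Clone, Copy)]
struct Fq(u64); // base field of E2 = scalar field of E1

trait CycleCurve {
    type Base;   // field in which the curve's point coordinates live
    type Scalar; // field in which the curve's scalars live
}

struct E1;
struct E2;
impl CycleCurve for E1 { type Base = Fp; type Scalar = Fq; }
impl CycleCurve for E2 { type Base = Fq; type Scalar = Fp; }

fn main() {
    // A circuit defined over E2's scalar field works natively with E1 point
    // coordinates, so the deferred MSM over E1 needs no non-native arithmetic;
    // that circuit stays tiny because it does nothing else.
    let _e1_coordinate: <E1 as CycleCurve>::Base = Fp(0);
    let _e2_circuit_wire: <E2 as CycleCurve>::Scalar = Fp(0);
    println!("E1 base field and E2 scalar field coincide (Fp)");
}
```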