# Halo
In this note we describe [Halo](https://eprint.iacr.org/2019/1021.pdf), a recent zero-knowledge proof system. The goal is to walk through the scheme step by step and explain the cryptographic primitives at play.
We look at the problem of proving a very long sequence of statements.
For instance, a blockchain-rollup operator needs to accumulate as many transactions as possible before providing a proof of correct execution of all those transactions to the blockchain. The proof has to be succinct and (ideally) verified in constant time, no matter the number of operations in the sequence.
## Pedersen commitments
Halo relies mainly on the Pedersen commitment scheme, along with an opening technique borrowed from Bulletproofs (with a small modification to its original description).
:::info
Let $p\neq r$ be prime numbers, $E(\mathbf{F}_p)$ an elliptic curve defined over $\mathbf{F}_p$ such that $E$ contains a group $E[r]$ of $\mathbf{F}_p$-rational points of order $r$. Let $n\ll r$ be an integer and $(G_i)_{i\in[n]}$ a list of points in $E[r]$.
Let $P=\sum_{i\in[n]}a_iX^i\in\mathbf{F}_r[X]$ be a polynomial.
The Pedersen commitment to $P$ is
$$
\mathcal{PC}_P=\sum_{i\in[n]}a_iG_i
$$
:::
> __Binding property__: Suppose $P=\sum_{i\in[n]}a_iX^i$ and $Q=\sum_{i\in[n]}b_iX^i$ are two different polynomials with $\mathcal{PC}_P=\mathcal{PC}_Q$.
Since $G_1,\dots,G_n$ lie in a cyclic group, they can be written as $\lambda_1G,\dots,\lambda_nG$ where $G$ is a generator and the $\lambda_i$ are in $\mathbf{F}_r$. $\mathcal{PC}_P=\mathcal{PC}_Q$ implies that $\sum_{i\in[n]}(a_i-b_i)\lambda_iG=O$, so $(a_i-b_i)_{i\in[n]}$ is orthogonal to $(\lambda_i)_{i\in[n]}$, therefore $(a_i-b_i)_{i\in[n]}$ belongs to a hyperplane of $\mathbf{F}_r^n$. Since the $\lambda_i$ are unknown to the prover, the probability that $a-b$ lands in this hyperplane by luck is $r^{n-1}/r^n=1/r$.
>
> __Hiding property__: The version that we described is __not__ hiding. To make it hiding, $\mathcal{PC}_P$ is masked by adding a random component $\rho R$, where $R$ is a random point on the curve and $\rho\in\mathbf{F}_r$ is a random scalar. We stick to the non-hiding version to avoid overloading notations.
There is a way to open a Pedersen commitment (adapted from a more general technique in Bulletproofs) with communication complexity between a prover and a verifier that is logarithmic in the degree of the polynomial.
We suppose that the number of coefficients of the committed polynomial is a power of $2$, equal to $n=2^s$.
If $\mathbf{a}=(a_1,\dots,a_n)\in\mathbf{F}_r^n$ and $\mathbf{G}=(G_1,\dots,G_n)\in E(\mathbf{F}_p)[r]^n$ are the coefficients of a polynomial and the points used for computing Pedersen commitments, we write $<\mathbf{a},\mathbf{G}>$ for $\sum_{i\in[n]}a_iG_i$.
We also add the subscript $L$ and $R$ to indicate that we take the left part or the right part (of size $n/2$) of a list of elements.
:::info
__Opening a Pedersen commitment $\mathcal{PC}_P$ at $x$__
Let $\mathbf{x}=(1,x,\dots,x^{n-1})$.
* Verifier sends $x$ and a random point $U\in E(\mathbf{F}_p)[r]$ to the prover
* The prover sends the purported value $P(x)=y$ and the verifier computes $K_0=\mathcal{PC}_P+yU$
* __loop__ for i from $1$ to $s$:
* The prover computes
$$
L_i=<\mathbf{a}^L,\mathbf{G}^R>+<\mathbf{a}^L,\mathbf{x}^R>U \\
R_i=<\mathbf{a}^R,\mathbf{G}^L>+<\mathbf{a}^R,\mathbf{x}^L>U
$$
and sends them to the verifier
* the verifier sends a random $u_i\in\mathbf{F}_r$ to the prover
* The prover sets
$$
\mathbf{a}=u_i\mathbf{a}^L+u_i^{-1}\mathbf{a}^R \\
\mathbf{x}=u_i^{-1}\mathbf{x}^L+u_i\mathbf{x}^R \\
\mathbf{G}=u_i^{-1}\mathbf{G}^L+u_i\mathbf{G}^R
$$
* __end loop__
* At the end of the loop, $\mathbf{a}$ and $\mathbf{x}$ __are of size $1$__; write $a'$ and $x'$ for their single entries. The prover sends $a'$ to the verifier (the verifier can recompute $x'$ itself).
* The verifier sets $h=\prod_{i\in[s]}(u_i^{-1}+u_iX^{2^{s-i}})$ and computes:
$$
x'=h(x)\\
G'=\mathcal{PC}_h\\
K'=K_0+\sum_{i\in[s]}(u_i^2L_i+u_i^{-2}R_i)
$$
* The verifier finally checks that $a'x'U+a'G'=K'$. If the check holds, then the verifier concludes that $y=\sum_{i\in[n]}a_ix^i$.
:::
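The whole opening argument can be exercised end to end in the same toy group (the order-$r$ subgroup of $\mathbf{F}_p^*$ standing in for $E[r]$; parameters and function names are ours). One function plays both roles, with the verifier folding $K$ while the prover folds $\mathbf{a},\mathbf{x},\mathbf{G}$:

```python
import random

# Toy group: order-r subgroup of F_p^*, p = 2r + 1, written multiplicatively.
p, r = 2039, 1019

def gadd(P, Q): return P * Q % p            # plays the role of point addition
def gsmul(k, P): return pow(P, k % r, p)    # plays the role of k * P

def inner_s(u, v):                          # scalar inner product mod r
    return sum(a * b for a, b in zip(u, v)) % r

def inner(scalars, points):                 # <a, G> = sum_i a_i G_i
    acc = 1
    for k, P in zip(scalars, points):
        acc = gadd(acc, gsmul(k, P))
    return acc

def open_and_verify(a, G, x, U, y, rng):
    """Interactive opening of PC_P at x against the claimed value y.

    Plays both roles and returns True iff the final check
    a'x'U + a'G' == K' passes."""
    xs = [pow(x, i, r) for i in range(len(a))]   # (1, x, ..., x^{n-1})
    K = gadd(inner(a, G), gsmul(y, U))           # K_0 = PC_P + y U
    while len(a) > 1:
        h = len(a) // 2
        aL, aR = a[:h], a[h:]
        GL, GR = G[:h], G[h:]
        xL, xR = xs[:h], xs[h:]
        # The prover sends L_i, R_i BEFORE seeing the challenge u_i.
        L = gadd(inner(aL, GR), gsmul(inner_s(aL, xR), U))
        R = gadd(inner(aR, GL), gsmul(inner_s(aR, xL), U))
        u = rng.randrange(1, r)                  # verifier's challenge
        ui = pow(u, -1, r)                       # u^{-1} mod r (Python 3.8+)
        a  = [(u * l + ui * rr) % r for l, rr in zip(aL, aR)]
        xs = [(ui * l + u * rr) % r for l, rr in zip(xL, xR)]
        G  = [gadd(gsmul(ui, Pl), gsmul(u, Pr)) for Pl, Pr in zip(GL, GR)]
        K  = gadd(K, gadd(gsmul(u * u, L), gsmul(ui * ui, R)))  # K += u^2 L + u^-2 R
    return gadd(gsmul(a[0] * xs[0] % r, U), gsmul(a[0], G[0])) == K
```

With an honest prover the final check always passes, and for a wrong claimed value it always fails: the offset $(y'-y)U$ in $K_0$ survives the folding untouched.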
> __Completeness__: After receiving $u_1$, the prover will divide the size of the vectors $\mathbf{G},\mathbf{x},\mathbf{a}$ by $2$, by folding them like this:
> * $\mathbf{a}'=u_1\mathbf{a}^L+u_1^{-1}\mathbf{a}^R$
> * $\mathbf{G}'=u_1^{-1}\mathbf{G}^L+u_1\mathbf{G}^R$
> * $\mathbf{x}'=u_1^{-1}\mathbf{x}^L+u_1\mathbf{x}^R$
Observe that
$$
<\mathbf{a}',\mathbf{G}'>+<\mathbf{a}',\mathbf{x}'>U=\\
<u_1\mathbf{a}^L+u_1^{-1}\mathbf{a}^R,u_1^{-1}\mathbf{G}^L+u_1\mathbf{G}^R>+<u_1\mathbf{a}^L+u_1^{-1}\mathbf{a}^R,u_1^{-1}\mathbf{x}^L+u_1\mathbf{x}^R>U= \\
u_1^2(<\mathbf{a}^L,\mathbf{G}^R>+<\mathbf{a}^L,\mathbf{x}^R>U)+u_1^{-2}(<\mathbf{a}^R,\mathbf{G}^L>+<\mathbf{a}^R,\mathbf{x}^L>U)+K_0= \\
u_1^2L_1+u_1^{-2}R_1+K_0
$$
>
> If the procedure stops here, the prover sends $\mathbf{a}'$, of size $n/2$, to the verifier, the verifier folds $\mathbf{G},\mathbf{x}$ as the prover did and checks that
$$
K_0+u_1^2L_1+u_1^{-2}R_1=<\mathbf{a}',\mathbf{G}'>+<\mathbf{a}',\mathbf{x}'>U
$$
The full procedure is a repetition of this step $s$ times. The fully folded versions $K',x',G'$ of $K_0$, $\mathbf{x}$ and $\mathbf{G}$ are respectively $K_0+\sum_{i\in[s]}(u_i^2L_i+u_i^{-2}R_i)$, $h(x)$, and $\mathcal{PC}_h$, where $h=\prod_{i\in[s]}(u_i^{-1}+u_iX^{2^{s-i}})$.
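The closed form of the folded $\mathbf{x}$ can be checked numerically: folding $(1,x,\dots,x^{n-1})$ with challenges $u_1,\dots,u_s$ collapses it to $\prod_{i\in[s]}(u_i^{-1}+u_iX^{2^{s-i}})$ evaluated at $x$, since round $i$ halves the vector and pulls out the factor $(u_i^{-1}+u_ix^{2^{s-i}})$. A toy check over $\mathbf{F}_r$ (parameters ours):

```python
import random

r = 1019                                       # toy scalar field F_r
rng = random.Random(3)

s, x = 4, 7
n = 2 ** s
xs = [pow(x, i, r) for i in range(n)]          # (1, x, ..., x^{n-1})
us = [rng.randrange(1, r) for _ in range(s)]   # challenges u_1, ..., u_s

# Fold x as the prover does: x <- u^{-1} x^L + u x^R, s times.
for u in us:
    ui, h = pow(u, -1, r), len(xs) // 2
    xs = [(ui * l + u * rr) % r for l, rr in zip(xs[:h], xs[h:])]
folded = xs[0]

# Closed form: h(X) = prod_i (u_i^{-1} + u_i X^{2^{s-i}}), evaluated at x.
h_at_x = 1
for i, u in enumerate(us, start=1):
    h_at_x = h_at_x * (pow(u, -1, r) + u * pow(x, 2 ** (s - i), r)) % r
```

The two values agree for any choice of challenges, which is exactly what lets the verifier evaluate the folded $x'$ as $h(x)$ in logarithmic time.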
> __Soundness__: The soundness comes from the fact that $L_i,R_i$ are sent prior to receiving the $u_i$.
> If the claimed value, call it $y'$, is false, the verifier starts with $K_0'=<\mathbf{a},\mathbf{G}>+y'U$. To make the proof work, during the first round the prover sends $L_1,R_1$, computed correctly, and after receiving $u_1$, the prover hopes that $K_0'+u_1^2L_1+u_1^{-2}R_1=<\mathbf{a}',\mathbf{G}'>+<\mathbf{a}',\mathbf{x}'>U$, where $\mathbf{a}',\mathbf{G}',\mathbf{x}'$ are the correctly folded versions of $\mathbf{a},\mathbf{G},\mathbf{x}$. If this happens, the prover continues to send the correct $L_i,R_i$ and he is saved, since the folded commitments will coincide after this event. The probability of such a collision is $1/r$, from the binding property of the Pedersen commitment. Since the prover can hope for such a collision at each round, he has $s/r$ chances to win, which is negligible.
It is important to note that Pedersen commitments are additively homomorphic: we have
$$
\mathcal{PC}_{P_1}+\mathcal{PC}_{P_2}=\mathcal{PC}_{P_1+P_2}
$$
This property allows a prover to prove a batch opening of several commitments at one point with one proof:
:::info
Let $(\mathcal{PC}_{P_i})_{i\in[l]}$ be a list of Pedersen commitments. To prove to a verifier that the commitments open to $(y_i)_{i\in[l]}$ at $\zeta$:
* The prover sends the commitments $(\mathcal{PC}_{P_i})_{i\in[l]}$ and the values $(P_i(\zeta))_{i\in[l]}$ to the verifier
* The verifier samples a random $v$, sends it to the prover
* Prover and verifier engage in the Bulletproofs opening argument to verify that $\sum_{i\in[l]}v^i\mathcal{PC}_{P_i}$ opens at $\sum_{i\in[l]}v^iP_i(\zeta)$
:::
> The completeness follows from the homomorphic property of the Pedersen commitment scheme. The soundness comes from the fact that the commitments and purported values are sent prior to receiving $v$, so if one of the $P_i(\zeta)$ is wrong, the prover has only an $s/r$ chance of winning, by the soundness of the Bulletproofs opening argument.
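The homomorphic property behind this batching can be checked directly in the toy group used earlier (an order-$r$ subgroup of $\mathbf{F}_p^*$ standing in for $E[r]$; names ours): combining the $l$ commitments with powers of $v$ gives exactly the commitment to $\sum_{i\in[l]}v^iP_i$, so one opening covers all of them.

```python
import random

# Toy group: order-r subgroup of F_p^*, p = 2r + 1, written multiplicatively.
p, r = 2039, 1019

def gadd(P, Q): return P * Q % p            # plays the role of point addition
def gsmul(k, P): return pow(P, k % r, p)    # plays the role of k * P

def commit(coeffs, G):
    """Pedersen commitment sum_i a_i G_i in the toy group."""
    acc = 1
    for a, P in zip(coeffs, G):
        acc = gadd(acc, gsmul(a, P))
    return acc

rng = random.Random(1)
n, l = 4, 3
G = [pow(4, rng.randrange(1, r), p) for _ in range(n)]
polys = [[rng.randrange(r) for _ in range(n)] for _ in range(l)]
v = rng.randrange(1, r)                     # verifier's batching challenge

# What the verifier computes from the l commitments alone:
lhs = 1
for i, P in enumerate(polys, start=1):
    lhs = gadd(lhs, gsmul(pow(v, i, r), commit(P, G)))

# Commitment to the randomly combined polynomial sum_i v^i P_i:
combined = [sum(pow(v, i, r) * P[j] for i, P in enumerate(polys, start=1)) % r
            for j in range(n)]
rhs = commit(combined, G)
```

The equality `lhs == rhs` holds coefficient by coefficient because the group has prime order $r$ and scalars are reduced mod $r$.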
## SNARK
A SNARK (Succinct Non-interactive ARgument of Knowledge) allows a prover to convince a verifier that he knows $x$ (private) such that $f(x)=y$, where $y$ is public (think of $f$ as a hash function for instance). The prover doesn't reveal $x$, and verifying a proof is much cheaper than computing $f(x)$.
We will use the terms "circuit" and "constraint system" interchangeably.
We specify some useful notations:
* The circuits are written in capital.
* $\mathcal{Sat}(\_,\_,\_)$ is a boolean function checking __satisfiability__ of a constraint system. It takes as parameters a circuit $\mathcal{C}$, public input $y$ and private input $x$ and returns $1$ or $0$ according to whether or not $\mathcal{C}$ is satisfied when fed with $x,y$.
* $\mathcal{P}(\_)$ is a __proving__ function, which takes a list of $\mathcal{Sat}(\_,\_,\_)$ functions. It returns a proof object $\pi$ if $\mathcal{Sat}(\mathcal{C_i},y_i,x_i)=1$ for all $i$, otherwise it fails.
* $\mathcal{V}(\_,\_)$ is a __verifying__ function. It's a boolean function which takes a proof $\pi$ and a public input $y$, and returns true or false according to whether or not the proof is valid
* $\mathcal{H}$ is a secure hash function
> __Example__: suppose one has two circuits $\mathcal{C}_1,\mathcal{C}_2$ and associated public/private inputs $(y_1,x_1),(y_2,x_2)$ such that $\mathcal{Sat}(\mathcal{C}_i,y_i,x_i)=1$. The proof attesting this fact is created as $\pi=\mathcal{P}(\mathcal{Sat}(\mathcal{C}_1,y_1,x_1),\mathcal{Sat}(\mathcal{C}_2,y_2,x_2))$. Verifying both claims is done by running $\mathcal{V}(\pi,y_1||y_2)$.
We can now give an informal definition of what is meant by a SNARK in this post:
:::info
We suppose that a SNARK is a tuple $(\mathcal{Sat},\mathcal{P},\mathcal{V})$, with the following properties:
* $\mathcal{P}$ outputs a proof which is of __fixed short size__ (say a few kB to fix ideas)
* The memory for running $\mathcal{Sat}(\mathcal{C},y,x)$ is $\mathcal{O}(|y|+|x|)$
* The memory __and__ CPU time for running $\mathcal{P}((\mathcal{Sat}(\mathcal{C}_i),y_i,x_i)_i)$ is $\mathcal{O}(\sum_i(|x_i|+|y_i|))$
* The memory __and__ CPU time for running $\mathcal{V}(\pi,y)$ is $\mathcal{O}(|y|)$ with a __small constant__
* $\mathcal{P}$ is __bounded in memory__ and cannot process arbitrarily large computations
:::
The fourth point means that $\mathcal{V}$ is cheap to run as long as $|y|$ is small.
For example, suppose $\mathcal{C}$ is a circuit with associated public/private inputs $(y,x)$ such that $\mathcal{Sat}(\mathcal{C},y,x)=1$, and let $\pi=\mathcal{P}(\mathcal{Sat}(\mathcal{C},y,x))$. Checking $\mathcal{Sat}(\mathcal{C},y,x)=1$ and checking $\mathcal{V}(\pi,y)=1$ both attest that $(y,x)$ satisfies the circuit, but running $\mathcal{V}(\pi,y)$ takes a few seconds and very little memory, while checking $\mathcal{Sat}(\mathcal{C},y,x)$ directly may take a long time and, more importantly, is extremely memory-hungry.
## Proof Carrying Data
Given a SNARK as defined in the previous section, say we have a large set of circuits $(\mathcal{C}_i)_{i\in[n]}$ with associated inputs $(x_i,y_i)_{i\in[n]}$, and we want to output a proof that $\mathcal{Sat}(\mathcal{C}_i,y_i,x_i)=1$ for all $i$.
### First attempt
The first solution is to feed $\mathcal{P}$ all the circuits at once and compute $\mathcal{P}((\mathcal{Sat}(\mathcal{C}_i,y_i,x_i))_{i\in[n]})$.
The issue is that $\mathcal{P}$ is bounded in memory, so brute-force proving all the statements at once will eventually exhaust the memory of $\mathcal{P}$. Moreover the time complexity of $\mathcal{P}$ grows linearly with the cumulated number of constraints of all the circuits $\mathcal{C}_i$, so proving a gigantic circuit needs a lot of computational power (typically it cannot be run on a laptop).
### Second attempt
We know that $\mathcal{V}$ is an algorithm that is cheap to run. The idea is to express this algorithm as an R1CS (rank-1 constraint system), the language supported by our SNARK, and see $\mathcal{Sat}(\mathcal{V},\_,\_)$ as a particular satisfiability problem (constraining $\mathcal{V}$ so that it returns $1$) to be fed to the prover algorithm $\mathcal{P}$. If one has a proof $\pi$ and an associated public input $y$ such that $\mathcal{V}(\pi,y)$ returns $1$, then $\mathcal{P}(\mathcal{Sat}(\mathcal{V},\pi||y,\_))$ (there are no private inputs) outputs a proof $\pi'$ attesting that $\mathcal{Sat}(\mathcal{V},\pi||y,\_)=1$; in other words, $\pi'$ is a proof attesting that a proof is correct.
An important observation is that $\mathcal{Sat}(\mathcal{V},\_,\_)$ does not take a lot of memory when fed to $\mathcal{P}$ by assumption, since $\mathcal{V}$ is cheap and $\mathcal{Sat}$ grows linearly in memory with the size of $\mathcal{V}$.
Having observed this, the idea is to sequentially handle the circuits $(\mathcal{C}_i)_{i\in[n]}$ to output a proof that $\mathcal{Sat}(\mathcal{C}_i,y_i,x_i)=1$, __and__ that the proof of the previous statement is correct. More precisely, if $\pi$ is a proof of a previous statement, with associated public input $y$, then at step $i$, we run
$$
\mathcal{P}(\mathcal{Sat}(\mathcal{C}_i,y_i,x_i),\mathcal{Sat}(\mathcal{V},\pi||y,\_))
$$
The newly generated proof $\pi'$ attests that $\mathcal{Sat}(\mathcal{C}_i,y_i,x_i)=1$ __and__ that $\mathcal{V}(\pi,y)=1$. So we have a way to build a proof that not only attests the validity of a circuit, but also attests recursively that all previous proofs are correct. It's an example of a Proof Carrying Data (PCD) infrastructure. There are several constructions of PCD; in the most general one, instead of a sequence of circuits we have an acyclic graph. Here we stick to the sequential version.
There is a last problem that arises when doing that. When accumulating proofs, the list of public inputs grows with the number of proofs. Here is an illustration of that, starting with $\mathcal{C}_0$:
* $\pi_0=\mathcal{P}(\mathcal{Sat}(\mathcal{C}_0,y_0,x_0))$ (there is no previous proof statement)
* $\pi_1=\mathcal{P}(\mathcal{Sat}(\mathcal{C}_1,y_1,x_1),\mathcal{Sat}(\mathcal{V},\pi_0||y_0,\_))$
* $\pi_2=\mathcal{P}(\mathcal{Sat}(\mathcal{C}_2,y_2,x_2),\mathcal{Sat}(\mathcal{V},\pi_1||y_1||\pi_0||y_0,\_))$
* $\pi_3=\mathcal{P}(\mathcal{Sat}(\mathcal{C}_3,y_3,x_3),\mathcal{Sat}(\mathcal{V},\pi_2||y_2||\pi_1||y_1||\pi_0||y_0,\_))$
* ...
We see that the number of public inputs increases at each step, for instance verifying $\pi_3$ is done by running $\mathcal{V}(\pi_3,y_3||\pi_2||y_2||\pi_1||y_1||\pi_0||y_0)$.
Since the CPU time and memory consumption of $\mathcal{V}$ grow linearly with the size of the public input, this will quickly become problematic. The solution to counter that is to hash all the public inputs $(\pi,y)$ using $\mathcal{H}$, and to add a circuit checking that $\mathcal{H}(\pi,y)$ gives the expected hash result $h$. $h$ becomes the public input and $\pi,y$ become private inputs. We name this circuit $\mathcal{CH}$ for "Correct Hash", and $\mathcal{Sat}(\mathcal{CH},y,x)$ returns $1$ if $\mathcal{H}(x)=y$.
Here is the final construction:
* $\pi_0=\mathcal{P}(\mathcal{Sat}(\mathcal{C}_0,y_0,x_0))$ (there is no previous proof statement)
* compute $h_0=\mathcal{H}(\pi_0,y_0)$
* $\pi_1=\mathcal{P}(\mathcal{Sat}(\mathcal{CH},h_0,\pi_0||y_0),\mathcal{Sat}(\mathcal{C}_1,y_1,x_1),\mathcal{Sat}(\mathcal{V}=1,\_,\pi_0||y_0))$
* Hash the public inputs by computing $h_1=\mathcal{H}(\pi_1,h_0,y_1)$
* $\pi_2=\mathcal{P}(\mathcal{Sat}(\mathcal{CH},h_1,\pi_1||h_0||y_1),\mathcal{Sat}(\mathcal{C}_2,y_2,x_2),\mathcal{Sat}(\mathcal{V}=1,\_,\pi_1||h_0||y_1))$
* Hash the public inputs by computing $h_2=\mathcal{H}(\pi_2,h_1,y_2)$
* $\pi_3=\mathcal{P}(\mathcal{Sat}(\mathcal{CH},h_2,\pi_2||h_1||y_2),\mathcal{Sat}(\mathcal{C}_3,y_3,x_3),\mathcal{Sat}(\mathcal{V}=1,\_,\pi_2||h_1||y_2))$
* ...
We see that from $\pi_1$ on, the number of public inputs does not increase anymore; it only depends on the number of public inputs of the $i$-th circuit.
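The hash-forwarding loop above can be mocked in a few lines. The prover and hash here are stand-ins (`mock_prove` just hashes its inputs, whereas a real $\mathcal{P}$ outputs an actual SNARK proof), but the sketch shows the point: whatever the step, the verifier-facing public input stays the fixed-size pair $(h_i,y_i)$.

```python
import hashlib

def H(*parts):
    """Stand-in for the circuit-friendly hash; returns a fixed-size digest."""
    m = hashlib.sha256()
    for part in parts:
        m.update(part)
    return m.digest()

def mock_prove(*inputs):
    """Stand-in for P: a real SNARK proof is constant-size; we fake one."""
    return H(b"proof", *inputs)

# Step 0: prove C_0 directly.
y0, x0 = b"y0", b"x0"
pi0 = mock_prove(y0, x0)
h0 = H(pi0, y0)                               # h_0 = H(pi_0, y_0)

# Step i >= 1: previous proof and hash become PRIVATE inputs;
# the public input is only the fixed-size hash h_{i-1} plus y_i.
h_prev, pi_prev = h0, pi0
public_sizes = []
for i in range(1, 5):
    yi, xi = b"y%d" % i, b"x%d" % i
    pi_i = mock_prove(h_prev, yi, xi, pi_prev)
    h_i = H(pi_i, h_prev, yi)                 # h_i = H(pi_i, h_{i-1}, y_i)
    public_sizes.append(len(h_i) + len(yi))   # verifier sees only (h_i, y_i)
    h_prev, pi_prev = h_i, pi_i
```

Without the hashing trick, the public input at step $i$ would contain all previous $(\pi_j,y_j)$ and grow linearly with $i$.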
## Instantiating Halo with PLONK
Halo is the application of the construction described in the previous section to a version of PLONK in which the Kate (KZG) commitment scheme has been replaced by the Pedersen commitment scheme, along with the Bulletproofs opening argument.
Let's recall roughly the workflow of PLONK between a prover and a verifier, where the proof is in fact a list of commitments:
:::info
* Prover sends a PLONK proof $(\mathcal{PC}_{P_i^0})_{i\in[l]}$ to the verifier
* Verifier queries the values $(P_i^0(\zeta_0))_{i\in[l]}$ and performs various checks on them
* Verifier samples $v$ and checks a batch opening proof for $\sum_{i\in[l]}v^i\mathcal{PC}_{P_i^0}$ at $\zeta_0$:
    1. sample a random $u^0=(u^0_1,\dots,u^0_s)$, send them to the prover
    2. compute $h_0(X)=\prod_{i\in[s]}((u^0_i)^{-1}+u^0_iX^{2^{s-i}})$
    3. evaluate $h_0$ at $\zeta_0$
    4. compute $\mathcal{PC}_{h_0}$ (__costly operation!__)
    5. perform consistency checks between $h_0(\zeta_0)$ and $\mathcal{PC}_{h_0}$
:::
For the construction described in the previous section to work, the verification procedure needs to be cheap (at most logarithmic in the size of the circuit).
__The computation of $\mathcal{PC}_h$ does not match this property__, since the cost is linear in $n$.
To circumvent that, we outsource the computation of $\mathcal{PC}_h$ to the prover:
:::info
* Prover sends a PLONK proof $(\mathcal{PC}_{P_i^0})_{i\in[l]}$ to the verifier
* Verifier queries the values $(P_i^0(\zeta_0))_{i\in[l]}$ and performs various checks on them
* Verifier samples $v$ and checks a batch opening proof for $\sum_{i\in[l]}v^i\mathcal{PC}_{P_i^0}$ at $\zeta_0$:
    1. sample a random $u^0=(u^0_1,\dots,u^0_s)$, send them to the prover
    2. compute $h_0(X)=\prod_{i\in[s]}((u^0_i)^{-1}+u^0_iX^{2^{s-i}})$
    3. evaluate $h_0$ at $\zeta_0$
* Prover computes $\mathcal{PC}_{h_0}$ and sends it to the verifier
    4. verifier performs consistency checks between $h_0(\zeta_0)$ and $\mathcal{PC}_{h_0}$
:::
The verifier now no longer has to compute $\mathcal{PC}_{h_0}$, but the trade-off is that for the final checks, he does not know whether $\mathcal{PC}_{h_0}$ is correct!
To check that $\mathcal{PC}_{h_0}$ is correct, a solution would be to ask the prover to open it, but an opening proof requires the verifier to compute yet another Pedersen commitment... But remember that we prove a sequence of circuits, and the prover's job is exactly to provide opening proofs. __So this check is forwarded to the next proof__, where $\mathcal{PC}_{h_0}$ and $u^0$ become public inputs. The workflow between the prover and the verifier in the next proof becomes, at step $1$:
:::info
* Proof at stage $0$ is correct, up to the correctness of $\mathcal{PC}_{h_0}$
* Prover sends a new PLONK proof $(\mathcal{PC}_{P_i^1})_{i\in[l]}$ to the verifier
* Verifier queries the values $(P_i^1(\zeta_1))_{i\in[l]}$ and performs various checks on them
* Verifier queries $u^0$ from the previous proof and computes $h_0(\zeta_1)$
* Verifier samples $v$ and checks a batch opening proof for $\sum_{i\in[l]}v^i\mathcal{PC}_{P_i^1}+v^{l+1}\mathcal{PC}_{h_0}$ at $\zeta_1$:
    1. sample a random $u^1=(u^1_1,\dots,u^1_s)$, send them to the prover
    2. compute $h_1(X)=\prod_{i\in[s]}((u^1_i)^{-1}+u^1_iX^{2^{s-i}})$
    3. evaluate $h_1$ at $\zeta_1$
* Prover computes $\mathcal{PC}_{h_1}$ and sends it to the verifier
    4. verifier performs consistency checks between $h_1(\zeta_1)$ and $\mathcal{PC}_{h_1}$
* $\mathcal{PC}_{h_0}$ is now validated, and the correctness of $\pi_0$ is fully proven
* $\pi_1$ is correct, up to the correctness of $\mathcal{PC}_{h_1}$
:::
The computations of $\mathcal{PC}_{h_i}$ are therefore forwarded from proof to proof. The soundness comes from the fact that the prover does not know $u^i$ in advance, so if he sends a random $\mathcal{PC}_{h_i}$ he has a negligible chance of passing the check of the opening proof.
With the construction from the previous section, and this forwarding system, one can obtain a chain of proofs of __arbitrary length__, in which the verification of the $i$-th step guarantees that __all the previous proofs were correct__!
If we use Fiat-Shamir, we can write a compact version of the scheme:
:::info
* Proof $\pi_0$ is correct, up to the correctness of $\mathcal{PC}_{h_0}$
* Prover sends a new proof $\pi_1$:
1. $(\mathcal{PC}_{P_i^1},P_i^1(\zeta_1))_{i\in[l]}$
2. $\mathcal{PC}_{h_1}$
3. A proof for the opening of $\sum_{i\in[l]}v^i\mathcal{PC}_{P_i^1}+v^{l+1}\mathcal{PC}_{h_0}$
* Verifier performs various checks on $(P_i^1(\zeta_1))_{i\in[l]}$
* Verifier checks the batch opening proof, validating the correctness of $\mathcal{PC}_{h_0}$ and therefore $\pi_0$
* $\pi_1$ is correct, up to the correctness of $\mathcal{PC}_{h_1}$
:::
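With Fiat–Shamir, the challenges $u_i$ are derived by hashing the transcript instead of being sampled by the verifier, who then recomputes the same challenges from the same transcript. A sketch (the transcript encoding and names are assumptions of ours, not Halo's actual serialization):

```python
import hashlib

r = 1019                       # toy scalar field size

def fs_challenge(transcript):
    """Derive a nonzero challenge in F_r by hashing the transcript so far."""
    d = hashlib.sha256(transcript).digest()
    return 1 + int.from_bytes(d, "big") % (r - 1)

# The prover appends each (L_i, R_i) to the transcript and hashes to get u_i,
# instead of waiting for the verifier; the verifier recomputes the same u_i.
transcript = b"PC_P|x|y"       # stand-in encoding of the commitment, x and y
us = []
for L, R in [(b"L1", b"R1"), (b"L2", b"R2"), (b"L3", b"R3")]:
    transcript += L + R
    us.append(fs_challenge(transcript))
```

Because each $u_i$ depends on all messages sent so far, the prover still cannot choose $L_i,R_i$ as a function of $u_i$, which is what the soundness argument relies on.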
## Caveat
A circuit $\mathcal{C}$ in PLONK reasons about variables which live in a field $\mathbf{F}_r$. If one wants to deal with variables in another field $\mathbf{F}_p$, one needs to encode $\mathbf{F}_p$ arithmetic in a language where variables are in $\mathbf{F}_r$, which is extremely inefficient.
However, from the previous section, we saw that a proof $\pi$ consists of:
* $(\mathcal{PC}_{P_i},P_i(\zeta))_{i\in[l]}$: it's a set of points on $E(\mathbf{F}_p)$ plus some values in $\mathbf{F}_r$
* $\mathcal{PC}_h$: it's a point on $E(\mathbf{F}_p)$
* A proof for the opening of $\sum_{i\in[l]}v^i\mathcal{PC}_{P_i}+v^{l+1}\mathcal{PC}_h$: a set of points on $E(\mathbf{F}_p)$
:::info
So a proof $\pi$ is split into $2$ components $(\pi^p,\pi^r)$, living respectively in $\mathbf{F}_p$ and $\mathbf{F}_r$.
:::
The verifier's job consists of:
* performing various checks on $(P_i(\zeta))_{i\in[l]}$ ($\mathbf{F}_r$ operations)
* computing $h(\zeta)$ ($\mathbf{F}_r$ operations)
* performing consistency checks between $h(\zeta)$ and a folded commitment ($\mathbf{F}_p$ operations)
:::info
So $\mathcal{V}$ is split into $2$ components $(\mathcal{V}^p,\mathcal{V}^r)$, reasoning respectively in $\mathbf{F}_p$ and $\mathbf{F}_r$.
:::
So we see that writing $\mathcal{V}$ in a language which does not support $\mathbf{F}_p$ operations is problematic.
The solution is to find another curve $E'$ defined over $\mathbf{F}_r$ which contains a group of order $p$, and to bounce back and forth between the $2$ curves, carefully separating the proofs and verification circuits into their respective fields.
Here is a step-by-step description of a full sequence, where we omit the inputs to avoid overloading notations, and write $\mathcal{P}(\mathcal{C})$ for generating the proof of circuit $\mathcal{C}$:
:::info
* On $\mathbf{F}_r$
1. $\mathcal{P}(\mathcal{C}_0)\rightarrow\pi_0=(\pi_0^r,\pi_0^p)$
2. $\pi_0^r$ is saved
3. $\pi_0$ attests that $\mathcal{C}_0$ is satisfied, up to the correctness of a Pedersen commitment
* On $\mathbf{F}_p$:
1. $\mathcal{P}(\mathcal{C}_1,\mathcal{V}^p(\pi_0^p))\rightarrow\pi_1=(\pi_1^r,\pi_1^p)$
2. $\pi_1^p$ is saved
3. $\pi_1$ attests that $\mathcal{C}_1$ is satisfied, up to the correctness of a Pedersen commitment
* On $\mathbf{F}_r$:
1. $\mathcal{P}(\mathcal{C}_2,\mathcal{V}^r(\pi_0^r),\mathcal{V}^r(\pi_1^r))\rightarrow\pi_2=(\pi_2^r,\pi_2^p)$
2. $\pi_2^r$ is saved
3. $\pi_2$ attests that $\mathcal{C}_2$ is satisfied, up to the correctness of a Pedersen commitment __and__ that $\pi_0$ is correct
* On $\mathbf{F}_p$:
1. $\mathcal{P}(\mathcal{C}_3,\mathcal{V}^p(\pi_1^p),\mathcal{V}^p(\pi_2^p))\rightarrow\pi_3=(\pi_3^r,\pi_3^p)$
2. $\pi_3^p$ is saved
    3. $\pi_3$ attests that $\mathcal{C}_3$ is satisfied, up to the correctness of a Pedersen commitment __and__ that $\pi_1$ is correct
* $\dots$
:::
---
<img style="float:left;width:90px;margin-right:20px;"
src="https://i.imgur.com/tYQ8i9r.png">*Author: [Thomas Piellard](https://www.linkedin.com/in/thomas-piellard-53b881129/).*
*I'm part of a research team at [ConsenSys](https://consensys.net/). If you are interested in our work ([fast finite field arithmetic](https://github.com/consensys/goff), [elliptic curve pairings](https://github.com/consensys/gurvy), and [zero-knowledge proofs](https://github.com/consensys/gnark)), [give us a shout](mailto:zkteam@consensys.net).*