# IPA Commitment Scheme Specification
###### tags: `aztec-3, honk`
In this document, we try to answer the following questions:
1. How does the Inner Product Argument (IPA) Polynomial Commitment Scheme (PCS) work?
2. How are we going to use the IPA PCS in Aztec 3?
## How Does the IPA PCS Work?
We consider the IPA PCS described in Section 3 of the [Halo](https://eprint.iacr.org/2019/1021.pdf) paper. We do not need it to be zero-knowledge. At a high level, this is because the IPA PCS will be used at the last stage of Aztec 3 (similar to the smart contract/root verifier in Aztec Connect).
**Notation:** We denote vectors by bold letters, group elements by capital letters, and scalars by small letters. Multiplication of a scalar $m$ with a group element $G$ i.e., $G$ added to itself $m$ times, is denoted by $[m]G$.
Now we describe the IPA PCS.
Let $p(X)=\sum_{i=0}^{d-1} a[i]X^i$ be a polynomial of degree $d-1$, where $d=2^k$ for some integer $k$. The vector $\mathbf{a}$ stores the coefficients of the polynomial $p(X)$. Let $x \in \mathbb{F}_p$ be a random evaluation point and $v$ the evaluation of $p(X)$ at $x$. Let the vector $\mathbf{b}$ represent the consecutive powers of $x$, i.e., $\mathbf{b} \triangleq (1,x,x^2,\cdots,x^{d-1})$. Then we have $v = \langle \mathbf{a}, \mathbf{b} \rangle$.
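This identity is easy to check numerically. The sketch below is illustrative only: a tiny prime modulus `P = 101` stands in for the scalar field, and the coefficients and evaluation point are arbitrary. It evaluates $p(X)$ both directly and as $\langle \mathbf{a}, \mathbf{b} \rangle$:

```python
# Toy check (not cryptographic) that p(x) = <a, b> over a small prime field.
P = 101  # illustrative small prime standing in for the scalar field modulus

a = [3, 1, 4, 1]                           # coefficients of p(X) = 3 + X + 4X^2 + X^3, d = 4
x = 5                                      # evaluation point
b = [pow(x, i, P) for i in range(len(a))]  # b = (1, x, x^2, x^3)

v = sum(ai * bi for ai, bi in zip(a, b)) % P   # <a, b>
direct = (3 + x + 4 * x**2 + x**3) % P         # p(x) computed directly
assert v == direct
```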
Let $\mathbf{G} \triangleq (G_0, G_1, \cdots, G_{d-1})$ denote the publicly known vector of group elements (SRS points, to be precise). Then the commitment to the polynomial $p(X)$ is defined as (we ignore the blinding factor as we do not need zero-knowledge)
\begin{align}
C \triangleq \langle \mathbf{a}, \mathbf{G} \rangle.
\end{align}
Given $(C,x,v)$ as public input ($\mathbf{G}, \mathbf{b}$ are also known publicly), the IPA PCS enables a prover to convince the verifier that $C$ is indeed the commitment to the polynomial $p(X)$ and $v$ is indeed the evaluation of $p(X)$ at the challenge point $x$, i.e., it is an argument of knowledge (without the blinding, we forgo zero-knowledge) for the relation:
\begin{align}
\{((C,x,v);\mathbf{a}): C = \langle \mathbf{a}, \mathbf{G} \rangle \wedge v = \langle \mathbf{a}, \mathbf{b} \rangle \}.
\end{align}
The steps of the protocol are described below.
1. First, the verifier sends a random group element $U$ (referred to as `aux_generator` in the code) to the prover. Both prover and verifier compute the quantity
\begin{align}
C' = C + [v]U.
\end{align}
We are coupling the commitment of the polynomial with the evaluation of the polynomial to simultaneously show that $C$ and $v$ are computed correctly. Notice that $C'$ can be alternatively expressed as
\begin{align}
C' = \langle \mathbf{a}, \mathbf{G} \rangle + [\langle \mathbf{a}, \mathbf{b} \rangle] U. \tag{1}
\end{align}
The IPA PCS is an argument of knowledge in which the prover proves knowledge of a vector $\mathbf{a}$ such that equation (1) holds, with communication cost logarithmic in the length of $\mathbf{a}$. As the prover does not know $U$ in advance, this proves that
(a) $C$ is the commitment to the polynomial $p(X)$ at the SRS points $\mathbf{G}$, i.e., $C = \langle \mathbf{a}, \mathbf{G} \rangle$, and
(b) $v$ is the evaluation of $p(X)$ at $x$, i.e., $v = \langle \mathbf{a}, \mathbf{b} \rangle$. The next steps are as follows.
2. Recall that $d=2^k$. We iterate over $j$ from $k$ down to $1$. We initialize
\begin{align}
\mathbf{G}^{(k)} \triangleq \mathbf{G}, \mathbf{a}^{(k)} \triangleq \mathbf{a}, \mathbf{b}^{(k)} \triangleq \mathbf{b}.
\end{align}
The prover computes the quantities
\begin{align}
L_j &= \langle \mathbf{a}^{(j)}_{lo}, \mathbf{G}^{(j)}_{hi} \rangle + [\langle \mathbf{a}^{(j)}_{lo}, \mathbf{b}^{(j)}_{hi} \rangle] U, \\
R_j &= \langle \mathbf{a}^{(j)}_{hi}, \mathbf{G}^{(j)}_{lo} \rangle + [\langle \mathbf{a}^{(j)}_{hi}, \mathbf{b}^{(j)}_{lo} \rangle] U,
\end{align}
and sends them to the verifier. Here, the vector $\mathbf{a}^{(j)}_{lo}$ denotes the lower half of the vector $\mathbf{a}^{(j)}$ and the vector $\mathbf{a}^{(j)}_{hi}$ denotes the upper half of the vector $\mathbf{a}^{(j)}$ (and similarly for $\mathbf{b}^{(j)}$ and $\mathbf{G}^{(j)}$). The verifier generates a Fiat-Shamir challenge $u_j$ and sends it to the prover. For the next round, the $(j-1)$th, the prover computes the following quantities
\begin{align}
\mathbf{a}^{(j-1)} &= \mathbf{a}^{(j)}_{lo}*u_j + \mathbf{a}^{(j)}_{hi}*u_j^{-1}, \\
\mathbf{b}^{(j-1)} &= \mathbf{b}^{(j)}_{lo}*u_j^{-1} + \mathbf{b}^{(j)}_{hi}*u_j, \\
\mathbf{G}^{(j-1)} &= \mathbf{G}^{(j)}_{lo}*u_j^{-1} + \mathbf{G}^{(j)}_{hi}*u_j.
\end{align}
At the end ($j=1$), the prover has computed the following quantities
\begin{align}
(\mathbf{L}, \mathbf{R}, G^{(0)}, a^{(0)}, b^{(0)}),
\end{align}
where the vectors $\mathbf{L}, \mathbf{R}$ contain the elements $L_1,\cdots,L_k$ and $R_1,\cdots,R_k$ respectively, $k$ group elements each. $G^{(0)}, a^{(0)}, b^{(0)}$ are the $k$-times-folded versions of $\mathbf{G}$, $\mathbf{a}$, $\mathbf{b}$: a single group element and two field elements, respectively.
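The prover's side of step 2 can be sketched as follows. This is a toy model, not the real protocol: group elements are replaced by plain scalars mod a small prime `P` (so $\langle \mathbf{a}, \mathbf{G} \rangle$ is just another inner product and nothing is binding), and the challenges are fixed constants rather than Fiat-Shamir outputs; all concrete values are illustrative.

```python
# Toy sketch of the prover's k folding rounds (insecure: "group elements"
# are plain scalars mod P, and the challenges are fixed, not Fiat-Shamir).
P = 101

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % P

def fold(vec, s_lo, s_hi):
    half = len(vec) // 2
    return [(vec[i] * s_lo + vec[half + i] * s_hi) % P for i in range(half)]

a = [3, 1, 4, 1]                      # coefficients of p(X)
x = 5
b = [pow(x, i, P) for i in range(4)]  # (1, x, x^2, x^3)
G = [7, 11, 13, 17]                   # stand-ins for the SRS points
U = 19                                # stand-in for aux_generator
C_prime = (inner(a, G) + inner(a, b) * U) % P   # C' = C + [v]U

challenges = [8, 6]                   # u_k, ..., u_1 (illustrative constants)
Ls, Rs = [], []
for u in challenges:
    u_inv = pow(u, -1, P)
    h = len(a) // 2
    Ls.append((inner(a[:h], G[h:]) + inner(a[:h], b[h:]) * U) % P)  # L_j
    Rs.append((inner(a[h:], G[:h]) + inner(a[h:], b[:h]) * U) % P)  # R_j
    a = fold(a, u, u_inv)             # a_lo*u_j + a_hi*u_j^{-1}
    b = fold(b, u_inv, u)             # b_lo*u_j^{-1} + b_hi*u_j
    G = fold(G, u_inv, u)             # G_lo*u_j^{-1} + G_hi*u_j

a0, b0, G0 = a[0], b[0], G[0]
# Sanity check of the folding invariant C^(0) = C' + sum(u_j^2 L_j + u_j^-2 R_j):
lhs = C_prime
for u, L, R in zip(challenges, Ls, Rs):
    lhs = (lhs + pow(u, 2, P) * L + pow(u, -2, P) * R) % P
assert lhs == (a0 * (G0 + b0 * U)) % P
```

Because the model is linear in exactly the same way as the real protocol, the final assertion holds for any choice of values; it mirrors the verifier-side derivation in step 3 below.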
3. At the beginning, the verifier has computed
\begin{align}
C' = \langle \mathbf{a}, \mathbf{G} \rangle + [\langle \mathbf{a}, \mathbf{b} \rangle] U = C +[v]U. \tag{2}
\end{align}
We have $C^{(k)} = C'$, $\mathbf{a}^{(k)} = \mathbf{a}$, $\mathbf{b}^{(k)} = \mathbf{b}$, $\mathbf{G}^{(k)} = \mathbf{G}$ as initialized above. For the $(k-1)$th round, we have
\begin{align}
C^{(k-1)} &= \langle \mathbf{a}^{(k-1)}, \mathbf{G}^{(k-1)} \rangle + [\langle \mathbf{a}^{(k-1)}, \mathbf{b}^{(k-1)} \rangle] U \\
&= \langle \mathbf{a}^{(k)}_{lo}*u_k + \mathbf{a}^{(k)}_{hi}*u_k^{-1}, \mathbf{G}^{(k)}_{lo}*u_k^{-1} + \mathbf{G}^{(k)}_{hi}*u_k \rangle \\
&+ [\langle \mathbf{a}^{(k)}_{lo}*u_k + \mathbf{a}^{(k)}_{hi}*u_k^{-1}, \mathbf{b}^{(k)}_{lo}*u_k^{-1} + \mathbf{b}^{(k)}_{hi}*u_k \rangle] U \\
&= \left(\langle \mathbf{a}^{(k)}_{lo}, \mathbf{G}^{(k)}_{lo} \rangle + \langle \mathbf{a}^{(k)}_{hi}, \mathbf{G}^{(k)}_{hi} \rangle\right) + u_k^2 \left(\langle \mathbf{a}^{(k)}_{lo}, \mathbf{G}^{(k)}_{hi} \rangle + [\langle \mathbf{a}^{(k)}_{lo}, \mathbf{b}^{(k)}_{hi} \rangle]U\right) \\
&+ u_k^{-2} \left(\langle \mathbf{a}^{(k)}_{hi}, \mathbf{G}^{(k)}_{lo} \rangle + [\langle \mathbf{a}^{(k)}_{hi}, \mathbf{b}^{(k)}_{lo} \rangle]U\right) + [\langle \mathbf{a}^{(k)}_{lo}, \mathbf{b}^{(k)}_{lo} \rangle + \langle \mathbf{a}^{(k)}_{hi}, \mathbf{b}^{(k)}_{hi} \rangle] U \\
&= \langle \mathbf{a}^{(k)}, \mathbf{G}^{(k)} \rangle + [u_k^2]L_k + [u_k^{-2}]R_k + [\langle \mathbf{a}^{(k)}, \mathbf{b}^{(k)} \rangle] U \\
&= C^{(k)} + [u_k^2]L_k + [u_k^{-2}]R_k.
\end{align}
Following the same approach, we can inductively write,
\begin{align}
C^{(0)} = C' + \sum_{j=1}^k ([u_j^2]L_j) + \sum_{j=1}^k ([u_j^{-2}]R_j). \tag{3}
\end{align}
By the folding invariant (cf. equation (2)), we can alternatively express $C^{(0)}$ as,
\begin{align}
C^{(0)} &= [a^{(0)}]G^{(0)}+[a^{(0)}*b^{(0)}]U, \\
&= [a^{(0)}](G^{(0)}+[b^{(0)}]U).
\end{align}
Suppose the prover provides $G^{(0)}$ and $b^{(0)}$, along with $\mathbf{L}$ and $\mathbf{R}$, to the verifier. The verification proceeds as follows.
(i) The verifier computes $C^{(0)}$ as in equation (3).
(ii) The verifier checks that $b^{(0)}$ provided by the prover is computed correctly.
(iii) The verifier checks that $G^{(0)}$ provided by the prover is computed correctly.
(iv) The prover proves knowledge of $a^{(0)}$ such that
\begin{align}
C^{(0)} = [a^{(0)}](G^{(0)}+[b^{(0)}]U), \tag{4}
\end{align}
holds. Below we describe steps (ii), (iii), and (iv) in more detail.
### Steps (ii) and (iii)
Following Section 3.1 of the Halo paper, we define the vector $\mathbf{s}$ as
\begin{align}
\mathbf{s} = (&u_1^{-1}*u_2^{-1}*\cdots*u_k^{-1}, \\
&u_1*u_2^{-1}*\cdots*u_k^{-1}, \\
&u_1^{-1}*u_2*\cdots*u_k^{-1}, \\
&u_1*u_2*\cdots*u_k^{-1}, \\
&\quad \quad \quad \vdots \\
&u_1*u_2*\cdots*u_k). \\
\end{align}
Because of the binary structure of the folding of $\mathbf{G}, \mathbf{b}$, we have
\begin{align}
b^{(0)} = \langle \mathbf{s}, \mathbf{b} \rangle, G^{(0)} = \langle \mathbf{s}, \mathbf{G} \rangle.
\end{align}
As observed in the Halo paper (Equation 3 of https://eprint.iacr.org/2019/1021.pdf; note that the expression as printed there, $g(X,u_1,u_2,\cdots,u_k) = \prod_{i=1}^{k} (u_i+u_i^{-1}X^{2^{i-1}})$, has the exponents swapped and does not match the folding convention used here), $b^{(0)}$ can be efficiently computed by the verifier by defining the polynomial,
\begin{align}
g(X,u_1,u_2,\cdots,u_k) = \prod_{i=1}^{k} (u_i^{-1}+u_iX^{2^{i-1}}),
\end{align}
and evaluating it at $x$, i.e.,
\begin{align}
b^{(0)} = \langle \mathbf{s}, \mathbf{b} \rangle = g(x,u_1,u_2,\cdots,u_k).
\end{align}
The above can be computed in logarithmic time. Computing $G^{(0)} = \langle \mathbf{s}, \mathbf{G} \rangle$, however, still requires a linear-time ($O(d)$) multi-scalar multiplication (MSM). As discussed in the paper, this cost can be amortized by delegating all of these step (iii) checks to the last step of $m$ IPA verifications. We discuss this further in the subsequent section.
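The $O(k)$ evaluation of $g$ can be sanity-checked against the $O(d)$ inner product $\langle \mathbf{s}, \mathbf{b} \rangle$. The sketch below uses an illustrative toy modulus and challenges (`us` holds $u_1, u_2$); real implementations work over the curve's scalar field.

```python
# Toy check that b^(0) = <s, b> = g(x, u_1, ..., u_k), comparing the O(k)
# product form against the O(d) inner product. All values illustrative.
P = 101
x, k = 5, 2
us = [6, 8]   # u_1, u_2 (illustrative challenges)

# O(k) evaluation: g(x) = prod_i (u_i^{-1} + u_i * x^(2^(i-1)))
g = 1
for i, u in enumerate(us, start=1):
    g = g * (pow(u, -1, P) + u * pow(x, 2 ** (i - 1), P)) % P

# O(d) reference: build s explicitly (bit i of index j selects u_{i+1}^{+1}
# for a set bit, u_{i+1}^{-1} otherwise) and take <s, b>.
d = 2 ** k
b = [pow(x, j, P) for j in range(d)]
s = []
for j in range(d):
    prod = 1
    for i in range(k):
        prod = prod * (us[i] if (j >> i) & 1 else pow(us[i], -1, P)) % P
    s.append(prod)

assert g == sum(si * bi for si, bi in zip(s, b)) % P
```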
:::info
Q: How to check $b^{(0)} = \langle \mathbf{s}, \mathbf{b} \rangle = g(x,u_1,u_2,\cdots,u_k)$?
A: Rather than giving a formal inductive proof, we give a small example for better intuition. Let $d=4$, $\mathbf{b} = \mathbf{b}^{(2)} = (b_0,b_1,b_2,b_3)$. The round challenges are $u_2$ and $u_1$ (in that order). Then for the first round,
\begin{align}
\mathbf{b}^{(1)} &= \mathbf{b}^{(2)}_{lo}*u_2^{-1} + \mathbf{b}^{(2)}_{hi}*u_2, \\
&= (b_0,b_1)u_2^{-1} + (b_2,b_3)u_2, \\
&= (b_0u_2^{-1}+b_2u_2, b_1u_2^{-1}+b_3u_2).
\end{align}
And for the next and the last round we have,
\begin{align}
\mathbf{b}^{(0)} &= \mathbf{b}^{(1)}_{lo}*u_1^{-1} + \mathbf{b}^{(1)}_{hi}*u_1, \\
&= (b_0u_2^{-1}+b_2u_2)u_1^{-1} + (b_1u_2^{-1}+b_3u_2)u_1, \\
&= b_0u_1^{-1}u_2^{-1} + b_1u_1u_2^{-1} + b_2u_1^{-1}u_2 + b_3u_1u_2, \\
&= \langle (u_1^{-1}u_2^{-1}, u_1u_2^{-1}, u_1^{-1}u_2, u_1u_2), (b_0,b_1,b_2,b_3) \rangle, \\
&=\langle \mathbf{s}, \mathbf{b} \rangle.
\end{align}
Similarly, we can show that $G^{(0)} = \langle \mathbf{s}, \mathbf{G} \rangle$. Next, let us verify that $b^{(0)}=g(x,u_1,u_2,\cdots,u_k)$.
\begin{align}
g(x,u_1,u_2) &= \prod_{i=1}^{2} (u_i^{-1}+u_ix^{2^{i-1}}) \\
&= (u_1^{-1} + u_1x)(u_2^{-1} + u_2x^2) \\
&= u_1^{-1}u_2^{-1} + u_1u_2^{-1}x + u_1^{-1}u_2x^2 + u_1u_2x^3.
\end{align}
Given the evaluation point $x$, we have $\mathbf{b} = (b_0,b_1,b_2,b_3) = (1,x,x^2,x^3)$, which verifies the claim.
:::
### Step (iv)
In step (iv), the prover proves knowledge of $a^{(0)}$ such that
\begin{align}
C^{(0)} = [a^{(0)}](G^{(0)}+[b^{(0)}]U),
\end{align}
holds. The paper uses a *generalized Schnorr protocol* to achieve this.
(a) The prover samples a random $t \in \mathbb{F}_p$ (we write $t$ rather than $d$ to avoid a clash with the polynomial length $d=2^k$). Then she computes $R \triangleq [t](G^{(0)}+[b^{(0)}]U)$ and sends $R$ to the verifier.
(b) The verifier samples a random challenge $c$ from $\mathbb{F}_p$ and sends it to the prover.
(c) The prover computes $z = a^{(0)}c+t$ and sends it to the verifier. The verifier accepts if
\begin{align}
[c]C^{(0)} + R \stackrel{?}{=} [z](G^{(0)}+[b^{(0)}]U).
\end{align}
However, as we are not concerned with zero knowledge, in the code the prover directly reveals $a^{(0)}$ as part of the proof.
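The exchange in (a)-(c) can be sketched as below. Again this is a toy model: the "group" is $\mathbb{Z}_P$ under addition with a small illustrative prime, so the protocol is structurally faithful but has no hiding or binding; the nonce is written `t` as above, and all concrete values are illustrative.

```python
# Toy sketch of the generalized Schnorr opening of C^(0) (insecure model:
# [m]G is just m*G mod P, so the discrete log is trivial here).
import random

P = 101
G0, U, b0 = 13, 19, 24          # illustrative folded generator, aux generator, b^(0)
a0 = 20                         # prover's secret folded scalar a^(0)
C0 = a0 * (G0 + b0 * U) % P     # C^(0) = [a^(0)](G^(0) + [b^(0)]U)

t = random.randrange(P)                 # (a) prover's random nonce
R = t * (G0 + b0 * U) % P               #     commitment to the nonce
c = random.randrange(P)                 # (b) verifier's challenge
z = (a0 * c + t) % P                    # (c) prover's response
assert (c * C0 + R) % P == z * (G0 + b0 * U) % P   # verifier's check
```

The check passes identically because $c \cdot a^{(0)} (G^{(0)}+b^{(0)}U) + t (G^{(0)}+b^{(0)}U) = (a^{(0)}c+t)(G^{(0)}+b^{(0)}U)$.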
## Summary of IPA-PCS
Let the underlying field be $\mathbb{F}_p$ and the number of points on the curve be $r$.
* Public inputs: $\mathbf{G}$, $\mathbf{b}$, $(C,x,v)$.
* Prover's witness: $\mathbf{a}$.
* Proof: $(\mathbf{L}, \mathbf{R}, a^{(0)})$.
* Verifier's work:
1. Compute: $C' = C + [v]U$.
2. Compute: $C^{(0)} = C' + \sum_{j=1}^k ([u_j^2]L_j) + \sum_{j=1}^k ([u_j^{-2}]R_j)$.
3. Check: $b^{(0)} \stackrel{?}{=} g(x,u_1,u_2,\cdots,u_k) = \prod_{i=1}^{k} (u_i^{-1}+u_ix^{2^{i-1}})$ (field operations in $\mathbb{F}_r$, as all these quantities are reduced modulo $r$).
4. Check: $G^{(0)} \stackrel{?}{=} \langle \mathbf{s}, \mathbf{G} \rangle$ (group operations over $\mathbb{F}_p$, as the underlying $x$ and $y$ coordinates are elements of the field $\mathbb{F}_p$).
5. Check: $C^{(0)} \stackrel{?}{=} [a^{(0)}](G^{(0)}+[b^{(0)}]U)$.
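The summary above can be exercised end to end in a toy model: group elements are modeled as scalars mod a small prime `P`, and fixed constants replace the Fiat-Shamir challenges, so this shows the structure of the protocol, not its security. All concrete values are illustrative.

```python
# End-to-end toy run of the IPA PCS (insecure scalar model, fixed challenges).
P = 101

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) % P

def fold(vec, s_lo, s_hi):
    h = len(vec) // 2
    return [(vec[i] * s_lo + vec[h + i] * s_hi) % P for i in range(h)]

def prove(a, b, G, U, challenges):
    Ls, Rs = [], []
    for u in challenges:                       # u_k, ..., u_1
        ui = pow(u, -1, P)
        h = len(a) // 2
        Ls.append((inner(a[:h], G[h:]) + inner(a[:h], b[h:]) * U) % P)
        Rs.append((inner(a[h:], G[:h]) + inner(a[h:], b[:h]) * U) % P)
        a, b, G = fold(a, u, ui), fold(b, ui, u), fold(G, ui, u)
    return Ls, Rs, a[0]                        # proof: (L, R, a^(0))

def verify(C, x, v, G, U, challenges, Ls, Rs, a0):
    k = len(challenges)
    C0 = (C + v * U) % P                       # step 1: C'
    for u, L, R in zip(challenges, Ls, Rs):    # step 2: fold in L_j, R_j
        C0 = (C0 + pow(u, 2, P) * L + pow(u, -2, P) * R) % P
    b0 = 1                                     # step 3: b^(0) = g(x, u_1..u_k)
    for i in range(k):
        u = challenges[k - 1 - i]              # challenges[-1] is u_1
        b0 = b0 * (pow(u, -1, P) + u * pow(x, 2 ** i, P)) % P
    s = []                                     # step 4: G^(0) = <s, G> (the MSM)
    for j in range(2 ** k):
        t = 1
        for i in range(k):
            u = challenges[k - 1 - i]
            t = t * (u if (j >> i) & 1 else pow(u, -1, P)) % P
        s.append(t)
    G0 = inner(s, G)
    return C0 == a0 * (G0 + b0 * U) % P        # step 5

a = [3, 1, 4, 1]
x = 5
b = [pow(x, i, P) for i in range(4)]
G, U = [7, 11, 13, 17], 19
C, v = inner(a, G), inner(a, b)
chal = [8, 6]                                  # u_2, u_1 (fixed, not Fiat-Shamir)
Ls, Rs, a0 = prove(a, b, G, U, chal)
assert verify(C, x, v, G, U, chal, Ls, Rs, a0)
```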
## Accumulation of IPA-PCS and Recursive Process in Aztec 3
The below idea was described by Zac in Istanbul.
In the diagram below, we sketch the idea of the recursive process in Aztec 3. Suppose circuits $A$ and $B$ operate over the BN254 curve and circuit $C$ operates over the Grumpkin curve. As depicted in the diagram, the BN254 curve is defined over $\mathbb{F}_p$ and the number of points in its elliptic curve group is $r$. As a result, operations over $\mathbb{F}_p$ are expensive and operations over $\mathbb{F}_r$ are cheap.
For the Grumpkin curve, the opposite is true: it is defined over $\mathbb{F}_r$ and the number of points in its elliptic curve group is $p$. As a result, operations over $\mathbb{F}_r$ are expensive and operations over $\mathbb{F}_p$ are cheap.
**Note**: In the code, a scalar in the field $\mathbb{F}_r$ is referred to as `barretenberg::fr` or `grumpkin::fq`. Similarly, a scalar in the field $\mathbb{F}_p$ is referred to as `barretenberg::fq` or `grumpkin::fr`.

Suppose circuit $A$ produces an IPA proof $\Pi_A$ and circuit $B$ verifies it, i.e., it produces a proof $\Pi_B$ which attests that $\Pi_A$ verifies successfully. As discussed in the previous section, to verify $\Pi_A$, circuit $B$ needs to perform
1. Field operations in $\mathbb{F}_r$ (step 3 in the above section).
2. Group operations over $\mathbb{F}_p$ (step 4 in the above section).
As mentioned above, the second operation is very expensive for circuit $B$. The idea is to delegate step 2 in the following way.
We construct another circuit $C$ over the Grumpkin curve, which accumulates $m$ (say) $\Pi_A$ proofs and produces another IPA proof that the linear-time MSMs of all the $\Pi_A$s are computed correctly. To elaborate, let $G^{(0)}_1, G^{(0)}_2, \cdots, G^{(0)}_m$ be the $m$ group elements of the $m$ IPA proofs. As described in Section 3.2 of the Halo paper, checking the correctness of these quantities can be batched by taking a linear combination of these elements and calling the IPA once.
Now circuit $B$ needs to fully verify $\Pi_C$. However, as circuit $C$ is defined over the Grumpkin curve instead of the BN254 curve, circuit $B$ needs to do
1. Field operations in $\mathbb{F}_p$ (step 3 in the above section),
2. Group operations over $\mathbb{F}_r$ (step 4 in the above section),
which are cheaper than group operations over $\mathbb{F}_p$ for circuit $B$, as it is defined over the BN254 curve (base field $\mathbb{F}_p$, # points $= r$).
## Optimizations
Some optimizations are possible over the above protocol (which follows the Halo paper). We describe them below.
### (1)
In the above protocol, in the $k$ rounds, the prover computes
\begin{align}
\mathbf{a}^{(j-1)} &= \mathbf{a}^{(j)}_{lo}*u_j + \mathbf{a}^{(j)}_{hi}*u_j^{-1}, \\
\mathbf{b}^{(j-1)} &= \mathbf{b}^{(j)}_{lo}*u_j^{-1} + \mathbf{b}^{(j)}_{hi}*u_j, \\
\mathbf{G}^{(j-1)} &= \mathbf{G}^{(j)}_{lo}*u_j^{-1} + \mathbf{G}^{(j)}_{hi}*u_j.
\end{align}
Instead of the above, we define the quantities for the next round as
\begin{align}
\mathbf{a}^{(j-1)} &= \mathbf{a}^{(j)}_{lo} + \mathbf{a}^{(j)}_{hi}*u_j, \\
\mathbf{b}^{(j-1)} &= \mathbf{b}^{(j)}_{lo} + \mathbf{b}^{(j)}_{hi}*u_j^{-1}, \\
\mathbf{G}^{(j-1)} &= \mathbf{G}^{(j)}_{lo} + \mathbf{G}^{(j)}_{hi}*u_j^{-1}.
\end{align}
This was originally proposed in Figure 3 of the Halo Infinite [paper](https://eprint.iacr.org/2020/1536.pdf). As we can see, this reduces the number of scalar multiplications performed by the prover. As a result, for the $(k-1)$th round, we have
\begin{align}
C^{(k-1)} &= \langle \mathbf{a}^{(k-1)}, \mathbf{G}^{(k-1)} \rangle + [\langle \mathbf{a}^{(k-1)}, \mathbf{b}^{(k-1)} \rangle] U \\
&= \langle \mathbf{a}^{(k)}_{lo} + \mathbf{a}^{(k)}_{hi}*u_k, \mathbf{G}^{(k)}_{lo} + \mathbf{G}^{(k)}_{hi}*u_k^{-1} \rangle \\
&+ [\langle \mathbf{a}^{(k)}_{lo} + \mathbf{a}^{(k)}_{hi}*u_k, \mathbf{b}^{(k)}_{lo} + \mathbf{b}^{(k)}_{hi}*u_k^{-1} \rangle] U \\
&= \left(\langle \mathbf{a}^{(k)}_{lo}, \mathbf{G}^{(k)}_{lo} \rangle + \langle \mathbf{a}^{(k)}_{hi}, \mathbf{G}^{(k)}_{hi} \rangle\right) + u_k^{-1} \left(\langle \mathbf{a}^{(k)}_{lo}, \mathbf{G}^{(k)}_{hi} \rangle + [\langle \mathbf{a}^{(k)}_{lo}, \mathbf{b}^{(k)}_{hi} \rangle]U\right) \\
&+ u_k \left(\langle \mathbf{a}^{(k)}_{hi}, \mathbf{G}^{(k)}_{lo} \rangle + [\langle \mathbf{a}^{(k)}_{hi}, \mathbf{b}^{(k)}_{lo} \rangle]U\right) + [\langle \mathbf{a}^{(k)}_{lo}, \mathbf{b}^{(k)}_{lo} \rangle + \langle \mathbf{a}^{(k)}_{hi}, \mathbf{b}^{(k)}_{hi} \rangle] U \\
&= \langle \mathbf{a}^{(k)}, \mathbf{G}^{(k)} \rangle + [u_k^{-1}]L_k + [u_k]R_k + [\langle \mathbf{a}^{(k)}, \mathbf{b}^{(k)} \rangle] U \\
&= C^{(k)} + [u_k^{-1}]L_k + [u_k]R_k.
\end{align}
Following the same approach, we can inductively write,
\begin{align}
C^{(0)} = C' + \sum_{j=1}^k ([u_j^{-1}]L_j) + \sum_{j=1}^k ([u_j]R_j). \tag{5}
\end{align}
**Note:** This also changes $\mathbf{s}$ and $g(X)$. Below we compute them.
:::info
We now compute $\mathbf{s}$ and $g(X,u_1,u_2,\cdots,u_k)$ such that $b^{(0)} = \langle \mathbf{s}, \mathbf{b} \rangle = g(x,u_1,u_2,\cdots,u_k)$. Again, rather than giving a formal inductive proof, we give a small example for better intuition.
Let $d=4$, $\mathbf{b} = \mathbf{b}^{(2)} = (b_0,b_1,b_2,b_3)$. The round challenges are $u_2$ and $u_1$. Then for the first round,
\begin{align}
\mathbf{b}^{(1)} &= \mathbf{b}^{(2)}_{lo} + \mathbf{b}^{(2)}_{hi}*u_2^{-1}, \\
&= (b_0,b_1) + (b_2,b_3)u_2^{-1}, \\
&= (b_0+b_2u_2^{-1}, b_1+b_3u_2^{-1}).
\end{align}
And for the next and the last round we have,
\begin{align}
\mathbf{b}^{(0)} &= \mathbf{b}^{(1)}_{lo} + \mathbf{b}^{(1)}_{hi}*u_1^{-1}, \\
&= (b_0+b_2u_2^{-1}) + (b_1+b_3u_2^{-1})u_1^{-1}, \\
&= b_0 + b_1u_1^{-1} + b_2u_2^{-1} + b_3u_1^{-1}u_2^{-1}, \\
&= \langle (1, u_1^{-1}, u_2^{-1}, u_1^{-1}u_2^{-1}), (b_0,b_1,b_2,b_3) \rangle.
\end{align}
From this we have $\mathbf{s} = (1, u_1^{-1}, u_2^{-1}, u_1^{-1}u_2^{-1})$. For general $k$, we get,
\begin{align}
\mathbf{s} = (&u_1^{0}*u_2^{0}*\cdots*u_k^{0}, \\
&u_1^{-1}*u_2^{0}*\cdots*u_k^{0}, \\
&u_1^{0}*u_2^{-1}*\cdots*u_k^{0}, \\
&u_1^{-1}*u_2^{-1}*\cdots*u_k^{0}, \\
&\quad \quad \quad \vdots \\
&u_1^{-1}*u_2^{-1}*\cdots*u_k^{-1}). \\
\end{align}
Next, let us compute $g(x,u_1,u_2,\cdots,u_k)$ such that $b^{(0)}=g(x,u_1,u_2,\cdots,u_k)$. By observation (a change of variable in the previous equation), we have $g(x,u_1,\cdots,u_k) = \prod_{i=1}^{k} (1+u_i^{-1}x^{2^{i-1}})$. For the example, we verify this below,
\begin{align}
g(x,u_1,u_2) &= \prod_{i=1}^{2} (1+u_i^{-1}x^{2^{i-1}}) \\
&= (1 + u_1^{-1}x)(1 + u_2^{-1}x^2) \\
&= 1 + u_1^{-1}x + u_2^{-1}x^2 + u_1^{-1}u_2^{-1}x^3.
\end{align}
Given the evaluation point $x$, we have $\mathbf{b} = (b_0,b_1,b_2,b_3) = (1,x,x^2,x^3)$, which verifies the claim.
:::
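A quick numerical sanity check of the optimized $\mathbf{s}$ and $g$ (illustrative toy modulus and challenges, with `us` holding $u_1, u_2$):

```python
# Toy check of the optimized variant: s_j multiplies u_{i+1}^{-1} for every
# set bit i of index j, and b^(0) = <s, b> = prod_i (1 + u_i^{-1} x^(2^(i-1))).
P = 101
x, k = 5, 2
us = [6, 8]   # u_1, u_2 (illustrative challenges)

# O(k) product form of g
g = 1
for i, u in enumerate(us, start=1):
    g = g * (1 + pow(u, -1, P) * pow(x, 2 ** (i - 1), P)) % P

# O(d) reference: build s explicitly and take <s, b>
d = 2 ** k
b = [pow(x, j, P) for j in range(d)]
s = []
for j in range(d):
    t = 1
    for i in range(k):
        if (j >> i) & 1:
            t = t * pow(us[i], -1, P) % P
    s.append(t)

assert g == sum(si * bi for si, bi in zip(s, b)) % P
```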
## References
1. Halo paper (https://eprint.iacr.org/2019/1021.pdf).
2. Halo Infinite paper (https://eprint.iacr.org/2020/1536.pdf).
3. ZK Hack whiteboard session (https://www.youtube.com/watch?v=RaEs5mnXIhY).