---
tags: Folding
---
# CCS
> ++Table of contents++
> [TOC]
# Credits
This document has been possible thanks to the reviews of @asn-d6 and @oskarth, as well as the immense help of @arnaucube and Edu (from PSE), who almost completely derived the R1CS & Plonkish -> CCS translations.
Also thanks to Srinath Setty for answering every question we've had while reading his paper and while elaborating this doc.
# CCS and R1CS (an intro)
Thanks to NP-completeness, we can easily translate any R1CS statement into a completely equivalent CCS statement.
To do so, an R1CS structure carries the following information:
- $m$ = Number of constraints.
- $n$ = Number of columns of matrices.
- $N$ = Number of non-zero entries in the matrices.
- $l$ = Size of the public input vector.
- $x$ = A vector containing public input instances.
- $w$ = A vector containing the witness that satisfies the instance $\mathbb{S}$.
An R1CS structure $\mathbb{S}$ is satisfied if:
$$(A \cdot z) \circ (B \cdot z) - (C \cdot z) = 0$$ where $z = (w,1,x) \in \mathbb{F}^n$
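As a quick sanity check, the R1CS relation can be verified directly. Here is a minimal plain-Python sketch (modular arithmetic stands in for the Sage `GF(101)` field, and the matrices and $z$ are the $x^3 + x + 5 = y$ example used in the Sage script later in this doc):

```python
# Minimal R1CS satisfiability check over F_101 (plain-Python stand-in
# for Sage's GF(101)). Matrices/z: the x^3 + x + 5 = y example below.
p = 101

A = [[0,1,0,0,0,0],[0,0,0,1,0,0],[0,1,0,0,1,0],[5,0,0,0,0,1]]
B = [[0,1,0,0,0,0],[0,1,0,0,0,0],[1,0,0,0,0,0],[1,0,0,0,0,0]]
C = [[0,0,0,1,0,0],[0,0,0,0,1,0],[0,0,0,0,0,1],[0,0,1,0,0,0]]
z = [1, 3, 35, 9, 27, 30]  # (one, x, out, x^2, x^3, x^3 + x)

def mat_vec(M, v):
    # matrix-vector product mod p
    return [sum(mij * vj for mij, vj in zip(row, v)) % p for row in M]

Az, Bz, Cz = mat_vec(A, z), mat_vec(B, z), mat_vec(C, z)
# (A z) ∘ (B z) - (C z) must be the zero vector
assert all((a * b - c) % p == 0 for a, b, c in zip(Az, Bz, Cz))
print("R1CS relation satisfied")
```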
## CCS in general
In general, a CCS structure $\mathbb{S}$ consists of:
- Size bounds { $m, n, N, l, t, q, d \in \mathbb{N}$ where $n \gt l$ }
- $t$ => Number of matrices, which is also the size of the domain of the multisets.
- $q$ => Number of multisets used within the CCS instance.
- $d$ => Maximum cardinality of each multiset, i.e. an upper bound on the number of elements contained inside each multiset.
- A sequence of matrices $M_0..M_{t-1}$ that define the fixed parts of our circuit.
- A sequence of multisets $S_0,...,S_{q-1}$ where $\forall s \in S_i$, $dom(s) = \{0..t-1\}$
- A sequence of constants $[c_0,...,c_{q-1}]$
Some of these don't really translate to any circuit property in particular. Indeed, they're just parameters that make it possible to adapt all the arithmetizations into CCS.
:::info
A CCS instance $(\mathbb{S}, x)$ is satisfied if and only if:
$$\sum_{i=0}^{q-1} c_i \cdot \bigcirc_{j \in S_i} (M_j \cdot \vec{z}) = \vec{0}$$
If we re-write this a bit, we get:
$$c_0 \cdot \{ (M_{S_{0}[0]}\cdot z) \circ (M_{S_{0}[1]} \cdot z) \circ ... \circ (M_{S_{0}[|S_0|-1]} \cdot z) \}
\\ \texttt{+} ... \texttt{+} \\
c_{q-1} \cdot \{(M_{S_{q-1}[0]} \cdot z )\circ (M_{S_{q-1}[1]} \cdot z) \circ ... \circ (M_{S_{q-1}[|S_{q-1}|-1]} \cdot z) \} = \vec{0}$$
:::
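To make the definition concrete, here is a minimal arithmetization-agnostic sketch of the CCS check in plain Python (again using modular arithmetic in place of a Sage field); the instance plugged in at the end is the R1CS example of the next section encoded as CCS, with $S = [\{0,1\}, \{2\}]$ and $c = [1, -1]$:

```python
# Generic CCS check: sum_i c_i * (Hadamard over j in S_i of M_j z) == 0
p = 101

def mat_vec(M, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) % p for row in M]

def ccs_holds(M, S, c, z):
    m = len(M[0])          # number of constraints (rows)
    r = [0] * m
    for c_i, S_i in zip(c, S):
        had = [1] * m      # identity for the Hadamard product
        for j in S_i:
            had = [(h * mz) % p for h, mz in zip(had, mat_vec(M[j], z))]
        r = [(ri + c_i * hi) % p for ri, hi in zip(r, had)]
    return all(ri == 0 for ri in r)

# R1CS-as-CCS: M = [A, B, C], S = [{0,1}, {2}], c = [1, -1]
A = [[0,1,0,0,0,0],[0,0,0,1,0,0],[0,1,0,0,1,0],[5,0,0,0,0,1]]
B = [[0,1,0,0,0,0],[0,1,0,0,0,0],[1,0,0,0,0,0],[1,0,0,0,0,0]]
C = [[0,0,0,1,0,0],[0,0,0,0,1,0],[0,0,0,0,0,1],[0,0,1,0,0,0]]
z = [1, 3, 35, 9, 27, 30]
assert ccs_holds([A, B, C], [[0, 1], [2]], [1, -1], z)
```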
## R1CS -> CCS
Let's actually see how we can translate R1CS into CCS:
:::warning
To do so, we need to set some parameters:
- $m$ = $m$ in the R1CS instance.
- $n$ = $n$ in the R1CS instance.
- $N$ = $N$ in the R1CS instance.
- $l$ = $l$ in the R1CS instance.
- $t$ = 3
- $q$ = 2
- $d$ = 2
- $\vec{M}$ = $[M_A, M_B, M_C]$ (R1CS matrices A, B and C)
- $\vec{\mathbb{S}}$ = $[S_0 = \{0,1\}, S_1 = \{2\}]$
- $\vec{c}$ = $\{c_0=1, c_1=-1\}$
:::
If we apply these constants to the aforementioned equation of a CCS instance:
:::info
$$1 \cdot \{(A \cdot z) \circ (B\cdot z)\} \texttt{-} 1\cdot \{C \cdot z \} = \vec{0}$$
:::
Let's actually see a sage example done by @arnaucube:
```python
# R1CS-to-CCS (https://eprint.iacr.org/2023/552) Sage prototype
# utils
def matrix_vector_product(M, v):
n = M.nrows()
r = [F(0)] * n
for i in range(0, n):
for j in range(0, M.ncols()):
r[i] += M[i][j] * v[j]
return r
def hadamard_product(a, b):
n = len(a)
r = [None] * n
for i in range(0, n):
r[i] = a[i] * b[i]
return r
def vec_add(a, b):
n = len(a)
r = [None] * n
for i in range(0, n):
r[i] = a[i] + b[i]
return r
def vec_elem_mul(a, s):
r = [None] * len(a)
for i in range(0, len(a)):
r[i] = a[i] * s
return r
# end of utils
# can use any finite field, using a small one for the example
F = GF(101)
# R1CS matrices for: x^3 + x + 5 = y (example from article
# https://www.vitalik.ca/general/2016/12/10/qap.html )
A = matrix([
[F(0), 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 1, 0],
[5, 0, 0, 0, 0, 1],
])
B = matrix([
[F(0), 1, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0],
])
C = matrix([
[F(0), 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 1],
[0, 0, 1, 0, 0, 0],
])
print("R1CS matrices:")
print("A:", A)
print("B:", B)
print("C:", C)
z = [F(1), 3, 35, 9, 27, 30]
print("z:", z)
assert len(z) == A.ncols()
n = A.ncols() # == len(z)
m = A.nrows()
# l = len(io) # not used for now
# check R1CS relation
Az = matrix_vector_product(A, z)
Bz = matrix_vector_product(B, z)
Cz = matrix_vector_product(C, z)
print("\nR1CS relation check (Az ∘ Bz == Cz):", hadamard_product(Az, Bz) == Cz)
assert hadamard_product(Az, Bz) == Cz
# Translate R1CS into CCS:
print("\ntranslate R1CS into CCS:")
# fixed parameters (and n, m, l are direct from R1CS)
t=3
q=2
d=2
S1=[0,1]
S2=[2]
S = [S1, S2]
c0=1
c1=-1
c = [c0, c1]
M = [A, B, C]
print("CCS values:")
print("n: %s, m: %s, t: %s, q: %s, d: %s" % (n, m, t, q, d))
print("M:", M)
print("z:", z)
print("S:", S)
print("c:", c)
# check CCS relation (this is agnostic to R1CS, for any CCS instance)
r = [F(0)] * m
for i in range(0, q):
hadamard_output = [F(1)]*m
for j in S[i]:
hadamard_output = hadamard_product(hadamard_output,
matrix_vector_product(M[j], z))
r = vec_add(r, vec_elem_mul(hadamard_output, c[i]))
print("\nCCS relation check (∑ cᵢ ⋅ ◯ Mⱼ z == 0):", r == [0]*m)
assert r == [0]*m
```
Output:
![](https://hackmd.io/_uploads/S1puIvAi2.png)
# CCS and Plonkish (intro)
Before starting, there are a few new things to notice here:
We define a plonkish instance $\mathbb{S}$ to have the following:
- $m$ = Number of constraints.
- $n$ = Number of public and private witness.
- $l$ = Size of the public input vector.
- $e$ = Size of the selector vector (number of distinct selector values).
- $t$ = Number of columns involved in each constraint, i.e. the length of a row $T_i$ of $\mathbb{T}$.
- $g$ = A multivariate polynomial in $t$ variables, represented as a sum of $q$ monomials of degree at most $d$.
- $s$ = A vector of constants of size $e$, called selectors.
- $\mathbb{T}$ = A matrix whose entries are indexes into $z$ of the values that build up $g$.
## Plonkish -> CCS
Let's actually see how we can translate Plonkish into CCS:
Before starting, there are a few new things to notice here:
Unlike the R1CS translation, which is direct, here there is more work to do than simply setting $\vec{\mathbb{S}}$ and $\vec{c}$.
:::info
Note that the vanilla Plonk equation is: $q_m a b + q_l a + q_r b + q_o c + q_c = 0$
:::
$$
Plonkish = \begin{bmatrix}
qm & ql & qr & qo & qc & a & b & c\\
1 & 0 & 0 & -1 & 0 & x_0 & x_0 & x_0\\
1 & 0 & 0 & -1 & 0 & x_1 & x_1 & x_1\\
0 & 2 & 2 & -1 & 0 & x_2 & x_3 & x_4\\
0 & 0 & 0 & 0 & 0 & x_5 & x_5 & x_5\\
\end{bmatrix}
$$
:::info
Note that the Vanilla Plonk instance above is doing:
$r_0$ -> Bool check of $x_0$
$r_1$ -> Bool check of $x_1$
$r_2$ -> Addition check: $2 \cdot a\ \texttt{+}\ 2 \cdot b\ \texttt{=}\ c$
$r_3$ -> Null constraint (this checks nothing)
:::
After seeing the Plonkish instance we want to translate, we can already infer some things:
- $m$ = 4 (4 rows in our plonkish = 4 constraints).
- $n$ = 6 (we have 6 different witness variables $\{x_0...x_5\}$).
- $t$ = $8$ (Number of Plonkish instance columns).
- $e$ = $4$ (Number of different values among the selectors: $\{0, 1, -1, 2\}$).
- $d$ = $3$ (note that $q_m a b$ has degree 3).
Later, we will go through each row $T_i$: each value of row $i$ is an index into $z_{Plonkish}$ (and, past the witness range, into the selector vector $s$) telling us which value to set in our matrices.
To start, we need to define our `selectors` $s$ instance and also, our witness instance $w$.
$$
\vec{s} = [0,1,-1,2] \\
\vec{w} = [x_0, x_1, x_2, x_3, x_4, x_5]
$$
With this, we can define our instance $z$.
Remember that in the paper, we have 2 $z$ instances defined. The plonkish one and the CCS one.
If we focus on the Plonkish one, we can see the definition: $z = (w,\ x,\ s) \in \mathbb{F}^{n+e}$.
:::danger
Hence, this means we can define $z$ as the union of $w$ and $s$ (and also $x$, but we don't have public inputs in this scheme).
$$
z_{Plonkish} = [x_0, x_1, x_2, x_3, x_4, x_5, 0, 1, -1, 2]
$$
Note that we also have our $z$ definition for CCS, which will also be used here. The paper writes $z = (w,1,x)$; we place the constant $1$ first, matching the convention used by the matrices derived below:
$$
z_{CCS} = (1, w, x) \in \mathbb{F}^{n+1}\\
z_{CCS} = [1, x_0, x_1, x_2, x_3, x_4, x_5]
$$
:::
Remember that we can see $g$ as a multivariate polynomial from its definition if we consider each of the plonkish instance columns one variable:
$$
g(q_m,\ q_l,\ q_r,\ q_o,\ q_c,\ a,\ b,\ c)= q_m a b+ q_l a+ q_r b+ q_o c+ q_c
$$
The next step is to define $T$, which, remember, indexes elements of $z_{Plonkish}$ (collecting all the values used in the plonkish instance) into the variables of $g$.
With that, we define $T$ for this example as:
$$T = \begin{bmatrix}
0 & 0 & 0 & 7 & 6 & 6 & 8 & 6\\
1 & 1 & 1 & 7 & 6 & 6 & 8 & 6\\
2 & 3 & 4 & 6 & 9 & 9 & 8 & 6\\
5 & 5 & 5 & 6 & 6 & 6 & 6 & 6\\
\end{bmatrix}$$
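As a sanity check (not from the paper), we can evaluate the gate $q_m a b + q_l a + q_r b + q_o c + q_c$ for each row directly through $T$, assuming the column order $(a, b, c, q_m, q_l, q_r, q_o, q_c)$ and the witness assignment $(0, 1, 2, 3, 10, 42)$ that the verification script further below also uses:

```python
# Evaluate each Plonkish row via T; columns are (a, b, c, qm, ql, qr, qo, qc)
T = [
    [0, 0, 0, 7, 6, 6, 8, 6],
    [1, 1, 1, 7, 6, 6, 8, 6],
    [2, 3, 4, 6, 9, 9, 8, 6],
    [5, 5, 5, 6, 6, 6, 6, 6],
]
w = [0, 1, 2, 3, 10, 42]   # satisfying witness (x0..x5)
s = [0, 1, -1, 2]          # selector values
z_plonkish = w + s         # z = (w, s); no public inputs here

for row in T:
    a, b, c, qm, ql, qr, qo, qc = (z_plonkish[k] for k in row)
    # each row of the Plonkish instance must satisfy the vanilla gate
    assert qm*a*b + ql*a + qr*b + qo*c + qc == 0
print("all 4 Plonkish constraints satisfied")
```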
### Matrix derivation
The following Sage/Python code summarizes the actual algorithm to derive the matrices for the CCS instance:
```python
# Derivation of the CCS matrices from the Plonkish instance (T, s).
# CCS instance values provided by Carlos (https://github.com/CPerezz)
# and Edu (https://github.com/ed255).
# Convention (matching the verification script below): z_ccs = (1, w),
# so column 0 of every matrix is reserved for the constant 1 and the
# witness variable x_k lives at column k+1.

## Checks performed by this Plonk/CCS instance:
# - binary check for x0, x1
# - 2*x2 + 2*x3 == x4
T = [
    [0, 0, 0, 7, 6, 6, 8, 6],
    [1, 1, 1, 7, 6, 6, 8, 6],
    [2, 3, 4, 6, 9, 9, 8, 6],
    [5, 5, 5, 6, 6, 6, 6, 6],
]
s = [0, 1, -1, 2]  # selector values

# CCS parameters
q = 5
d = 3
S = [[3, 0, 1], [4, 0], [5, 1], [6, 2], [7]]
c = [1, 1, 1, 1, 1]
t = 8
n = 6  # number of witness variables
l = 0
m = 4

# Initialize the result matrices: t matrices of size m x (n + 1)
matrices = [[[0 for _ in range(n + 1)] for _ in range(m)] for _ in range(t)]

# Iterate over the rows and columns of T
for i in range(m):
    for j in range(t):
        k = T[i][j]
        if k >= n:
            # T points into the selector vector: place the constant
            # selector value s[k - n] in the constant-1 column.
            matrices[j][i][0] = s[k - n]
        else:
            # T points to the witness variable x_k: shift by one
            # to skip the constant column.
            matrices[j][i][k + 1] = 1
```
**Let's run the first iteration of this step explaining it here. Then the rest can be inferred.**
#### Step i=0, j=0..t-1
Here, we select the row $i$ from $T$ and call it $T_i$.
$T_0 = [0,\ 0,\ 0,\ 7,\ 6,\ 6,\ 8,\ 6]$
- When $j=0$ -> $k_j = 0$.
:::info
Note that $T_0[0] = 0$, which indexes $x_0$ in $z_{Plonkish}$.
:::
So following the algorithm above, since $k_j < n$ we select the coefficient $1$.
Then we can set the coeff $M[0]_{0,k_j+1} = 1$ -> $M[0]_{0,1} = 1$ (the $+1$ skips the constant-1 column of $z_{CCS}$).
- When $j=1$ -> $k_j = 0$ -> We therefore select $1$.
Then we set $M[1]_{0,1} = 1$
- When $j=2$ -> $k_j = 0$. -> We therefore select $1$.
Then we set $M[2]_{0,1} = 1$
- When $j=3$ -> $k_j = 7$. **Note that now we fall into the $k_j \ge n$ case. Hence we take $k_j - n$ as an index into the selector vector $s$.**
:::info
$s[7-6] = s[1] = 1$ **(the second position of $s$, which corresponds to the selector value 1)**.
:::
Then we set $M[3]_{0,0} = s[k_j - n]$ -> $M[3]_{0,0} = 1$ (selector constants live in the constant-1 column).
- When $j=4$ -> $k_j = 6$ -> $s[6-6] = s[0] = 0$
Then we set $M[4]_{0,0} = 0$
- When $j=5$ -> Happens the same as above.
Then we set $M[5]_{0,0} = 0$
- When $j=6$ -> $k_j = 8$ -> $s[8-6] = s[2] = -1$ (note that $-1$ in a finite field $GF(p)$ is the element $p-1$).
Then we set $M[6]_{0,0} = -1$
- When $j=7$ -> $k_j = 6$ -> $s[6-6] = s[0] = 0$
Then we set $M[7]_{0,0} = 0$
**Up to this point, we should have the following matrices ($4 \times 7$, since column 0 is reserved for the constant $1$ of $z_{CCS}$):**
$$M_0 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_1 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_2 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_3 = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_4 = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_5 = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_6 = \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_7 = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
:::warning
Note that the paper mentions that any coefficient of the matrices contained in $\vec{M}$ that is not explicitly set equals 0!
:::
#### Step i=3, j=7 (end of the algorithm)
After the last iteration (so $i=3$, $j=7$) the algorithm stops, and we should obtain the following matrix vector $\vec{M}$ as a result:
$$M_0 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}$$
$$M_1 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}$$
$$M_2 = \begin{bmatrix}
0 & 1 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}$$
$$M_3 = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_4 = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_5 = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
2 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_6 = \begin{bmatrix}
-1 & 0 & 0 & 0 & 0 & 0 & 0\\
-1 & 0 & 0 & 0 & 0 & 0 & 0\\
-1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
$$M_7 = \begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}$$
Now that we have obtained $\vec{M}$, it's time to get $\vec{c}$ and $\vec{S}$. Remember that column 0 of our matrices is the fixed `1` entry we have reserved in our vector $z$.
Leading to $z = [1, x_0, x_1, x_2, x_3, x_4, x_5]$
### Obtaining c
$c$ is defined in a CCS instance as a vector of constants.
In the paper, we read:
> For $i \in\{0,1,...,q-1\}$, set $c_i$ to the coefficient of the $i$th monomial of $g$.
This is easy to see. If we represent the multivariate polynomial $g$ understanding each column name as a variable:
$$
g= q_mab + q_la + q_rb + q_oc + q_c
$$
where the monomials that form it are:
$$
\{q_mab,\ q_la,\ q_rb,\ q_oc,\ q_c\}
$$
:::info
We can easily see that we don't have any constants/coefficients multiplying any of the monomials.
**Hence, we conclude that they're all $1$.**
:::
$$
\vec{c} = [1,\ 1,\ 1,\ 1,\ 1]
$$
### Obtaining S
$\vec{S}$ is defined as a sequence of multisets over the domain $\{0,..,t-1\}$.
This has a really easy translation if we actually name things.
Let's name each variable of the $g$ polynomial (same as naming each column of the plonkish instance).
In this case, if we recall, the plonkish instance was (with its columns reordered so that $a$, $b$, $c$ come first):
$$
Plonkish = \begin{bmatrix}
a & b & c & qm & ql & qr & qo & qc \\
x_0 & x_0 & x_0 & 1 & 0 & 0 & -1 & 0\\
x_1 & x_1 & x_1 & 1 & 0 & 0 & -1 & 0\\
x_2 & x_3 & x_4 & 0 & 2 & 2 & -1 & 0\\
x_5 & x_5 & x_5 & 0 & 0 & 0 & 0 & 0\\
\end{bmatrix}$$
We will name/index all the columns of the $\mathbb{PLONKISH}$ matrix from $\{0..7\}$.
Naming therefore -> $\{a=0,\ b=1,\ c=2,\ q_m=3,\ q_l=4,\ q_r=5,\ q_o=6,\ q_c=7\}$.
Then we read in the paper:
> For $i \in\{0,1,...,q-1\}$: if the $i$th monomial contains a variable $j$, where $j \in\{0,1,...,t-1\}$, add $j$ to the multiset $S_i$ with multiplicity equal to the degree of the variable.
Note that if we represent the multivariate polynomial $g$ with these indexes instead of the string-based variable names, we obtain:
$$
\vec{S} = [[3,\ 0,\ 1],\ [4,\ 0],\ [5,\ 1],\ [6,\ 2],\ [7]]
$$
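A minimal sketch of this step (the list-of-lists monomial representation is a convenience of ours, not from the paper): if each monomial of $g$ is written as the list of its variable indices (repeating an index for higher powers), then $\vec{S}$ is exactly that list of lists, $q$ its length, and $d$ the largest multiset size:

```python
# Variables of g indexed as (a=0, b=1, c=2, qm=3, ql=4, qr=5, qo=6, qc=7);
# each monomial of g = qm*a*b + ql*a + qr*b + qo*c + qc becomes the
# multiset of column indices it touches.
monomials = [[3, 0, 1], [4, 0], [5, 1], [6, 2], [7]]
S = monomials                  # one multiset per monomial
q = len(S)                     # number of monomials
d = max(len(Si) for Si in S)   # max multiset cardinality (degree bound)
print(q, d)
```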
### Code to verify the previous example
@arnaucube provided a modified version of the above code with which we can verify the instance used in this example.
```python
# Plonk-CCS (https://eprint.iacr.org/2023/552) Sage prototype
# utils
def matrix_vector_product(M, v):
n = M.nrows()
r = [F(0)] * n
for i in range(0, n):
for j in range(0, M.ncols()):
r[i] += M[i][j] * v[j]
return r
def hadamard_product(a, b):
n = len(a)
r = [None] * n
for i in range(0, n):
r[i] = a[i] * b[i]
return r
def vec_add(a, b):
n = len(a)
r = [None] * n
for i in range(0, n):
r[i] = a[i] + b[i]
return r
def vec_elem_mul(a, s):
r = [None] * len(a)
for i in range(0, len(a)):
r[i] = a[i] * s
return r
# end of utils
# can use any finite field, using a small one for the example
F = GF(101)
# The following CCS instance values have been provided by Carlos
# (https://github.com/CPerezz) and Edu (https://github.com/ed255),
# and this sage script was made to check the CCS relation.
## Checks performed by this Plonk/CCS instance:
# - binary check for x0, x1
# - 2*x2 + 2*x3 == x4
M0 = matrix([
[F(0), 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1],
])
M1 = matrix([
[F(0), 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 1],
])
M2 = matrix([
[F(0), 1, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1],
])
M3 = matrix([
[F(1), 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
])
M4 = matrix([
[F(0), 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
])
M5 = matrix([
[F(0), 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[2, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
])
M6 = matrix([
[F(-1), 0, 0, 0, 0, 0, 0],
[-1, 0, 0, 0, 0, 0, 0],
[-1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
])
M7 = matrix([
[F(0), 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
])
z = [F(1), 0, 1, 2, 3, 10, 42]
print("z:", z)
assert len(z) == M0.ncols()
# CCS parameters
n = M0.ncols() # == len(z)
m = M0.nrows()
t=8
q=5
d=3
S = [[3,0,1], [4,0], [5,1], [6,2], [7]]
c = [1, 1, 1, 1, 1]
M = [M0,M1,M2,M3,M4,M5,M6,M7]
print("CCS values:")
print("n: %s, m: %s, t: %s, q: %s, d: %s" % (n, m, t, q, d))
print("M:", M)
print("z:", z)
print("S:", S)
print("c:", c)
# check CCS relation (this is agnostic to Plonk, for any CCS instance)
r = [F(0)] * m
for i in range(0, q):
hadamard_output = [F(1)]*m
for j in S[i]:
hadamard_output = hadamard_product(hadamard_output,
matrix_vector_product(M[j], z))
r = vec_add(r, vec_elem_mul(hadamard_output, c[i]))
print("\nCCS relation check (∑ cᵢ ⋅ ◯ Mⱼ z == 0):", r == [0]*m)
assert r == [0]*m
```
:::success
The most interesting check for the code above is to run the example with symbols rather than `GF` elements. If we do this symbolic analysis, we get the following `r` relation as the result of applying all the Hadamard products:
`r = [x0^2 - x0, x1^2 - x1, 2*x2 + 2*x3 - x4, 0]`
The elements of this `r` expression represent exactly the constraints our plonkish instance had.
If you recall, our plonkish instance was defined as:
$r_0$ -> Bool check of $x_0$
$r_1$ -> Bool check of $x_1$
$r_2$ -> Addition check: $2 \cdot a\ \texttt{+}\ 2 \cdot b\ \texttt{=}\ c$
$r_3$ -> Null constraint (this checks nothing)
So as you see, each row of the Plonkish instance has translated into an element of the vector `r`, which represents the symbolic result of the CCS relation.
- The first and second elements of $r$ encode the same operation: `x^2 - x`, **which is the boolean constraint!**
- The third element of $r$ contains the third constraint of the plonkish instance: `2*x2 + 2*x3 - x4` constrains the doubled sum of two variables to equal a third one.
- Finally, the last is a null constraint, hence the last element of `r` equals 0.
:::
# SuperSpartan
SuperSpartan is simply the classical Spartan protocol using CCS as its arithmetization.
Checking the correctness of $(I, W) \in R_{CCS}$ (i.e. that the instance-witness pair $(I, W)$ actually satisfies the CCS structure) boils down to checking that the following identity holds:
$$
0 \stackrel{?}{=} \sum_{a \in \{0,1\}^{\log m}}\widetilde{eq}(\tau, a)
\cdot
\sum_{i=0}^{q-1} c_i \cdot \prod_{j \in S_i}\Big(
\sum_{y \in \{0,1\}^{\log n}} \widetilde{M_j}(a,y) \cdot \widetilde{Z}(y)\Big)\ \ (1)
$$
What this is saying is that if $(I, W) \in R_{CCS}$, then we can apply the sumcheck protocol to the polynomial:
$$
g(a) := \widetilde{eq}(\tau, a) \cdot \sum_{i=0}^{q-1} c_i \prod_{j \in S_i}\Big(\sum_{y \in \{0,1\}^{\log n}} \widetilde{M_j}(a,y) \cdot \widetilde{Z}(y)\Big)\ \ (2)
$$
This essentially saves the verifier the cost of computing the right-hand side of the first equation of this section; instead, the verifier only evaluates $g$ at a random input $r_a$.
Once we are here, we can see that the problem translates into computing $t$ sumchecks in parallel for:
$$
\forall j \in \{0,..,t-1\} \ \ \ \sum_{y \in \{0,1\}^{\log n}} \widetilde{M_j}(r_a,y) \cdot \widetilde{Z}(y)\ \ (3)
$$
Note that here $\widetilde{Z}$ is defined as:
$$
\widetilde{Z}(X_0,..,X_{log\ n-1}) = (1-X_0) \cdot \widetilde{W}(X_1,..,X_{log\ n-1}) \texttt{+} X_0 \cdot \widetilde{(1,x)}(X_1,..,X_{log\ n-1})\ \ (4)
$$
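For intuition about the $\widetilde{Z}$ and $\widetilde{M_j}$ objects above, here is a small self-contained sketch (assumptions: a toy prime field and a vector whose length is a power of two) of evaluating the multilinear extension of a vector at an arbitrary point via the $\widetilde{eq}$ Lagrange basis:

```python
# MLE(v)(r) = sum over b in {0,1}^k of eq(r, b) * v[b], for len(v) = 2^k
p = 101  # toy prime field

def eq(r, b):
    # eq(r, b) = prod_i (r_i*b_i + (1 - r_i)*(1 - b_i))
    out = 1
    for ri, bi in zip(r, b):
        out = out * ((ri * bi + (1 - ri) * (1 - bi)) % p) % p
    return out

def mle_eval(v, r):
    k = len(r)
    assert len(v) == 2 ** k
    total = 0
    for idx, vi in enumerate(v):
        b = [(idx >> i) & 1 for i in range(k)]  # bits of idx
        total = (total + eq(r, b) * vi) % p
    return total

v = [3, 1, 4, 1]  # toy vector, k = 2 variables
# On boolean inputs the MLE agrees with the vector itself:
assert mle_eval(v, [0, 0]) == 3 and mle_eval(v, [1, 0]) == 1
```

Outside the boolean hypercube the MLE interpolates: e.g. `mle_eval(v, [2, 0])` gives $(1-2)\cdot 3 + 2\cdot 1 = -1 \equiv 100 \pmod{101}$.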
**In Spartan, a Fiat-Shamir-based trick lets us go from $N$ sumchecks down to 2. Here we have $t$ of them, but there's no mention of how to reduce the number of sumchecks to lower the workload.**
```mermaid
sequenceDiagram
Prover->>Verifier: MLE(W), $\widetilde{Z}$.
Verifier-->>Verifier: Sample $\tau$
Verifier->>Prover: $\tau$
Verifier-->Prover: Apply Sumcheck reduction to eq (2)
Verifier-->>Verifier: Sample $r_a$
Verifier->>Prover: $r_a$
Prover->>Verifier: $g(r_a)$
Verifier-->>Verifier: Sample $\gamma$
Verifier->>Prover: $\gamma$
Verifier-->Prover: Apply sumcheck protocol again to eq (4)
Verifier-->Verifier: Sample $r_y$ for each $M_i$
Verifier->>Prover: $[r_{0_{y}},...,r_{t-1_{y}}]$
Prover->>Verifier: $[MLE(M_0(r_{0_{y}})) \cdot MLE(Z(r_{0_{y}})),...,MLE(M_{t-1}(r_{t-1_{y}})) \cdot MLE(Z(r_{0_{y}}))]$
Verifier-->>Verifier: For each $M_i$ check: MLE(M_i) = v_i
Verifier-->>Verifier: Check MLE(Z_{r_y}) = v_z
```
## SuperSpartan - IOP for CCS to Snark for CCS
In this section the paper gives a clear description of the protocol, which I think is worth rephrasing here:
:::info
### Prover's work
1. Commit to an $O(log\ n)$-variate polynomial $\widetilde{Z}$.
2. Performs the sumcheck reduction over equation (2), where the committed $\widetilde{Z}$ is defined as:
$$
\widetilde{Z}(X_0,..,X_{log\ n-1}) = (1-X_0) \cdot \widetilde{W}(X_1,..,X_{log\ n-1}) \texttt{+} X_0 \cdot \widetilde{(1,x)}(X_1,..,X_{log\ n-1})
$$
3. Proves the evaluations of $\widetilde{Z}$ at particular random points sampled by the Verifier.
4. If the CCS instance is not uniform (different $M_i$'s within the same $R_{CCS}$), proves one evaluation of each of the $[M_0,..,M_{t-1}]$ instances at a random point ($r_y, r_a$) sampled by the Verifier.
_Note that, depending on the proving system, it may be possible to batch all the $M$ instances together so that a single evaluation is performed instead of $t$ of them._
### Verifier's work
1. Participates in the sumcheck protocol over equation (2), where:
$$
\widetilde{Z}(X_0,..,X_{log\ n-1}) = (1-X_0) \cdot \widetilde{W}(X_1,..,X_{log\ n-1}) \texttt{+} X_0 \cdot \widetilde{(1,x)}(X_1,..,X_{log\ n-1})
$$
**I should elaborate more on what that exactly means.**
2. Verifies the proofs of the evaluations of $\widetilde{Z}$.
3. If the CCS instance is not uniform (different $M_i$'s within the same $R_{CCS}$), verifies the evaluation of each of the $[M_0,..,M_{t-1}]$ instances at the previously sampled random point ($r_y, r_a$).
_Note that, depending on the proving system, it may be possible to batch all the $M$ instances together so that a single evaluation is performed instead of $t$ of them._
:::
# CCS+ & SuperSpartan+
The paper also mentions the existence of an extension of SuperSpartan called `SuperSpartan+` and an extension of CCS called `CCS+`.
## CCS+
The description of this is very brief. It only mentions that, in addition to the terms a `CCS` instance already has, a `CCS+` instance also contains:
:::info
- A lookup table $T$ which is a set of field elements.
- A sequence of lookup operations $L$ where each entry is in the range $[n]$.
Then it's mentioned that, in addition to all the previous requirements for an $R_{CCS}$ instance to be correct, we now also need $\forall o \in L,\ z[o] \in T$ (for each lookup operation $o$ in $L$, the witness entry $z[o]$ needs to be inside the lookup table $T$ for the $R_{CCS+}$ instance to be valid).
:::
## SuperSpartan+
In order for this to work, the Verifier needs to have oracle access to the polynomials $T$ and $a$ where $T$ is the table that encodes the range or set and $a$ the values claimed to be inside of $T$.
:::info
In order to support lookups, SuperSpartan+ encodes all the lookup operations $L$ in a sparse matrix $M_L$ where $\forall i \in [|L|],\ M_L[i][L[i]] = 1$ and all other entries are set to 0.
Then the prover provides the polynomial $\widetilde{a}$ together with $\widetilde{w}$, where $a$ is claimed to be the MLE of the vector $M_L \cdot z$.
As in the original protocol, a sum-check invocation reduces the problem to checking the claimed relationship $M_L \cdot z = a$.
:::
:::
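A toy sketch of this encoding (the concrete values are hypothetical): build the sparse selection matrix $M_L$ from the lookup positions $L$, so that $M_L \cdot z$ collects exactly the looked-up entries of $z$, which must then all lie in the table $T$:

```python
# Lookup operations L select positions of z; M_L[i][L[i]] = 1 picks them out.
z = [1, 5, 7, 5, 2]   # toy witness vector
L = [1, 3]            # positions of z claimed to be in the table
T = {2, 5, 9}         # lookup table (set of field elements)

# Sparse selection matrix: one row per lookup operation
M_L = [[1 if col == pos else 0 for col in range(len(z))] for pos in L]
a = [sum(m * zi for m, zi in zip(row, z)) for row in M_L]  # a = M_L * z
assert a == [z[i] for i in L]
assert all(ai in T for ai in a)  # every looked-up value is in the table
print("lookup relation satisfied")
```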
# Questions left:
- How do we go from $t$ sumchecks to just 1 for the $M_i$ instances? I assume by doing the same tricks as in Spartan, but it's not clear.
- Is the mermaid diagram accurate? Is it missing anything? Does it map 1:1 to [this section](https://hackmd.io/nQtquuk9QCGiB9EhUPgXvg?both#IOP-for-CCS-to-Snark-for-CCS)?
- How do lookups fit into this? CCS+ does not define any equation that considers them.