# Nova and Memory
:::danger
Why can't memory just be treated as part of the input/output?
:::
One thing that Nova does not natively support (but SuperNova does?) is VM *memory*. We can think of memory as part of the state of the computation: abstractly, it is just a vector that is "carried" along with the computation.
:::danger
Is this just adding a new row/vector to the R1CS?
:::
A natural, folding-friendly design [^memory] for supporting memory is the following:
1. we assume that our memory is a $k$-element array $\mathbf{v} = (v_0, \dots, v_{k - 1})$.
2. we commit the entire memory using KZG commitments. This means we commit to $\mathbf{v}$ with
\begin{equation}
c = [p(X)] = \sum_{i = 0}^{k - 1}v_i \cdot [L_i(X)]
\end{equation}
using a structured reference string $\{[s^i]\}_{i \in \{0,\dots, k - 1\}}$ derived from a secret point $s$, as in KZG. Here $L_i(X)$ denotes the $i$-th Lagrange basis polynomial over the domain $\{\omega^j\}_{j = 0}^{k - 1}$, where $\omega$ is a $k$-th root of unity.
3. we define 3 operations on memory, `READ`, `ADD`, and `WRITE`, and make them folding-friendly.
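As a concreteness check, the commitment step can be sketched in Python over a toy field, replacing each group element $[z]$ by the field element $z$ itself. This is insecure by construction (the secret point $s$ is in the clear) and every parameter below (field size, domain, memory contents) is an illustrative assumption, not part of the proposal:

```python
# Toy stand-in for the KZG commitment: [z] is modeled as the field
# element z itself, so [p(X)] becomes p(s) mod Q. Insecure, for
# illustration only; all parameters are assumptions.
Q = 17                    # small prime field F_17
OMEGA, K = 4, 4           # omega = 4 has multiplicative order 4 in F_17
S = 5                     # the "secret" point s, public in this toy model
DOMAIN = [pow(OMEGA, i, Q) for i in range(K)]  # evaluation points omega^i

def lagrange_at(i: int, x: int) -> int:
    """Evaluate the i-th Lagrange basis polynomial L_i at x, mod Q."""
    num, den = 1, 1
    for j in range(K):
        if j != i:
            num = num * (x - DOMAIN[j]) % Q
            den = den * (DOMAIN[i] - DOMAIN[j]) % Q
    return num * pow(den, -1, Q) % Q

def commit(v: list) -> int:
    """c = sum_i v_i * [L_i(X)], i.e. p(s) for the p interpolating v."""
    return sum(v[i] * lagrange_at(i, S) for i in range(K)) % Q

mem = [3, 1, 4, 1]        # the k-element memory vector v
c = commit(mem)
```

The Lagrange-basis form is what makes the per-position updates below cheap: each $v_i$ touches exactly one basis commitment $[L_i(X)]$.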
### `READ`
First, we describe `READ` access. Specifically, this means proving that the $i$-th element of the array $\mathbf{v}$, namely $v_i$, is equal to some public value $y_i$. In terms of the polynomial commitment, this is equivalent to proving that $p(\omega^i) = y_i$.
The proof for opening at the $i$-th position is computed as
\begin{equation}
w_i = \left[\frac{p(X) - p(\omega^i)}{X -\omega^i}\right].
\end{equation}
To verify the correctness of the proof, we check the pairing equation
\begin{equation}
e([s] - [\omega^i], w_i) = e(c - [y_i], [1]).
\end{equation}
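Continuing the toy model (group elements replaced by field elements, secret point in the clear), the pairing check collapses to plain field arithmetic: $e([s] - [\omega^i], w_i) = e(c - [y_i], [1])$ becomes $(s - \omega^i) \cdot w_i = c - y_i \pmod{Q}$. A sketch, with all parameters assumed for illustration:

```python
# Toy READ: the opening proof w_i = [(p(X) - y_i)/(X - omega^i)] is the
# quotient polynomial evaluated at s; with s in the clear it is a single
# field division (a real KZG prover computes the quotient's coefficients).
Q, OMEGA, K, S = 17, 4, 4, 5          # toy parameters, as before
DOMAIN = [pow(OMEGA, i, Q) for i in range(K)]

def lagrange_at(i, x):
    num, den = 1, 1
    for j in range(K):
        if j != i:
            num = num * (x - DOMAIN[j]) % Q
            den = den * (DOMAIN[i] - DOMAIN[j]) % Q
    return num * pow(den, -1, Q) % Q

def commit(v):
    return sum(v[i] * lagrange_at(i, S) for i in range(K)) % Q

def prove_read(v, i):
    """w_i = (p(s) - y_i) / (s - omega^i); only valid for the true y_i."""
    return (commit(v) - v[i]) * pow(S - DOMAIN[i], -1, Q) % Q

def verify_read(c, i, y, w):
    """Field-arithmetic stand-in for the pairing equation."""
    return (S - DOMAIN[i]) * w % Q == (c - y) % Q

mem = [3, 1, 4, 1]
c = commit(mem)
w = prove_read(mem, 2)                # claim: mem[2] == 4
```

A wrong claimed value $y_i$ makes the two sides differ by a nonzero constant, so the check rejects it.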
Now, we build an R1CS and define its components. Notice that verifying the evaluation at $\omega^i$ requires $[s], c, [\omega^i], [y_i], w_i$, and $[1]$, none of which is secret, so the witness vector consists of exactly these elements.
|variable|value|
|----|----|
|$\mathbf{w}$ | $\begin{pmatrix} [s] & c& [\omega^i] & [y_i] & w_i & [1] \end{pmatrix}^T.$ |
|$\mathbf{A}$ | $\begin{pmatrix} 1 & 0 & -1 & 0 & 0 & 0 \end{pmatrix}.$ |
|$\mathbf{B}$ | $\begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}.$ |
|$\mathbf{C}$ | $\begin{pmatrix} 0 & 1 & 0 & -1 & 0 & 0 \end{pmatrix}.$ |
The single constraint $(\mathbf{A}\mathbf{w}) \cdot (\mathbf{B}\mathbf{w}) = \mathbf{C}\mathbf{w}$ then reproduces the pairing check above, with the product on the left interpreted as the pairing $e(\cdot, \cdot)$.
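On the same toy values, this R1CS row can be checked directly. In the real scheme the witness entries are group elements and the product is a pairing; here (toy model, all parameters illustrative) everything is an element of $\mathbb{F}_{17}$:

```python
# Checking the single R1CS constraint (A.w) * (B.w) = C.w on toy values,
# with group elements modeled as elements of F_17.
Q, OMEGA, K, S = 17, 4, 4, 5
DOMAIN = [pow(OMEGA, i, Q) for i in range(K)]

def lagrange_at(i, x):
    num, den = 1, 1
    for j in range(K):
        if j != i:
            num = num * (x - DOMAIN[j]) % Q
            den = den * (DOMAIN[i] - DOMAIN[j]) % Q
    return num * pow(den, -1, Q) % Q

def commit(v):
    return sum(v[i] * lagrange_at(i, S) for i in range(K)) % Q

mem, i = [3, 1, 4, 1], 2
c, y = commit(mem), mem[i]
w_i = (c - y) * pow(S - DOMAIN[i], -1, Q) % Q   # READ opening proof

witness = [S, c, DOMAIN[i], y, w_i, 1]          # ([s], c, [w^i], [y_i], w_i, [1])
A = [1, 0, -1, 0, 0, 0]
B = [0, 0, 0, 0, 1, 0]
C = [0, 1, 0, -1, 0, 0]

dot = lambda row, vec: sum(r * x for r, x in zip(row, vec)) % Q
lhs = dot(A, witness) * dot(B, witness) % Q     # ([s] - [w^i]) * w_i
rhs = dot(C, witness)                           # c - [y_i]
```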
### `ADD`
Suppose we want to change $v_i$ to $v_i + t$. Then we want to commit to the new vector $(v_0, \dots, v_{i - 1}, v_i + t, v_{i + 1}, \dots, v_{k - 1})$, with commitment \begin{align}
c'&= t \cdot [L_i(X)] + \sum_{j}v_j \cdot [L_j(X)]\\
&= c + t \cdot [L_i(X)].
\end{align}
Hence, to update $c$ to $c'$, we simply add $t \cdot [L_i(X)]$ to $c$; the update is homomorphic and requires no opening proof.
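In the toy model this update is a single multiply-add, and the result matches recommitting the new vector from scratch (parameters illustrative, as before):

```python
# Toy ADD: update c' = c + t * [L_i(X)] and compare against a fresh
# commitment to the updated memory. No opening proof is involved.
Q, OMEGA, K, S = 17, 4, 4, 5
DOMAIN = [pow(OMEGA, i, Q) for i in range(K)]

def lagrange_at(i, x):
    num, den = 1, 1
    for j in range(K):
        if j != i:
            num = num * (x - DOMAIN[j]) % Q
            den = den * (DOMAIN[i] - DOMAIN[j]) % Q
    return num * pow(den, -1, Q) % Q

def commit(v):
    return sum(v[i] * lagrange_at(i, S) for i in range(K)) % Q

mem = [3, 1, 4, 1]
c = commit(mem)
t, i = 5, 1
c_new = (c + t * lagrange_at(i, S)) % Q   # the ADD update on the commitment
mem[i] = (mem[i] + t) % Q                 # the corresponding memory update
```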
### `WRITE`
Suppose we want to change $v_i$ to $v_i'$. We first perform a `READ` to prove the current value of $v_i$. Then we set $t = v_i' - v_i$ and `ADD` $t$ to $v_i$.
Note that opening a commitment is expensive, so `READ` (and hence `WRITE`) is more expensive than `ADD`.
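Putting the two operations together, a toy `WRITE` first proves the old value with a `READ`, then shifts the commitment with an `ADD`. The `write` helper and all parameters are illustrative assumptions in the same insecure toy model:

```python
# Toy WRITE v_i -> v_i': READ proves the old value, then ADD shifts the
# commitment by t = v_i' - v_i. Same insecure field-element model as above.
Q, OMEGA, K, S = 17, 4, 4, 5
DOMAIN = [pow(OMEGA, i, Q) for i in range(K)]

def lagrange_at(i, x):
    num, den = 1, 1
    for j in range(K):
        if j != i:
            num = num * (x - DOMAIN[j]) % Q
            den = den * (DOMAIN[i] - DOMAIN[j]) % Q
    return num * pow(den, -1, Q) % Q

def commit(v):
    return sum(v[i] * lagrange_at(i, S) for i in range(K)) % Q

def write(mem, c, i, new_val):
    old = mem[i]
    # READ: prove and check the current value before overwriting it.
    w = (c - old) * pow(S - DOMAIN[i], -1, Q) % Q
    assert (S - DOMAIN[i]) * w % Q == (c - old) % Q
    # ADD: homomorphically shift position i by the delta t.
    t = (new_val - old) % Q
    mem[i] = new_val % Q
    return (c + t * lagrange_at(i, S)) % Q

mem = [3, 1, 4, 1]
c = write(mem, commit(mem), 0, 9)
```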
## From Oskar:
SuperNova is about supporting different types of $F$ that correspond to, e.g., different opcodes (`ADD`, `MUL`, etc.) in a VM. This reduces prover overhead because an additional opcode doesn't incur additional cost on the circuit. See https://zkresear.ch/t/towards-a-nova-based-zk-vm/105#supernova-vm-10
I don't think there's anything else specific to memory in SuperNova.
More for another doc: it might also be useful to distinguish stack vs. heap memory, since they have different characteristics.
## References
[^memory]: Proposal for Handling The Memory of zkVM, https://hackmd.io/vF1jobzsRoubyUASqQG0Zw?both