## Introduction

In the EVM circuit, `ExecutionState`s are not well balanced: some are very complicated and require many lookups, while others are not. Since the cost of the EVM circuit is determined by the worst case, this note shows how we strike a balance by folding the layout to improve lookup usage efficiency.

## Approach

As an example, suppose we want to perform 2 lookups (bitwise AND and bitwise OR) into the same table without too much overhead.

### Naive Layout

$$
\begin{array}{|c|c|c|c|}
\hline
\texttt{a1} & \texttt{a2} & \texttt{a3 (a1 \& a2)} & \texttt{a4 (a1 | a2)} \\\hline
\texttt{0b0101} & \texttt{0b1010} & \texttt{0b0000} & \texttt{0b1111} \\\hline
\end{array}
$$

The pseudo code looks like:

```python
assert subset(
    input=rlc(Tag::BitwiseAnd, a1, a2, a3),
    table=rlc(fixed_table),
)
assert subset(
    input=rlc(Tag::BitwiseOr, a1, a2, a4),
    table=rlc(fixed_table),
)
```

### Folded Layout

In the folded layout, we enable the lookups by constraining the **compressed input** to a single column at successive rotations, so the same subset argument is reused as much as possible. This layout greatly reduces the number of lookups in the EVM circuit.

$$
\begin{array}{|c|c|c|c|c|}
\hline
\texttt{a1} & \texttt{a2} & \texttt{a3 (a1 \& a2)} & \texttt{a4 (a1 | a2)} & \texttt{a5 (compressed input)} \\\hline
\texttt{0b0101} & \texttt{0b1010} & \texttt{0b0000} & \texttt{0b1111} & \texttt{rlc(Tag::BitwiseAnd, a1, a2, a3)} \\\hline
\texttt{} & \texttt{} & \texttt{} & \texttt{} & \texttt{rlc(Tag::BitwiseOr, a1, a2, a4)} \\\hline
\end{array}
$$

The pseudo code looks like:

```python
assert a5 == rlc(Tag::BitwiseAnd, a1, a2, a3)
# a5_w is the same compressed-input column at the next rotation
assert a5_w == rlc(Tag::BitwiseOr, a1, a2, a4)
assert subset(
    input=a5,
    table=rlc(fixed_table),
)
```

## Misc

### Multiple inputs to same table

To argue that $n$ degree-1 inputs are each contained in the same table, the costs compare as follows:

|                         | $\mathsf{plonkup}$ | $\mathsf{halo2\ subset\ argument}$ |
|:-----------------------:| ------------------ | ---------------------------------- |
| $N_{\text{point}}$      | $2+n$              | $1+2n$                             |
| $N_{\text{evaluation}}$ | $5+n$              | $2+3n$                             |
| $D$                     | $2+n$              | $1+2n$                             |

For $n \ge 2$, $\mathsf{plonkup}$ is cheaper in every metric, and its advantage grows with $n$. With the extra challenge support in halo2, we can roll our own $\mathsf{plonkup}$.
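For a concrete instance, plugging two sample values of $n$ into the formulas of the table above (this is just the table's arithmetic, nothing new):

$$
\begin{aligned}
n = 1 &: \quad \mathsf{plonkup}\ (3,\ 6,\ 3) \quad \text{vs.} \quad \mathsf{halo2\ subset\ argument}\ (3,\ 5,\ 3) \\
n = 4 &: \quad \mathsf{plonkup}\ (6,\ 9,\ 6) \quad \text{vs.} \quad \mathsf{halo2\ subset\ argument}\ (9,\ 14,\ 9)
\end{aligned}
$$

where each triple is $(N_{\text{point}},\ N_{\text{evaluation}},\ D)$: at $n = 1$ the two are roughly tied, while from $n = 2$ onward $\mathsf{plonkup}$ wins in every metric.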
#### $\mathsf{plonkup}$

$$
\begin{matrix}
\text{Prover} && \text{Transcript} && \text{Verifier} \\\\
\begin{aligned}
& \Big\{[f_i]_1\Big\} = \Big\{\mathsf{com}(f_i)\Big\} \\
& \Big\{[t_i]_1\Big\} = \Big\{\mathsf{com}(t_i)\Big\}
\end{aligned}
&& \xrightarrow{\big\{[f_i]_1\big\},\ \big\{[t_i]_1\big\}} \\\\
&& \xleftarrow{\theta} && \theta \xleftarrow{\$} \mathbb{F} \\\\
\Big\{[h_i]_1\Big\}_{i\in[n+1]} = \Big\{\mathsf{com}(h_i)\Big\}_{i\in[n+1]}
&& \xrightarrow{\big\{[h_i]_1\big\}_{i\in[n+1]}} \\\\
&& \xleftarrow{\beta,\ \gamma} && \beta, \gamma \xleftarrow{\$} \mathbb{F} \\\\
[z]_1 = \mathsf{com}(z) && \xrightarrow{[z]_1} \\\\
&& \xleftarrow{x} && x \xleftarrow{\$} \mathbb{F} \\\\\\
\begin{alignat*}{2}
& \Big\{\bar{f}_{i}\Big\} && = \Big\{\mathsf{eval}(f_i, x)\Big\} \\
& \Big\{\bar{t}_{i}\Big\} && = \Big\{\mathsf{eval}(t_i, x)\Big\} \\
& \,\,\, \bar{t}^\prime_{\omega} && = \mathsf{eval}\big(t^\prime, \omega x\big) \\
& \Big\{\bar{h}_{i}\Big\}_{i\in[n+1]} && = \Big\{\mathsf{eval}(h_i, x)\Big\}_{i\in[n+1]} \\
& \,\,\, \bar{h}_{0\omega} && = \mathsf{eval}(h_0, \omega x) \\
& \,\,\, \bar{z} && = \mathsf{eval}(z, x) \\
& \,\,\, \bar{z}_{\omega} && = \mathsf{eval}(z, \omega x) \\
\end{alignat*}
&& \xrightarrow{
\begin{aligned}
& \big\{\bar{f}_{i}\big\},\ \big\{\bar{t}_{i}\big\},\ \bar{t}^\prime_{\omega},\ \big\{\bar{h}_{i}\big\}_{i\in[n+1]},\ \bar{h}_{0\omega},\ \bar{z},\ \bar{z}_{\omega} \\
\end{aligned}
} \\\\
\end{matrix}
$$

#### $\mathsf{halo2\ subset\ argument}$

$$
\begin{matrix}
\text{Prover} && \text{Transcript} && \text{Verifier} \\\\
\begin{aligned}
& \Big\{[f_i]_1\Big\} = \Big\{\mathsf{com}(f_i)\Big\} \\
& \Big\{[t_i]_1\Big\} = \Big\{\mathsf{com}(t_i)\Big\}
\end{aligned}
&& \xrightarrow{\big\{[f_i]_1\big\},\ \big\{[t_i]_1\big\}} \\\\
&& \xleftarrow{\theta} && \theta \xleftarrow{\$} \mathbb{F} \\\\
\left\{
\begin{aligned}
& [f_i^\prime]_1 \\
& [t_i^\prime]_1
\end{aligned}
\right\}_{i\in[n]}
=
\left\{
\begin{aligned}
& \mathsf{com}(f_i^\prime) \\
& \mathsf{com}(t_i^\prime)
\end{aligned}
\right\}_{i\in[n]}
&& \xrightarrow{\big\{[f_i^\prime]_1,\ [t_i^\prime]_1\big\}_{i\in[n]}} \\\\
&& \xleftarrow{\beta,\ \gamma} && \beta, \gamma \xleftarrow{\$} \mathbb{F} \\\\
[z]_1 = \mathsf{com}(z) && \xrightarrow{[z]_1} \\\\
&& \xleftarrow{x} && x \xleftarrow{\$} \mathbb{F} \\\\\\
\begin{alignat*}{2}
& \Big\{\bar{f}_{i}\Big\} && = \Big\{\mathsf{eval}(f_i, x)\Big\} \\
& \Big\{\bar{t}_{i}\Big\} && = \Big\{\mathsf{eval}(t_i, x)\Big\} \\
& \left\{
\begin{aligned}
& \bar{f}^\prime_{i} \\
& \bar{f}^\prime_{i\omega} \\
& \bar{t}^\prime_{i}
\end{aligned}
\right\}_{i\in[n]}
&& =
\left\{
\begin{aligned}
& \mathsf{eval}(f^\prime_i, x) \\
& \mathsf{eval}(f^\prime_i, \omega x) \\
& \mathsf{eval}(t^\prime_i, x)
\end{aligned}
\right\}_{i\in[n]} \\
& \,\,\, \bar{z} && = \mathsf{eval}(z, x) \\
& \,\,\, \bar{z}_{\omega} && = \mathsf{eval}(z, \omega x) \\
\end{alignat*}
&& \xrightarrow{
\begin{aligned}
& \big\{\bar{f}_{i}\big\},\ \big\{\bar{t}_{i}\big\},\ \big\{\bar{f}^\prime_{i},\ \bar{f}^\prime_{i\omega},\ \bar{t}^\prime_{i}\big\}_{i\in[n]},\ \bar{z},\ \bar{z}_{\omega} \\
\end{aligned}
} \\\\
\end{matrix}
$$
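As a sanity check on the cost table in the previous section, here is a minimal sketch that counts the lookup-specific commitments and evaluations the prover sends in the two transcripts above (the shared $\big\{[f_i]_1\big\}$, $\big\{[t_i]_1\big\}$ and their openings are excluded, as in the table; the degree column $D$ is not modeled). The function names are illustrative only, not part of any codebase.

```python
def plonkup_cost(n: int) -> tuple[int, int]:
    """Lookup-specific (commitments, evaluations) read off the plonkup transcript."""
    commitments = (n + 1) + 1            # {h_i} for i in [n+1], plus z
    evaluations = (n + 1) + 1 + 1 + 2    # {h_i at x}, h_0 at wx, t' at wx, z at x and wx
    return commitments, evaluations


def halo2_subset_cost(n: int) -> tuple[int, int]:
    """Lookup-specific (commitments, evaluations) read off the halo2 transcript."""
    commitments = 2 * n + 1              # {f'_i, t'_i} for i in [n], plus z
    evaluations = 3 * n + 2              # {f'_i at x and wx, t'_i at x}, z at x and wx
    return commitments, evaluations


for n in range(1, 6):
    p, h = plonkup_cost(n), halo2_subset_cost(n)
    # Both match the table: (2 + n, 5 + n) vs (1 + 2n, 2 + 3n).
    assert p == (2 + n, 5 + n)
    assert h == (1 + 2 * n, 2 + 3 * n)
    print(f"n = {n}: plonkup {p} vs halo2 subset argument {h}")
```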