This is my second article sponsored by a [PSE Learning Grant](https://pse.dev/en). I learned how to generate Groth16 proofs at the [PSE ZK Summer Contribution Program 2023](https://github.com/privacy-scaling-explorations/contribution-program). While it is easy to understand why Groth16 satisfies *completeness*, understanding why it satisfies *knowledge soundness* is much less straightforward. In this article, we take a deep dive into the proof that Groth16 satisfies *knowledge soundness*.

# Citation

- [The original Groth16 paper](https://eprint.iacr.org/2016/260.pdf)
- [A paper that proves that Groth16 satisfies weak simulation extractability](https://eprint.iacr.org/2020/811.pdf) In this paper the authors prove the same knowledge soundness as the original paper with a different method.

# Ad

PSE seems to be hosting another round of the [PSE ZK Summer Contribution Program](https://pse.dev/en/programs) this year! I highly recommend attending it.

# NILP: Non-interactive linear proof

[Separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns) is useful for breaking down a big complex problem into a bunch of small problems, each of which is easy to understand. Groth16 is a proof system that uses elliptic curve cryptography, but surprisingly, to understand the meat of its knowledge soundness proof, you can set aside the details of elliptic curve operations. This is due to an abstraction called NILP, which Groth uses to argue for Groth16's knowledge soundness. This separation of concerns makes the proof much easier to follow.

NILP is a type of non-interactive proof, where a proof $\pi$ is computed by this formula:

$$\pi = \Pi\sigma$$

Each of those letters is a matrix. $\pi$ is the proof. We'll look at the definition of $\Pi$ later. $\sigma$ is a common reference string. A common reference string, or CRS, is academic jargon for the list of values that we generate during a trusted setup ceremony.
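To make the linear structure concrete, here is a toy sketch (the modulus, CRS values, and matrix below are all invented for illustration; real Groth16 works over a large pairing-friendly field): a NILP proof is nothing more than a matrix-vector product, so each proof element is a linear combination of CRS elements.

```python
# Toy NILP: pi = Pi * sigma over a small prime field. All concrete values
# here are made up for illustration.

P = 97  # toy prime standing in for the scalar field

def matvec_mod(Pi, sigma, p=P):
    """Compute pi = Pi * sigma mod p: every proof element is a linear
    combination of CRS elements, which is all a NILP prover can do."""
    return [sum(a * s for a, s in zip(row, sigma)) % p for row in Pi]

sigma = [5, 7, 11, 13]          # a 4-element toy CRS
Pi = [
    [1, 0, 0, 0],               # A takes the first CRS element as-is
    [0, 1, 0, 0],               # B takes the second
    [2, 0, 3, 1],               # C is some other linear combination
]
A, B, C = matvec_mod(Pi, sigma)
print(A, B, C)  # prints: 5 7 56
```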
If you recall the definition of Groth16, the proof $\pi$ consists of 3 values $A, B, C$, and the trusted setup ceremony generates these values.

![Screenshot from 2024-03-28 10-09-43](https://hackmd.io/_uploads/B1PtQSz10.png)

This means we can rewrite the equation in this way:

$$\begin{bmatrix}A \\ B \\ C\end{bmatrix} = \Pi \begin{bmatrix}\alpha \\ \beta \\ \gamma \\ \delta \\ x^0 \\ \vdots \\ x^{n-1} \\ \frac{\beta u_0(x) + \alpha v_0(x) + w_0(x)}{\gamma} \\ \vdots \\ \frac{\beta u_l(x) + \alpha v_l(x) + w_l(x)}{\gamma} \\ \frac{\beta u_{l+1}(x) + \alpha v_{l+1}(x) + w_{l+1}(x)}{\delta} \\ \vdots \\ \frac{\beta u_m(x) + \alpha v_m(x) + w_m(x)}{\delta} \\ \frac{x^0t(x)}{\delta} \\ \vdots \\ \frac{x^{n-2}t(x)}{\delta}\end{bmatrix}$$

If we expand $\Pi$ and assign a letter to each element (to make it easier to refer to them), the equation becomes:

$$\begin{bmatrix}A \\ B \\ C\end{bmatrix} = \begin{bmatrix}A_\alpha && A_\beta && A_\gamma && A_\delta && A_{x,0} && \dots && A_{x,n-1} && A_{uvw,0} && \dots && A_{uvw,m} && A_{t,0} && \dots && A_{t,n-2} \\ B_\alpha && B_\beta && B_\gamma && B_\delta && B_{x,0} && \dots && B_{x,n-1} && B_{uvw,0} && \dots && B_{uvw,m} && B_{t,0} && \dots && B_{t,n-2} \\ C_\alpha && C_\beta && C_\gamma && C_\delta && C_{x,0} && \dots && C_{x,n-1} && C_{uvw,0} && \dots && C_{uvw,m} && C_{t,0} && \dots && C_{t,n-2}\end{bmatrix} \begin{bmatrix}\alpha \\ \beta \\ \gamma \\ \delta \\ x^0 \\ \vdots \\ x^{n-1} \\ \frac{\beta u_0(x) + \alpha v_0(x) + w_0(x)}{\gamma} \\ \vdots \\ \frac{\beta u_l(x) + \alpha v_l(x) + w_l(x)}{\gamma} \\ \frac{\beta u_{l+1}(x) + \alpha v_{l+1}(x) + w_{l+1}(x)}{\delta} \\ \vdots \\ \frac{\beta u_m(x) + \alpha v_m(x) + w_m(x)}{\delta} \\ \frac{x^0t(x)}{\delta} \\ \vdots \\ \frac{x^{n-2}t(x)}{\delta}\end{bmatrix}$$

Even a malicious prover of Groth16 has to follow this formula to calculate $A, B, C$. Why? Because each element of the CRS of Groth16 is hidden under elliptic curve points.
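To see what "hidden under elliptic curve points" rules out, here is a toy stand-in (not a real curve library; the class just tracks the hidden discrete log): linear combinations of group elements work fine, but point-times-point is simply not an available operation.

```python
# Toy model of a group element [x]_1: addition and scalar multiplication
# are supported, multiplying two group elements is not. All names here are
# invented for this sketch.

class GroupElem:
    """Exponent-tracking stand-in for an elliptic curve point [x]_1."""
    def __init__(self, log):
        self.log = log                  # the hidden discrete log
    def __add__(self, other):           # [x] + [y] = [x + y]
        return GroupElem(self.log + other.log)
    def __rmul__(self, k):              # k * [x] = [k x], k a known scalar
        return GroupElem(k * self.log)
    def __mul__(self, other):
        if isinstance(other, int):      # [x] * k = [k x]
            return GroupElem(self.log * other)
        raise TypeError("cannot multiply two group elements")

alpha, beta = GroupElem(3), GroupElem(5)
ok = 2 * alpha + beta                   # linear combinations are fine
try:
    bad = alpha * beta                  # what a cheating prover would need
except TypeError as e:
    print("blocked:", e)
```

This is exactly the restriction that the NILP abstraction captures: the prover's $\Pi$ can only take linear combinations of the CRS, never products of two CRS elements.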
Notice that $\sigma = ([\sigma_1]_1, [\sigma_2]_2)$, not $\sigma = (\sigma_1, \sigma_2)$.

![image](https://hackmd.io/_uploads/rkklvLzyR.png)

In an elliptic curve group we can't multiply a point by a point -- it's not a field, so no multiplication operation is defined. Whatever $A, B, C$ the prover submits, the verifier can be sure that none of them contains terms like $\alpha\beta$, because the prover can't multiply $[\alpha]_1$ by $[\beta]_1$. NILP conveniently captures this property of Groth16 -- the impossibility of multiplying one value generated during the trusted setup by another value generated during the trusted setup. We later use this fact to prove that the prover can't generate a valid $A, B, C$ without knowing valid witnesses.

# A, B

In Groth16, a proof $\pi = (A, B, C)$ is valid if and only if it satisfies this equation.

![image](https://hackmd.io/_uploads/SksVxDG10.png)

What we want to show is that, except with negligible probability, the prover can't come up with $A, B, C$ that satisfy the above equation unless the prover knows $a_i$ that make $t(x)$ divide

$$\sum_{i=0}^m a_iu_i(x) \cdot \sum_{i=0}^m a_iv_i(x) - \sum_{i=0}^m a_iw_i(x)$$

To see this, we have to check the terms of the lhs $A \cdot B$ one by one. There are many terms whose coefficients have to be specific constants. We'll see what those constants are.

## $\alpha$ and $\beta$

First, we check the term involving $\alpha^2$. This term arises when both $A$ and $B$ contain $\alpha$ and we multiply the two. When we go back to the NILP matrix equation, we can see that $\alpha^2$ must be multiplied by both $A_\alpha$ and $B_\alpha$. When we look at the Groth16 verification equation, the rhs contains no term involving $\alpha^2$. This proves that, if the verification equation holds, then $A_\alpha$ and/or $B_\alpha$ must be 0. But when we look at the rhs again, it has a term $\alpha\beta$. Thus, it cannot be the case that both $A_\alpha$ and $B_\alpha$ are 0, because in that case $A \cdot B$ would not contain $\alpha$.
The verification equation would then fail to hold. We can confidently say that exactly one of $A_\alpha$ and $B_\alpha$ is 0.

### Assuming $B_\alpha = 0$

Let's think about the term involving $\alpha \beta$. This term arises because both $A$ and $B$ contain $\alpha$ and $\beta$, and we multiply the two. When we look at the NILP matrix equation, we can see that the term involving $\alpha \beta$ must look like $\alpha\beta(A_\alpha B_\beta + A_\beta B_\alpha)$. When we look at the rhs of the Groth16 verification equation, it contains $\alpha\beta$ with coefficient 1. This proves that, if the Groth16 verification equation holds, $A_\alpha B_\beta + A_\beta B_\alpha = 1$. By the assumption $B_\alpha = 0$, we can say that $A_\alpha B_\beta = 1$. This means $A_\alpha \neq 0$ and $B_\beta \neq 0$.

Next up, we look at the term involving $\beta^2$. The lhs of the Groth16 verification equation, $A \cdot B$, contains a term involving $\beta^2$, because both $A$ and $B$ contain a term involving $\beta$. The term involving $\beta^2$ must look like $A_\beta B_\beta \beta^2$. This must equal 0 because the rhs does not contain any term involving $\beta^2$. Considering we have already shown that $B_\beta \neq 0$, we get $A_\beta = 0$.

### Assuming $A_\alpha = 0$

Let's think about the term involving $\alpha \beta$. As before, the term involving $\alpha \beta$ must look like $\alpha\beta(A_\alpha B_\beta + A_\beta B_\alpha)$, and since the rhs of the Groth16 verification equation contains $\alpha\beta$ with coefficient 1, we again get $A_\alpha B_\beta + A_\beta B_\alpha = 1$. By the assumption $A_\alpha = 0$, we can say that $A_\beta B_\alpha = 1$. This means $A_\beta \neq 0$ and $B_\alpha \neq 0$.

Next up, we look at the term involving $\beta^2$.
The lhs of the Groth16 verification equation, $A \cdot B$, contains a term involving $\beta^2$, because both $A$ and $B$ contain a term involving $\beta$. The term involving $\beta^2$ must look like $A_\beta B_\beta \beta^2$. This must equal 0 because the rhs does not contain any term involving $\beta^2$. Considering we have already shown that $A_\beta \neq 0$, we get $B_\beta = 0$.

### Summing it up

We have shown that

$$(A_\alpha = 0 \lor B_\alpha = 0) \land \lnot(A_\alpha = 0 \land B_\alpha = 0) \land ((A_\alpha = 0 \land B_\beta = 0) \lor (B_\alpha = 0 \land A_\beta = 0))$$

This shows that $A$ cannot contain both $\alpha$ and $\beta$; $A$ has to contain exactly one of them. Likewise, $B$ cannot contain both $\alpha$ and $\beta$; $B$ has to contain whichever of $\alpha$ and $\beta$ is not in $A$. Considering that both $\alpha$ and $\beta$ are randomly sampled from the same distribution, without loss of generality, I will call the random challenge in $A$ $\alpha$, and the random challenge in $B$ $\beta$, from now on.

## $\delta$

### $\alpha\delta^{-1}$

The lhs $A \cdot B$ contains the terms

$$A_\alpha\alpha\left( \sum_{i=l+1}^m B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} B_{t,i}\frac{x^it(x)}{\delta}\right) = 0$$

We know this equals 0 because the rhs does not contain any term involving $\alpha\delta^{-1}$. We already know $\alpha \neq 0$ and $A_\alpha \neq 0$. Thus we can say the rightmost factor is 0.

$$\sum_{i=l+1}^m B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} B_{t,i}\frac{x^it(x)}{\delta} = 0$$

I first thought: doesn't the multiplication below also produce terms involving $\alpha\delta^{-1}$?

$$\left(\sum_{i=l+1}^mA_{uvw,i}\frac{\alpha v_i(x)}{\delta}\right)\left(TermsThatComeFromB\right)$$

But surprisingly this is not the case. I'll show the reasoning in the $\delta^{-2}$ section.
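This kind of term-by-term bookkeeping can be mechanized. Below is a tiny symbolic sketch (all names and values are invented for illustration) that multiplies two linear combinations of CRS monomials and reads off the coefficient of a given cross-term, the same way we read off the $\alpha\delta^{-1}$ coefficient above:

```python
# Represent a proof element as {monomial: coeff}, where a monomial is a
# sorted tuple of symbol names (inverse powers get names like 'delta^-1').
from collections import defaultdict

def sym_mul(f, g):
    """Multiply two {monomial: coeff} dicts and collect like monomials."""
    out = defaultdict(int)
    for mf, cf in f.items():
        for mg, cg in g.items():
            out[tuple(sorted(mf + mg))] += cf * cg
    return dict(out)

# Toy A = 2*alpha + 3*x and B = 5*beta + 7*(t(x)/delta)
A = {('alpha',): 2, ('x',): 3}
B = {('beta',): 5, ('delta^-1', 't'): 7}

AB = sym_mul(A, B)
# The cross-term alpha * t(x)/delta appears with coefficient 2*7 = 14 --
# exactly the kind of alpha*delta^-1 term the rhs forces to vanish.
print(AB[('alpha', 'delta^-1', 't')])  # prints: 14
```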
### $\beta\delta^{-1}$

The lhs $A \cdot B$ contains the terms

$$B_\beta\beta\left( \sum_{i=l+1}^m A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} A_{t,i}\frac{x^it(x)}{\delta}\right) = 0$$

We know this equals 0 because the rhs does not contain any term involving $\beta\delta^{-1}$. We already know $\beta \neq 0$ and $B_\beta \neq 0$. Thus we can say the rightmost factor is 0.

$$\sum_{i=l+1}^m A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} A_{t,i}\frac{x^it(x)}{\delta} = 0$$

I first thought: doesn't the multiplication below also produce terms involving $\beta\delta^{-1}$?

$$\left(\sum_{i=l+1}^mB_{uvw,i}\frac{\beta u_i(x)}{\delta}\right)\left(TermsThatComeFromA\right)$$

But surprisingly this is not the case. I'll show the reasoning in the $\delta^{-2}$ section.

### $\delta^{-2}$

The term involving $\delta^{-2}$ in the lhs $A \cdot B$ looks like this:

$$\left( \sum_{i=l+1}^m A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} A_{t,i}\frac{x^it(x)}{\delta}\right)\left( \sum_{i=l+1}^m B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} B_{t,i}\frac{x^it(x)}{\delta}\right) = 0$$

We know this equals 0 because the rhs does not have any term involving $\delta^{-2}$. This shows that either the left factor or the right factor is 0. But surprisingly, from here we can show that both the left factor and the right factor have to be 0.

#### Assuming that the left factor is 0

$$\left( \sum_{i=l+1}^m A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} A_{t,i}\frac{x^it(x)}{\delta}\right) = 0$$

If we see this as a polynomial in $\alpha, \beta, \delta$, we can use the Schwartz-Zippel lemma to say that the below holds except with negligible probability.

$$\sum_{i=l+1}^mA_{uvw,i}\frac{\alpha v_i(x)}{\delta} = 0$$

This is exactly the coefficient of the terms that had to be 0 in the $\alpha\delta^{-1}$ section.
Thus we can say that the left factor being 0 implies that the right factor is 0.

#### Assuming that the right factor is 0

$$\left( \sum_{i=l+1}^m B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} B_{t,i}\frac{x^it(x)}{\delta}\right) = 0$$

If we see this as a polynomial in $\alpha, \beta, \delta$, we can use the Schwartz-Zippel lemma to say that the below holds except with negligible probability.

$$\sum_{i=l+1}^mB_{uvw,i}\frac{\beta u_i(x)}{\delta} = 0$$

This is exactly the coefficient of the terms that had to be 0 in the $\beta\delta^{-1}$ section. Thus we can say that the right factor being 0 implies that the left factor is 0.

#### Summing it up

We have shown that

- the left factor = 0 => the right factor = 0
- the right factor = 0 => the left factor = 0
- the left factor = 0 $\lor$ the right factor = 0

This means that both the left factor and the right factor must be 0.

$$\sum_{i=l+1}^m A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} A_{t,i}\frac{x^it(x)}{\delta} = 0$$

$$\sum_{i=l+1}^m B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\delta} + \sum_{i=0}^{n-2} B_{t,i}\frac{x^it(x)}{\delta} = 0$$

## $\gamma$

### $\alpha\gamma^{-1}$

The lhs $A \cdot B$ contains the terms

$$A_\alpha\alpha\left( \sum_{i=0}^l B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma}\right) = 0$$

We know this equals 0 because the rhs does not contain any term involving $\alpha\gamma^{-1}$. We already know $\alpha \neq 0$ and $A_\alpha \neq 0$. Thus we can say the rightmost factor is 0.

$$\sum_{i=0}^l B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} = 0$$

I first thought: doesn't the multiplication below also produce terms involving $\alpha\gamma^{-1}$?

$$\left(\sum_{i=0}^lA_{uvw,i}\frac{\alpha v_i(x)}{\gamma}\right)\left(TermsThatComeFromB\right)$$

But surprisingly this is not the case. I'll show the reasoning in the $\gamma^{-2}$ section.
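The Schwartz-Zippel lemma invoked in the $\delta^{-2}$ argument can also be sanity-checked empirically. The sketch below uses a toy prime and degree (both invented for illustration; Groth16's scalar field is roughly 255 bits, which makes the bound $d/|\mathbb{F}|$ negligible): a nonzero degree-$d$ polynomial evaluates to 0 at a uniformly random field element with probability at most $d/|\mathbb{F}|$.

```python
# Empirical check of the Schwartz-Zippel bound over a toy field.
import random

P = 10007   # toy prime field size
d = 5       # degree of the test polynomial

def eval_poly(coeffs, x, p=P):
    """Horner evaluation of a polynomial (coefficients low-degree first)
    over GF(p)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

random.seed(0)
coeffs = [random.randrange(1, P) for _ in range(d + 1)]  # nonzero polynomial

trials = 20000
zeros = sum(eval_poly(coeffs, random.randrange(P)) == 0 for _ in range(trials))
print(f"zero rate {zeros / trials:.5f} vs Schwartz-Zippel bound {d / P:.5f}")
```

So treating the factors above as polynomials in $\alpha, \beta, \delta$ and observing that they vanish at the randomly sampled challenge point lets us conclude they vanish identically, except with negligible probability.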
### $\beta\gamma^{-1}$

The lhs $A \cdot B$ contains the terms

$$B_\beta\beta\left( \sum_{i=0}^l A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma}\right) = 0$$

We know this equals 0 because the rhs does not contain any term involving $\beta\gamma^{-1}$. We already know $\beta \neq 0$ and $B_\beta \neq 0$. Thus we can say the rightmost factor is 0.

$$\sum_{i=0}^l A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} = 0$$

I first thought: doesn't the multiplication below also produce terms involving $\beta\gamma^{-1}$?

$$\left(\sum_{i=0}^lB_{uvw,i}\frac{\beta u_i(x)}{\gamma}\right)\left(TermsThatComeFromA\right)$$

But surprisingly this is not the case. I'll show the reasoning in the $\gamma^{-2}$ section.

### $\gamma^{-2}$

The term involving $\gamma^{-2}$ in the lhs $A \cdot B$ looks like this:

$$\left(\sum_{i=0}^l A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma}\right)\left(\sum_{i=0}^l B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma}\right) = 0$$

We know this equals 0 because the rhs does not have any term involving $\gamma^{-2}$. This shows that either the left factor or the right factor is 0. But surprisingly, from here we can show that both the left factor and the right factor have to be 0.

#### Assuming that the left factor is 0

$$\sum_{i=0}^l A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} = 0$$

If we see this as a polynomial in $\alpha, \beta, \gamma$, we can use the Schwartz-Zippel lemma to say that the below holds except with negligible probability.

$$\sum_{i=0}^lA_{uvw,i}\frac{\alpha v_i(x)}{\gamma} = 0$$

This is exactly the coefficient of the terms that had to be 0 in the $\alpha\gamma^{-1}$ section. Thus we can say that the left factor being 0 implies that the right factor is 0.
#### Assuming that the right factor is 0

$$\sum_{i=0}^l B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} = 0$$

If we see this as a polynomial in $\alpha, \beta, \gamma$, we can use the Schwartz-Zippel lemma to say that the below holds except with negligible probability.

$$\sum_{i=0}^lB_{uvw,i}\frac{\beta u_i(x)}{\gamma} = 0$$

This is exactly the coefficient of the terms that had to be 0 in the $\beta\gamma^{-1}$ section. Thus we can say that the right factor being 0 implies that the left factor is 0.

#### Summing it up

We have shown that

- the left factor = 0 => the right factor = 0
- the right factor = 0 => the left factor = 0
- the left factor = 0 $\lor$ the right factor = 0

This means that both the left factor and the right factor must be 0.

$$\sum_{i=0}^l A_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} = 0$$

$$\sum_{i=0}^l B_{uvw,i}\frac{\beta u_i(x) + \alpha v_i(x) + w_i(x)}{\gamma} = 0$$

## AB Conclusion

Removing all the terms we have shown to be 0, we can say

$$A \cdot B = (A_\alpha\alpha + \sum_{i=0}^{n-1} A_{x,i} x^i + A_\delta \delta)(B_\beta\beta + \sum_{i=0}^{n-1} B_{x,i} x^i + B_\delta \delta) = \alpha\beta + \sum_{i=0}^l a_i(\beta u_i(x) + \alpha v_i(x) + w_i(x)) + C \cdot \delta$$

(The terms $A_\gamma\gamma$ and $B_\gamma\gamma$ also vanish: $A_\gamma\gamma \cdot B_\beta\beta$ would produce a $\beta\gamma$ term and $B_\gamma\gamma \cdot A_\alpha\alpha$ an $\alpha\gamma$ term, and the rhs contains neither.)

# C

Moving the rhs over to the left, we get

$$(A_\alpha\alpha + \sum_{i=0}^{n-1} A_{x,i} x^i + A_\delta \delta)(B_\beta\beta + \sum_{i=0}^{n-1} B_{x,i} x^i + B_\delta \delta) - \alpha\beta - \sum_{i=0}^l a_i(\beta u_i(x) + \alpha v_i(x) + w_i(x)) - C \cdot \delta = 0$$

This equation determines how the prover must generate $C$. To see this, we have to view the lhs of the above equation as a polynomial in $\alpha, \beta, \delta$. By the Schwartz-Zippel lemma, we can say that the below 3 statements hold except with negligible probability.
- The term involving only $\alpha$ out of the 3 random challenges = 0
- The term involving only $\beta$ out of the 3 random challenges = 0
- The term involving none of the 3 random challenges = 0

## The term involving only $\alpha$

$$A_\alpha\alpha \left( \sum_{i=0}^{n-1} B_{x,i} x^i \right) - \alpha\sum_{i=0}^l a_i v_i(x) - \alpha\sum_{i=l+1}^mC_{uvw,i} v_i(x) = 0$$

The last term arises from $C\cdot \delta$. It must be multiplied by $\alpha$ by definition, and since the Groth16 verification equation multiplies $C$ by $\delta$, it also must be multiplied by $\delta^{-1}$ to cancel that out. Out of the CRS, only the sum below is multiplied by both $\alpha$ and $\delta^{-1}$. Thus we know that the above equation must hold.

$$\sum_{i=l+1}^mC_{uvw,i} \frac{\alpha v_i(x)}{\delta}$$

## The term involving only $\beta$

$$B_\beta\beta \left( \sum_{i=0}^{n-1} A_{x,i} x^i \right) - \beta\sum_{i=0}^l a_i u_i(x) - \beta\sum_{i=l+1}^mC_{uvw,i} u_i(x) = 0$$

The last term arises from $C\cdot \delta$. It must be multiplied by $\beta$ by definition, and since the Groth16 verification equation multiplies $C$ by $\delta$, it also must be multiplied by $\delta^{-1}$ to cancel that out. Out of the CRS, only the sum below is multiplied by both $\beta$ and $\delta^{-1}$. Thus we know that the above equation must hold.

$$\sum_{i=l+1}^mC_{uvw,i} \frac{\beta u_i(x)}{\delta}$$

## The term involving no random challenge

$$\left( \sum_{i=0}^{n-1} A_{x,i} x^i \right)\left( \sum_{i=0}^{n-1} B_{x,i} x^i \right) - \sum_{i=0}^l a_i w_i(x) - \sum_{i=l+1}^m C_{uvw,i}w_i(x) - \sum_{i=0}^{n-2}C_{t,i}x^it(x) = 0$$

The last two terms arise from $C\cdot \delta$. Out of the CRS, only the sums below are multiplied by $\delta^{-1}$ and by neither $\alpha$ nor $\beta$. Thus we know that the above equation must hold.

$$\sum_{i=l+1}^m C_{uvw,i}\frac{w_i(x)}{\delta} + \sum_{i=0}^{n-2}C_{t,i}\frac{x^it(x)}{\delta}$$

## C Conclusion

We have shown that the below 3 equations hold.
$$A_\alpha \left( \sum_{i=0}^{n-1} B_{x,i} x^i \right) = \sum_{i=0}^l a_i v_i(x) + \sum_{i=l+1}^mC_{uvw,i} v_i(x)$$

$$B_\beta \left( \sum_{i=0}^{n-1} A_{x,i} x^i \right) = \sum_{i=0}^l a_i u_i(x) + \sum_{i=l+1}^mC_{uvw,i} u_i(x)$$

$$\left( \sum_{i=0}^{n-1} A_{x,i} x^i \right)\left( \sum_{i=0}^{n-1} B_{x,i} x^i \right) = \sum_{i=0}^l a_i w_i(x) + \sum_{i=l+1}^m C_{uvw,i}w_i(x) + \sum_{i=0}^{n-2}C_{t,i}x^it(x)$$

![aoeu](https://hackmd.io/_uploads/r1IzUiu-R.svg)

$C_{uvw,i}$ determines $A_{x,i}$ and $B_{x,i}$. The product of $A_{x,i}$ and $B_{x,i}$ determines $C_{t,i}$. The witnesses within the rhs constrain the lhs; in turn, the product in the lhs constrains the rhs back. To make the 3 equations hold at the same time, we have to assign $\Pi$ in a way that the below holds:

$$C_{uvw,i} = a_i \hspace{1cm} \forall i \in \{l+1, \dots, m\}$$

$$\sum_{i=0}^{n-1} A_{x,i}x^i = \sum_{i=0}^m a_i u_i(x)$$

$$\sum_{i=0}^{n-1} B_{x,i}x^i = \sum_{i=0}^m a_i v_i(x)$$

$$\sum_{i=0}^{n-2}C_{t,i}x^it(x) = h(x)t(x)$$

This is exactly how the honest prover behaves!

![image](https://hackmd.io/_uploads/r1WCXk6kA.png)

Now we can see why even a malicious prover has to follow the proof generation formula stated in the original Groth16 paper! We can also see that what the lhs of the Groth16 verification equation is doing is multiplying the left wires of the multiplication gates by the right wires of the multiplication gates, just like in the QAP verification equation:

$$\sum_{i=0}^m a_iu_i(x) \cdot \sum_{i=0}^m a_iv_i(x) - \sum_{i=0}^m a_iw_i(x) \equiv 0 \mod t(x)$$

All we have left to do is include the output of each multiplication gate, $\sum_{i=0}^ma_iw_i(x)$, in the rhs $C$, along with $h(x)t(x)$, which accounts for the $\equiv 0 \mod t(x)$ part. Thus, we can say that unless the prover knows valid $a_i$ that satisfy the QAP, Groth16 verification does not pass.
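The QAP divisibility statement the proof reduces to can be checked directly with polynomial long division. The sketch below uses tiny invented polynomials over a toy prime field (not a real circuit): it verifies that $t(x)$ divides $\sum a_iu_i(x) \cdot \sum a_iv_i(x) - \sum a_iw_i(x)$, which is exactly what a satisfying witness guarantees.

```python
# Toy QAP divisibility check over GF(P); all polynomials are invented.
P = 97  # toy prime field

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % P
    return out

def poly_sub(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a - b) % P for a, b in zip(f, g)]

def poly_divmod(f, t):
    """Long division of f by t over GF(P); returns (quotient, remainder)."""
    f = f[:]
    q = [0] * max(1, len(f) - len(t) + 1)
    inv_lead = pow(t[-1], P - 2, P)  # modular inverse of leading coeff
    for i in range(len(f) - len(t), -1, -1):
        q[i] = f[i + len(t) - 1] * inv_lead % P
        for j, tj in enumerate(t):
            f[i + j] = (f[i + j] - q[i] * tj) % P
    return q, f[: len(t) - 1]

# Combined witness polynomials (coefficients low-degree first), chosen so
# that u*v - w vanishes at x = 1, the only root of t(x) = x - 1.
u = [2, 3]   # u(x) = 2 + 3x,  u(1) = 5
v = [1, 1]   # v(x) = 1 + x,   v(1) = 2
w = [3, 7]   # w(x) = 3 + 7x,  w(1) = 10 = u(1) * v(1)
t = [96, 1]  # t(x) = x - 1  (96 = -1 mod 97)

h, r = poly_divmod(poly_sub(poly_mul(u, v), w), t)
print("h:", h, "remainder:", r)  # remainder is all zeros iff t divides
```

Here the honest prover's $h(x)$ is the quotient, and a zero remainder is precisely the "$\equiv 0 \mod t(x)$" condition above.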
# Memo

## Soundness

A prover can't come up with a valid proof unless a valid witness exists in this universe.

## Knowledge Soundness

A prover can't come up with a valid proof unless he knows a valid witness. (The assumption here is that the prover has not seen any other proof.)

## Simulation Extractable

A prover can't come up with a valid proof unless he knows a valid witness, even if he has seen other valid proofs.