Disclaimer: This documentation was written sometime in 2020. It is intended to give readers a high-level understanding. The codebase is the canonical source of truth, and over time this document may fall behind the implementation details of the code.
In order for PLONK to be honest-verifier zero-knowledge, we require witness indistinguishability. Given a randomly-generated proof transcript and a set of random blinding factors, a verifier should be able to derive a satisfying witness; that is, the proof transcript must be indistinguishable from a transcript generated from a structured witness.
This requires the addition of 2 random blinding factors into each witness polynomial, and 3 for the permutation polynomial (the permutation polynomial is evaluated at two distinct points vs one for the witnesses).
For Turbo Plonk, each witness requires 3 random blinding scalars because, like the permutation polynomial, the witnesses are evaluated at two distinct points.
In the paper, we add zero-knowledge by generating random blinding polynomials of degree 1 (for the witnesses) and degree 2 (for the permutation polynomial), computing their products with the vanishing polynomial $Z_H(X)$, and adding the results into the respective polynomials.
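Concretely, the paper's construction (shown for one wire polynomial $a(X)$ and the permutation polynomial $z(X)$, with blinding scalars $b_1, \ldots, b_9$) is:

$$\bar{a}(X) = (b_1 X + b_2)\, Z_H(X) + a(X), \qquad \bar{z}(X) = (b_7 X^2 + b_8 X + b_9)\, Z_H(X) + z(X)$$

Because $Z_H(X)$ vanishes at every point of $H$, the blinding terms do not change any of the values constrained by the protocol's identities.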
The subgroup $H$

For efficiency reasons, we define the subgroup $H$ to be a multiplicative subgroup of the scalar field of size $n = 2^k$: the set of $n$'th roots of unity $\{1, \omega, \omega^2, \ldots, \omega^{n-1}\}$, where $\omega^n = 1$. We use elliptic curves specifically constructed such that their scalar fields contain roots of unity of large power-of-two order. E.g. the BN254 curve that we use has valid roots of unity for all subgroup sizes $2^k$, for $k \le 28$. This restriction forces the circuit size, $n$, to be a power of two.
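As a small illustration (the helper below is hypothetical, not part of the codebase), rounding a circuit up to the nearest valid domain size might look like:

```cpp
#include <cstddef>
#include <stdexcept>

// Round a circuit of `num_gates` gates up to the nearest power-of-two
// evaluation domain size. BN254's scalar field only contains roots of unity
// of order 2^k for k <= 28, so larger circuits are unrepresentable.
size_t compute_domain_size(size_t num_gates)
{
    size_t n = 1;
    size_t k = 0;
    while (n < num_gates) {
        n <<= 1;
        ++k;
    }
    if (k > 28) {
        throw std::runtime_error("circuit too large for BN254 roots of unity");
    }
    return n;
}
```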
The addition of random degree-2 blinding polynomials will increase the degree of the permutation polynomial from $n - 1$ to $n + 2$. To perform fast Fourier transforms of a polynomial, we require an evaluation domain whose size is a power of two greater than the polynomial's degree. If a polynomial's degree grows from $n - 1$ to $n + 2$, the required domain size therefore jumps from $n$ to the next power of two, $2n$.

To summarise: adding zero knowledge in this manner would double the evaluation domain size of the FFT transformations required by the prover.
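A worked instance of the degree arithmetic, for an illustrative circuit size $n = 2^{10} = 1024$ (using the paper's blinding scalars $b_7, b_8, b_9$):

$$\deg\big(z(X) + (b_7 X^2 + b_8 X + b_9)\, Z_H(X)\big) = n + 2 = 1026 > 1024 \;\implies\; \text{FFT domain size } 2n = 2048.$$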
Instead of increasing the degree of our witness polynomials, we instead modify the vanishing polynomial so that it no longer vanishes over the final few elements of $H$. For example, consider a subgroup $H$ of order $n$ with generator $\omega$, and a modified vanishing polynomial with the final 3 roots of unity removed:

$$Z_H^*(X) = \frac{X^n - 1}{(X - \omega^{n-1})(X - \omega^{n-2})(X - \omega^{n-3})}$$
We can now fill the final 3 gates of our circuit with random witnesses and still produce a satisfying proof.
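The following toy sketch evaluates $Z_H^*(z)$ at a point. It is illustrative only: barretenberg uses its own `fr` field type over BN254's scalar field, whereas this stand-in uses the Goldilocks prime $2^{64} - 2^{32} + 1$ (chosen because $p - 1$ is divisible by $2^{32}$, so power-of-two subgroups of roots of unity exist):

```cpp
#include <cstdint>

constexpr uint64_t P = 18446744069414584321ULL; // Goldilocks prime 2^64 - 2^32 + 1

uint64_t mul_mod(uint64_t a, uint64_t b) { return (uint64_t)((__uint128_t)a * b % P); }
uint64_t sub_mod(uint64_t a, uint64_t b) { return a >= b ? a - b : a + (P - b); }
uint64_t pow_mod(uint64_t base, uint64_t exp)
{
    uint64_t result = 1;
    for (; exp != 0; exp >>= 1, base = mul_mod(base, base)) {
        if (exp & 1) {
            result = mul_mod(result, base);
        }
    }
    return result;
}
uint64_t inv_mod(uint64_t a) { return pow_mod(a, P - 2); } // Fermat's little theorem

// Z*_H(z) = (z^n - 1) / ((z - w^{n-1})(z - w^{n-2})(z - w^{n-3})),
// where w generates the order-n subgroup H.
uint64_t modified_vanishing_poly(uint64_t z, uint64_t omega, uint64_t n)
{
    uint64_t numerator = sub_mod(pow_mod(z, n), 1);
    uint64_t denominator = 1;
    for (uint64_t i = 1; i <= 3; ++i) {
        denominator = mul_mod(denominator, sub_mod(z, pow_mod(omega, n - i)));
    }
    return mul_mod(numerator, inv_mod(denominator));
}
```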
Consider a Plonk circuit with $n - 3$ gates, embedded in an evaluation domain of size $n$, with a witness polynomial

$$w(X) = \sum_{i=0}^{n-4} w_i L_i(X)$$

where $L_i(X)$ is the $i$'th Lagrange basis polynomial over $H$ and $w_i$ is the $i$'th witness value.

We define our new zero-knowledge witness polynomial as

$$\bar{w}(X) = w(X) + \sum_{i=n-3}^{n-1} r_i L_i(X)$$

where $r_{n-3}, r_{n-2}, r_{n-1}$ are uniformly random field elements.

Similarly, we can transform the permutation polynomial $Z(X)$ by adding random multiples of the final 3 Lagrange polynomials. Using the modified vanishing polynomial, all values of $\bar{w}(X)$ and $Z(X)$ at the final 3 roots of unity are unconstrained, so this added randomness does not affect the satisfiability of the circuit's identities.

Note: To add randomness to witness polynomials held in Lagrange (evaluation) form, it is sufficient to assign random values directly to the final 3 evaluations, i.e. set $w_i = r_i$ for $i \in \{n - 3, n - 2, n - 1\}$, where each $r_i$ is a uniformly random field element.
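A minimal sketch of the Lagrange-form approach described in the note (the `fr` stand-in type and the function name are hypothetical; production code must draw the random elements from a cryptographically secure source):

```cpp
#include <cstdint>
#include <random>
#include <vector>

using fr = uint64_t; // stand-in for a prime-field element type

// Overwrite the final 3 Lagrange-form evaluations of a witness polynomial
// with random field elements. The modified vanishing polynomial ensures
// these 3 positions are unconstrained by the circuit identities.
void add_zk_blinding(std::vector<fr>& witness_lagrange_form)
{
    // NOTE: std::mt19937_64 is NOT cryptographically secure; it is used here
    // only to keep the sketch self-contained.
    std::mt19937_64 rng{ std::random_device{}() };
    const size_t n = witness_lagrange_form.size();
    for (size_t i = n - 3; i < n; ++i) {
        witness_lagrange_form[i] = rng();
    }
}
```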
The following code changes are required:

- `polynomial_arithmetic::divide_by_pseudo_vanishing_polynomial` must be modified to support the modified form of the vanishing polynomial
- `prover::execute_preamble_round` must be modified to add randomness into the witness polynomials
- `permutation_widget::compute_round_commitments` (in `prover/proof_system/widgets/random_widgets/permutation_widget_impl.hpp`) must be modified to add randomness into the permutation polynomial
- `polynomial_arithmetic::get_lagrange_evaluations` must be modified such that the vanishing polynomial returned is the new modified vanishing polynomial

`divide_by_pseudo_vanishing_polynomial` should be renamed to `divide_by_vanishing_polynomial`: semantically, the polynomial we divide by is now the protocol's actual vanishing polynomial, not a 'pseudo' stand-in. Its callers, `permutation_widget::compute_quotient_contribution` and `permutation_widget::compute_quotient_evaluation_contribution`, must be updated to match.

In addition, `polynomial_arithmetic::get_lagrange_evaluations` returns a struct `lagrange_evaluations` (in `polynomial_arithmetic.hpp`) with a member variable `l_n_minus_1`. We should rename this to `l_end`, and change `polynomial_arithmetic::get_lagrange_evaluations` to compute this value correctly (i.e. for an evaluation point $z$, `l_end` should be the evaluation of the Lagrange polynomial at the final gate that is still constrained, not $L_{n-1}(z)$). For consistency, `l_1` should be renamed to `l_start`.

Ideally, the above modifications should be performed in a generic manner, where the number of roots cut out of the vanishing polynomial is defined in a single place rather than hard-coded.
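One possible shape for the generic parameterisation (the constant and helper names here are illustrative, not existing code):

```cpp
#include <cstddef>

// Single source of truth for how many roots of unity are excluded from the
// vanishing polynomial (and therefore how many gates hold pure randomness).
constexpr size_t NUM_ROOTS_CUT_OUT_OF_VANISHING_POLYNOMIAL = 3;

// Downstream code written against the constant instead of a hard-coded
// literal: e.g. the index of the last gate still constrained by the circuit.
constexpr size_t last_constrained_gate_index(size_t n)
{
    return n - NUM_ROOTS_CUT_OUT_OF_VANISHING_POLYNOMIAL - 1;
}
```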
Add tests into `polynomial_arithmetic_test.cpp` that validate the following: `divide_by_vanishing_polynomial` correctly computes the quotient of a polynomial that vanishes over all but the final 3 roots of unity.
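A possible shape for such a test (GTest-style skeleton; the setup steps are described in comments rather than real calls, since the exact helper APIs are not specified here):

```cpp
#include <gtest/gtest.h>

// Skeleton only: the construction helpers referenced in the comments are
// hypothetical, not barretenberg's actual API.
TEST(polynomial_arithmetic, divide_by_vanishing_polynomial)
{
    // 1. Sample a random quotient polynomial q(X).
    // 2. Compute p(X) = q(X) * Z*_H(X), where Z*_H(X) excludes the final
    //    3 roots of unity.
    // 3. Run divide_by_vanishing_polynomial on p(X).
    // 4. Evaluate both the result and q(X) at a random point and
    //    EXPECT_EQ the two evaluations.
}
```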
The scalar multiplications performed by the prover are implemented using a work queue. The reason is to improve the performance of our web-based provers. Most internet browsers do not support multithreaded WASM; to utilize multi-threading in this context, we instead push computationally-heavy tasks into a work queue. We then, using our JavaScript SDK, spawn multiple processes (using multiple threads) to perform the tasks in the work queue.
Because of this, we don't want to change the underlying data structures utilized by the work queue, as any modification of the data structure will require changes to our JavaScript SDK.
We need to enable the following: when performing a work item of type `SCALAR_MULTIPLICATION`, the ability to conditionally alter the number of points being multiplied.
In `work_queue.hpp`, in `process_queue`, we can see that by default we call `pippenger_unsafe` to perform a multiexponentiation of size `n` (which is equal to `small_domain.size`). We need to be able to call `pippenger_unsafe` over a size of `n + 1`, without altering the `work_item` struct (line 23).
We can do this by utilizing the variable `work_item::constant`. This variable is normally used when performing Fourier transforms via the work queue, so we can repurpose it here.
When the prover is pushing tasks onto the work queue to compute the quotient polynomial, we need to (when the width is 3) set `work_item::constant` to equal `1` for the final tasks pushed onto the queue (there will be 3).
In `work_queue::process_queue` we can then check the value of `constant`. If it is `1`, we call `pippenger_unsafe` with a size parameter of `small_domain.size + 1` instead of `small_domain.size`.
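Putting the two halves together, a minimal sketch of the mechanism (simplified, hypothetical stand-ins for the real structures in `work_queue.hpp`; field and function names are illustrative):

```cpp
#include <cstddef>
#include <vector>

enum class WorkType { SCALAR_MULTIPLICATION /* , FFT, ... */ };

struct work_item {
    WorkType type;
    const void* mul_scalars; // scalars for the multiexponentiation
    size_t constant;         // normally an FFT parameter; repurposed as a flag:
                             // 1 => multiexp over n + 1 points, 0 => n points
};

struct work_queue_sketch {
    size_t small_domain_size;
    std::vector<work_item> items;

    // Prover side: the final quotient-polynomial tasks are pushed with
    // constant = 1, requesting one extra point in the multiexponentiation.
    void push_quotient_task(const void* scalars)
    {
        items.push_back({ WorkType::SCALAR_MULTIPLICATION, scalars, 1 });
    }

    // Consumer side: mirror of the check described above for process_queue.
    void process_queue()
    {
        for (const work_item& item : items) {
            if (item.type == WorkType::SCALAR_MULTIPLICATION) {
                size_t num_points =
                    (item.constant == 1) ? small_domain_size + 1 : small_domain_size;
                // pippenger_unsafe(item.mul_scalars, /*points=*/..., num_points);
                (void)num_points; // placeholder: real code performs the multiexp here
            }
        }
    }
};
```

The key design constraint is that no field is added to `work_item`, so the JavaScript SDK that consumes the queue's data structures needs no changes.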