Goal: a distributed key pair \(P\) amongst \(n\) parties, such that any subset of parties above the threshold can use the corresponding secret key, while no smaller subset learns anything about it.
How: the basic version usually runs \(n\) instances of Verifiable Secret Sharing (VSS) in parallel.
Party \(i\) does the following: sample a random polynomial, publish commitments to its coefficients, and encrypt one share for each party (a minimal sketch below).
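As a minimal sketch of the dealing, here is a Feldman-style VSS in a toy Schnorr group (all parameters are illustrative stand-ins; the real protocol uses elliptic-curve groups):

```python
# Toy Feldman-style VSS dealing: share a secret with threshold t and publish
# commitments g^{a_k} to the polynomial coefficients so shares are verifiable.
# Toy Schnorr group (p = 23, q = 11, g = 2); the real protocol uses curves.
import secrets

p, q, g = 23, 11, 2   # g generates the subgroup of order q in Z_p^*

def deal(secret, t, n):
    # f(x) = secret + a_1 x + ... + a_t x^t over F_q; share for party j is f(j).
    coeffs = [secret % q] + [secrets.randbelow(q) for _ in range(t)]
    shares = {j: sum(a * pow(j, k, q) for k, a in enumerate(coeffs)) % q
              for j in range(1, n + 1)}
    commitments = [pow(g, a, p) for a in coeffs]  # public, binds the dealer
    return shares, commitments

def verify_share(j, share, commitments):
    # g^share must equal prod_k C_k^(j^k): the share lies on the committed f.
    rhs = 1
    for k, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(j, k, q), p) % p
    return pow(g, share, p) == rhs

shares, comms = deal(secret=7, t=2, n=5)
assert all(verify_share(j, s, comms) for j, s in shares.items())
```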
The goal is to have a proof that this dealing is correct.
We can do this in a SNARK!
One can write these checks as constraints. Each constraint is of the form \(\langle a, w \rangle \cdot \langle b, w \rangle = \langle c, w \rangle\), i.e. a rank-1 constraint over the witness vector \(w\).
Each proof can have public inputs and private inputs (witness), in \(F_r\)
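As a tiny illustration of this constraint form (a generic R1CS check in Python, not any particular library's API; the field modulus is a toy stand-in):

```python
# Check one R1CS constraint <a,w> * <b,w> = <c,w> over a witness vector w.
# Example: booleanity of w[1], i.e. w1 * (1 - w1) = 0, with w = (1, w1).
def satisfied(a, b, c, w, p):
    dot = lambda v: sum(x * y for x, y in zip(v, w)) % p
    return dot(a) * dot(b) % p == dot(c)

p = 17                                            # toy field modulus
for bit in (0, 1):
    assert satisfied((0, 1), (1, -1), (0, 0), (1, bit), p)
assert not satisfied((0, 1), (1, -1), (0, 0), (1, 2), p)  # 2 is not a bit
```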
For Groth16: the constraints live in \(F_r\), the scalar field of the proving curve.
This part is easy: the share arithmetic already happens in \(F_r\), so it is native to the circuit.
The curve arithmetic, however, lives over \(F_q\). The solution is to use a proof system whose constraint field is \(F_q\): with a 2-chain of curves, the scalar field of the outer curve equals the base field \(F_q\) of the inner curve.
Now we have 2 circuits: one over \(F_r\) and one over \(F_q\).
Now that we have circuits in \(F_q\), we can do elliptic curve point arithmetic easily.
For point addition (\(P + G\)) for example, the affine formulas \(\lambda = \frac{y_2 - y_1}{x_2 - x_1}\), \(x_3 = \lambda^2 - x_1 - x_2\), \(y_3 = \lambda(x_1 - x_3) - y_1\) translate into a handful of constraints.
Scalar multiplication (\(rG\)) is done with "double and add":
first transform \(r\) into bits, then iterate over the bits, doubling at each step and conditionally adding \(G\).
Transforming \(r \in F_r\) into bits is not "free", because we need to constrain that each "bit" in the witness really is boolean.
Each bit \(b_j\) needs the constraint \(b_j \cdot (1 - b_j) = 0\), plus one constraint \(\sum_j 2^j b_j = r\) to tie the bits back to the scalar.
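A runnable sketch of double-and-add on a toy curve (a classic textbook curve over \(F_{17}\), not BLS12-377; in the circuit the branch on each bit becomes a multiplexer constraint):

```python
# Double-and-add scalar multiplication on the toy curve y^2 = x^3 + 2x + 2
# over F_17, whose group has prime order 19 with generator G = (5, 1).
p, a = 17, 2
G = (5, 1)
INF = None  # point at infinity

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF                                        # P + (-P) = 0
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mul(r, P):
    # Walk the bits of r from most to least significant: double every
    # step, and add P when the bit is 1 -- exactly what the circuit encodes.
    R = INF
    for bit in bin(r)[2:]:
        R = add(R, R)
        if bit == '1':
            R = add(R, P)
    return R

assert scalar_mul(2, G) == add(G, G)
assert scalar_mul(19, G) is None        # G has order 19
```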
Each share is encrypted as \(\{rG,\ H(rPK_i) + s_i \} \in E(F_q) \times F_r\).
\(rG\) is a scalar multiplication, as in the commitment part.
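A sketch of this hashed-ElGamal style encryption, reusing `add`, `scalar_mul` and `G` from the double-and-add sketch above; SHA-256 stands in for Poseidon, and the single shared nonce \(r\) anticipates the optimization below:

```python
# Hashed-ElGamal encryption of the shares on the toy curve of prime order 19.
# SHA-256 is an illustrative stand-in for the Poseidon hash.
import hashlib, secrets

ORDER = 19  # order of G on the toy curve, stand-in for F_r's modulus

def hash_point(P):
    # Map a curve point (affine coordinates) to a scalar.
    x, y = P
    h = hashlib.sha256(f"{x},{y}".encode()).digest()
    return int.from_bytes(h, "big") % ORDER

def encrypt_shares(shares, pubkeys):
    # A single nonce r for all recipients: one rG in the circuit instead
    # of n scalar multiplications (the implemented optimization below).
    r = secrets.randbelow(ORDER - 1) + 1
    R = scalar_mul(r, G)
    cts = [(hash_point(scalar_mul(r, PK)) + s) % ORDER
           for PK, s in zip(pubkeys, shares)]
    return R, cts

def decrypt_share(R, ct, sk):
    # sk * R = sk * (rG) = r * PK, so the recipient recomputes the pad.
    return (ct - hash_point(scalar_mul(sk, R))) % ORDER

sk = 7
PK = scalar_mul(sk, G)
R, cts = encrypt_shares([4], [PK])
assert decrypt_share(R, cts[0], sk) == 4
```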
Problem 1: \(H(rPK_i) + s_i\) happens in \(F_r\), but the circuit is now in \(F_q\)!
Potential optimization #1:: look at which approach is cheapest.
Problem 2: hashing with Poseidon takes inputs in \(F_q\), but we need to hash a point in \(E(F_q)\).
Detail: we therefore need to compute \(H(x, y)\) for a point \((x, y) \in E(F_q)\).
However, point computations are done in projective form inside the circuit.
The library does the conversion to affine automatically, which costs some constraints.
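What that conversion amounts to: projective \((X, Y, Z)\) represents affine \((X/Z, Y/Z)\), and in a circuit the inverse is supplied as a witness and enforced by a multiplication constraint. A sketch with the toy modulus from above:

```python
# Projective -> affine before hashing: (X, Y, Z) represents (X/Z, Y/Z).
# In the circuit, the prover supplies w = Z^{-1} as a witness, and the
# constraints check Z * w = 1, x = X * w, y = Y * w.
p = 17  # toy field modulus, as in the sketches above

def to_affine(X, Y, Z):
    w = pow(Z, -1, p)        # witness supplied by the prover
    assert Z * w % p == 1    # the constraint the circuit actually enforces
    return (X * w % p, Y * w % p)

assert to_affine(10, 2, 2) == (5, 1)   # (10 : 2 : 2) is (5, 1) in affine
```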
Potential optimization #2:: hash directly in projective form
Optimization Implemented:: No need to have a distinct \(r\) for each encryption
The same prover is doing all the encryptions, so it doesn't matter if the same \(r\) is re-used.
(I think this is ok :D )
We go from \(n\) scalar multiplications to \(1\) for \(rG\) in the circuit.
Public inputs are the elements the verifier needs to provide.
In Groth16, the verifier needs to compute a large multi-scalar multiplication:
\(\sum_i a_iV_i\)
where \(a_i \in F_q\) are the public inputs and \(V_i \in E(F_q)\) are points from the verification key.
The more public inputs there are, the longer the verification time is.
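To make the linear cost visible, here is the naive version of that multi-scalar multiplication, reusing the toy-curve helpers from above (real verifiers batch this, but the work still grows with the number of inputs):

```python
def msm(scalars, points):
    # sum_i a_i * V_i : one scalar multiplication per public input, so the
    # verifier's work grows linearly with the number of public inputs.
    acc = None  # point at infinity
    for a_i, V_i in zip(scalars, points):
        acc = add(acc, scalar_mul(a_i, V_i))
    return acc

assert msm([2, 3], [G, G]) == scalar_mul(5, G)
```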
So far our public inputs would be:
Problem: going to (medium) scale, i.e. \(n = 1000\), means computing a multi-scalar multiplication of size at least \(3000\). That's too slow.
Solution: commit to all the inputs and give only that commitment as the public input.
Given we are using all the inputs anyway, a plain hash is enough as the commitment.
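A sketch of the idea, with SHA-256 again standing in for Poseidon (in the real circuit the same hash is recomputed over the witness and compared against the single public input):

```python
# Collapse all verifier-side inputs into one field element: the verifier
# hashes the inputs it already knows and passes only the digest as the
# single public input, so the MSM above has size 1.
import hashlib

def commit_inputs(inputs, p):
    data = b"".join(int(x).to_bytes(32, "big") for x in inputs)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p
```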
Problem: some inputs are in \(F_q\) (coefficients) but some are in \(F_r\)!
Solution: in the curve we are using, \(r < q\), so we can embed an \(F_r\) element inside an \(F_q\) element.
Status: a share is now represented both as bits AND as an \(F_q\) element.
We need extra constraints to make sure the two representations are equal.
This costs >256 constraints per scalar: passing from the embedded \(F_q\) element to the bit representation and checking equality.
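A sketch of that consistency check with toy moduli (in the real circuit every assert below is a constraint, and scalars are ~256 bits wide, hence the >256 count):

```python
# Embed an F_r element into F_q (valid since r < q), then check that the
# bit decomposition carried in the witness matches the embedded element.
r_mod, q_mod = 19, 97            # toy stand-ins with r < q
NBITS = r_mod.bit_length()

def check_consistency(s_q, bits):
    for b in bits:                       # one booleanity constraint per bit
        assert b * (1 - b) == 0
    recomposed = sum(b << j for j, b in enumerate(bits)) % q_mod
    assert recomposed == s_q             # equality with the F_q embedding

s = 13                                   # a share in F_r
bits = [(s >> j) & 1 for j in range(NBITS)]
check_consistency(s % q_mod, bits)       # the embedding is just s itself
```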
Cons:
Pros:
Potential optimization #3:: find a more compact representation of the public inputs and compare costs.
Very basic benchmark, done on an AWS machine with 48 cores and 192GB of memory.
That much memory is needed by the trusted setup.
Threshold is set each time at 50% of \(n\).
Curves: BLS12-377 for proofs on \(F_r\) and BW6 for proofs on \(F_q\)
For points in the witness, I don't check correct subgroup membership.
16 million for \(n = 800\)
600k for \(n = 800\)
8 min for \(n = 800\)
117ms for \(n = 800\), including the hash of the commitment
Regardless of how many participants we allow, we need:
Because we have only 1 public input, aggregating the proofs with SnarkPack can be efficient.
One proof per participant, all bundled together. Verification time will be bounded by computing the public-input commitment hash, though.
(from Rosario)
We could also still use Groth16, but do the DKG over a DLOG-based curve.
The JubJub curve is what Zcash uses (and many other places): it is defined over the scalar field of BLS12-381, so its arithmetic is native inside a circuit.
Assume we weight participants with different numbers of shares.
Ex: A has 3 shares, B has 10 shares, etc.
Solution 1: go up to thousands of shares in a single SNARK.
We distribute more shares to those with more weight.
Potential optimization #4:: use roots of unity and FFTs when doing the polynomial evaluation (sketched below).
Cons: a big SNARK if we stay in the Groth16 model.
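A sketch of optimization #4: if shares are dealt at the \(m\)-th roots of unity instead of at \(1, \dots, n\), a radix-2 number-theoretic FFT evaluates the polynomial at all of them at once (toy field \(F_{17}\), where \(2\) is a primitive 8th root of unity):

```python
# Evaluate a polynomial of degree < m at all m-th roots of unity in
# O(m log m) with a radix-2 NTT, instead of m separate evaluations.
P_MOD, M, W = 17, 8, 2     # toy field F_17; W is a primitive M-th root of 1

def ntt(coeffs, w=W, p=P_MOD):
    m = len(coeffs)
    if m == 1:
        return list(coeffs)
    even = ntt(coeffs[0::2], w * w % p, p)   # E(x) on the (m/2)-th roots
    odd = ntt(coeffs[1::2], w * w % p, p)    # O(x) on the (m/2)-th roots
    out, t = [0] * m, 1
    for k in range(m // 2):                  # f(x) = E(x^2) + x * O(x^2)
        out[k] = (even[k] + t * odd[k]) % p
        out[k + m // 2] = (even[k] - t * odd[k]) % p   # since w^(m/2) = -1
        t = t * w % p
    return out

# Cross-check against direct evaluation f(W^k).
f = [3, 1, 4, 1, 5, 9, 2, 6]   # polynomial coefficients in F_17
direct = [sum(c * pow(W, k * j, P_MOD) for j, c in enumerate(f)) % P_MOD
          for k in range(M)]
assert ntt(f) == direct
```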
Solution 2: Allow multiple SNARKs per prover
Cons: the granularity of the weights is defined by the number of SNARKs a prover can do.