
Zero Knowledge Proofs

Introduction

Zero Knowledge Proof (ZKP) protocols allow one party to convince another of some fact without revealing any information beyond the validity of the statement. I.e., a person "A", the "prover", convinces a person "B", the "verifier", that \(x\) possesses some specific property, but at the end of the protocol the verifier has no idea how to prove on its own that \(x\) indeed possesses that property. There exist many different flavours of these protocols. One criterion of classification is whether the scheme is interactive or non-interactive.

Nowadays, non-interactive ZKP schemes have attracted great interest, since they eliminate rounds of message exchange between the parties; this speeds up the protocol and reduces the chance of failure due to sending messages through channels that are, or may become, insecure. Several classes of non-interactive ZKPs fulfil this criterion, including:

  • Non-interactive Zero-Knowledge Arguments (NIZKs)
  • Succinct Non-interactive Arguments (SNARGs)
  • Succinct Non-interactive Arguments of Knowledge (SNARKs or, when zero-knowledge, zkSNARKs)

The last class, zkSNARKs, has become very popular among cryptocurrencies due to its succinctness and efficiency. However, it is important to mention that these protocols require an initial setup, usually called a "Common Reference String (CRS)", as a public parameter for proving and verifying. This CRS must be generated in advance by a trusted party. The information used to create the CRS, called "toxic waste", needs to be destroyed as soon as the CRS is created. Otherwise, it can be used by adversaries to forge fraudulent proofs.

The scheme colloquially referred to as "Groth16" was one of the first to gain popularity, but it has one problem: it is not universal, being fixed to a single NP-relation. In other words, proofs are specific to a given program; changing the program means starting over, throwing out the old parameters, and generating new ones. Thus, the flexibility of these zkSNARKs is limited. To overcome that drawback, \(\mathcal{P}\)lon\(\mathcal{K}\) arose, standing for the unwieldy quasi-backronym "Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge". PlonK still requires a trusted setup procedure similar to that needed for other zkSNARKs; nevertheless, it is a "universal and updateable" trusted setup (also known as a Structured Reference String, or SRS, in PlonK's scope). This means two things: first, instead of one separate trusted setup for every program you want to prove things about, there is one single trusted setup for the whole scheme, after which you can use the scheme with any program (up to some maximum size chosen when making the setup). Second, there is a way for multiple parties to participate in the trusted setup such that it is secure as long as at least one of them is honest. This is a multi-party computation ceremony. The full set of participants does not even need to be known ahead of time; new participants can just add themselves to the end. This makes it easy for the trusted setup to have a large number of participants, making it quite safe in practice.

Because of all these advantages, it seems that zkSNARKs are not going away anytime soon. But the reality is that the security of a system based on zkSNARKs largely boils down to how the CRS was generated. It is very important not to neglect this step, and of course to carry it out without compromising the ideals of privacy-preserving blockchain-based systems: security and decentralization. The generation of public parameters for zkSNARKs is called the "setup ceremony" because of its importance and (as we will see) the need for multiple independent parties to contribute to the process; the ceremony is entirely motivated by the trust of the community.

Multi Party Computation Ceremony

So far, the favorite technique for setup ceremonies has been multi-party computation (MPC). Setup ceremony MPC schemes are interactive protocols involving multiple public parties who contribute randomness, increasing entropy, to iteratively construct the CRS. Key to this technique is that all parties need to keep their inputs (their sampled randomness) hidden. In fact, honest participants should delete this "toxic waste" immediately. Otherwise, a malicious party with knowledge of these inputs could exploit the underlying mathematical structure of the CRS to create unsound proofs.

A typical ceremony consists of \(N\) players or participants; it is worth saying that the more players are involved in the ceremony, the better it gets in terms of trust. Within the set of participants there are two special roles: the coordinator and the verifier. The MPC protocols are always of a round-robin nature, where a player \(P_i\) receives a single message from player \(P_{i-1}\). Player \(P_i\) adds their input to the accumulated randomness before passing it on to player \(P_{i+1}\). In the end, the final result is the CRS. In the intermediate state, as it is being passed between players, the message is referred to as the "transcript."
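To make the round-robin update concrete, here is a minimal sketch of a single contribution to a powers-of-tau-style transcript, assuming the zkcrypto bls12_381 crate; a real ceremony also updates \(G_2\) elements and publishes a proof of knowledge of the contribution, both omitted here.

    use bls12_381::{G1Projective, Scalar};
    use ff::Field;
    use rand::rngs::OsRng;

    /// Player i mixes fresh randomness `x` into the accumulated transcript:
    /// the k-th element g1 * tau^k becomes g1 * (tau * x)^k.
    fn contribute(transcript: &mut [G1Projective]) {
        let x = Scalar::random(&mut OsRng); // this participant's "toxic waste"
        let mut pow = Scalar::from(1u64);   // x^0
        for point in transcript.iter_mut() {
            *point = *point * pow; // multiply the k-th power by x^k
            pow *= x;
        }
        // `x` must be destroyed here: together with the other participants'
        // secrets it would reveal tau and allow forging proofs.
    }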


The first family of MPC protocols for ZKPs was proposed by Ben-Sasson et al. in [BCGTV15]. The authors prove that the CRS generated with these protocols is secure as long as at least one contributing party is honest. Since then, the goal of setup ceremonies has been to maximize the number of honest and independent contributors. If there are many independent participants, then intuitively the likelihood that all are dishonest becomes negligible.

Successfully running a setup ceremony boils down to the logistics and efficiency of coordinating participants, who ideally are unrelated to one another. This ensures a lower (real and perceived) likelihood that all contributors might maliciously collude. But it can be challenging to coordinate all the virtually and geographically distributed participants, who must remain available for the entire duration of the ceremony.

In 2017, Bowe et al. introduced a second family of MPC protocols [BGM17] specifically for pairing-based zk-SNARKs like Groth16 and PLONK. This paper aimed to address some of the drawbacks of prior schemes. In their proposed protocol, called MMORPG, a central "coordinator" manages messages between the participants. The CRS is generated in two phases. The first phase, referred to as "Powers of Tau", produces generic setup parameters that can be used for all circuits of the scheme, up to a given size. The second phase converts the output of the Powers of Tau phase into an NP-relation-specific CRS.

Math details

Just for didactic purposes, we are going to develop a toy example of how the SRS can be used to commit to polynomials.

If \(\mathbb{F}_p = \mathbb{F}\) is a field of prime order \(p\), we denote by \(\mathbb{F}_{<d}[X]\) the set of univariate polynomials over \(\mathbb{F}\) of degree smaller than \(d\). The implementation of the KZG polynomial commitment scheme \cite{kate2010constant} relies on a bilinear map \(e:G_1 \times G_2 \longrightarrow G_t\), where \(G_1,G_2,G_t\) are cyclic groups of prime order \(p\), \(g_1\) is a generator for \(G_1\), and \(g_2\) is a generator for \(G_2\).

The generated SRS is composed of two groups of elements: the first with elements in \(G_1\) and the second with elements in \(G_2\). For a random scalar \(s \in \mathbb{F}_p\), the set containing the two groups is \(\left\{g_1, g_1s, g_1s^2, \ldots, g_1s^n, g_2, g_2s \right\}\).

Let \(f(x) \in \mathbb{F}_p[X]\) be a polynomial over \(\mathbb{F}_p\), where \[f(x) = a_0 + a_1 x + \ldots + a_n x^n.\] Let's define the commitment of \(f(x)\) as \(comm(f(x)) = g_1f(s)\); hence \[comm(f(x)) = a_0 g_1 + a_1 g_1 s + \ldots + a_n g_1 s^n\]

Let's suppose that the prover wants to prove that \(f(x)\) evaluated at \(x_0\) equals \(y\); the prover knows \[f(x_0) = y \Longleftrightarrow (x - x_0) \mid f(x) - y \Longleftrightarrow (x - x_0) \mid f(x) - f(x_0),\]
i.e. \(\exists h(x) \in \mathbb{F}_p[X]\) such that \[f(x) - y = h(x) (x - x_0).\]
Indeed, the prover can compute \[h(x) = \frac{f(x) - y}{x - x_0} = \frac{f(x) - f(x_0)}{x - x_0}\]

Thus, the commitment of the polynomial \(h(x)\) is defined as above: \[comm(h(x)) = g_1h(s) = b_0 g_1 + b_1 g_1 s + \ldots + b_k g_1 s^k, \; k < n.\]

Finally, to verify that \(f(x_0) = y\), the verifier can instead check \[e(comm(f(x)) - g_1 y, g_2) = e(comm(h(x)), g_2 s - g_2 x_0)\] since \[e(g_1(f(s) - y),g_2) = e(g_1(h(s)(s - x_0)), g_2) = e(g_1h(s), g_2(s - x_0)).\] Note that \(g_2 s\) is part of the SRS, so the verifier never needs to learn \(s\).

The maximum degree of the polynomial that can be committed to is determined by the number of constraints in the system.

In a real scenario the CRS/SRS is generated not by selecting one single random scalar but, as we saw previously, through a multi-party ceremony, where each participant selects a random scalar \(s_i \in \mathbb{F}_p\) for \(i \in \{1,\ldots,m\}\), where \(m\) is the number of parties and \[s = \prod_{i = 1}^m s_i = s_1 s_2 \cdots s_m\]
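The whole toy example fits in a few lines of Rust. The sketch below, assuming the zkcrypto bls12_381 crate, commits to \(f(x) = 3 + 2x + x^2\), opens it at \(x_0 = 5\), and checks the pairing equation; the scalar \(s\) is hard-coded purely for illustration and would never be known to anyone in a real setup.

    use bls12_381::{pairing, G1Affine, G1Projective, G2Affine, G2Projective, Scalar};

    /// Evaluate a polynomial (coefficients in ascending degree) at `x`.
    fn eval(coeffs: &[Scalar], x: Scalar) -> Scalar {
        coeffs.iter().rev().fold(Scalar::from(0u64), |acc, c| acc * x + c)
    }

    fn main() {
        let g1 = G1Projective::generator();
        let g2 = G2Projective::generator();

        // Toy SRS {g1, g1*s, g1*s^2, ..., g2, g2*s}; `s` is the toxic waste.
        let s = Scalar::from(123456789u64);
        let mut srs_g1 = Vec::new();
        let mut pow = Scalar::from(1u64);
        for _ in 0..4 {
            srs_g1.push(g1 * pow);
            pow *= s;
        }
        let g2_s = g2 * s;

        // f(x) = 3 + 2x + x^2
        let f = [Scalar::from(3u64), Scalar::from(2u64), Scalar::from(1u64)];
        let comm_f: G1Projective = f.iter().zip(&srs_g1).map(|(a, p)| p * a).sum();

        // Open at x0 = 5: y = f(5) = 38 and h(x) = (f(x) - y)/(x - x0) = x + 7.
        let x0 = Scalar::from(5u64);
        let y = eval(&f, x0);
        let h = [Scalar::from(7u64), Scalar::from(1u64)];
        let comm_h: G1Projective = h.iter().zip(&srs_g1).map(|(b, p)| p * b).sum();

        // Check e(comm(f) - g1*y, g2) == e(comm(h), g2*s - g2*x0).
        let lhs = pairing(&G1Affine::from(comm_f - g1 * y), &G2Affine::from(g2));
        let rhs = pairing(&G1Affine::from(comm_h), &G2Affine::from(g2_s - g2 * x0));
        assert_eq!(lhs, rhs);
        println!("KZG opening verified");
    }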

Whirlpool

We devote this section to introducing the reader to mixer applications and to detailing the functioning of Whirlpool's main features.

Whirlpool is structured as a mixer based on Non-Interactive Zero-Knowledge (NIZK) proofs, specifically zk-SNARKs [@groth2016size].
See [@sec:zk] for a more detailed explanation of zk-SNARKs.

Mixer

A mixer is any kind of privacy-enhancing protocol that aggregates transactions and, as the name implies, shuffles them to detach inputs from outputs.
The goal of a mixer is to break the linkability that pseudo-anonymous networks exhibit.
Mixers have been around for a while [@CryptoMixer], [@CoinJoin] and different approaches have been used to tackle the mixing of transactions; however, the current state of the art is based on the following properties:

  • Smart contract management as opposed to centralized control. All the logic is automatically handled by a smart contract. Code is law, and human interaction is reduced to a minimum.
  • NIZK-based proofs as the mechanism that enables users to prove that they made a deposit, and are entitled to withdraw, without compromising their privacy.

A mixer is based on two simple asynchronous operations: deposit and withdraw.
Both methods are completely public.
Privacy derives from the impossibility, for any observer and for the protocol itself, of linking a withdrawal operation with a deposit.
The more active users in the mixer, the more difficult it is to link transactions.
This set of active users that protects the privacy of the mixer is called the anonymity set. See [@sec:anonymity] for a more detailed analysis.
We now present a simple summary of the processes, and the associated data representation, carried out at the smart contract level.

Deposit

A deposit is a simple transaction that is tied to a hidden secret, called a commitment.
Cryptographic commitments allow the user to commit to a value without revealing the actual value.
When the value is revealed, it becomes straightforward to verify the validity of the commitment.
The cryptographic commitment acts as an anchor to which future withdrawals can point,
ensuring that every output transaction has its corresponding input. More precisely:

  • The user generates two secret 32-byte scalars: \(r, s\).
  • He computes a commitment of the concatenated secrets of the form \(C = H_1(r,s)\), where \(H_1\) is a circuit-friendly cryptographic hash function, e.g. [@grassi2021poseidon], [@albrecht2016mimc].
  • The user sends a transaction with the commitment \(C\) and the quantity he wants to deposit in the mixer (a client-side sketch follows the list).
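A client-side sketch of these three steps, where poseidon_hash is a hypothetical stand-in for the circuit-friendly hash \(H_1\) (not Whirlpool's actual API):

    use bls12_381::Scalar;
    use ff::Field;
    use rand::rngs::OsRng;

    /// Hypothetical circuit-friendly hash; a real implementation would be a
    /// Poseidon sponge over the scalar field (e.g. the dusk-poseidon crate).
    fn poseidon_hash(_inputs: &[Scalar]) -> Scalar {
        unimplemented!("plug in a Poseidon implementation")
    }

    /// Sample the two deposit secrets and derive the commitment C = H1(r, s).
    /// Only `c` is published on-chain; `r` and `s` stay with the user.
    fn make_deposit_commitment() -> (Scalar, Scalar, Scalar) {
        let r = Scalar::random(&mut OsRng);
        let s = Scalar::random(&mut OsRng);
        let c = poseidon_hash(&[r, s]);
        (r, s, c)
    }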

The complete deposit process can be seen in [@fig:deposit].

[Figure: the deposit process]
{#fig:deposit}

Internal Data Representation

The smart contract needs to manage the internal state of the mixer and keep track of withdrawals and deposits.
The selected data structure is the Merkle tree, for its accumulator properties and proof-of-inclusion support [@becker2008merkle].
The Whirlpool implementation holds two kinds of Merkle trees: one used for deposits and a second one for storing nullifier hashes on withdrawals.
Nullifier hashes are employed to keep track of already-withdrawn funds.
They can be seen as "already spent" notes used to prevent a malicious user from trying to withdraw multiple times using the same deposit.

Mixers do not allow custom deposit amounts for two reasons: using discrete, fixed deposit quantities (e.g. 0.1, 1, 10, 100 $KSM) increases the size of the anonymity sets, hence improving the overall level of privacy; and a distracted user could use an odd custom amount such as 123.456 $KSM that would be easily traced.
Hence, Whirlpool does not simply maintain a pair of Merkle trees (one for commitments and the other for nullifiers); it manages multiple pairs, one for each fixed deposit quantity that it wants to support.
Please also note that Merkle trees have a limit on the number of commitments they can store.
The limit is \(2^h\), with \(h\) the height of the tree.
For this reason, once a Merkle tree is full, its root \(T_i\) is saved, and we create a new, empty one.
Although we use the abstraction of a single pair of Merkle trees per supported deposit quantity, it is actually a list of them.
As illustrated by [@fig:internalstate], the contract manages multiple trees.
We store the last 100 roots to ensure users have enough time to claim their funds.
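An illustrative layout of this per-denomination state, using hypothetical placeholder types rather than Whirlpool's actual code:

    /// Placeholder types for illustration only.
    type Root = [u8; 32];
    struct MerkleTree;

    /// Hypothetical per-denomination state: one instance per fixed deposit
    /// quantity, each holding a growing list of tree pairs.
    struct DenominationState {
        deposit_trees: Vec<MerkleTree>,   // current commitment tree plus full,
                                          // archived predecessors (roots T_i)
        nullifier_trees: Vec<MerkleTree>, // nullifier hashes of spent notes
        recent_roots: Vec<Root>,          // the last 100 accepted roots
    }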

[Figure: internal state, multiple Merkle trees per denomination]
{#fig:internalstate}

Withdrawal

In order to make a withdrawal, the obvious solution would be to make the user reveal the secrets \(r,s\) used to create the commitment \(C\).
This would allow a fast and direct verification of the fact that the user is trying to withdraw funds from a previous deposit,
a deposit for which, assuming a cautious user, he is the only one who knows the secrets.
The glaring problem with this approach is that it destroys any privacy the user could have gained from using Whirlpool: anyone inspecting the blockchain would be able to link the deposit and withdrawal transactions.
To bypass this limitation, we make use of NIZKs to prove the user made a deposit without compromising his privacy.

For a user to craft a proof \(\pi\), he should have access to all leaves (commitments) in the tree \(T_i\) that stores commitment \(C\).
Furthermore, the user needs his secret values \(r,s\) for the zero-knowledge proof.
Moreover, we assume he chooses his fee and withdrawal address as he wishes, and that these are securely encoded as 32-byte scalars \(f\) and \(A\).
He then computes the zero-knowledge proof \(\pi\), proving that he knows \(r,s\) such that:

  • There is a commitment \(C = H_1(r,s)\) stored in \(T_i\). This is called the witness-proof \(WP = ZKP(r,s,T_i)\).
  • With corresponding nullifier hash \(NH = H_2(s)\).
  • And fee address hash \(FH = H_3(s, f, A)\).

Please note that we employ different hash functions. \(H_1\) could be equal to \(H_2\) and \(H_3\), but then domain separation should be employed.
In [@fig:scheme], we can observe the general scheme of Whirlpool Cash, withdrawal process included.
The yield generation layer, in charge of managing idle funds from the mixer, can also be seen. See [@sec:yield] for more information about yield generation.

[Figure: general scheme of Whirlpool Cash, including the withdrawal process and the yield generation layer]
{#fig:scheme}

Please also note that in the withdrawal process, Whirlpool's smart contract also needs to verify the following (a sketch of these checks comes after the list):

  • That \(NH\) does not exist in any of the nullifier trees. Reject otherwise.
  • Verify the validity of the proof \(\pi\), using \(NH, FH, f, A\) as public inputs to the circuit. Reject otherwise.
  • Check the validity of the fee \(f\) and the withdrawal address \(A\). Reject otherwise.
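A hedged sketch of these three checks, with every type and function name illustrative rather than Whirlpool's actual interface, and the nullifier trees flattened into a set for brevity:

    use std::collections::HashSet;

    type Scalar = [u8; 32]; // stand-in for a field element

    /// Illustrative contract state: the set of already-seen nullifiers.
    struct MixerState {
        spent_nullifiers: HashSet<Scalar>,
    }

    #[derive(Debug)]
    enum MixerError { AlreadySpent, InvalidProof, BadFeeOrAddress }

    /// Stand-in for the zk-SNARK verifier over public inputs (NH, FH, f, A).
    fn verify_proof(_proof: &[u8], _public_inputs: &[Scalar]) -> bool {
        unimplemented!("call into the PlonK verifier")
    }

    /// The three checks listed above, in order.
    fn withdraw(
        state: &mut MixerState,
        proof: &[u8],
        nh: Scalar, fh: Scalar, fee: Scalar, addr: Scalar,
    ) -> Result<(), MixerError> {
        // 1. NH must be fresh: reject double spends.
        if state.spent_nullifiers.contains(&nh) {
            return Err(MixerError::AlreadySpent);
        }
        // 2. The proof must verify with NH, FH, f, A as public inputs.
        if !verify_proof(proof, &[nh, fh, fee, addr]) {
            return Err(MixerError::InvalidProof);
        }
        // 3. Fee/address validity checks would go here (BadFeeOrAddress).
        state.spent_nullifiers.insert(nh);
        Ok(())
    }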

Refresh Commitments

In many scenarios (e.g. claiming yield rewards, rewarding users) the ability to prove that a deposit has been in the mixer for long enough is quite useful.
In order to do so without compromising the privacy of the users, we employ a redeeming mechanism inspired by [@le2021amr].
With this approach, users are able to redeem rewards based on time, without having to withdraw the deposited funds.
They simply prove they had an older commitment, get the rewards, and refresh the state with a new commitment.
To support this system, we implement a new Merkle tree that contains roots that are at least \(b\) blocks old,
with \(b\) an arbitrary time (or several) that unlocks the right to rewards. The complete process can be seen in [@fig:redeem]:

  • Similarly to the withdrawal process, the user prepares a proof \(\pi\) that acknowledges he knows \(r,s : H_1(r,s) \in \tau_i\), with \(\tau_i\) the root of a Merkle tree at least \(b\) blocks old.
  • He also presents the nullifier \(NH = H_2(s)\) that invalidates the original commitment \(C\).
  • If the proof is valid, he obtains the reward to the specified address \(A\).
  • He sends a new "refreshed" commitment \(C' = H_1(r', s')\), equivalent to \(C\) in the sense that it enables him to withdraw his funds in the future.
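Reusing the stub types from the withdrawal sketch above, the refresh flow could be sketched as one transaction that spends the old nullifier against an aged root and enrolls the replacement commitment; again, all names are illustrative:

    /// Illustrative refresh: invalidate the old note and register C' in one call.
    fn refresh(
        state: &mut MixerState,
        proof: &[u8],
        aged_root: Scalar,      // tau_i, root of a tree at least b blocks old
        nh: Scalar,             // NH = H2(s): invalidates the old commitment
        new_commitment: Scalar, // C' = H1(r', s')
    ) -> Result<(), MixerError> {
        // The old note must not have been spent or refreshed before.
        if state.spent_nullifiers.contains(&nh) {
            return Err(MixerError::AlreadySpent);
        }
        // The proof shows knowledge of r, s with H1(r, s) stored under tau_i.
        if !verify_proof(proof, &[aged_root, nh, new_commitment]) {
            return Err(MixerError::InvalidProof);
        }
        state.spent_nullifiers.insert(nh);
        // ...then append `new_commitment` to the current deposit tree and pay
        // the reward to the specified address A.
        Ok(())
    }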

[Figure: the commitment refresh/redeem process]
{#fig:redeem}

This mechanism benefits both the user and Whirlpool: users are rewarded for their loyalty to the protocol and gain a better degree of privacy, since they spend more time in the mixer.
The protocol increases the size of its anonymity sets and obtains a better degree of liquidity.

Prototype

Our prototype aims to implement the deposit and withdrawal mechanisms described above. To this end we have selected PlonK [ref Plonk] as the ZKP scheme and the Poseidon hash [ref poseidon] as the hash algorithm, fulfilling the design of the mixer architecture. [ref to prototype]

The first step, clearly, is to construct the corresponding PlonK circuits; once this step is accomplished, the next is to generate the setup, i.e. the public parameters necessary to prove and verify the statements claimed for those circuits.

"The Public Parameters can also be referred to as the Structured Reference String (SRS). It is available to both the prover and verifier and allows the verifier to efficiently verify and make claims about polynomials up to and including a configured degree."

In a first approach the public parameters, specifically the CRS/SRS, are generated unilaterally by the prototype itself. The generation of those parameters is handled by the following function in the code:

    /// Setup generates the public parameters using a random number generator.
    /// This method will in most cases be used for testing and exploration.
    /// In reality, a `Trusted party` or a `Multiparty Computation` will be used
    /// to generate the SRS. Returns an error if the configured degree is less
    /// than one.
    pub fn setup<R: RngCore + CryptoRng>(
        max_degree: usize,
        mut rng: &mut R,
    ) -> Result<PublicParameters, Error> {
        // Cannot commit to constants
        if max_degree < 1 {
            return Err(Error::DegreeIsZero);
        }

        // Generate the secret scalar beta
        let beta = util::random_scalar(&mut rng);

        // Compute powers of beta up to and including beta^max_degree
        let powers_of_beta = util::powers_of(&beta, max_degree);

        // Powers of G1 that will be used to commit to a specified polynomial
        let g = util::random_g1_point(&mut rng);
        let powers_of_g: Vec<G1Projective> =
            util::slow_multiscalar_mul_single_base(&powers_of_beta, g);
        assert_eq!(powers_of_g.len(), max_degree + 1);

        // Normalise all projective points
        let mut normalised_g = vec![G1Affine::identity(); max_degree + 1];
        G1Projective::batch_normalize(&powers_of_g, &mut normalised_g);

        // Compute beta*G2 element and stored cached elements for verifying
        // multiple proofs.
        let h: G2Affine = util::random_g2_point(&mut rng).into();
        let beta_h: G2Affine = (h * beta).into();

        Ok(PublicParameters {
            commit_key: CommitKey {
                powers_of_g: normalised_g,
            },
            opening_key: OpeningKey::new(g.into(), h, beta_h),
        })
    }
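A minimal usage sketch of the function above, assuming it is exposed as PublicParameters::setup within the same crate and that the maximum degree is chosen to cover the circuit's constraint count:

    use rand_core::OsRng;

    fn make_test_params() -> Result<PublicParameters, Error> {
        // Testing/exploration only: in production the SRS must come from the
        // MPC ceremony rather than from a locally sampled secret.
        let max_degree = 1 << 12; // large enough for the circuit's constraints
        PublicParameters::setup(max_degree, &mut OsRng)
    }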

Nevertheless, as we have pointed out, we need to generate the parameters in a trusted fashion, and the most straightforward way is through an MPC ceremony. It is important to do this in a public, elegant, and trustworthy way, to foster heightened trust in the community.

For this task we can draw inspiration from the perpetual powers of tau repository, which is also included in the Tornado Cash repository phase2-bn254, and from two other Tornado Cash repositories.

In fact, perpetual powers of tau is also used by Tornado Cash, since it constitutes the first of the two phases.

For our prototype we have modified the first repository to use the BLS12-381 elliptic curve, since it is the curve we use to construct the pairing and on which we base PlonK; the repository with such modifications is phase2-bls12-381. Among others, the following files were modified:

  • beacon_constrained.rs
  • compute_constrained.rs
  • new_constrained.rs
  • prepare_phase2.rs
  • reduce_powers.rs
  • verify_transform_constrained.rs

Remark: here is the documentation about how the curve elements are serialised.

BLS12-381 serialization

  • \(\mathbb{F}_p\) elements are encoded in big-endian form. They occupy 48 bytes in this form.
  • \(\mathbb{F}_{p^2}\) elements are encoded in big-endian form, meaning that the \(\mathbb{F}_{p^2}\) element \(c_0 + c_1 \cdot u\) is represented by the \(\mathbb{F}_p\) element \(c_1\) followed by the \(\mathbb{F}_p\) element \(c_0\). This means \(\mathbb{F}_{p^2}\) elements occupy 96 bytes in this form.
  • The group \(\mathbb{G}_1\) uses \(\mathbb{F}_p\) elements for coordinates. The group \(\mathbb{G}_2\) uses \(\mathbb{F}_{p^2}\) elements for coordinates.
  • \(\mathbb{G}_1\) and \(\mathbb{G}_2\) elements can be encoded in uncompressed form (the x-coordinate followed by the y-coordinate) or in compressed form (just the x-coordinate). \(\mathbb{G}_1\) elements occupy 96 bytes in uncompressed form, and 48 bytes in compressed form. \(\mathbb{G}_2\) elements occupy 192 bytes in uncompressed form, and 96 bytes in compressed form.

The most significant three bits of a \(\mathbb{G}_1\) or \(\mathbb{G}_2\) encoding should be masked away before the coordinate(s) are interpreted. These bits are used to unambiguously represent the underlying element (a decoding sketch follows the list):

  • The most significant bit, when set, indicates that the point is in compressed form. Otherwise, the point is in uncompressed form.
  • The second-most significant bit indicates that the point is at infinity. If this bit is set, the remaining bits of the group element's encoding should be set to zero.
  • The third-most significant bit is set if (and only if) this point is in compressed form and it is not the point at infinity and its y-coordinate is the lexicographically largest of the two associated with the encoded x-coordinate.
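As a small illustration of these rules, a sketch for inspecting the flag bits of a 48-byte compressed \(\mathbb{G}_1\) encoding could look like this:

    /// Extract the three flag bits from the first byte of a G1 encoding,
    /// per the BLS12-381 serialization rules above.
    fn read_flags(encoding: &[u8; 48]) -> (bool, bool, bool) {
        let b = encoding[0];
        let compressed = b & 0b1000_0000 != 0; // most significant bit
        let infinity = b & 0b0100_0000 != 0;   // second-most significant bit
        let y_largest = b & 0b0010_0000 != 0;  // third-most significant bit
        (compressed, infinity, y_largest)
    }

    fn main() {
        // Compressed encoding of the point at infinity: 0b1100_0000, then zeros.
        let mut inf = [0u8; 48];
        inf[0] = 0b1100_0000;
        assert_eq!(read_flags(&inf), (true, true, false));
    }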

Next we copy the README.md files of each of the three repositories, adding some information that helps to compile and run the examples.

1. Powers of tau

This is a multi-party computation (MPC) ceremony which constructs partial zk-SNARK parameters for all circuits up to a depth of \(2^{21}\). It works by taking a step that is performed by all zk-SNARK MPCs and performing it in just one single ceremony. This makes individual zk-SNARK MPCs much cheaper and allows them to scale to practically unbounded numbers of participants.

This protocol is described in a forthcoming paper. It produces parameters for an adaptation of Jens Groth's 2016 pairing-based proving system using the BLS12-381 elliptic curve construction. The security proof relies on a randomness beacon being applied at the end of the ceremony.

Contributions

Extended to support Ethereum's BN256 curve and made it easier to change the size of the ceremony. In addition, the proof generation process can now be done in memory-constrained environments. The benchmark is around 1.3 GB of memory and 3 hours for a 2^26 powers of tau on the BN256 curve on my personal laptop.

Instructions

Instructions for a planned ceremony will be posted when everything is tested and finalized.


To run the ceremony on your laptop:

  1. Preparation:
rustup update # tested on rustup 1.17.0
cargo build
  2. To generate the initial challenge run:
cargo run --release --bin new_constrained challenge1 $SIZE $BATCH
  3. Next you have to run:
cargo run --release --bin compute_constrained challenge1 response1 $SIZE $BATCH
  4. Then run the following command:
cargo run --release --bin verify_transform_constrained challenge1 response1 challenge2 $SIZE $BATCH

TODO: Finish this.

  1. Put the response file from the previous ceremony in the root directory.
  2. To generate new_challenge run:
cargo run --release --bin verify_transform_constrained # this will generate new_challenge from response file
  3. Backup the old files and replace the challenge file:
mv challenge challenge_old
mv response response_old
mv new_challenge challenge
  4. Run the ceremony:
cargo run --release --bin compute_constrained # generate response file

Put the hash from your output response into a private gist (example: https://gist.github.com/skywinder/c35ab03c66c6b200b33ea2f388a6df89)

Here is a test script:

#!/bin/sh

rm challenge*
rm response*
rm transcript
rm phase1radix*
rm tmp_*

set -e

SIZE=15
BATCH=256

echo -e "================Step 0 begins...=================\n\n"
echo -e "================Generation of the new challenge=================\n\n"
sleep 5
cargo run --release --bin new_constrained challenge1 $SIZE $BATCH
yes | cargo run --release --bin compute_constrained challenge1 response1 $SIZE $BATCH
cargo run --release --bin verify_transform_constrained challenge1 response1 challenge2 $SIZE $BATCH
echo "\n\n=====Step 0 has been completed new challenge was generated successfully===========\n\n"
read any_key

echo -e "\n\n Step 1 begins\n"
sleep 5
yes | cargo run --release --bin compute_constrained challenge2 response2 $SIZE $BATCH
cargo run --release --bin verify_transform_constrained challenge2 response2 challenge3 $SIZE $BATCH
echo "\nStep 1 has been completed\n"

echo -e "\n\n================Step 2 begins===================\n\n"
sleep 5
yes | cargo run --release --bin compute_constrained challenge3 response3 $SIZE $BATCH
cargo run --release --bin verify_transform_constrained challenge3 response3 challenge4 $SIZE $BATCH
echo "\nStep 2 has been completed\n"

echo -e "\n\n==============Beacon constrained begins...\n"
sleep 5
cargo run --release --bin beacon_constrained challenge4 response4 $SIZE $BATCH 0000000000000000000a558a61ddc8ee4e488d647a747fe4dcc362fe2026c620 10

echo -e "\n\n==============Verify transform constrained  begins...\n"
sleep 5
cargo run --release --bin verify_transform_constrained challenge4 response4 challenge5 $SIZE $BATCH


echo -e "\n\n=============== Prepare Phase 2  begins...\n"
sleep 5
cargo run --release --bin prepare_phase2 response4 $SIZE $BATCH
  5. Reboot the laptop to clean up the toxic waste.

  6. Save the response file and give it to the next participant.

Recommendations from original ceremony

Participants of the ceremony sample some randomness, perform a computation, and then destroy the randomness. Only one participant needs to do this successfully to ensure the final parameters are secure. In order to see that this randomness is truly destroyed, participants may take various kinds of precautions:

  • putting the machine in a Faraday cage
  • rebooting the machine afterwards
  • rebooting the machine afterwards and disconnecting RAM
  • destroying the machine afterwards
  • running the software on secure hardware
  • not connecting the hardware to any networks
  • using multiple machines and randomly picking the result of one of them to use
  • using different code than what we have provided
  • using a secure operating system
  • using an operating system that nobody would expect you to use (Rust can compile to Mac OS X and Windows)
  • using an unusual Rust toolchain or alternate rust compiler
  • lots of other ideas we can't think of

It is totally up to the participants. In general, participants should beware of side-channel attacks and assume that remnants of the randomness will be in RAM after the computation has finished.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally
submitted for inclusion in the work by you, as defined in the Apache-2.0
license, shall be dual licensed as above, without any additional terms or
conditions.

2. phase2-wasm

This demo generates contributions for phase 2 of trusted setup MPC in a browser using WebAssembly.

Note: it was necessary to change the version of the dependency "blake2" from 0.6.1 to 0.8.0 (it worked with that version). The current version is 0.10.4, but some methods used in the code are deprecated and incompatible with it, so the build did not work with the latest version.

First, it is necessary to build the wasm project:

cd phase2-bn254/phase2
wasm-pack build --release -- --no-default-features --features wasm

How to install

git clone --recursive https://github.com/tornadocash/phase2-wasm
npm install

How to run in debug mode

# Builds the project and opens it in a new browser tab. Auto-reloads when the project changes.
npm start

How to build in release mode

# Builds the project and places it into the `dist` folder.
npm run build

Project structure

  • webpack.config.js config that is used to build .wasm and other project files
  • phase2-bn254/phase2 trusted setup crate, we build .wasm module from it
  • js/index.js main frontend script that calls .wasm to generate the contribution
  • static/index.html empty index file that just includes .js
  • static/params.bin example previous contribution

This example uses a static previous-contribution file and outputs the new contribution to the console. In production this should be handled by the server.

3. trusted-setup-server

Tornado.cash Trusted Setup Ceremony app

zk-SNARKs require a pre-existing setup between the prover and verifier. A set of public parameters defines the "rules of the game" for the construction of zk-SNARKs. This app allows everyone to contribute their own source of entropy so that tornado.cash can be trustless.

Environment variables

The app can use .env.development and .env.production. Which file is used depends on the NODE_ENV variable.
For the yarn dev command, .env.development is used. yarn start uses .env.production.

  • DISABLE_S3: disable contributions uploading to AWS S3. true or false
  • AWS_ACCESS_KEY_ID: AWS access key
  • AWS_SECRET_ACCESS_KEY: AWS secret key
  • AWS_S3_BUCKET: AWS S3 bucket where the contributions will be uploaded
  • MYSQL_USER: MySQL user the app uses. Note: you don't need a MySQL db for development; the app will use a local sqlite db in dev mode, stored in the db.development.sqlite file.
  • MYSQL_PASSWORD: MySQL password for MYSQL_USER
  • MYSQL_DATABASE: MySQL database
  • TWITTER_CONSUMER_KEY: Twitter consumer API key. Twitter app
  • TWITTER_CONSUMER_SECRET: Twitter consumer API secret
  • TWITTER_CALLBACK_URL: Twitter callback URL. The app handles the /api/oauth_callback/twitter endpoint. Feel free to change domain name and protocol though
  • GITHUB_CLIEND_ID: Github client id. How to create a Github OAuth app
  • GITHUB_CLIENT_SECRET: Github client secret
  • GITHUB_CALLBACK_URL: Github callback URL. The app handles the /api/oauth_callback/github endpoint. Feel free to change domain name and protocol though
  • SESSION_SECRET: a random string that will be used by express-session to sign the session ID cookie

Development setup

$ yarn install

# Edit all necessary environment variables. See the explanation above.
$ cp .env.example .env.development

# serve with hot reload at localhost:3000
$ yarn dev

Production setup

Follow the instructions in the Initialize ceremony section to generate the current.params ceremony file.

# Edit all necessary environment variables. See the explanation above.
$ cp .env.example .env.production

# Run Nginx + Letsencrypt containers to serve https requests to the app
$ cd frontend 
$ docker-compose up -d
$ cd ..

# Set VIRTUAL_HOST and LETSENCRYPT_HOST variables in the app's docker-compose.yml file
# Run the app and mysql database containers. It will use the MYSQL_USER, MYSQL_PASSWORD and MYSQL_DATABASE vars you specified in .env.production file.
$ docker-compose up -d

# Note. At start it builds client side stuff. It takes 30 seconds or so, during this time you will get 502 error.

In case of WASM module changes

  1. go to the phase2 folder in phase2-bn254 and run the following command:
  2. wasm-pack build --release --target web -- --no-default-features --features wasm
  3. it will generate wasm modules in the pkg folder; you then need to copy them into this project
  4. cp -r pkg/* <path_to_current_project>/lib/phase2 && cp pkg/phase2_bg.wasm <path_to_current_project>/static/_nuxt/lib/phase2/

Example: wasm-pack build --release --target web -- --no-default-features --features wasm && cp -r pkg/* ../../trusted-setup-nuxt/lib/phase2 && cp pkg/phase2_bg.wasm ../../trusted-setup-nuxt/static/_nuxt/lib/phase2/

Initialize REAL ceremony

  1. Choose which contribution to use for the ceremony (it should already exist). Also choose the hash of a future ethereum block to use, tweet about it, and calculate the VDF.

  2. Make sure your machine has at least 150 GB RAM and 200 GB SSD.

  3. Download the response file of the contribution. You can use aria2c accelerator for it.

  4. git clone https://github.com/tornadocash/phase2-bn254 && cd phase2-bn254

  5. cd powersoftau

  6. cargo run --release --bin beacon_constrained <challenge_file> last_response 28 256 <VDF output>

  7. cargo run --release --bin prepare_phase2 last_response 28 256 it will generate radix* files. You can abort execution after phase1radix2m15 calculation.

  8. cd ../phase2

  9. Make sure that withdraw.circom has the additional constraints.

  10. wget https://github.com/tornadocash/tornado-core/releases/download/v2.0/withdraw.json -O circuit.json

  11. cp ../powersoftau/phase1radix2m15 .

  12. cargo run --release --bin new circuit.json current.params

  13. The current.params file is your initial challenge file.

  14. copy current.params, circuit.json and phase1radix* to ./server/snark_files folder.

  15. Then phase 2 proceeds; see the Production setup section.

  16. Before the next step you can download all contributions and verify all of them locally.

  17. Copy last contribution to phase2-bn254/phase2 folder as result.params

  18. npx snarkjs setup --protocol groth

  19. cargo run --release --bin export_keys result.params vk.json pk.json

  20. cargo run --release --bin copy_json proving_key.json pk.json transformed_pk.json

  21. cargo run --release --bin generate_verifier result.params Verifier.sol

  22. git clone git@github.com:tornadocash/tornado-core.git

  23. cd tornado-core && git checkout phase2

  24. Copy transformed_pk.json, vk.json, circuit.json and Verifier.sol to tornado-core project to the build/circuits folder.

  25. mv transformed_pk.json withdraw_proving_key.json

  26. mv vk.json withdraw_verification_key.json

  27. mv circuit.json withdraw.json

  28. npm i

  29. npm run build:circuit:bin

  30. That's it. You can use Verifier.sol, withdraw.json, withdraw_verification_key.json and withdraw_proving_key.bin to deploy the contract and the UI.

Note.

  1. You also need to use a special version of the websnark lib in the UI.
  2. Update the WASM module.

Initialize ceremony (current.params file creation) OUTDATED:

  1. git clone https://github.com/tornadocash/phase2-bn254 && cd phase2-bn254
  2. git checkout ceremony
  3. go to ./powersoftau/src/bn256/mod.rs and change REQUIRED_POWER to 15 (it will fit a 36k-constraint snark)
  4. cd powersoftau
  5. run ./test.sh. After this step you will get many phase1radix* files.
  6. Download withdraw.json for required circuit to ./phase2 folder
  7. cd ../phase2
  8. cp ../powersoftau/phase1radix* .
  9. cargo run --release --bin new withdraw.json current.params
  10. The current.params file is your initial challenge file.
  11. copy current.params, withdraw.json and phase1radix* to ./server/snark_files folder.
  12. mv withdraw.json circuit.json