
Nouns DAO Private Voting - Constraint Count Analysis

tags: Noir UltraPlonk Benchmark Nouns DAO Private Voting

This document analyzes the Noir constraint count of Aragon & Aztec's Nouns DAO Private Voting Joint-Proposal from a single voter's perspective.

Computational Needs

Summary

An estimation of a voting proof generated by a voter in UltraPlonk would contain:

| Primitive | Runs | Estimated Constraints |
| --- | --- | --- |
| RLP | 105? | TBC |
| Keccak256 | 105 | 1,890,000 |
| Merkle Traversing | 7 (depth) | TBC |
| ECDSA secp256k1 Verification | 1 | 36,000 |
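
Summing the entries that already have estimates (RLP and Merkle traversal are still TBC) gives a rough lower bound on the total:

$$1{,}890{,}000 + 36{,}000 = 1{,}926{,}000 \text{ constraints}$$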

Elaboration

UltraPlonk Constraint Counts

Proving one instance of primitives in UltraPlonk comes with the following number of constraints based on previous tests:

| Primitive | Constraints |
| --- | --- |
| RLP | TBC |
| Keccak256 | 18,000 |
| Merkle Traversing | TBC |
| ECDSA secp256k1 Verification | 36,000 |

Ethereum Storage Proofs

In the current design specs, each voter would have to generate 3 storage proofs (one for proving ownership, one for proving delegation, and one for proving non-nullification).

An estimation of the computational needs, assuming a maximum storage trie depth of 7[1], could be:

  • Keccak256: 5 per layer * 7 layers * 3 proofs = 105 Keccak256s = 1,890,000 constraints
  • Merkle Traversing: 7 layers

We need 5 rounds of Keccak256 per layer, as each layer holds >500 bytes of data. Keccak256 is a sponge-based hash with a rate of 136 bytes per absorption, so covering that many bytes takes 5 rounds. (See the Slack thread for more context / discussion.)
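
More generally, the Keccak256 cost scales linearly with the trie depth d, using the per-Keccak256 cost of 18,000 constraints from the table above:

$$\text{Keccak256 constraints} \approx \underbrace{5}_{\text{per layer}} \times \underbrace{d}_{\text{trie depth}} \times \underbrace{3}_{\text{storage proofs}} \times \underbrace{18{,}000}_{\text{per Keccak256}}$$

which gives 105 Keccak256s and 1,890,000 constraints at d = 7, and (as in the worst case below) 210 Keccak256s and 3,780,000 constraints at d = 14.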

Worst Case Scenario

Nouns' requirements specify that proposals are expected to support 10,000 voters. In the worst case, the Ethereum Merkle-Patricia Tries that each storage proof traverses could be of depth 14, in order to support ≤ 2^14 (i.e. 16,384) voters.

This then leads to the worst-case need of computing:

  • Keccak256: 5 per layer * 14 layers * 3 proofs = 210 Keccak256s = 3,780,000 constraints
  • Merkle Traversing: 14 layers

Ownership Verification

The proposal aims to verify voters' account ownership by verifying their ECDSA signatures.

This then leads to the additional need of computing:

  • ECDSA secp256k1 Verification: 1 = 36,000 constraints
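
For reference, a minimal sketch of what this single verification could look like in Noir, assuming the std::ecdsa_secp256k1::verify_signature standard-library function; the function and parameter names here are illustrative, not the proposal's actual inputs:

```
use dep::std;

// Minimal sketch: one ECDSA secp256k1 verification (~36,000 constraints).
// All names are illustrative only.
fn verify_ownership(
    pub_key_x: [u8; 32],
    pub_key_y: [u8; 32],
    signature: [u8; 64],
    hashed_message: [u8; 32],
) {
    let valid = std::ecdsa_secp256k1::verify_signature(pub_key_x, pub_key_y, signature, hashed_message);
    assert(valid);
}
```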

Constraint Count Benchmarks

WASM (i.e. what runs in a web browser) has a memory limit of 4096MB (1024MB for phones). Being mindful of circuit sizes helps explore the maximum circuit sizes that different proving backends can support in WASM, and informs circuit design with quick assurance of which circuit sizes are provable in a browser context.

Note: Proving circuits using Nargo compiled with the native C++ barretenberg backend is not restricted by this memory limit, as no WASM is involved.

Maximum Constraint Count

To explore the constraint count ceiling of different proving-backend-based WASMs, here is a benchmarking circuit that was run across them:

use dep::std;

global N: Field = 30; // adjust to benchmark

fn main() -> pub [u8; 32] {
    let x: [u8; 3] = [123, 45, 67]; // arbitrary input bytes
    let mut hash = std::hash::sha256(x);
    for _i in 0..N {
        hash = std::hash::sha256(hash);
    }
    hash
}

TurboPlonk

With the WASM-based v0.3.2 Nargo binary on an M2 MacBook Air:

  • N = 17 (constraint count 254,263) proves
  • N = 18 (constraint count 268,403) crashed

Seemingly TurboPlonk WASM's constraint count ceiling is 2^18 (262,144): N = 17 (254,263 constraints) fits under it, while N = 18 (268,403) exceeds it.

For the record, the crashing error was:

The application panicked (crashed).
Message:  called `Result::unwrap()` on an `Err` value: RuntimeError { source: Trap(HeapAccessOutOfBounds), wasm_trace: [],  native_trace:    0: <unknown>
   1: <unknown>
   2: <unknown>
   3: <unknown>
 }
Location: /Users/runner/.cargo/git/checkouts/aztec_backend-a697fb631cbad807/2cb523d/barretenberg_wasm/src/lib.rs:93

UltraPlonk

With a local test build of the WASM-based Nargo binary:

  • N = 17 (constraint count 76,325) proves
  • N = 32 (constraint count 130,575) proves
  • N = 33 (constraint count 134,450) crashed

UltraPlonk gives a smaller constraint count per hash (roughly 3,900 constraints per SHA256 versus ~14,100 in TurboPlonk, judging from the deltas between consecutive N above), thus letting us prove more SHA256 hashes in WASM, but the amount of memory used per constraint looks to be around 2x.

Seemingly UltraPlonk WASM's constraint count ceiling is 2^17 (131,072), which should roughly translate to a maximum of 7 Keccak256 hashes (131,072 / 18,000 ≈ 7.3).

UPDATE:

An update on the 512k gate stuff. I've done a load of work to store polynomial memory in node space rather than wasm space, in exchange for performance. We can now build a 512k turbo proof in wasm. We can build a 256k ultra proof in wasm. But there seems to be some bug that specifically affects the 512k/wasm/ultra combination. We can build the proof now (no running out of mem), but the proof doesn't pass. I may dig more into this if compelled, but I've also updated Suyash in detail so he can investigate this bug.

512k ultra wasm done. That means the Nouns DAO thingy is actually feasible. It's a slow PoC and will need quite a bit of cleanup to actually make its way to master.

[WIP] Ethereum Storage Proof

To facilitate UX optimisations, here is a benchmarking circuit that mimics the Ethereum Storage Proof needs for Nouns DAO's initiative:

use dep::std;

global N: Field = 105;

fn main() -> pub [u8; 32] {
    let x: [u8; 3] = [123, 45, 67]; // arbitrary input bytes
    let mut hash = std::hash::keccak256(x);
    for _i in 0..N {
        hash = std::hash::keccak256(hash);
    }
    hash
    
    // TODO: ECDSA
    
    // TODO: Merkle tree traverse
}
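
As a starting point for the Merkle traversal TODO above, here is a minimal sketch assuming a simplified binary Keccak256 tree of depth 7; a real Merkle-Patricia Trie proof would additionally need RLP decoding of each node and nibble/direction handling. The function and parameter names are illustrative only, and the single-argument std::hash::keccak256 follows the usage above:

```
use dep::std;

global DEPTH: Field = 7;

// Minimal sketch: recompute a root from a leaf and its sibling hashes by
// hashing upward one layer at a time, then constrain the result to equal
// the claimed root.
fn verify_merkle_path(leaf: [u8; 32], siblings: [[u8; 32]; 7], root: [u8; 32]) {
    let mut node = leaf;
    for i in 0..DEPTH {
        // Concatenate the current node with its sibling (fixed order here;
        // a real path would also need direction bits to order each pair).
        let mut pair: [u8; 64] = [0; 64];
        for j in 0..32 {
            pair[j] = node[j];
            pair[32 + j] = siblings[i][j];
        }
        node = std::hash::keccak256(pair);
    }
    // The recomputed root must match the claimed root.
    for j in 0..32 {
        assert(node[j] == root[j]);
    }
}
```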

  1. Nouns NFT contract analysis and Ethereum mainnet - MPT analysis ↩︎