# ZKP Verification Outlook
Thanks for these great questions from [Maven 11](https://twitter.com/Maven11Capital).
Data Source:
- [ZKP Verification Dashboard](https://dune.com/nebra/zkp-verify-spending)
## Q1: Projection on the shape of future verification demand
> Projection on the shape of future verification demand:
> - l2 verification vs l1
> - rollup vs app.
> How do you think numbers will look? your other partnerships and where is a likely volume in verification coming from on l1 versus l2?
I think L2 verification will be higher by count; however, total spending on verification on L1 will still be higher in the near future (though this could change). [source](https://dune.com/nebra/zkp-verify-spending)
Below is a rough breakdown of gas fees (Groth16):
- Ethereum: 20 USD
- Optimism: 2 USD
Note that an L2's gas cost comes mostly from calldata; however, the computation cost on L2 is not "free", it is normally 10x - 100x cheaper than on L1. The compute vs calldata ratio will also shift somewhat once proto-danksharding goes live with the Dencun hard fork. For ZKP applications with massive adoption, even 2 USD per verification is too high! We can offer much cheaper verification by using proof aggregation to lower the computation cost and by using alt-DA such as Celestia/Avail/EigenDA to lower the calldata cost.
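The economics above can be sketched with a back-of-envelope model: one on-chain verification is shared by a whole batch, so the fixed verification gas amortizes across proofs while only a small per-proof calldata cost remains. All constants below are illustrative assumptions, not NEBRA's actual figures.

```python
# Hypothetical amortized-cost model for proof aggregation.
# The gas numbers are assumptions for illustration only.

AGG_VERIFY_GAS = 350_000        # assumed gas to verify one aggregated proof on L1
PER_PROOF_CALLDATA_GAS = 5_000  # assumed calldata gas per proof's public inputs

def amortized_gas(batch_size: int) -> float:
    """Gas per proof when `batch_size` proofs share one on-chain verification."""
    return AGG_VERIFY_GAS / batch_size + PER_PROOF_CALLDATA_GAS

# Per-proof gas falls quickly as the batch grows:
for n in (1, 8, 32, 128):
    print(n, round(amortized_gas(n)))
```

The key design observation is that the fixed verification term shrinks as `1/batch_size`, so beyond a moderate batch size the calldata term dominates, which is exactly what alt-DA is meant to attack.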
## Q2: Competitive Landscape
> Other players trying to do this within their stack, e.g. aggregation on Bonsai/prover networks or hardware providers building similar proof aggregation functionality (possibly per application/customer). Whats your view here?
Known proof aggregation protocols by established players:
- Risc0 Bonsai
- Starknet SHARP
- Polygon LXLY
In principle, any project that generates a large volume of proofs could do proof aggregation itself if no proof aggregation protocol or external service were available. However, this depends on the project's technical capabilities and on whether in-house proof aggregation would bring enough "economy of scale" to compensate for the expensive ZKP development cost. (Note: rising IT staff cost is one of the primary reasons people switch to AWS.)
Using NEBRA would have the following advantages:
a. **Technically,** NEBRA's proof aggregation is designed from first principles, i.e. native circuits, number-theory-based non-native field simulation algorithms, and cyclic curves (V2). Compared with proof aggregation using a VM (Bonsai, SHARP), NEBRA is expected to have a 1-2 orders of magnitude efficiency advantage in terms of proving cost. It is unclear how LXLY will be implemented; from my conversation with the Polygon team, NEBRA is about 3 months ahead of Polygon's proof aggregation engine in development.
The other difference is universality: Bonsai is for RISC0 proofs and SHARP is for CAIRO proofs. While both RISC0 and CAIRO are technically Turing complete, in the sense that you could code a recursive verifier in these languages, doing so incurs roughly 4 orders of magnitude in performance penalties. See [Wei Dai's talk](https://docs.google.com/presentation/d/15X1fsVMwTNrXYihVLoskHV5BgfMlpFg7VKjxBUBVZTc/edit#slide=id.g1cec9b0acb0_0_130) at https://proofsingularity.xyz/
b. **Functionally,** NEBRA offers the following capabilities that none of the above systems currently offer:
- all of the schemes above require that proof generation and proof aggregation happen within **the same party**. This is simply not the case for many privacy-preserving primitives such as private DeFi, zkDID, and other ZKP applications that require privacy.
- all of the proof aggregation schemes mentioned above focus on proofs inside their own ecosystems. However, if you want to aggregate proofs across different ecosystems with external composability, i.e. create bindings of verifiable statements between proof sources across ecosystems, NEBRA's universal proof aggregation protocol is the only choice.
c. **Economically,** because of the advantages mentioned in (b), NEBRA has an economy of scale that the other schemes cannot offer.
## Q3: Projection on Gas Cost Savings
> Aggregation of ZKPs almost seem like a similar eye opener to scalable DA for input data like DAS. Would you say the gas cost savings are comparable almost?

Currently, ZKP verification spending is about 30 million USD a year on Ethereum and its L2s; compared with total Ethereum gas fees (~900 million USD), that is about 3%. However, we do project this number will go up since:
- there will be more computation settled in the form of proofs
- there will be more privacy-preserving applications growing on Ethereum and its L2s. Currently there is a very high technical entry barrier, so only sophisticated teams like Nocturne/Worldcoin can deploy privacy-preserving applications on the EVM
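The ~3% share quoted above follows directly from the two annual spend figures in the text; reproducing the arithmetic makes the baseline explicit before projecting growth.

```python
# Both annual figures come from the text above (Dune dashboard estimates).
zkp_verification_spend = 30e6   # USD/year, ZKP verification on Ethereum + L2s
total_ethereum_gas = 900e6      # USD/year, total Ethereum gas fees

share = zkp_verification_spend / total_ethereum_gas
print(f"{share:.1%}")  # → 3.3%
```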
One thing that differs from DA is that proofs must ultimately be settled on Ethereum. People will be less likely to accept proof settlement on alternative L1s or consensus layers, compared with DA provided by alternative L1s such as Celestia.
## Q4: Latency
> proof aggregation vs latency and how this relate to different usecases
Our current proof aggregation latency is about 60 seconds under the following configuration:
- 32 Groth16 Proofs per Batch
- maximum 8 Fr public inputs
- GPU servers with the following config ([NEBRA's hardware guide](https://hackmd.io/@nebra-one/rkLQt4LS6))
- 1x RTX 4090
- Intel Core i9-13900KF
- 128 GB RAM
Adding 60 seconds of end-to-end latency to applications on Ethereum L1 is quite reasonable, since transactions submitted to Ethereum need to sit in the mempool and wait for a sufficient number of block confirmations in case of a reorg.

Despite adding latency, NEBRA's backend infra is fully **event driven and pipelined**. NEBRA can keep up with Ethereum's block production by adding a single aggregated proof per block.
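The distinction between latency and throughput in a pipelined aggregator can be sketched as follows. The pipeline depth and stage duration below are illustrative assumptions, not NEBRA's actual architecture; the point is only that end-to-end latency stays near 60 s while a new aggregated proof can still complete every block interval.

```python
# Toy model of a pipelined aggregator: latency is the sum of all stages,
# throughput is set by the slowest single stage.
# Stage count and durations are illustrative assumptions.

BLOCK_TIME_S = 12    # Ethereum slot time
STAGES = 5           # assumed pipeline depth (e.g. witness gen, proving, ...)
STAGE_TIME_S = 12    # each stage assumed sized to fit one block interval

latency_s = STAGES * STAGE_TIME_S              # one batch, end to end
keeps_up_with_blocks = STAGE_TIME_S <= BLOCK_TIME_S  # one proof per block?

print(latency_s)            # 60
print(keeps_up_with_blocks) # True
```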
For NEBRA's current potential users, such as zkDID and private DeFi, this latency is not an issue. However, there will be a new class of applications that requires much lower latency. NEBRA's second-generation proof aggregation will leverage:
- cyclic curves for reduced non-native arithmetic
- low overhead recursion (either small field STARK or Folding)
- improved hardware acceleration
As a result, we can expect roughly 10x prover speed improvement and thus much better latency.
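A 10x prover speedup does not translate into a full 10x latency reduction, because only the proving portion of the pipeline accelerates. The 80/20 split between proving time and fixed overhead below is an assumption for illustration.

```python
# Rough projection of V2 latency given a 10x prover speedup.
# The proving_fraction split is an assumed figure, not a measurement.

current_latency_s = 60
proving_fraction = 0.8   # assumed share of latency spent on proving
speedup = 10             # expected V2 prover speed improvement

v2_latency_s = current_latency_s * (proving_fraction / speedup
                                    + (1 - proving_fraction))
print(round(v2_latency_s, 1))  # → 16.8
```

Under these assumptions latency drops from ~60 s to under 20 s; the residual is dominated by the non-proving overhead, which is why the V2 work also targets recursion overhead and hardware acceleration.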
## Q5: onchain vs offchain verification
> generally how you sees usecases shape up that require on-chain verification as opposed to non-crypto usecases (and just verify on local server, fxs a government identity setup, would imagine you not going for these and focusing on crypto native usecases)
Most "crypto-native" applications still require onchain verification, including:
- settling scaled computation onchain
- privacy primitives
- verifiable computation based oracles
We do see an increasing number of non-crypto use cases; for example, Cloudflare uses zero-knowledge proofs for private web attestation [[source](https://blog.cloudflare.com/introducing-zero-knowledge-proofs-for-private-web-attestation-with-cross-multi-vendor-hardware/)]. We are very happy to partner with non-crypto users and export our technology to them (under a commercial or non-commercial agreement); however, this will not be the focus of the team.