# Are Verkle proofs ZK-friendly?
As far as I know, there are no existing implementations of ZKPs to check Verkle proofs. Here I'll try to roughly sketch what the performance of such ZKPs could look like.
Verkle proofs can be viewed as batch opening arguments, concretely for a Pedersen-style commitment scheme. When verifying a Verkle proof, most of the verifier work is in the batch reduction. This involves combining some (additively homomorphic) commitments and claimed evaluations, both using the same random scalars. See e.g. [Dankrad's note](https://dankradfeist.de/ethereum/2021/06/18/pcs-multiproofs.html).
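To make that concrete, here's a minimal Python sketch of the folding step, with integers mod a prime standing in for Bandersnatch points and powers of a single challenge `r` standing in for whatever scalars the actual protocol derives; the real reduction differs in details:

```python
# Minimal sketch of the batch reduction step. Stand-in arithmetic:
# integers mod a prime replace real Bandersnatch points, and powers of
# one challenge r replace the protocol's actual random scalars.

Q = 2**255 - 19  # stand-in modulus, NOT Bandersnatch's actual group order

def fold(commitments, evaluations, r):
    """Combine k commitments and k claimed evaluations using the same
    random scalars 1, r, r^2, ..., r^(k-1)."""
    c, y, r_i = 0, 0, 1
    for c_i, y_i in zip(commitments, evaluations):
        c = (c + r_i * c_i) % Q  # homomorphic combination of commitments
        y = (y + r_i * y_i) % Q  # same combination of claimed evaluations
        r_i = (r_i * r) % Q
    return c, y  # one combined commitment/evaluation claim left to open
```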
## Curve choice
Verkle trees are proposed to use the [Bandersnatch curve](https://eprint.iacr.org/2021/1152). Like JubJub, it's defined over BLS12-381's scalar field.
There are some good reasons for the choice of BLS12-381. It's already part of the Ethereum 2.0 design and will have native EVM support with EIP-2537. It's also pairing-friendly, and is estimated to offer 117–120 bits of security (per [NCC](https://research.nccgroup.com/wp-content/uploads/2020/07/NCC_Group_Zcash2018_Public_Report_2019-01-30_v1.3.pdf)), favoring it over BN-254, which offers roughly 100 bits.
There are some downsides, though, which we'll touch on later.
## Multi-scalar multiplication
Since we're folding `k` Pedersen-style commitments, we end up doing one multi-scalar multiplication (MSM) with `k` terms. If we're using an elliptic curve based SNARK with BLS12-381, this MSM will involve "native" arithmetic, since Bandersnatch's base field is exactly the SNARK's native field (BLS12-381's scalar field).
A very naive double-and-add implementation of this MSM would involve `256 k` curve adds and `256 k` doubles. With simultaneous doubling, the doublings are shared across all `k` scalar multiplications, so their amortized cost vanishes and we're essentially left with `256 k` adds.
With 4-bit windowed multiplication, this drops to `(256/4 + 2^4) k = 80 k` adds. I'm sure other improvements are possible, although certain algorithms like Yao's or Pippenger's might not translate nicely to ZK circuits.
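To sanity-check those counts, here's the arithmetic as a tiny Python model, under the simplifying assumption that the shared doublings are free:

```python
# Back-of-envelope add counts for an in-circuit MSM over k terms with
# 256-bit scalars, assuming doublings are shared ("simultaneous") across
# all k terms and hence amortize to ~nothing.

SCALAR_BITS = 256

def naive_adds(k):
    # Double-and-add: one conditional add per scalar bit, per term.
    return SCALAR_BITS * k

def windowed_adds(k, w=4):
    # Per term: 2^w adds to precompute a table of small multiples,
    # then one table add per w-bit window.
    return (SCALAR_BITS // w + 2**w) * k

assert naive_adds(1) == 256
assert windowed_adds(1) == 80  # the "80 k" figure above
```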
## Non-native arithmetic?
The verifier also does `O(k)` arithmetic in Bandersnatch's *scalar field*, e.g. when folding claimed evaluations. The exact amount of arithmetic in this field will depend on the particular batch opening protocol. (The reduction Dankrad described isn't the only option; see e.g. Izaak Meckler's [note](https://hackmd.io/f6ShwcCLTfmClTmmmMUjuA?view).)
It seems like this will need to be simulated using bignum arithmetic techniques, such as the [casting](https://eprint.iacr.org/2022/1470) method.
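To give a flavor of what that emulation involves: typically the prover supplies a quotient `q`, and the circuit checks the integer identity `x * y = q * p + z` limb by limb, with range checks keeping the carries honest. Here's a rough sketch of the witness side (the casting method itself is more refined, and the prime below is a stand-in):

```python
# Rough sketch of non-native ("bignum") multiplication. The circuit can't
# reduce mod p when p isn't the native modulus, so the prover supplies a
# quotient q and the circuit enforces x * y == q * p + z over the
# integers, limb by limb. Here we only show the witness generation.

LIMB_BITS = 64  # limbs must fit comfortably in the native field

def to_limbs(v, n):
    """Split v into n little-endian LIMB_BITS-bit limbs."""
    return [(v >> (LIMB_BITS * i)) & ((1 << LIMB_BITS) - 1) for i in range(n)]

def nonnative_mul_witness(x, y, p):
    """Prover side: z = x * y mod p, plus the quotient q the circuit needs."""
    q, z = divmod(x * y, p)
    return q, z

p = (1 << 255) - 19          # stand-in prime, NOT the real scalar field
x, y = 3**100, 5**80
q, z = nonnative_mul_witness(x, y, p)
assert x * y == q * p + z    # the relation the circuit would enforce
```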
This is reminiscent of obstacles that arise in cycle-of-curves based ZKPs, such as Plonky1 or Pickles. In that setting, we have the option of deferring some non-native arithmetic to the next proof in the cycle, but the deferral mechanism itself ends up requiring some non-native arithmetic.
Some workarounds exist for avoiding arithmetic in the scalar field of a curve whose base field is native. For example, `x + y = z` can be checked as `[x] G + [y] G = [z] G`, while `x * y = z` can be checked as `[x] [y] G = [z] G`. Some Halo implementations use tricks like these to avoid non-native arithmetic altogether. However, the curve multiplications they add are also costly.
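Here's a toy illustration of the trick, with a multiplicative group mod a small prime standing in for the curve, so `[x] G` becomes `pow(g, x, p)`:

```python
# Toy illustration of checking scalar-field relations "in the exponent",
# with a multiplicative group mod a small prime in place of the curve.
# Group operations replace non-native field operations.

p = 2**13 - 1      # toy prime; the real setting is Bandersnatch over BLS12-381
g = 3              # toy generator

def commit(x):
    return pow(g, x, p)    # stands in for [x] G

x, y = 1234, 567
# Addition: [x] G + [y] G == [x + y] G
assert commit(x) * commit(y) % p == commit(x + y)
# Multiplication: [y] ([x] G) == [x * y] G
assert pow(commit(x), y, p) == commit(x * y)
# In the real setting, these exponentiations are the costly curve muls.
```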
`O(k)` non-native field operations isn't a ton, though, so this might not be a huge concern. The MSM might still be the larger circuit bottleneck, depending on implementation details.
## Field size
For a full zkEVM, prover speed is paramount. A lot of the fastest provers are based on small fields, such as Goldilocks (2<sup>64</sup> - 2<sup>32</sup> + 1), BabyBear (2<sup>31</sup> - 2<sup>27</sup> + 1), or a Mersenne field (2<sup>31</sup> - 1 or 2<sup>127</sup> - 1).
Curves like Bandersnatch don't play well with these small-field provers. It's possible to find curves that do, such as [EcGFp5](https://eprint.iacr.org/2022/274), but Bandersnatch has no such structure.
This is not to say that elliptic curve based provers can't be competitive. [Barretenberg](https://github.com/AztecProtocol/barretenberg) is competitive with Plonky2, for example, while [Jolt](https://github.com/a16z/jolt/) is competitive with Risc0 and SP1. Both use ~256-bit elliptic curves, though, not ~384-bit curves like BLS12-381, which are quite a bit slower.
### Hybrid implementations
What we could do is use a BLS12-381 based proof to check Verkle proofs, and a proof over a smaller field for other zkEVM logic. However, such a hybrid approach would add a lot of complexity.
We also don't know of a great way for two SNARKs using unrelated fields to communicate with one another, other than using a "neutral" hash. We could use an arithmetic hash, but we'd need to pick one field or the other, leading to non-native arithmetic on one side.
Nonetheless, this might be the best option for a zkEVM that supports Verkle tries.
## Comparison to binary tries
How does this compare to binary trie proofs, as we would get with something like EIP-3102 + EIP-2926?
Let's assume the average trie leaf has a 40-bit prefix, beyond which its key is unique. Then checking a binary Merkle proof would simply involve 40 evaluations of a 2-to-1 compression function.
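The corresponding circuit is just a standard Merkle path check, roughly 40 compressions deep. A minimal sketch, using blake2b to match EIP-3102 and a hypothetical `(sibling, is_right)` path encoding:

```python
# Minimal binary Merkle path check: one 2-to-1 compression per level.
import hashlib

def compress(left: bytes, right: bytes) -> bytes:
    return hashlib.blake2b(left + right, digest_size=32).digest()

def verify_path(leaf: bytes, path, root: bytes) -> bool:
    """path: list of (sibling, is_right) pairs, leaf to root;
    is_right is True when the sibling is the right child."""
    node = leaf
    for sibling, is_right in path:
        node = compress(node, sibling) if is_right else compress(sibling, node)
    return node == root  # ~40 compress calls for a 40-bit unique prefix
```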
Verkle trees are proposed to have arity 256, so each step consumes 8 bits of the key, and the average Verkle proof would have `40 / 8 = 5` steps. Using the `80 k` estimate from the MSM section, that's about `80 * 5 = 400` curve adds, plus some non-native arithmetic.
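Collecting both estimates (raw operation counts only; one curve add and one compression have very different in-circuit footprints):

```python
# Side-by-side operation counts under the assumptions above.
PREFIX_BITS = 40

merkle_hashes = PREFIX_BITS        # binary trie: 1 compression per bit
verkle_levels = PREFIX_BITS // 8   # arity 256 => 8 key bits per level
verkle_adds = 80 * verkle_levels   # 80 adds per MSM term, from above

assert (merkle_hashes, verkle_levels, verkle_adds) == (40, 5, 400)
```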
Conventional hashes like blake2b are notoriously expensive in classic ZK schemes. Thus at first glance, it seems like the mostly-native field operations involved in checking Verkle proofs would be cheaper. However, there are several factors which complicate the comparison, like the high cost of BLS12-381 arithmetic, and potentially the cost of "cross-field" communication.
Another factor is the recent progress toward making conventional hashes more ZK-friendly. This is happening in a few different ways:
- Modern lookup arguments, such as logUp or cq.
- The Binius line of work, which could greatly improve the efficiency of bit-oriented hashes.
- Clever arithmetizations, like Risc0's SHA-2 circuit, Plonky3's Keccak-f AIR, or STWO's GKR-assisted blake2 implementation.
STWO's blake2 implementation is particularly relevant, since that's the hash EIP-3102 proposed. It's not yet public, but is reportedly able to prove an impressive 20,000 hashes per second on a desktop CPU (see [Shahar's talk](https://www.youtube.com/watch?v=_eha0QqAbIA)).
## Conclusion
The claim that Verkle trees are ZK-friendly certainly has some merit, but ZKPs for Merkle proofs can also be quite efficient if certain modern techniques are employed. It would be very difficult to say which trie format is the most ZK-friendly, as it depends on a bunch of implementation details.
Trie format decisions should probably be driven by other factors, such as complexity, CPU and I/O efficiency, or latency requirements for stateless blocks.