---
tags: PSE
---
# ZKML Research Initiatives
## ZK-friendly ML paradigms
### Background
For ZK Hack Lisbon, I teamed up with a few ML and cryptography veterans to build [Zero Gravity](/nCoxJCMlTqOr41_r1W4S9g), where we proposed a novel approach to efficiently prove the correctness of machine learning inference using Weightless Neural Networks (WNNs), which are combinatorial and do not rely on floating-point arithmetic. WNNs learn from input-output pairs using RAM cells as lookup tables, making them more amenable to zero-knowledge proofs (ZKPs). The project uses Bloom filters to make the RAM cells scalable and introduces a new non-cryptographic hash function, the "MishMash", to improve performance.
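A minimal sketch of the WiSARD-style idea, with Bloom filters standing in for the RAM cells; the `BloomRAM` class, tuple size, and SHA-256 probes are illustrative assumptions (Zero Gravity's actual construction uses the "MishMash" hash):

```python
import hashlib

class BloomRAM:
    """Bloom-filter stand-in for an exponentially large RAM cell:
    m bits, probed at k pseudo-random positions per address."""
    def __init__(self, m=64, k=2):
        self.bits = [0] * m
        self.m, self.k = m, k

    def _probes(self, addr):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{addr}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def train(self, addr):
        # mark the address seen during training
        for pos in self._probes(addr):
            self.bits[pos] = 1

    def query(self, addr):
        # membership test; may false-positive, never false-negative
        return all(self.bits[pos] for pos in self._probes(addr))

def tuples(bits, size=4):
    # split the binarized input into fixed-size tuples, one per RAM cell
    return [tuple(bits[i:i + size]) for i in range(0, len(bits), size)]

# one discriminator per class: each tuple addresses its own cell, and
# the class score is the number of cells that recognize their tuple
sample = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rams = [BloomRAM() for _ in tuples(sample)]
for ram, t in zip(rams, tuples(sample)):
    ram.train(t)
score = sum(ram.query(t) for ram, t in zip(rams, tuples(sample)))
print(score)  # 4: a trained sample fires all of its cells
```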
### Research Directions/Tasks
1. Implement the Zero Gravity paradigm in Plonky2/Halo2
2. Identify and explore other high-accuracy weightless models
In particular, evaluate their "ZK-friendliness", i.e. their compatibility with ZKPs, taking into account their computational complexity and performance on real-world datasets.
e.g. [boolean circuits](https://twitter.com/cronokirby/status/1642963522924711936), [binarized neural networks](https://arxiv.org/abs/1602.02830), [truth table net](https://arxiv.org/abs/2208.08609), ... (a minimal BNN sketch follows this list)
3. Benchmark (see the Benchmarking section below)
4. Look into other ZK-friendly ML frameworks
e.g. [tinygrad](https://github.com/geohot/tinygrad), ...
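To illustrate why the binarized family in item 2 is a candidate: a binarized layer is a dot product over {-1, +1}, which reduces to XNOR-and-popcount over bits, i.e. pure integer/boolean arithmetic with no floating point. A minimal sketch, with illustrative layer shape and sign binarization:

```python
import numpy as np

def binarize(v):
    # deterministic sign binarization to {-1, +1}; encoded as {0, 1}
    # bits, the products below become XNORs and the sum a popcount
    return np.where(v >= 0, 1, -1)

rng = np.random.default_rng(0)
W = binarize(rng.normal(size=(8, 16)))  # binarized layer weights
x = binarize(rng.normal(size=16))       # binarized activations

y = binarize(W @ x)  # integer-only layer: a natural fit for circuits
print(y)
```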
### Challenges
* Neural networks are still the norm for ML practitioners. Finding high-accuracy ZK-friendly models could be difficult.
* Implementing each of the models in ZKP (presumably in halo2) could be time-consuming and involve a steep learning curve for me.
* A lot of time/computational resources might be needed for training our own models using these paradigms.
## Folding using Nova/Sangria
### Background
[Nova](https://eprint.iacr.org/2021/370) is a folding scheme that merges two instances of a relaxed Rank-1 Constraint System (R1CS) into a single instance. Folding the two instances reduces the verifier's work and communication compared to checking them separately, which makes Nova a key ingredient in efficient Incrementally Verifiable Computation (IVC) schemes that allow lightweight verification of complex computations. [Sangria](https://geometry.xyz/notebook/sangria-a-folding-scheme-for-plonk) is the equivalent folding scheme for PLONKish systems.
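A minimal numeric sketch of the folding algebra over a toy prime field; the matrices, sizes, and modulus are illustrative, and the commitments to W, E, and the cross term T that real Nova sends are omitted:

```python
import numpy as np

P = 101  # toy prime; real systems use ~256-bit fields
rng = np.random.default_rng(0)
m, n = 4, 6
A, B, C = (rng.integers(0, P, size=(m, n)) for _ in range(3))

def satisfies(z, u, E):
    # relaxed R1CS: (Az) o (Bz) = u * (Cz) + E over F_P (o = Hadamard)
    return np.array_equal((A @ z) * (B @ z) % P, (u * (C @ z) + E) % P)

def make_relaxed(z, u):
    # any (z, u) satisfies some relaxed instance: solve for the slack E
    return z, u, ((A @ z) * (B @ z) - u * (C @ z)) % P

z1, u1, E1 = make_relaxed(rng.integers(0, P, size=n), 1)
z2, u2, E2 = make_relaxed(rng.integers(0, P, size=n), int(rng.integers(1, P)))

# prover sends (a commitment to) the cross term T, verifier samples r
T = ((A @ z1) * (B @ z2) + (A @ z2) * (B @ z1)
     - u1 * (C @ z2) - u2 * (C @ z1)) % P
r = int(rng.integers(1, P))

# folding: z and u fold linearly in r; E absorbs the cross term
z = (z1 + r * z2) % P
u = (u1 + r * u2) % P
E = (E1 + r * T + r * r * E2) % P

assert satisfies(z, u, E)  # one instance now stands in for both
```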
### Research Directions/Tasks
1. Model folding using Nova/Sangria
Extend [zator](https://github.com/lyronctk/zator)
2. Batch inference using Nova/Sangria
Model folding requires modifying or designing the ML model architecture as repeated blocks of the exact same structure, which could be impractical for ML practitioners. Batch inference, i.e. applying the same model/circuit over a dataset of many samples, is a much lower entry point for applying folding.
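A rough sketch of the accumulation structure this implies, assuming each step hashes one (input, output) pair into the running IVC state; `model` and the SHA-256 accumulator are stand-ins (a real Nova step circuit would use a circuit-friendly hash such as Poseidon):

```python
import hashlib

def model(x):
    # stand-in for the fixed inference circuit applied to every sample
    return sum(x) % 97

def step(state, sample):
    # one folding step: z_{i+1} = H(z_i, sample, model(sample)); the
    # same step circuit is folded once per sample in the dataset
    y = model(sample)
    return hashlib.sha256(state + repr((sample, y)).encode()).digest()

state = b"\x00" * 32  # z_0: initial accumulator
dataset = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
for sample in dataset:
    state = step(state, sample)

# a single folded proof over the final state then attests that the same
# model was run on every sample, with all I/O pairs bound into `state`
print(state.hex())
```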
### Challenges
* Folding circom circuits using Nova has been done in [Nova-Scotia](https://github.com/nalinbhardwaj/Nova-Scotia). We would need to design the batch inference such that the inputs/outputs are accumulated at each step, following the IVC construction.
* There is no open-source implementation of folding using Sangria at the moment, although we have heard that a group from 0xPARC has built an implementation for halo2.
## Benchmarking
### Background
Current benchmarking of ZKML frameworks focuses on efficiency (prover time, verifier time, memory requirements, etc.). The effect of quantization via e.g. [keras2circom](https://github.com/socathie/keras2circom) or [ezkl](https://github.com/zkonduit/ezkl) on model accuracy has not been thoroughly studied.
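A minimal sketch of the measurement this calls for, assuming plain fixed-point quantization; the scale factor and single layer are illustrative, and keras2circom/ezkl each have their own quantization schemes:

```python
import numpy as np

SCALE = 2 ** 8  # illustrative fixed-point scale factor

def quantize(v):
    return np.round(v * SCALE).astype(np.int64)

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))  # one toy dense layer
x = rng.normal(size=16)

y_float = x @ W
# integer matmul as a circuit would compute it, then dequantized
y_quant = (quantize(x) @ quantize(W)) / SCALE ** 2

print(np.max(np.abs(y_float - y_quant)))  # per-output quantization error
```

An accuracy benchmark would propagate this through a full model and compare end-to-end test-set accuracy, rather than raw output error.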
### Research Directions/Tasks
1. Benchmarking different ZKML architectures
Similar to [what Modulus Labs has done](https://drive.google.com/file/d/1tylpowpaqcOhKQtYolPlqvx6R2Gv4IzE/view), but include folding and recursion paradigms as well.
2. Benchmarking ZK-friendly ML paradigms (see above)
Similar to [what Modulus Labs has done](https://drive.google.com/file/d/1tylpowpaqcOhKQtYolPlqvx6R2Gv4IzE/view), but instead of just measuring CNNs with different numbers of layers or parameters, measure across different model architectures. In particular, focus on model-accuracy tradeoffs.
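A hypothetical harness shape for collecting comparable numbers per (paradigm, model) pair; `prove`, `verify`, and the accuracy argument are placeholders for each framework's own entry points:

```python
import time

def benchmark(paradigm, prove, verify, accuracy):
    # prove/verify are framework-specific callables; proof is opaque bytes
    t0 = time.perf_counter(); proof = prove()
    t1 = time.perf_counter(); ok = verify(proof)
    t2 = time.perf_counter()
    # peak prover memory would need external measurement, e.g. /usr/bin/time
    return {
        "paradigm": paradigm,
        "prover_s": t1 - t0,
        "verifier_s": t2 - t1,
        "proof_bytes": len(proof),
        "accuracy": accuracy,  # measured on the same test set for all rows
        "valid": ok,
    }
```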
### Challenges
* Prior to benchmarking, a lot of time/computational resources might be needed for training our own models across these paradigms.
## Model secrecy
### Research Directions/Tasks
1. Functional Commitment
The majority of current ZKML inference implementations require the circuit, and hence the model architecture, to be public in order to prove that the computation was done using the corresponding circuit. [Functional commitments](https://eprint.iacr.org/2021/1342.pdf) are a cryptographic technique that enables one party to commit to a private function and prove its application without revealing any other information about the function. For ZKML specifically, they enable the prover to demonstrate the use of a committed model, but they do not provide any assurance regarding which specific model has been committed. (A toy commit-then-prove sketch follows this list.)
2. MPC (more for federated learning)
A fellow PSE teammate, Takamichi, has proposed a scheme for [2-party private binding computation using garbled circuits](/XjgfniHATtyQKfqTjmzDAA). (See the garbled-gate sketch after this list.)
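For item 1, a toy sketch of the commit-then-prove shape; the hash commitment is a stand-in (real functional commitments use homomorphic or polynomial commitments), and the ZK circuit enforcing both conditions is not shown:

```python
import hashlib

def commit(weights, blinding):
    # toy binding-and-hiding commitment to the model parameters
    return hashlib.sha256(repr(weights).encode() + blinding).hexdigest()

weights = [3, 1, 4, 1, 5]         # private model
blinding = b"secret-randomness"
cm = commit(weights, blinding)    # published once

# per inference, the prover shows in ZK (circuit not shown):
#   "I know (weights, blinding) with commit(weights, blinding) == cm,
#    and y == f(weights, x)"
# so only (cm, x, y) are revealed; the weights stay private, though
# nothing attests WHICH model was committed, as the caveat above notes
```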
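For item 2, a single garbled AND gate as a rough sketch of the garbled-circuit building block; labels and hashing are illustrative, and real schemes permute the rows and use point-and-permute so the evaluator decrypts exactly one row without learning the underlying values:

```python
import os, hashlib

def H(a, b):
    return hashlib.sha256(a + b).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# two random labels per wire, one for each possible bit value
labels = {w: (os.urandom(32), os.urandom(32)) for w in ("a", "b", "out")}

# garbler encrypts each output label under the matching input-label pair
table = [xor(H(labels["a"][va], labels["b"][vb]), labels["out"][va & vb])
         for va in (0, 1) for vb in (0, 1)]

# evaluator holds exactly one label per input wire (its own input's label
# obtained via oblivious transfer) and recovers one output label
va, vb = 1, 0                      # secret inputs, shown only for the demo
out = xor(H(labels["a"][va], labels["b"][vb]), table[va * 2 + vb])
assert out == labels["out"][va & vb]
```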
### Challenges
* Not much research has been done on implementing efficient functional commitments.
* For n-party computation with n > 2, the efficiency of MPC degrades dramatically as n increases.
## Acknowledgement
Thanks Barry, [Nalin](https://twitter.com/nibnalin), [Rishabh](https://twitter.com/rishotics), and many others for comments and feedback.