# Circom-MPC
Circom-MPC is a PSE Research project that enables the use of the Circom language to develop MPC applications. In this project, we envisioned MPC as a [broader paradigm](#MPC-as-a-Paradigm), where MPC serves as an umbrella for generic techniques such as Zero-Knowledge Proofs, Garbled Circuits, Secret Sharing, and Fully Homomorphic Encryption.
Throughout this research, the team produced several valuable resources and insights, including:
- Implementation of [circom-2-arithc](https://github.com/namnc/circom-2-arithc), a fork of the Circom compiler that targets arithmetic circuits, which can be fed into any MPC backend.
- Example integration of circom-2-arithc with the popular Secret-Sharing based backend MP-SPDZ in [circom-MP-SPDZ](https://github.com/namnc/circom-mp-spdz).
- A proof-of-concept [MPC-ML](https://hackmd.io/YsWhryEtQ0WwKyerSL8oCw#Circomlib-ML-Patches-and-Benchmarks) application built with [keras-2-circom-MP-SPDZ](https://github.com/namnc/circom-mp-spdz/blob/main/ML-TESTS.md), which extends keras-2-circom from ZK to [keras-2-circom-MPC](https://github.com/namnc/keras2circom).
- [Modular layer benchmarks](#Modular-Layer-Benchmark) for Keras models.
We decided to sunset the project for a few reasons:
- The overwhelming amount of effort required to implement it fully.
- Low current user traction (possibly due to Circom itself); a [TypeScript-MPC](https://github.com/voltrevo/mpc-framework) variant may therefore be of more public interest.
- The existence of competing efforts such as the integration of [Sharemind MPC into Carbyne Stack](https://cyber.ee/uploads/Sharemind_MPC_CS_integration_a01ca476a7.pdf).
Therefore, we leave it as a paradigm, and we hope that interested parties will pick it up and continue its development.
In what follows we explain:
- MPC as a Paradigm
- Our Circom-MPC framework
- Our patched Circomlib-ML and modular benchmarks, to give a taste of MPC-ML
# MPC as a Paradigm
Secure Multiparty Computation (MPC), as it is defined, allows mutually distrustful parties to jointly compute a functionality while keeping each participant's inputs private.

An MPC protocol can be either application-specific or generic:

While it is clear that Threshold Signatures exemplify application-specific MPC, generic MPC can be thought of as an efficient MPC protocol for a Virtual Machine (VM) functionality: the VM takes the joint function as a common program and the private inputs as parameters to that program, and the program executes securely within the VM.
*For readers who are familiar with Zero-Knowledge Proofs (ZKP): MPC is a generalization of ZKP, where a ZKP is a two-party MPC between a Prover and a Verifier in which only the Prover has a secret input, namely the witness.*

And yes, Fully Homomorphic Encryption (FHE) is among the techniques (alongside Garbled Circuits and Secret Sharing) that can be used to construct MPC in the most straightforward mental model.

# Programmable MPC
That said, MPC is not a single primitive but a [collection of techniques](https://mpc.cs.berkeley.edu/) aimed at achieving the above goal. Efficient MPC protocols exist for specific functionalities, ranging from simple statistical aggregation such as mean computation (for ads) and Private Set Intersection (PSI) to complex ones such as RAM (so-called [Oblivious-RAM](https://en.wikipedia.org/wiki/Oblivious_RAM)) and even Machine Learning (ML).

As each of the GC/SS/FHE techniques and each specialized MPC protocol has its own advantages, it is typical to combine them in a single privacy-preserving protocol for efficiency.

In what follows, we present work that enables the use of Circom as a front-end language for developing privacy-preserving systems, starting with the MP-SPDZ backend.

*[Detailed explanation of Programmable MPC with Circom-MPC.](https://docs.google.com/presentation/d/1dPvNyrBWyqyX2oTGcnM52ldpISGrhwEhIZXJPwYWE6I/edit#slide=id.g2818c557dad_0_261)*
# Circom-MPC
The Circom-MPC project aims to let a developer write a Circom program (a Circom circuit) and run it on an MPC backend.
## The workflow
- A Circom program (`prog.circom` and the included libraries such as circomlib or circomlib-ml) is interpreted as an arithmetic circuit (a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph) of nodes connected by wires, with an input layer and an output layer) using [circom-2-arithc](https://github.com/namnc/circom-2-arithc); see the toy example after this list.
- A transpiler/builder, given the arithmetic circuit and the native capabilities of the MPC backend, translates each gate into a set of native gates so that the arithmetic circuit can be run with the MPC backend.
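As a toy illustration (this example is ours, not from the repo), the following Circom program would be interpreted as a small DAG: two input wires feeding a multiplication node, whose result feeds an addition node connected to the output wire:

```circom
pragma circom 2.0.0;

// Toy circuit: circom-2-arithc interprets this as an arithmetic
// circuit with input wires a and b, a multiplication node, an
// addition node, and a single output wire.
template MulAdd() {
    signal input a;
    signal input b;
    signal output out;

    out <== a * b + a;
}

component main = MulAdd();
```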
## Circom-MP-SPDZ
[Circom-MP-SPDZ](https://github.com/namnc/circom-mp-spdz/) allows parties to perform Multi-Party Computation (MPC) by writing Circom code and running it with the MP-SPDZ framework. The Circom code is compiled into an arithmetic circuit and then translated gate by gate to the corresponding MP-SPDZ operators.
The Circom-MP-SPDZ workflow is described [here](https://hackmd.io/@mhchia/r17ibd1X0).
# Circomlib-ML Patches and Benchmarks
With MPC we can achieve privacy-preserving machine learning (PPML). This can be done easily by reusing the [circomlib-ml](https://github.com/socathie/circomlib-ml) stack with Circom-MPC. We demonstrated a PoC with [ml_tests](https://github.com/namnc/circom-mp-spdz/tree/main/ml_tests), a set of ML circuits forked from [circomlib-ml](https://github.com/socathie/circomlib-ml).
More info on the ML tests can be found [here](https://github.com/namnc/circom-mp-spdz/blob/main/ML-TESTS.md).
## Patches
### Basic Circom ops on circuit signals
Circom-2-arithc enables direct use of comparison and division operators on signals. Hence the original Circom templates for comparisons, and the division-to-multiplication trick, are no longer needed (see the sketch after this list), e.g.
- `GreaterThan` can be replaced with `>`
- `IsPositive` can be replaced with `> 0`
- $x = d \cdot q + r$ can be written as `q = x / d`
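A minimal sketch of what this looks like in practice, assuming circom-2-arithc semantics (the template and signal names are illustrative; standard Circom would reject such non-quadratic assignments):

```circom
pragma circom 2.0.0;

// Illustrative only: circom-2-arithc lowers these operators on
// signals to comparison/division gates for the MPC backend.
template DivAndCompare() {
    signal input x;
    signal input d;
    signal output q;
    signal output isPos;

    q <== x / d;      // replaces the x = d*q + r multiplication trick
    isPos <== x > 0;  // replaces the IsPositive template
}

component main = DivAndCompare();
```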
### Scaling, Descaling and Quantized Aware Computation
Circomlib-ML "scales" floats to integers to maintain precision, using a factor of $10^{18}$:
- for input $a$, weight $w$, and bias $b$, which are floats:
    - $a$ and $w$ are scaled to $a' = a \cdot 10^{18}$ and $w' = w \cdot 10^{18}$
    - $b$ is scaled to $b' = b \cdot 10^{36}$, because a layer computes expressions of the form $a \cdot w + b$ --> the outputs of this layer are scaled by $10^{36}$
- To proceed to the next layer, we have to "descale" the outputs of the current layer by (integer-)dividing them by $10^{18}$:
    - for an output $x$, we want to obtain $x'$ s.t. $x = x' \cdot 10^{18} + r$
    - so effectively $x'$ is our actual output
    - in ZK, $x'$ and $r$ are provided as witness
    - in MPC, $x'$ and $r$ have to be computed using division (expensive)
For efficiency we replace this type of scaling with bit shifting, i.e.
- instead of multiplying by $10^{18}$ (resp. $10^{36}$) we multiply by $2^s$ (resp. $2^{2s}$), where $s$ is called the scaling factor
- the scaling is done prior to the MPC
- $s$ can be set according to the bitwidth of the MPC protocol
- descaling is now simply truncation, or right-shifting, $x' = x \gg s$, which is a commonly supported and relatively cheap operation in MPC, as in the worked example below
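As a small worked example with illustrative numbers, take $s = 16$, $a = 1.5$, and $w = 0.25$: then $a' = 1.5 \cdot 2^{16} = 98304$ and $w' = 0.25 \cdot 2^{16} = 16384$, so $a' \cdot w' = 1610612736 = 0.375 \cdot 2^{32}$, and descaling is the single shift $(a' \cdot w') \gg 16 = 24576 = 0.375 \cdot 2^{16}$, where the $10^{18}$ scheme would have required an expensive division.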
### The "all inputs" Circom template
Some of the Circomlib-ML circuits have no "output" signals; we patched them to declare their results as output signals, as sketched below.
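An illustrative before/after sketch (not the actual circomlib-ml code; comparison on signals assumes circom-2-arithc):

```circom
pragma circom 2.0.0;

// Pre-patch, the result was declared as `signal input out` and
// supplied as a ZK witness; post-patch it is a genuine output
// computed in-circuit, which is what an MPC backend needs.
template ReLU() {
    signal input in;
    signal output out;  // was: signal input out

    out <== in * (in > 0);
}

component main = ReLU();
```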
The following circuits were changed:
- ArgMax, AveragePooling2D, BatchNormalization2D, Conv1D, Conv2D, Dense, DepthwiseConv2D, Flatten2D, GlobalAveragePooling2D, GlobalMaxPooling2D, LeakyReLU, MaxPooling2D, PointwiseConv2D, ReLU, Reshape2D, SeparableConv2D, UpSampling2D
***Some templates (Zanh, ZeLU and Zigmoid) are "unpatchable" due to their complexity for MPC computation.***
## Keras2Circom Patches
> keras2circom expects a convolutional NN.
We forked keras2circom and created a [compatible version](https://github.com/namnc/keras2circom).
## Benchmarks
After patching Circomlib-ML, we can run the benchmark separately for each patched layer listed above.
### Docker settings and running MP-SPDZ on multiple machines
For all benchmarks we inject synthetic network latency inside a Docker container.
We benchmark two settings with fixed latency and bandwidth:
1. One region - Europe & Europe
2. Different regions - Europe & US
We used `tc` to add latency and cap the bandwidth:
```bash=
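# netem: add 2 ms of delay to every packet leaving eth0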
tc qdisc add dev eth0 root handle 1:0 netem delay 2ms
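# tbf: cap the bandwidth at 5 Gbit/s (200 KB bursts, ~20 MB queue)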
tc qdisc add dev eth0 parent 1:1 handle 10:0 tbf rate 5gbit burst 200kb limit 20000kb
```
Here we set the delay to 2 ms and the rate to 5 Gbit/s to imitate running within the same region (the commands are applied automatically when you run the script).
There's a [Dockerfile](https://github.com/namnc/circom-mp-spdz/blob/main/Dockerfile), as well as various benchmark scripts in the repo, to make testing and benchmarking easier.
If you want to run these tests yourself:
1. Set up the python environment:
```bash=
python3 -m venv .venv
source .venv/bin/activate
```
2. Run a local benchmarking script:
```bash=
python3 benchmark_script.py --tests-run=true
```
3. Build the image, create a network, and run the Docker container:
```bash=
docker build -t circom-mp-spdz .
docker network create test-network
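# NET_ADMIN is required so tc can shape latency/bandwidth inside the container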
docker run -it --rm --cap-add=NET_ADMIN --name=party1 --network test-network -p 3000:3000 -p 22:22 circom-mp-spdz
```
4. In the Docker container:
```bash=
service ssh start
```
5. Run the benchmarking script that imitates multiple machines:
```bash=
python3 remote_benchmark.py --party1 127.0.0.1:3000
```
6. Deactivate the venv:
```bash=
deactivate
```
### Benchmarks
Below we provide a benchmark for each layer separately; a model that combines different layers will yield correspondingly combined performance. First, the runtime (in seconds) per circuit:
| Circuit | Fast LAN (10 Gbit/s, 0.25 ms latency) | LAN (1 Gbit/s, 1 ms latency) | WAN (100 Mbit/s, 50 ms latency) |
| --- | --- | --- | --- |
| DepthwiseConv2D | 4.508590 | 5.333890 | 40.752400 |
| GlobalMaxPooling2D | 1.580060 | 2.121270 | 34.043500 |
| BatchNormalization2D | 4.517530 | 5.289740 | 39.124600 |
| Conv1D | 1.499740 | 1.970370 | 27.505500 |
| ArgMax | 0.727750 | 1.143670 | 18.592200 |
| Conv2D | 1.929560 | 2.499890 | 29.358900 |
| Dense | 1.552070 | 2.187230 | 27.990800 |
| AveragePooling2D | 0.477079 | 0.724241 | 11.612400 |
| SumPooling2D | 0.005776 | 0.007228 | 0.174216 |
| GlobalAveragePooling2D | 0.812070 | 1.236330 | 17.276700 |
| SeparableConv2D | 11.701600 | 12.948200 | 90.974200 |
| ReLU | 1.696460 | 2.404690 | 30.424000 |
| Flatten2D | 0.004507 | 0.007012 | 0.174841 |
| MaxPooling2D | 0.707512 | 1.182670 | 18.457000 |
| PointwiseConv2D | 8.216470 | 9.359570 | 68.186700 |

Communication costs per circuit:

| Circuit | Data sent (MB) | Rounds | Global data sent (MB) |
| --- | --- | --- | --- |
| DepthwiseConv2D | 66.2456 | 1014 | 132.561 |
| GlobalMaxPooling2D | 0.737164 | 983 | 1.48662 |
| BatchNormalization2D | 63.1446 | 1003 | 126.359 |
| Conv1D | 13.1792 | 647 | 26.3749 |
| ArgMax | 0.270383 | 500 | 0.548958 |
| Conv2D | 19.3392 | 682 | 38.6988 |
| Dense | 12.0876 | 658 | 24.1916 |
| AveragePooling2D | 0.27011 | 327 | 0.548412 |
| SumPooling2D | 0.029897 | 34 | 0.059794 |
| GlobalAveragePooling2D | 0.349556 | 503 | 0.707304 |
| SeparableConv2D | 192.085 | 1947 | 384.371 |
| ReLU | 0.651317 | 890 | 1.31083 |
| Flatten2D | 0.031913 | 34 | 0.063826 |
| MaxPooling2D | 0.440606 | 493 | 0.889404 |
| PointwiseConv2D | 129.147 | 1464 | 258.428 |

Accuracy of the circuits compared to the Keras reference implementation:
| Circuit | Accuracy (in %) |
| --- | --- |
| GlobalMaxPooling2D | 99.97 |
| BatchNormalization2D | 99.74 |
| Conv1D | 99.68 |
| Conv2D | 99.74 |
| Dense | 99.76 |
| AveragePooling2D | 99.91 |
| GlobalAveragePooling2D | 99.90 |
| MaxPooling2D | 99.94 |
> The above benchmarks only give a taste of what MPC-ML performance looks like; interested parties can use them to estimate the approximate performance of a model that combines different layers.