# GPU Acceleration for ZKPs on Mobile Devices
## What Is a GPU?
Is a GPU just a faster CPU? No. They're architecturally different. A CPU is built to execute complex, sequential tasks as fast as possible. A GPU is built to execute millions of simple tasks at the same time.
A CPU is a latency-oriented design. Most of its transistor budget goes toward large caches (so the processor rarely waits for slow RAM), complex branch prediction (speculatively guessing which `if/else` path to take before the result is known), and out-of-order execution (reordering instructions on the fly to keep functional units busy). The result: 8–32 powerful cores, each capable of handling irregular, branchy logic.
A GPU flips this trade-off. It has thousands of small, simple cores. They can't predict branches or reorder instructions, but they can all run the same simple instruction in lockstep. The NVIDIA H100, for instance, packs 16,896 CUDA cores. Each core has a tiny cache. Instead of hiding memory latency with large caches like a CPU does, GPUs hide it with *parallelism* — while one group of threads waits for data, another group runs.
---
## Execution Model of a GPU
GPUs use SIMT — Single Instruction, Multiple Threads. Individual threads don't get their own instruction fetch/decode unit (too expensive). Instead, threads are grouped together and share one. They all execute the same instruction at the same time, just on different data.
NVIDIA calls a group of 32 threads a **warp**. A Streaming Multiprocessor (SM) keeps 32–64 warps resident at once. When one warp stalls waiting for memory, the GPU instantly swaps in a ready warp. This is latency hiding through context switching, and it's why GPUs don't need big caches.
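To get a feel for the numbers: with an assumed 400-cycle memory latency and a handful of independent instructions per warp between loads (illustrative figures, not specs for any particular GPU), a quick calculation shows why an SM wants dozens of warps resident:

```python
# Back-of-the-envelope latency hiding: how many resident warps does an SM
# need so that compute from other warps covers one warp's memory stall?
# All numbers below are illustrative assumptions, not specs for a real GPU.

DRAM_LATENCY_CYCLES = 400   # assumed global-memory round trip
ISSUE_RATE = 1              # assumed: the scheduler issues 1 instruction/cycle
WORK_PER_WARP = 8           # assumed independent ALU instructions between loads

# While one warp waits 400 cycles, the scheduler needs other warps with
# enough ready instructions to fill every issue slot.
warps_needed = DRAM_LATENCY_CYCLES * ISSUE_RATE / WORK_PER_WARP
print(f"~{warps_needed:.0f} resident warps to hide a {DRAM_LATENCY_CYCLES}-cycle stall")
# ~50 resident warps, comfortably within the 32-64 an SM keeps resident
```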
The GPU memory hierarchy has two levels that matter for our purposes: Global Memory (VRAM) which is large but slow, and Shared Memory (SRAM) which is small, fast, and shared among threads in a block. We'll see both of these come up repeatedly when we look at how ZKP operations are mapped to GPUs.
---
## Zero-Knowledge Proofs
Zero-Knowledge Proofs (ZKPs) let a prover generate a proof $\pi$ that convinces a verifier the prover correctly computed $f(x, w) = y$ — where $f$ is public, $x$ is public input, and $w$ is private input (the "witness") — without revealing anything about $w$.
The reason any of this is relevant to GPUs: generating these proofs is *brutally* compute-intensive.
### Arithmetization
To generate a ZKP, every logic gate and operation in the computation gets converted into polynomial equations over a finite field. This is called arithmetization, and it involves arithmetic modulo huge primes — 254 bits, 377 bits wide.
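To make this concrete, here is a minimal sketch of a single multiplication gate as a constraint over a toy field (the tiny prime is purely illustrative; real systems use 254-bit or 377-bit primes):

```python
# A minimal sketch of arithmetization: one multiplication "gate" becomes the
# constraint a * b - c = 0 over F_p, checked with modular arithmetic.

p = 2**31 - 1  # toy prime modulus (real fields: BN254's r, BLS12-377's q)

def gate_satisfied(a: int, b: int, c: int) -> bool:
    """Check the constraint a * b == c (mod p) for one multiplication gate."""
    return (a * b - c) % p == 0

# witness values that satisfy the gate
assert gate_satisfied(3, 5, 15)
# and values that don't
assert not gate_satisfied(3, 5, 16)
```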
In Groth16 (one of the most widely deployed proof systems), the final proof is under 200 bytes and verifies in less than 1 ms. But producing that proof is dominated by two operations: **Multi-Scalar Multiplication (MSM)** and **Number-Theoretic Transform (NTT)**. Together they account for roughly 70–80% of proving time (a figure we'll return to), and both are parallelizable. That's what makes GPUs interesting here.
### The Groth16 Pipeline
The computation's public and private inputs ($x$ and $w$) get encoded into polynomials $\vec{a}$, $\vec{b}$, $\vec{c}$, and $\vec{h}$. A series of NTT operations is performed on these polynomials, whose coefficients are large integers (256-bit). The results are then combined with a Proving Key (377-bit integers) using MSM operations to produce the proof $\pi$.
### Finite Fields and Limb Representation
All the integers in MSM and NTT live in a finite field $\mathbb{F}_p$ — the set $\{0, 1, \ldots, p-1\}$ where $p$ is a large prime, with all arithmetic done modulo $p$.
Here's where things get painful for GPUs. These field elements are far wider than a GPU's native 32-bit word:
| Field | Bit-Width | 32-bit Limbs |
|-------|-----------|-------------|
| BN254 scalar field ($\mathbb{F}_r$) | 254-bit | 8 |
| BLS12-377 base field ($\mathbb{F}_q$) | 377-bit | 12 |
A single 377-bit modular multiplication breaks down into roughly 144 32-bit multiply-and-add operations, plus carry propagation and modular reduction. When I first worked through the limb decomposition math, it made the cost of ZKP proving finally click — every "simple" field multiply is actually a 144-instruction sequence on a 32-bit GPU core.
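A quick sketch of the decomposition (Python's big integers make the round trip easy to check; a real implementation would work in Montgomery form, but the limb count is the same):

```python
# Split a 377-bit field element into 12 32-bit limbs, then count the partial
# products a schoolbook multiply of two such elements needs.

NUM_LIMBS = 12          # ceil(377 / 32)
MASK = (1 << 32) - 1

def to_limbs(x: int) -> list[int]:
    """Little-endian 32-bit limbs of x."""
    return [(x >> (32 * i)) & MASK for i in range(NUM_LIMBS)]

def from_limbs(limbs: list[int]) -> int:
    return sum(l << (32 * i) for i, l in enumerate(limbs))

x = (1 << 376) + 12345  # an arbitrary 377-bit value
assert from_limbs(to_limbs(x)) == x

# Schoolbook product: every limb of x times every limb of y.
print(f"partial products per multiply: {NUM_LIMBS * NUM_LIMBS}")  # 144
```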
For NTT, the inputs are field elements in $\mathbb{F}_r$. For MSM, the Proving Key consists of elliptic curve points — each point has 2–4 coordinates, each coordinate a large integer in $\mathbb{F}_q$.
---
## Multi-Scalar Multiplication (MSM)
MSM computes a weighted sum of elliptic curve points:
$$Q = \sum_{i=0}^{n-1} k_i \cdot P_i$$
Multiplying a point $P_i$ by a scalar $k_i$ conceptually means adding $P_i$ to itself $k_i$ times; in practice, the double-and-add method does it in $O(\log k_i)$ steps using Point-Addition and Point-Doubling formulae. To put the cost in perspective: a single elliptic curve point addition requires 12–16 field multiplications, and each of those is ~144 32-bit ops. One point addition is roughly 2,000× more expensive than a float32 multiply-add. This is not your typical GPU dot product.
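Here is a runnable toy version of double-and-add on the textbook curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$ (same formulae, just a tiny field instead of a 377-bit one):

```python
# Toy double-and-add scalar multiplication on y^2 = x^3 + 2x + 2 over F_17
# (generator (5, 1), subgroup order 19). Real ZKP curves use the same
# formulae over 254/377-bit fields.

P_MOD, A = 17, 2                 # field modulus and curve coefficient a
INF = None                       # point at infinity (group identity)

def ec_add(P, Q):
    """Affine point addition / doubling; modular inverse via pow(., -1, p)."""
    if P is INF: return Q
    if Q is INF: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                        # P + (-P) = O
    if P == Q:                                            # point doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                                 # point addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mul(k, P):
    """Double-and-add: O(log k) group operations instead of k - 1 additions."""
    acc = INF
    while k:
        if k & 1:
            acc = ec_add(acc, P)
        P = ec_add(P, P)
        k >>= 1
    return acc

G = (5, 1)
assert scalar_mul(19, G) is INF   # the subgroup generated by G has order 19
print(scalar_mul(7, G))           # 7*G
```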
### Pippenger's Algorithm
Naively computing MSM would be absurdly slow. Pippenger's Algorithm makes it tractable. Here's how it works:
Take a $\lambda$-bit scalar and split it into $w$ windows of $s$ bits each. Within each window, the $s$-bit scalar segment can take $2^s$ values — these are the **buckets**. Points get sorted into buckets based on their scalar values within that window (**bucket accumulation**), then the buckets are combined with appropriate weights (**bucket reduction**) to produce a partial sum for that window. Finally, the $w$ window partial sums are combined (**window reduction**) to get the final answer.
The parallel structure here is what makes this GPU-friendly. Bucket accumulation and bucket reduction are independent across windows, so you can assign one thread per bucket and process all windows simultaneously. The catch is window reduction — it's inherently serial (each window's contribution depends on positional weighting via point-doublings), so it usually runs on the CPU.
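Here is the whole algorithm as runnable Python. To keep it self-contained and easy to check, the "points" are plain integers under addition; in a real MSM, every addition below is an elliptic curve point addition:

```python
# A minimal Pippenger sketch: bucket accumulation, bucket reduction, and the
# serial window reduction, with integers standing in for curve points.

LAMBDA, S = 16, 4                 # scalar bit-width and window size (assumed)
W = LAMBDA // S                   # number of windows

def pippenger_msm(scalars, points):
    window_sums = []
    for w in range(W):
        # --- bucket accumulation: group points by this window's s-bit digit ---
        buckets = [0] * (1 << S)                    # bucket 0 is never used
        for k, p in zip(scalars, points):
            digit = (k >> (w * S)) & ((1 << S) - 1)
            if digit:
                buckets[digit] += p                 # point addition
        # --- bucket reduction: sum_j j * buckets[j] via running suffix sums ---
        running, total = 0, 0
        for j in range(len(buckets) - 1, 0, -1):
            running += buckets[j]                   # point addition
            total += running                        # point addition
        window_sums.append(total)
    # --- window reduction (the serial part): combine with 2^S weights ---
    acc = 0
    for ws in reversed(window_sums):
        acc = (acc << S) + ws                       # S doublings + one addition
    return acc

scalars = [3, 17, 250, 99]
points  = [5, 7, 11, 13]
assert pippenger_msm(scalars, points) == sum(k * p for k, p in zip(scalars, points))
```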
---
## Number-Theoretic Transform (NTT)
NTT is the Fast Fourier Transform, but for finite field elements instead of complex numbers. It maps $\mathbf{a} = [a_0, a_1, \ldots, a_{n-1}]$ to $\mathbf{A} = [A_0, A_1, \ldots, A_{n-1}]$ where:
$$A_i = \sum_{j=0}^{n-1} a_j \cdot \omega^{ij}$$
$\omega$ is a primitive $n$-th root of unity in the field, and powers of $\omega$ are the *twiddle factors*.
Why bother? Multiplying two polynomials in coefficient form is an $O(n^2)$ convolution. NTT converts to a point-value representation where multiplication is element-wise, bringing the total cost down to $O(n \log n)$. The Groth16 prover chains together forward NTTs, element-wise operations, and inverse NTTs to compute the quotient polynomial $\vec{h}$.
### Cooley–Tukey Butterfly
Here's the radix-2 Cooley–Tukey algorithm for an 8-element NTT:

[NTT butterfly diagram — 3 stages with twiddle factors, butterfly operations, and data shuffles]
Each butterfly takes a pair of elements (say $a_0$ and $a_4$) and computes $X = a_0 + \omega^0 \cdot a_4$ and $Y = a_0 - \omega^0 \cdot a_4$ — a modular add, subtract, and multiply. A size-$n$ NTT has $n/2$ butterflies per stage across $\log_2(n)$ stages, with data shuffled between stages.
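The same structure in runnable form, over the toy field $\mathbb{F}_{17}$ where $\omega = 9$ is a primitive 8th root of unity (production fields are 254-bit and up, but the stage/butterfly layout is identical):

```python
# A runnable radix-2 Cooley-Tukey NTT over F_17, cross-checked against the
# O(n^2) definition A_i = sum_j a_j * omega^(i*j).

P = 17          # toy prime with 8 | (P - 1)
N = 8
OMEGA = 9       # primitive N-th root of unity mod P

def ntt(a):
    a = a[:]                        # work on a copy
    # bit-reversal permutation so outputs land in natural order
    j = 0
    for i in range(1, N):
        bit = N >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # log2(N) stages of N/2 butterflies each
    length = 2
    while length <= N:
        w_len = pow(OMEGA, N // length, P)          # twiddle step, this stage
        for start in range(0, N, length):
            w = 1
            for k in range(start, start + length // 2):
                x, y = a[k], a[k + length // 2] * w % P
                a[k] = (x + y) % P                  # butterfly: X = x + w*y
                a[k + length // 2] = (x - y) % P    # butterfly: Y = x - w*y
                w = w * w_len % P
        length <<= 1
    return a

a = [1, 2, 3, 4, 5, 6, 7, 8]
naive = [sum(a[j] * pow(OMEGA, i * j, P) for j in range(N)) % P for i in range(N)]
assert ntt(a) == naive
```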
On a GPU, each thread handles one butterfly. The input vector gets split across thread blocks, with each block performing a radix-$2^r$ FFT internally. The twiddle factors are precomputed and stored in shared memory (SRAM) so threads can look them up without hitting slow global memory. Block sizes are constrained by shared memory capacity, since that's also where the inter-stage data shuffles happen.
---
## GPUs in Mobile Devices
Here's where things diverge from laptops.

[Discrete GPU vs. Unified Memory SoC architecture comparison]
A laptop's discrete GPU is a separate computing system with its own VRAM — physically separate from CPU RAM and built for bandwidth. Data has to be explicitly copied between CPU RAM and VRAM over the PCIe bus.
Mobile is different. CPU and GPU sit on the same SoC with unified memory (LPDDR5). The GPU reads CPU data through a pointer. No copies. For ZKPs, where witness generation happens on the CPU and proving happens on the GPU, this is a huge win — the GPU can start bucket accumulation the instant the CPU finishes generating witnesses.
The flip side: unified LPDDR5 has lower bandwidth than dedicated VRAM. Every NTT butterfly stage shuffles $n/2$ element pairs across the input vector, and on mobile, those shuffles compete for limited memory bandwidth. This is the main bottleneck for NTT on phones.
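Some rough numbers make the pressure visible. Assuming ~50 GB/s of effective unified-memory bandwidth (an illustrative figure, not a measurement of any particular phone):

```python
# Rough bandwidth math for one full NTT, using element sizes from the text
# plus an assumed LPDDR5 bandwidth. Illustrative only.

N = 1 << 20                 # 1M field elements
ELEM_BYTES = 32             # one 256-bit element
STAGES = 20                 # log2(N) butterfly stages
BANDWIDTH = 50e9            # assumed ~50 GB/s effective unified LPDDR5

# Each stage reads and writes every element once (n/2 pairs in, n/2 pairs out).
bytes_moved = STAGES * 2 * N * ELEM_BYTES
print(f"{bytes_moved / 1e9:.1f} GB moved, "
      f"~{bytes_moved / BANDWIDTH * 1e3:.0f} ms at {BANDWIDTH / 1e9:.0f} GB/s")
# ~1.3 GB moved -> ~27 ms of pure memory traffic if nothing stays on-chip
```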
### Programming APIs
| API | Platform | Notes |
|-----|----------|-------|
| CUDA | NVIDIA GPUs | Mature ecosystem. PTX assembly lets you hand-tune limb multiplication loops. |
| Metal | Apple (macOS, iOS) | Unified memory access built-in. |
| Vulkan | Android, Windows, Linux | Cross-platform. |
| WebGPU | Browsers & native apps | Translates to Metal, Vulkan, or Direct3D 12 under the hood. |
WebGPU deserves a note. Running ZKP proving in a web browser — no app install, no trust assumptions about a remote server — would be transformative for privacy. But WebGPU has real limitations today: atomics restricted to 32-bit integers (which complicates efficient bucket accumulation), workgroup sizes capped at 256 threads (vs. 1024 natively), and WGSL doesn't support inline assembly, so you can't hand-optimize the limb multiplication inner loop. In practice, this means 2–5× slower field arithmetic compared to native Metal or Vulkan.
---
## Current Progress
Two projects stand out for mobile ZKP acceleration.
### ICICLE Metal (Ingonyama)
This is the most mature option right now. Ingonyama's IMP1 prover already ships in production apps like Anon Aadhaar and Aptos Keyless. Their approach was ambitious — rather than patching GPU support onto an existing CPU prover, they ported their entire CUDA crypto library to Metal and built ICICLE-SNARK as a ground-up Groth16 prover.
Compare this to RapidSnark, which was the previous gold standard. RapidSnark is excellent but purely CPU-bound — it spreads the 12-limb field multiplications for BLS12-377 proving keys across only a handful of cores. ICICLE-SNARK distributes those same limb-arithmetic chains across thousands of Metal threads organized into SIMD-groups (Metal's equivalent of warps). It also keeps the proving key (millions of elliptic curve points, each with 2–4 coordinates × 12 limbs) cached in GPU memory between proofs, avoiding repeated transfers of hundreds of megabytes.
The way ICICLE handles MSM is worth understanding. Pippenger's bucket accumulation maps to Metal compute kernels — each threadgroup processes a subset of the $2^s$ buckets within a window, running point-additions across 12-limb coordinates in parallel. Window reduction (the serial part) goes to the CPU, which can start immediately because unified memory means the CPU already has access to the GPU's bucket sums. For NTT, twiddle factors are precomputed and parked in shared memory so butterfly operations avoid repeated global memory reads.
The results are significant: IMP1 proves 3× faster than RapidSnark. On a ZKML circuit with 1M constraints run on an iPhone 16, ICICLE-SNARK hit 2.3 seconds. RapidSnark needed 6–8 seconds for the same circuit, bottlenecked by its inability to parallelize bucket accumulations.
### Mopro Metal MSM v2
Mopro takes a different angle. Their v2 is MSM-only (BN254), not a full prover — more of a research prototype; v3, with CPU+GPU hybrid proving, is in progress. What makes it interesting is the algorithm design:
Instead of implementing Pippenger's bucket accumulation with irregular scatter-gather patterns (which cause warp divergence), Mopro reformulates the entire bucket-assignment step as a sparse matrix–vector multiply. Each row of the matrix is a bucket, and the matrix–vector product computes all bucket sums in a single, regular GPU operation. They also auto-tune the window size $s$ per input — balancing the number of buckets ($2^s$, which sets parallelism) against shared-memory pressure. Window reduction still happens on the CPU using Horner's Method over the $w$ partial sums, and unified memory means the GPU's results are readable by the CPU without any copy.
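A sketch of the idea for one window, again with integers standing in for curve points. This illustrates the reformulation, not Mopro's actual kernel:

```python
# Bucket accumulation as a sparse matrix-vector product: each bucket is a row
# of a 0/1 matrix over the point vector, so all bucket sums come out of one
# regular pass with no scatter-gather.

S = 4
W_INDEX = 0                                   # which window we're processing

def bucket_sums_spmv(scalars, points):
    n_buckets = 1 << S
    # rows[b] lists the point indices whose window digit equals b (CSR-style)
    rows = [[] for _ in range(n_buckets)]
    for i, k in enumerate(scalars):
        digit = (k >> (W_INDEX * S)) & (n_buckets - 1)
        rows[digit].append(i)
    # the "SpMV": each row dotted with the point vector; rows are independent,
    # so on a GPU each row can go to its own threadgroup
    return [sum(points[i] for i in row) for row in rows]

scalars = [3, 17, 250, 99, 5]
points  = [5, 7, 11, 13, 2]
sums = bucket_sums_spmv(scalars, points)
assert sums[3] == 5 + 13          # scalars 3 and 99 share bucket 3 in window 0
assert sums[1] == 7               # scalar 17's window-0 digit is 1
```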
Benchmarks on iPhone 15 Pro ([source](https://hackmd.io/@guard/BJDmuEODke)):

[Mopro Metal MSM v2 benchmarks on iPhone 15 Pro]
The pattern in the data is telling. At $\log_2(N) = 16$ (~65K points), the GPU is actually *slower* — the overhead of dispatching Metal threadgroups and synchronizing bucket accumulation costs more than it saves. The crossover happens around $\log_2(N) = 17$. By $\log_2(N) = 20$ (~1M points, where each window processes thousands of 8-limb scalars), the GPU delivers 1.8–2.25× speedup. More points means more work per bucket, which means more useful parallelism.
### The Amdahl's Law Ceiling
It's worth being realistic about how much speedup is even possible. MSM and NTT are 70–80% of Groth16 proving time. Even if you made them infinitely fast on the GPU, Amdahl's Law caps the overall improvement at roughly 5×. The remaining 20–30% is inherently sequential: window reduction (combining $w$ partial sums through a chain of point-doublings), witness generation, and proof serialization. IMP1's 3× already captures a large fraction of this theoretical ceiling. The gap is mostly window reduction and CPU-side witness work.
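Plugging the numbers into Amdahl's Law, with $P$ the parallelizable fraction and $s$ the speedup applied to it:

$$S_{\max} = \frac{1}{(1 - P) + P/s} \xrightarrow{\,s \to \infty\,} \frac{1}{1 - P} = \frac{1}{1 - 0.8} = 5$$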
---
## Constraints
Three things limit ZKP proving on phones in practice.
**Thermal throttling** hits first. Every bucket accumulation involves thousands of point-additions (12–16 field multiplications each, across 8–12 limbs per multiply). Running these arithmetic loops flat-out across thousands of GPU cores generates serious heat. On an iPhone, the OS cuts clock speed by 40–60% within seconds. A proving operation that starts at peak throughput can slow to half-speed partway through the NTT butterfly stages.
**Memory** is a hard wall. A circuit at $2^{22}$ constraints produces polynomials $\vec{a}, \vec{b}, \vec{c}, \vec{h}$ with ~4M elements each. At 32 bytes per 256-bit element, that's ~512 MB for polynomials alone. Add the proving key (millions of curve points × 2–4 coordinates × 48 bytes per 377-bit coordinate), precomputed twiddle factors, and bucket arrays, and you're looking at roughly 8 GB. Most phones can't hold that after the OS takes its share.
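The arithmetic, spelled out (the proving key point count below is an assumption for illustration; real Groth16 keys hold several MSM bases, pushing the total higher still):

```python
# Memory budget for a 2^22-constraint circuit, using the sizes from the text.
# The proving key point count is an illustrative assumption.

n = 1 << 22                          # constraints
poly_bytes = 4 * n * 32              # a, b, c, h at 32 bytes per 256-bit element
pk_points = 4 * n                    # assumed: O(n) curve points in the key
pk_bytes = pk_points * 3 * 48        # ~3 coordinates x 48 bytes per 377-bit coord

print(f"polynomials: {poly_bytes / 2**30:.2f} GiB")   # ~0.50 GiB
print(f"proving key: {pk_bytes / 2**30:.2f} GiB")     # ~2.25 GiB
# plus twiddle factors, bucket arrays, multiple MSM bases, and intermediate
# buffers -> a multi-GiB working set before the OS takes its share
```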
**Battery Drain** compounds everything. Peak proving draws 8–10 W with CPU and GPU both maxed — the GPU churning through butterflies and bucket accumulations, the CPU running window reduction and witness generation. One proof costs roughly 0.1% battery, which sounds small until you imagine a private messaging app running proofs regularly. The battery drops, iOS Low Power Mode kicks in, GPU clocks get halved, and the thermal problem gets worse.
---
## Conclusion
The whole point of understanding GPU acceleration for ZKPs is client-side proving — a shift where users generate proofs on their own device instead of trusting a server. This post covered how that workload breaks down (MSM's bucket accumulation and 12-limb point-additions, NTT's butterfly operations and twiddle-factor lookups) and how frameworks like ICICLE-SNARK and Mopro are mapping these to Metal's threadgroups and shared memory.
It's not easy. You're cramming 254-bit field arithmetic into 32-bit GPU cores, running them hot enough to trigger thermal throttling, and hoping the working set fits in memory. But ICICLE's 3× improvement over RapidSnark on real circuits shows it's working.
Three things matter going forward. First, optimizing the limb multiplication inner loop — the 8–12 limb modular multiply is the atom of both MSM and NTT, and improvements there cascade through every bucket accumulation and butterfly. Second, using unified memory smartly — window reduction and witness generation are sequential, but unified memory means the CPU can read GPU-computed bucket sums and NTT outputs with zero copy overhead. Third, right-sizing circuits — $2^{22}$ constraint circuits blow past mobile memory budgets, but lighter circuits using algebraic hashes like Poseidon are feasible today.
The "Prover-in-Pocket" isn't a research concept anymore. It's an engineering problem, and people are solving it.
---
## References & Further Reading
### Research on ZK Primitives
1. Pippenger's Exponentiation Algorithm (1976/2002): [cr.yp.to](https://cr.yp.to/papers/pippenger-20020118-retypeset20220327.pdf)
2. cuZK: Accelerating Zero-Knowledge Proof with a Faster Parallel MSM Algorithm (2022): [eprint.iacr.org/2022/1321](https://eprint.iacr.org/2022/1321.pdf)
3. Accelerating Number-Theoretic Transform (NTT) on GPU (2020/2025): [dl.acm.org](https://dl.acm.org/doi/pdf/10.1145/3669940.3707241)
### Implementation Frameworks
1. ICICLE (Ingonyama): [dev.ingonyama.com](https://dev.ingonyama.com/)
2. Mopro (Mobile Prover): [github.com/zkmopro/gpu-acceleration](https://github.com/zkmopro/gpu-acceleration)
3. ZKProphet: Performance Study of ZKPs on GPUs (2025): [arxiv.org/2509.22684v1](https://arxiv.org/html/2509.22684v1)
### Mobile-Specific & Feasibility Insights
1. Client-Side GPU Acceleration for ZK (Privacy & Scaling Explorations): [pse.dev](https://pse.dev/blog/client-side-gpu-everyday-ef-privacy)
2. PipeZK: Accelerating ZKP with a Pipelined Architecture: [ieeexplore.ieee.org](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9499783)
3. Zero-Knowledge Proofs: Theory and Practice (J. Zhang, UC Berkeley): [berkeley.edu](https://www2.eecs.berkeley.edu/Pubs/TechRpts/2025/EECS-2025-20.pdf)