# ZK Highlights post-AsiaCrypt24
Last December, I had the chance to attend AsiaCrypt24: present a paper, chat with people, and enjoy everything else that normally comes with a crypto conference (a river cruise, naturally). These thoughts and highlights from the event are long overdue, but the topics are still more than relevant; I believe you will find them as interesting as I did!
This time there was a good, balanced selection of papers on different topics: signatures, quantum cryptography (including no-cloning ZK), MPC, homomorphic encryption, folding, lattices, and many more. In this blog post we'll focus on interesting and important zero-knowledge-related topics that are perhaps not getting as much attention at mainstream ZK events due to their early-stage or theoretical nature.
To read about our own publication at AC24, scroll to the end of Part 3!
## Part 1: VOLE-in-the-Head
To start off, we'd like to say a few words about VOLE-based ZK, a paradigm for zero-knowledge that has recently received a lot of attention in the academic community.
VOLE, or _Vector Oblivious Linear Evaluation_ [[1](https://eprint.iacr.org/2017/617.pdf), [2](https://eprint.iacr.org/2020/1446.pdf)], is a cryptographic primitive that allows a prover holding $\vec w, \vec v$ to let a verifier holding a secret scalar $\Delta$ obliviously evaluate the linear combination $\vec w \cdot \Delta + \vec v$. The crucial privacy notion here is that the prover does not learn $\Delta$, while the verifier does not learn the vectors $\vec w, \vec v$, only their combination. VOLE is quite a lightweight primitive, and, importantly, it can be realised from post-quantum assumptions: symmetric primitives (AES/SHA, see [SoftSpokenOT](https://eprint.iacr.org/2022/192.pdf)), or assumptions such as LWE (Learning With Errors) and LPN (Learning Parity with Noise).

To build ZK proofs from VOLE, one designs an interactive protocol that reduces verifying the correctness of every computation gate to verifying a VOLE. This is somewhat akin to the approach with homomorphic commitments: if wire $c$ is supposed to contain $k_1 \cdot a + k_2 \cdot b$ (for $a,b$ being the input wires of the gate, and $k_i$ public constants), then checking $\mathsf{Com}(c) = \mathsf{Com}(a)^{k_1} \cdot \mathsf{Com}(b)^{k_2}$ suffices if the commitment scheme is additively homomorphic. Something similar happens with VOLEs, since they can also be viewed as commitments, except that VOLEs additionally enable a weak variant of multiplicative homomorphism which can further speed up the verification.
To be precise with our terminology, the basic VOLE-based ZKPs are _designated-verifier_, which means that they can be verified only by a particular pre-chosen party. This is overcome by the [VOLE-in-the-head](https://eprint.iacr.org/2023/996.pdf) paradigm, which yields publicly verifiable proofs with overall similar performance properties.
Speaking of which, the *performance trade-off* for VOLE-based proofs is that they have low memory consumption, fast provers, and rely only on lightweight, transparent techniques (making them plausibly quantum-secure). This makes them particularly suitable for lightweight clients; they are also future-proof and not hard to implement. The main downside is a proof size linear in the circuit length, concretely 3-16x the witness size (for VOLEitH; for designated-verifier VOLE the constant is lower; see [Table 1 @ 2023/996](https://eprint.iacr.org/2023/996.pdf)), although proofs are produced at a fairly high speed (e.g. [millions](https://eprint.iacr.org/2022/819.pdf) of 64-bit multiplications per second; also see [2023/150](https://eprint.iacr.org/2023/150.pdf) and LogRobin++).
Due to their prover-speed advantage, applications of VOLE-based proofs include protocols operating on smaller languages (e.g. signatures like [FAEST](https://eprint.iacr.org/2023/996.pdf)), or environments in which proving speed, rather than communication, is the bottleneck, and where transparency, simplicity, and well-founded assumptions are welcome or necessary. In the current world, this amounts to L2s or off-chain protocols that have to authenticate or prove statements to each other repeatedly, but can afford bigger proofs and do not have to publish the transcripts on a public ledger. A great example is the [zkPass](https://zkpass.org/) project, which uses VOLE-in-the-head in the Hybrid mode of its [zkTLS protocol](https://zkpass.gitbook.io/zkpass/overview/technical-overview-v2.0), ultimately aiming to allow verification of private data on-chain.
Note that proof size is a concern for STARKs too; e.g. [RISC Zero suggests](https://medium.com/casperblockchain/blockchain-enabled-zero-knowledge-proof-size-b3661ab32c5b) wrapping the STARK-based proof in a more lightweight proof like Groth16. This could be another way to integrate VOLE-ZK into blockchain environments: an L2 exchange transaction might have to be executed _now_, _fast_, and in a fully _verifiable_ manner for the parties involved, while the final settlement proof of its correctness can be presented later and in aggregated form.
Even as the world moves relentlessly towards cheaper compute and storage, cheap and fast _distributed storage_ is still an open problem in Web3, one that we at o1Labs are actively thinking about as part of our new [[Project Untitled]](https://www.o1labs.org/project-untitled).
## Part 2: Folding and zkVMs
Several great papers on the more mainstream ZK topics naturally caught our attention:
- **Proofs for Deep Thought** ([eprint](https://eprint.iacr.org/2024/325)): this work by Jessica Chen and Benedikt Bünz formalises a memory-proving approach (think RAM-like lookup arguments) within an IVC based on the Protostar compiler. The gist of the technique is to use the standard logup argument (the one using additive $1 / (\alpha + X_i)$ terms instead of multiplicative ones, as in e.g. plookup) to prove random access to a big memory that evolves over time. To do that, the paper has each proof instance (fold iteration) refer to several distinct vectors of memory cells: original, final, and "work memory" (reads and writes). Each memory cell is a triple $(\mathsf{addr}, \mathsf{time}, \mathsf{value})$, and several intuitive conditions must be imposed on reads and writes to keep the memory accesses properly ordered. Additionally, the authors suggest a modification of the GKR protocol that reduces prover overhead.
Our intuition is that the core technique of the paper is generalisable to Plonk-style arguments too. The protocol captures an important idea about building RAM lookup / memory access arguments, and we're very happy to see this properly studied and formalised.
- **MuxProofs** ([eprint](https://eprint.iacr.org/2023/974.pdf)): this work designs a SNARK for a VM computation while focusing on the problem of efficiently proving disjunctive statements (VM instructions). The critical contribution is a lookup argument called CosetLkup, which is similar to LogUp but utilises cosets of a multiplicative subgroup of the field for expressing complicated relations between the individual elements of the commitments. While the usual (Hyper)Plonk-style argument working over $\mathbb{H} = \langle\omega\rangle$ gives one a way to express relations between the current and the next row by allowing polynomials of the type $P(\omega X) - Q(X) = 0$ (over $\mathbb{H}$, $P$ "queries" the next row and $Q$ the current one), in CosetLkup $| \mathbb{H} | = mn$, and another group $\mathbb{V} = \langle \omega^n \rangle \leq \mathbb{H}$ can be used to simultaneously refer to different disjunctive cases within the data, e.g. to enforce a property on each coset simultaneously by $P(\gamma X) - k \cdot P(X) = 0$ (for constant $k$, over $\mathbb{V}$). In a way, each commitment can thus be viewed not as a ring with one generator (producing a column), but as a torus with two (producing a table). Another lookup argument, called SubcubeLkup, works with multivariate commitments over subcubes of the boolean hypercube. The final argument construction relies heavily on these lookup arguments – for looking up a concrete instruction, for example – and avoids recursion, but at the cost of needing to pick the group size big enough to fit the entire execution length, and the coset size big enough to fit the number of disjunctive VM instructions.
- **Folding**:
    - Two works in AC24 investigated folding over lattices, namely [RoK, Paper, SISsors](https://eprint.iacr.org/2024/1972) and [Lova](https://eprint.iacr.org/2024/1964). One big challenge that these works overcome is managing the norm of the encoded values. While group-based folding is "just" an $x_{\mathsf{left}} + \alpha \cdot x_{\mathsf{right}}$ recombination (in the field), which can be done homomorphically inside a (KZG/IPA) commitment, the witness inside a lattice-based commitment must additionally have a "small" norm. The naive approach would thus amplify the norm of the values inside the commitment too much (affecting soundness and extractability), seriously limiting the number of folds one can soundly perform.
    - Last but not least, [FLI: Folding Lookup Instances](https://eprint.iacr.org/2024/1531.pdf) presented, as the title suggests, two protocols for folding instances of a lookup protocol, with a particular focus on performance within zkVMs. The solution achieves the cheapest (native) folding cost among existing solutions, at the cost of a slight overhead when verifying the folding recursively. Comparing against Protostar-based folding schemes, including Proofs for Deep Thought, FLI appears to be concretely the cheapest scheme for many lookup-based zkVM solutions.
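The additive logup identity that Proofs for Deep Thought builds its memory argument on can be illustrated in a few lines. Below is a toy sketch (illustrative parameters and fingerprinting of our own choosing, not the paper's actual protocol): two multisets over $\mathbb{F}_p$ are equal iff $\sum_i 1/(\alpha + a_i) = \sum_i 1/(\alpha + b_i)$ for a random $\alpha$ (with high probability), and $(\mathsf{addr}, \mathsf{time}, \mathsf{value})$ triples are first fingerprinted into single field elements so the check applies to memory traces.

```python
# Toy sketch of the additive logup multiset check used for memory arguments.
# Parameters and fingerprinting are illustrative, not from the paper.
import secrets

P = 2**61 - 1  # toy prime field modulus

def logup_sum(values, alpha):
    # sum of 1/(alpha + x) over the multiset; inverse via Fermat's little theorem
    return sum(pow((alpha + x) % P, P - 2, P) for x in values) % P

def fingerprint(addr, time, value, beta):
    # compress a (addr, time, value) triple into one field element
    return (addr + beta * time + beta * beta * value) % P

writes = [(0, 1, 42), (1, 2, 7)]
reads  = [(1, 2, 7), (0, 1, 42)]   # same accesses, different order

alpha, beta = secrets.randbelow(P), secrets.randbelow(P)
ws = logup_sum([fingerprint(*t, beta) for t in writes], alpha)
rs = logup_sum([fingerprint(*t, beta) for t in reads],  alpha)
assert ws == rs   # equal multisets give equal logup sums
```

The advantage of the additive form over the multiplicative grand product (as in plookup) is that sums of sparse rational terms fold and aggregate much more cheaply, which is exactly what makes it attractive inside an IVC.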
## Part 3: Malleable Proofs
Finally, we'd like to mention a few works on non-recursive updatability and proof malleability.
**Mercurial Signatures**. There were two works improving on the state of the art and applications of mercurial signatures -- Griffy et al. ([eprint](https://eprint.iacr.org/2024/1216.pdf)) study stronger privacy notions for delegatable credentials, and the work by Abe et al. ([eprint](https://eprint.iacr.org/2024/625.pdf)) focuses on the threshold setting, where the public keys of the signing parties are distributed. The primitive itself was defined and studied in [[several]](https://eprint.iacr.org/2018/923.pdf) [[works]](https://eprint.iacr.org/2020/979.pdf) by Elizabeth Crites and Anna Lysyanskaya, and it allows modifying the signature in a specific way after its creation -- changing both the message and the public key within their respective equivalence classes. One common application of this powerful primitive is delegatable credentials: sometimes we want to have a signature over a class of messages (e.g. all pseudonyms of a particular identity), while only being able to verify it with respect to one particular instance of the class at a time.
**Updatable Blueprints** ([eprint](https://eprint.iacr.org/2023/1787), _co-authored by o1Labs_). Signatures are a type of message-binding zero-knowledge proof (of possessing the secret key that uniquely corresponds to the public key). This is why the terms "Schnorr proof" (i.e. Schnorr Signature scheme) and "Sigma protocols" are quite often used interchangeably -- the basic recipe for the Schnorr proof is easily generalisable towards a bigger class of relations. This work on updatable blueprints shows two things:
1. There exist NIZK proofs that are easy to build and work with, that allow for a certain language malleability. That is, you can transform a proof $\pi$ for the statement $(x,w) \in \mathcal{R}$ into a proof $\pi'$ for the statement $(T_x(x),T_w(w)) \in \mathcal{R}$, where $(T_x,T_w)$ is your language transformation. It's like doing a recursive proof, but without any recursion involved! The concrete proof system we use is by Couteau and Hartmann (CRYPTO20 [preprint](https://hal.science/hal-03374157/document)) -- it is pairing-based, transparent (universal setup), and easy to implement, but has proof size linear in the statement (like Sigma protocols) and non-succinct verification, which makes it concretely efficient for modest-sized algebraic statements.
2. There exist natural applications for these kinds of proofs, centred around algebraic statements (that is, relationships on group elements, such as consistency of powers-of-tau for KZG). In the paper, we design a two-party primitive called *updatable blueprints* that illustrates this well: the regulator creates a blueprint (kind of like a commitment) to $P(t, x = 0)$, with the variable $t$ fixed and known only to the regulator, and then users can repeatedly update this commitment to $P(t,x_{\mathsf{old}} + x_{\mathsf{new}})$ knowing only $x_{\mathsf{new}}$. In the end, the regulator learns whether $P(t,\sum x_i) = 0$ (but not any of the $x_i$), and the users do not learn $t$. Updatable NIZKs make this construction concretely efficient by provably maintaining the consistency of the elements in the "blueprint".
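To give a feel for the update pattern (though not for the CH20-based construction itself), here is a toy sketch of its simplest algebraic core: a Pedersen-style commitment to a running sum that each user can advance knowing only their own increment. The group parameters ($p = 23$, $q = 11$, generators $2, 3$) are deliberately tiny illustrative toys, not anything from the paper.

```python
# Toy sketch of the homomorphic-update pattern behind updatable blueprints.
# Parameters are tiny illustrative toys (completely insecure).
import secrets

p, q = 23, 11          # toy safe-prime group: p = 2q + 1
g, h = 2, 3            # two generators of the order-q subgroup mod p

def commit(x, r):
    # Pedersen-style commitment g^x * h^r mod p
    return (pow(g, x, p) * pow(h, r, p)) % p

def update(C, x_new, r_new):
    # advance the committed sum: the result commits to x_old + x_new
    return (C * commit(x_new, r_new)) % p

# the "regulator" publishes a commitment to 0; users fold in their inputs
r0 = secrets.randbelow(q)
C = commit(0, r0)
xs = [3, 5, 2]
rs = [secrets.randbelow(q) for _ in xs]
for x, r in zip(xs, rs):
    C = update(C, x, r)

# an opening of the final commitment reveals only the aggregate sum
x_tot = sum(xs) % q
r_tot = (r0 + sum(rs)) % q
assert C == commit(x_tot, r_tot)
```

In the actual primitive the updated object is not a bare commitment but a commitment together with a malleable NIZK, so each update also carries a proof that it was performed consistently; the homomorphic step above is the part that the updatable NIZK keeps honest.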
Some work-in-progress on this direction from our side includes both understanding and improving the performance of the primitive itself, but also investigating its potential in application to decentralised updatable storage, polynomial commitments, and payment channels. Think of small relations that have to be updated often -- that's where the power of the primitive lies. Stay tuned for more!
## Conclusions
Wrapping up: it's clear that the research landscape in zero-knowledge cryptography continues to evolve at an impressive pace. From VOLE-based proofs to advancements in zkVMs, folding techniques, and updatable proofs, the diversity of topics presented highlights the breadth of innovation occurring in the field. These works may not always immediately receive the spotlight at ZKSummits or DevCons, but their theoretical depth and potential applications will undoubtedly shape the future of crypto, in web3 and beyond, especially as we look towards next-gen privacy and scalability solutions.
At o1Labs, we're particularly excited to be working on some of these cutting-edge advancements, exploring their potential applications in decentralized systems. I hope that this overview has sparked some ideas and that you’ll join me in following these innovations closely. For more details on our own research, don't forget to check out the publication at the end of part 3.
Feel free to reach out with any questions or comments, technical and not, and stay tuned for more updates!