dante

@dantecamuto

Joined on May 24, 2023

  • Disclaimer: this is a more technical post that requires some prior knowledge of how Halo2 operates, and in particular of how its API is constructed. For background reading we highly recommend the Halo2 book and Halo2 Club. A common design pattern in a zero-knowledge (zk) application is as follows: a prover has some data which is used within a circuit. This data, as it may be high-dimensional or somewhat private, is pre-committed to using some hash function. The zk-circuit which forms the core of the application then proves (paraphrasing) a statement of the form: "I know some data D which, when hashed, corresponds to the pre-committed value H" + whatever else the circuit is proving over D.
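The commit-then-prove pattern above can be sketched outside a circuit as a plain hash-commitment check. This is a minimal illustration of the statement being proven, not EZKL's API; the function names are hypothetical, and a real circuit would use a SNARK-friendly hash such as Poseidon rather than SHA-256.

```python
import hashlib
import json

def commit(data: list) -> str:
    # Pre-commit to the data with a hash function. In-circuit this
    # would be a SNARK-friendly hash (e.g. Poseidon), not SHA-256.
    return hashlib.sha256(json.dumps(data).encode()).hexdigest()

def statement(data: list, commitment: str) -> bool:
    # The zk-circuit proves: "I know D such that hash(D) == H",
    # alongside whatever else it asserts about D.
    return commit(data) == commitment

D = [1, 2, 3]          # the prover's (possibly private) data
H = commit(D)          # published ahead of time
assert statement(D, H) # what the circuit establishes about D and H
```

The point of the pattern is that H can be published long before proving time, binding the prover to a specific D without revealing it.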
  • ZKML for On-Chain Games and Autonomous Worlds At EZKL, we've already seen a number of creative instances where ZKML, and more broadly zero-knowledge computation, have been implemented onchain. Examples like noya.ai and cryptoidol.tech prove that when you combine the verifiable compute power of blockchains with the trustless affordances of ZK, new forms of applications that were previously unfeasible become possible. Games like Dark Forest and ZK Hunt have been pioneers in using the privacy affordances of zero-knowledge proofs to create games with fog-of-war or hidden-information mechanics. We believe that the verifiability of computation is an underexplored set of tools for on-chain game developers, and we've seen an increasing amount of developer and player energy focused on this paradigm. Broadly, we've seen the flourishing of three principal methods for ZKML game design that have inklings of popularity with both game designers and players: (1) The Model is the Game, (2) ZKML as Digital Physics, (3) ZKML for Lore and Narrative. The earliest explorations that we've seen in the merger of on-chain games and ZKML are ones where the ZKML model IS the game. Here, players interact directly with the ZKML model, and this interaction constitutes the entirety of the game dynamics. A representative example of this is the cryptoidol game that we developed internally. Here players vie to be canonized as the best singer in an eternally running singing contest. They sing into the browser and generate a proof that they were correctly judged by a public judging model. They can then submit their score and associated proof on-chain to be inserted into a running leaderboard. Another example of this is the onchain-tictactoe library, where a neural network is trained on tictactoe telemetry data to recognize winning game patterns, such that games can be played off-chain and then submitted and settled on-chain in their final state.
  • What and Why? The EVM, though a fantastic first experiment in permissionless and decentralized computing, is currently very limited. As it stands, matrix multiplication over 100 dimensions will cost you over 5 million gas. We built ezkl to expand the EVM's compute capabilities so that you can bring complex operations, like neural networks, on-chain, at a low cost. What is ezkl? A library for making zero-knowledge (zk) circuits from python code. A suite of tools that mean you can deploy these zk-circuits in less than a minute and integrate them into browser apps and on-chain apps painlessly.
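The 5-million-gas figure can be sanity-checked with a back-of-envelope count. Naive multiplication of two n x n matrices needs n^3 scalar multiplications, and the EVM's MUL opcode costs 5 gas; the sketch below counts only those multiplications, ignoring additions, memory expansion, and loop overhead, so it is a lower bound rather than a precise estimate.

```python
def matmul_gas_lower_bound(n: int, gas_per_mul: int = 5) -> int:
    # Naive n x n matrix multiply: n^3 scalar multiplications.
    # EVM MUL costs 5 gas each; additions, memory, and loop
    # overhead are ignored, so this is a strict lower bound.
    return n ** 3 * gas_per_mul

print(matmul_gas_lower_bound(100))  # 5000000 gas from MULs alone
```

At 100 dimensions the multiplications alone already exhaust a sizeable fraction of a block's gas limit, which is exactly the ceiling ezkl is built to get around: verify a succinct proof on-chain instead of redoing the arithmetic.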
  • Full disclosure: this post is not about gardening but about implementing ZK versions of machine learning algorithms with botanical nomenclature: decision trees, gradient boosted trees, and random forests. If you're a keen gardener check this out. Lingering Github issues give me heart palpitations, particularly those that have been open for months on end, sitting like mildew in an otherwise pristine home. Here's one we've had open since January of this year. EZKL (for those not in the know) is a library for converting common computational graphs, in the (quasi)-universal .onnx format, into zero-knowledge (ZK) circuits. This allows you, for example, to prove that you've run a particular neural network on a private dataset, or to prove you've developed a super secret new architecture that achieves 99% accuracy on a publicly available dataset without revealing the model parameters.
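One common way to make a decision tree circuit-friendly (a sketch of the general idea, not EZKL's implementation) is to remove branching entirely: evaluate a comparison indicator at every internal node, then select the reached leaf as a sum of leaf values weighted by products of path indicators, which maps directly onto arithmetic constraints.

```python
def step(x: float, threshold: float) -> int:
    # 1 if x < threshold else 0. In a circuit this boolean would
    # come from a range-check / comparison gadget, not an `if`.
    return 1 if x < threshold else 0

def tree_predict(x: float) -> float:
    # A hypothetical depth-2 tree: root threshold t0, child
    # thresholds t1 (left) and t2 (right), leaf values l0..l3.
    t0, t1, t2 = 5.0, 2.0, 8.0
    leaves = [10.0, 20.0, 30.0, 40.0]
    a = step(x, t0)  # went left at the root?
    b = step(x, t1)  # went left at the left child?
    c = step(x, t2)  # went left at the right child?
    # Branch-free select: exactly one indicator product is 1.
    return (a * b * leaves[0]
            + a * (1 - b) * leaves[1]
            + (1 - a) * c * leaves[2]
            + (1 - a) * (1 - c) * leaves[3])

print(tree_predict(1.0))  # 10.0: left at root, left at child
print(tree_predict(9.0))  # 40.0: right at root, right at child
```

Gradient boosted trees and random forests then reduce to sums (or majority votes) over many such branch-free trees, which is why all three "botanical" models flatten into circuits the same way.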
  • The recent interest in merging zero-knowledge cryptography and machine learning has led to progress at an impressive clip. In April of last year we had projects generating proofs for the final layer of a small multi-layer perceptron trained on MNIST; as of May 2023 we're getting entire transformer-based models into circuits (see here and here). We're naturally arriving at a point where we should begin exploring architectures that are "easier" to prove. To quantify this ease of proving I want to introduce a concept or metric I've been calling constraint efficiency: given $N$ network parameters, what is the number of constraints $M$ generated within a ZK-circuit? Obviously $M$ depends on a number of factors, in particular: the particular gates and constraints used to represent a neural network layer. Some frameworks will trade off a larger number of constraints for greater proving efficiency, for instance by way of "accumulated arguments" (see the accumulated dot product argument here for an example). The architecture of the network. Convolutional layers with large strides will generate fewer constraints than their fully-connected counterparts. The proof system used. Proof systems with lookup arguments might have fewer constraints than those without.
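The architecture factor above can be made concrete by counting multiply-accumulate operations (MACs) as a rough proxy for $M$, since most arithmetizations spend at least one constraint per multiplication. This is a hedged illustration under that assumption, not a measurement of any particular framework; the layer shapes are made up.

```python
def dense_macs(n_in: int, n_out: int) -> int:
    # Fully-connected layer: one multiply-accumulate per weight.
    return n_in * n_out

def conv_macs(h: int, w: int, c_in: int, c_out: int,
              k: int, stride: int) -> int:
    # Convolution (no padding): each output position touches a
    # k x k x c_in patch, for each of c_out output channels.
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    return out_h * out_w * c_in * c_out * k * k

# On a 32x32x3 input, doubling the stride shrinks the output grid
# and so cuts MACs (and, roughly, constraints) by about 4x.
print(conv_macs(32, 32, 3, 16, 3, 1))  # 388800
print(conv_macs(32, 32, 3, 16, 3, 2))  # 97200
```

The same counting exercise per layer is a quick way to compare candidate architectures for constraint efficiency before ever compiling a circuit.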
  • Or rather, how we got nanoGPT into Halo2-KZG circuits using EZKL. This post is written in collaboration with Bianca Gănescu, a Master's student at Imperial College London (supervised by Jonathan Passerat-Palmbach) who was instrumental in making this happen, and Jason Morton. I am the lead developer on EZKL (stylized and pronounced Ezekiel), a library which enables users to convert computational graphs represented in the Open Neural Network Exchange (ONNX) format to a (Halo2-KZG) ZK-SNARK circuit. Since starting the project in August of 2022 we've come a long way in terms of the breadth of models we support, our proving performance, and also the community of folks building and researching on top of our tool! Bianca, as part of her Master's thesis, wanted to get nanoGPT into a ZK-SNARK using our tooling. This post is a high-level overview of the steps it took to make that happen. The sections below are somewhat technical and assume some knowledge of how the Halo2 API is structured.