Ingonyama (@Ingonyama)

Ingonyama company HackMD. Private team, joined on Sep 10, 2024.

  • Yuval Domb (yuval@ingonyama.com): Multiplication in prime fields with large unstructured primes is a fundamental building block of modern cryptography. Modular multiplication of two field elements $a, b \in \mathbb{F}_p$ requires computing their full product $ab$ and reducing it modulo $p$. The two most widely used reduction algorithms are due to Barrett and Montgomery, with the latter operating in a permuted coset of the field (Montgomery form). Single-precision implementations of both algorithms typically require three full multiplications: one for $ab$ and two for the reduction. With schoolbook multiplication, each full multiplier costs $n^2$ digit multiplications, resulting in a total of $2n^2$ for the reduction. Faster methods like Karatsuba can decrease the number of digit multiplications but introduce significant overhead in additions and complex internal dependencies, making them less suitable for CPU-based implementations. Multi-precision approaches (operand scanning), particularly with Montgomery reduction, are the most widely used today. The best known algorithm (at least to us and ChatGPT) achieves reduction in $n^2 + n$ digit multiplications without increasing additions or introducing complex dependencies, making it highly efficient for CPU-based systems.
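To make the digit-multiplication count concrete, below is a minimal sketch of textbook operand-scanning (word-by-word) Montgomery reduction over 64-bit limbs, which hits exactly the $n^2 + n$ figure quoted above: each of the $n$ outer iterations spends one digit multiplication deriving the factor $m$ and $n$ more accumulating $m \cdot p$. The limb count `N` and the precomputed constant `p_inv_neg` ($= -p^{-1} \bmod 2^{64}$) are assumptions of this example, not details taken from the post.

```rust
/// Minimal sketch (not production code) of word-by-word Montgomery reduction.
/// Given t = a*b (2N limbs, little-endian) with a, b < p, returns t * R^{-1} mod p,
/// where R = 2^(64*N). `p_inv_neg` = -p^{-1} mod 2^64 is assumed precomputed.
const N: usize = 4; // e.g. a 256-bit prime held in 4 limbs

fn montgomery_reduce(mut t: [u64; 2 * N], p: &[u64; N], p_inv_neg: u64) -> [u64; N] {
    let mut overflow = 0u64; // at most one extra bit above the 2N limbs
    for i in 0..N {
        // 1 digit multiplication: choose m so that limb i of t + m*p*2^(64*i) becomes zero.
        let m = t[i].wrapping_mul(p_inv_neg);
        let mut carry = 0u128;
        // N digit multiplications: accumulate m * p into t, shifted by i limbs.
        for j in 0..N {
            let acc = t[i + j] as u128 + (m as u128) * (p[j] as u128) + carry;
            t[i + j] = acc as u64;
            carry = acc >> 64;
        }
        // Propagate the remaining carry through the upper limbs.
        let mut k = i + N;
        while carry != 0 && k < 2 * N {
            let acc = t[k] as u128 + carry;
            t[k] = acc as u64;
            carry = acc >> 64;
            k += 1;
        }
        overflow += carry as u64;
    }
    debug_assert!(overflow <= 1);
    // The low N limbs are now zero; the candidate result sits in the high limbs.
    let mut r = [0u64; N];
    r.copy_from_slice(&t[N..]);
    // A final conditional subtraction of p (when overflow != 0 or r >= p,
    // using a limb-wise compare) is still needed; omitted here for brevity.
    r
}
```

Counting multiplications: the outer loop runs $n$ times at a cost of $1 + n$ digit multiplications each, giving the $n^2 + n$ total cited above, on top of the $n^2$ spent computing $ab$ itself.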
  • In this article, we present an integration of the ICICLE field library (Rust wrappers) for trace generation and symbolic constraints for AIR arithmetization using the Plonky3 framework. This enables users to write AIR circuits in the Plonky3 AIR language and easily interface with ICICLE APIs: generating trace data in ICICLE field types on any device using ICICLE's device-agnostic APIs, and generating symbolic constraints. The user can then parse the symbolic constraints and show that the trace satisfies them by taking advantage of ICICLE backend APIs such as NTT, Polynomials, VecOps, Hash, MerkleTree, and FRI (upcoming). In short, this enables users to write custom STARK provers that meet their application requirements.
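For a sense of what writing an AIR in the Plonky3 AIR language looks like on the user side, here is a minimal two-column Fibonacci-style sketch built on Plonky3's `Air`/`AirBuilder` traits. Exact trait and method signatures differ between Plonky3 versions, and the ICICLE side of the integration (trace generation in ICICLE field types, constraint checking with the backend APIs) is not shown; treat this as an illustrative sketch rather than the integration's actual code.

```rust
use p3_air::{Air, AirBuilder, BaseAir};
use p3_matrix::Matrix;

/// Two-column Fibonacci AIR: each row holds (a, b) and the next row must be (b, a + b).
pub struct FibonacciAir;

impl<F> BaseAir<F> for FibonacciAir {
    fn width(&self) -> usize {
        2
    }
}

impl<AB: AirBuilder> Air<AB> for FibonacciAir {
    fn eval(&self, builder: &mut AB) {
        let main = builder.main();
        let local = main.row_slice(0); // current row
        let next = main.row_slice(1); // next row

        // Transition constraints, enforced on every row except the last.
        builder.when_transition().assert_eq(next[0], local[1]);
        builder
            .when_transition()
            .assert_eq(next[1], local[0] + local[1]);
    }
}
```

A prover built on the ICICLE backend would then generate the trace for such an AIR, evaluate the parsed symbolic constraints over it, and commit using the Hash, MerkleTree, and FRI primitives mentioned above.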
  • Reproducibility in deep learning has long been a critical concern. Whether it’s in autonomous vehicles, medical systems, or simply replicating the results of a scientific experiment, ensuring consistency is key. This challenge has become even more prominent with the rise of large language models (LLMs), particularly when we want our models to consistently follow a specific pattern, for example in safety-critical applications. This is a well-known problem (e.g., Determinism in Deep Learning, Reproducible Deep Learning Using PyTorch), and to the best of our knowledge, there is still no canonical solution for the general case, only best practices to mitigate its impact. Citing from the PyTorch documentation on Reproducibility: “Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds.” <center>“Create an image of a red apple in studio lighting”</center> Our Motivation
  • Introducing zkDL++, a novel framework designed for provable AI. Leveraging zkDL++, we address a key challenge in generative AI watermarking: maintaining privacy while ensuring provability. By enhancing the watermarking system developed by Meta, zkDL++ solves the problem of needing to keep watermark extractors private to avoid attacks, offering a more secure solution. Beyond watermarking, zkDL++ proves the integrity of any deep neural network (DNN) with high efficiency. In this post, we outline our approach, evaluate its performance, and propose avenues for further optimization. Introduction: Stable Signature is a watermarking (WM) scheme introduced by Meta AI to allow for identification of images generated by their latent diffusion model (LDM). It uses a Convolutional Neural Network (CNN) to extract the WM, and has the advantage of being very robust to image tampering, allowing for detection even if the image was filtered, cropped, compressed, etc. However, Stable Signature suffers from the drawback of having to keep the extractor model private, as exposing it makes the WM susceptible to attacks. This drawback can be significant if at any point there is a controversy about ownership of a certain image. While Meta, or any other company using such a WM method, can run the extractor and see that a certain image was generated by their model, they cannot prove it to a third party such as social media users, unless they expose the weights of the extractor model. Such exposure may lead to obsolescence of the WM system. We propose using the emerging technology of Zero-Knowledge Proof (ZKP) as a solution to this problem. In a ZKP scheme, a prover wishes to convince a verifier that they have knowledge of a witness $W$, which obeys a certain functional relation $F(X,W)=0$, without revealing $W$. The relation $F$ is generally represented by an arithmetic circuit and is public. The variable $X$ contains public input and output to the circuit. In the case of Stable Signature, the relation $F$ is the architecture of the WM extractor, with its output compared to the key, and it is satisfied if and only if the extracted WM from the input image matches the key. The public variable $X=(x_{in},x_{out})$ includes the image and the Boolean output, and the private witness $W=(w,k)$ includes the weights of the extractor and the key, as depicted in the diagram above. A verifier can then run a verification algorithm that tests the proof and returns True if the relation is indeed satisfied, and False otherwise. An additional challenge is ensuring that the weights weren’t deliberately chosen to produce a specific result for a given image. Fortunately, this issue can be easily resolved by the following:
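For concreteness, one way to write down the relation described above (notation ours, not necessarily the exact formulation used in zkDL++), with $\mathrm{Extract}_w$ denoting the extractor network with weights $w$:

$$
F(X, W) = 0 \;\Longleftrightarrow\; x_{out} = \big[\, \mathrm{Extract}_{w}(x_{in}) = k \,\big], \qquad X = (x_{in}, x_{out}), \quad W = (w, k),
$$

where $[\cdot]$ is the Iverson bracket: 1 if the watermark extracted from the image matches the key, 0 otherwise. The prover publishes $X$ together with a proof, while $W = (w, k)$ remains private.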