Anoma Unified Execution Environment

Primitives

Fractal instance

We assume that the host environment of a fractal instance provides the following functions:

Storage
  • Storage.Read(k) -> IO v
  • Storage.Write(k, v) -> IO ()
  • Storage.Delete(k) -> IO ()
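
A minimal Go-style sketch of the assumed storage interface; modelling the IO effect as an error return and keys/values as byte slices is an assumption for illustration:

// Storage sketches the key/value functions assumed to be provided by the
// host environment of a fractal instance.
type Storage interface {
    // Read returns the value stored at key k.
    Read(k []byte) ([]byte, error)
    // Write stores value v at key k.
    Write(k []byte, v []byte) error
    // Delete removes the value stored at key k.
    Delete(k []byte) error
}
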
Evaluation

The fractal instance should support evaluation of the instruction set.

Instruction set

We assume the existence of an instruction set Instr with the following (approximate) interface:

  • Instr.Eval(s, p) -> res
    On input a state oracle s (which provides read access) and a program p, the evaluation algorithm outputs a result res, which includes state writes and possibly special scheduling commands.
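
A rough Go sketch of this interface, where the oracle and result types are illustrative assumptions:

// StateOracle provides read-only access to state during evaluation.
type StateOracle interface {
    Read(k []byte) ([]byte, error)
}

// Result collects the effects of evaluating a program: pending state writes
// plus any special scheduling commands emitted during evaluation.
type Result struct {
    Writes     map[string][]byte // pending writes, keyed by state key
    Scheduling [][]byte          // opaque scheduling commands, if any
}

// Instr sketches the assumed instruction set interface.
type Instr interface {
    // Eval runs program p against the read-only state oracle s.
    Eval(s StateOracle, p []byte) (Result, error)
}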

ZK proof system

We assume the existence of a zero-knowledge proof system ZKP with the following (approximate) interface:

  • ZKP.Prove(...)
  • ZKP.Verify(...)
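
The exact interface is deliberately left open above; a conventional statement/witness form, sketched here purely as an assumption, might look like:

// ZKP sketches a generic zero-knowledge proof system. The statement/witness
// split and byte-slice encodings are assumptions, not part of the specification.
type ZKP interface {
    // Prove produces a proof that the prover knows a witness w for statement x.
    Prove(x []byte, w []byte) ([]byte, error)
    // Verify checks a proof against statement x without learning the witness.
    Verify(x []byte, proof []byte) (bool, error)
}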

Threshold FHE scheme

We assume the existence of a threshold FHE scheme TFHE with the following (approximate) interface:

  • TFHE.Setup(n, k) -> (pk, sk_1, ..., sk_n)
    On input integers n and k, the setup algorithm outputs a threshold public key pk and n secret key shares sk_1, ..., sk_n, of which any k suffice to decrypt (see below).
  • TFHE.Encrypt(pk, d) -> c
    On input public key pk and plaintext d, the encrypt algorithm outputs a ciphertext c.
  • TFHE.Eval(pk, C, c_0) -> c_1
    On input public key pk, circuit C, and ciphertext c_0, the eval algorithm outputs a ciphertext c_1.
  • TFHE.PartDec(pk, c, sk_i) -> s_i
    On input public key pk, ciphertext c, and secret key share sk_i, the partial decryption algorithm outputs a partial decryption share s_i associated with party i.
  • TFHE.FinDec(pk, s_1 ... s_k) -> d
    On input public key pk and partial decryption shares s_1 ... s_k, at least k in total, the final decryption algorithm outputs a plaintext d.

This interface is primarily sourced from p15 of this paper. Note that it should encapsulate Ferveo, where Eval fails for all circuits (making it effectively a threshold encryption and decryption scheme only), as well as more limited schemes where Eval supports only a specific restricted class of circuits.

Concepts

This architecture defines the following concepts:

  • An application has a single application validity predicate and state that is split into notes, which may themselves be split across fractal instances.
  • A transaction is an atomic bundle of computation and state updates which must be ordered with respect to other transactions. Transactions are associated with a single security domain (fractal instance).
  • A note is an immutable object composed of the following fields:
{
    provenance: [][]byte // provenance identifier chain
    application: bytecode // application vp (by commitment)
    owner: bytecode // owner vp (by commitment)
    key: []byte // application state key
    value: []byte // application state value
}
  • provenance is an ordered list of fractal instance (security domain) identifiers, most-recent-first. Exact details TBD, but conceptually, locally originated state has an empty list, while state sent from A to B to C will have a list of ["b", "a"] on chain C.
  • application contains a binding commitment to code for the note application's validity predicate.
  • owner contains a binding commitment to code for the note owner's validity predicate.
  • key contains an arbitrary key, with semantics determined by the application.
  • value contains an arbitrary value, with semantics determined by the application.

Notes are immutable and can only be spent (consumed) once. They are stored at a fractal-instance-local state key calculated as follows:

{provenance}/{application}/{owner}/{key}

Thus, the global state can be considered to be organised on the basis of a state key calculated by prepending the fractal instance identifier to the fractal-instance-local state key.
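
A Go sketch of the note structure and the derivation of the local and global state keys; the hex encoding and separator choices are assumptions for illustration:

package sketch

import (
    "encoding/hex"
    "strings"
)

// Note mirrors the note fields described above.
type Note struct {
    Provenance  [][]byte // provenance identifier chain, most-recent-first
    Application []byte   // commitment to the application validity predicate
    Owner       []byte   // commitment to the owner validity predicate
    Key         []byte   // application state key
    Value       []byte   // application state value
}

// LocalKey computes the fractal-instance-local state key
// {provenance}/{application}/{owner}/{key}.
func (n Note) LocalKey() string {
    prov := make([]string, len(n.Provenance))
    for i, p := range n.Provenance {
        prov[i] = hex.EncodeToString(p)
    }
    return strings.Join([]string{
        strings.Join(prov, ","),
        hex.EncodeToString(n.Application),
        hex.EncodeToString(n.Owner),
        hex.EncodeToString(n.Key),
    }, "/")
}

// GlobalKey prepends the fractal instance identifier to the local key to
// form the global state key.
func GlobalKey(instanceID string, n Note) string {
    return instanceID + "/" + n.LocalKey()
}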

Transactions could be "global" and include global keys, which would then be used to determine which fractal instance(s) need to execute a transaction - perhaps a chimera chain if multiple instances are involved - although many expressible transactions will not be atomically executable. Higher-level programming abstractions could be developed for dealing with asynchronous execution and for splitting an asynchronous application interaction into several transactions.

Transactions

A transaction includes the following fields:

{
    stateReadConstraints: constraintSet
    stateWriteConstraints: constraintSet
    lookupCode: bytecode
    calculationCode: bytecode
    extradata: []byte
}

The stateReadConstraints and stateWriteConstraints fields include state-independent programmatic constraints (representation TBD) on which keys (including provenance, application, owner, and key) a transaction is allowed to read or write. These allow for the deconfliction and safe ordering of transactions prior to execution. Transactions which attempt to read or write keys outside of the constraint-bounded keyspace immediately fail.

lookupCode and calculationCode are used in phases 1 and 2 of execution, as described below. extradata contains arbitrary extra data (e.g. information to be included in created notes) which can be accessed by any code running in the execution phases.
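
Since the constraint representation is TBD, the following is only a hedged sketch in which constraints are modelled as allowed key prefixes; the real representation will be programmatic:

package sketch

import "bytes"

// constraintSet is a placeholder representation: a set of allowed key prefixes.
type constraintSet [][]byte

// allows reports whether key k lies inside the constraint-bounded keyspace.
// A transaction reading or writing a key for which allows returns false
// fails immediately.
func (cs constraintSet) allows(k []byte) bool {
    for _, prefix := range cs {
        if bytes.HasPrefix(k, prefix) {
            return true
        }
    }
    return false
}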

Transaction execution

In this architecture, transactions are executed in four phases.

  1. Lookup
    • In the lookup phase, arbitrary instructions (lookupCode) are run to look up all specific notes that this transaction will use as inputs in phases 2 and 3. These instructions cannot write any state.
    • These instructions are run sequentially (except for parallelism supported by the instruction set).
  2. Calculation
    • In the calculation phase, arbitrary instructions (calculationCode) are run to compute all specific output notes that this transaction will create. These instructions do not write state per se, but do create output notes.
    • These instructions are run sequentially (except for parallelism supported by the instruction set).
    • Claim: if these instructions could have been run in parallel, it would have been possible to split the transaction into two transactions which consume and create separate notes.
  3. Validity check
    • In the validity check phase, all owner and application validity predicates for all involved input and output notes are called, with all input and output notes as input data. They must return "accept" or "reject".
    • All validity predicate checks can be performed in parallel. Validity predicate checks cannot write any state, and cannot read any state other than the input and output notes.
  4. Commit
    • In the commit phase, if and only if all validity predicates called in phase 3 returned "accept", all notes used as inputs are consumed (spent) and all notes created as outputs are added to state at the appropriate keys.
    • Created notes can now be looked up and potentially used by subsequent transactions.

In the transparent architecture, an executor runs all four phases. Note that after phase 1, additional (optimistic) transaction parallelism may be possible. In the shielded architecture (Taiga), the user or solver creating the proof must run phases 1, 2, and 3 and prove their correctness, while the executor need only check the proof and run phase 4.
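
A control-flow sketch in Go of the four phases above, as run by a transparent executor. The phase bodies are passed in as functions so the example stays schematic; Note refers to the struct sketched in the Concepts section, and all other names are assumptions.

package sketch

import "errors"

// Execute runs a transaction through the four phases described above.
func Execute(
    lookup func() ([]Note, error),                  // phase 1: run lookupCode (read-only)
    calculate func(inputs []Note) ([]Note, error),  // phase 2: run calculationCode
    predicates []func(inputs, outputs []Note) bool, // phase 3: owner and application VPs
    commit func(inputs, outputs []Note) error,      // phase 4: consume inputs, write outputs
) error {
    inputs, err := lookup()
    if err != nil {
        return err
    }
    outputs, err := calculate(inputs)
    if err != nil {
        return err
    }
    // Every validity predicate must accept; these checks are independent of
    // each other and could be run in parallel.
    for _, vp := range predicates {
        if !vp(inputs, outputs) {
            return errors.New("validity predicate rejected")
        }
    }
    // Commit only if all predicates accepted: consume inputs, write outputs.
    return commit(inputs, outputs)
}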

Open questions

We can abstract over fractal instances and local instances!

Build the validity predicate architecture on top of a VM?

Separate out state privacy (see the sketch after this list):

  • read_public, read_shielded, read_private
  • write_public, write_shielded, write_private
    • privacy markers are local
    • write_private includes a domain (public key)
    • public state is public on this instance
    • shielded state is private on this instance and proved by someone else
  • run some instructions
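
A hedged Go sketch of what such a privacy-separated storage interface could look like; the method names follow the list above, and the domain type is an assumption:

// PrivacyStorage sketches the read/write variants listed above. Privacy
// markers are local to this instance.
type PrivacyStorage interface {
    ReadPublic(k []byte) ([]byte, error)
    ReadShielded(k []byte) ([]byte, error)
    ReadPrivate(k []byte) ([]byte, error)
    WritePublic(k, v []byte) error
    WriteShielded(k, v []byte) error
    // WritePrivate additionally takes the domain (a public key) under which
    // the private state is written.
    WritePrivate(k, v []byte, domain []byte) error
}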

-> compile this VM to ZKPs + FHE circuits that need to be verified

First-class notions of separate users' states?

-> make VPs recursive