# TransferFrom & Escrow nullification patterns

<style> .ui-content.comment-panel #doc, body.pretty-comment-panel #doc { max-width: 90%; } </style>

> Author: Mike

[TOC]

## Intro

This is an exploration into advanced note nullification patterns for Aztec smart contracts, where the entity performing the nullification is not necessarily the entity who "owns" the note.

More succinctly: **"How to spend someone else's note"**.

> I recommend reading Appendix A and Appendix B first, to get some background on Aztec.

---

## Nullification

Note nullification can be thought of as a mechanism for editing state, without linking this "edit" action to previous such actions.

A note usually conveys an "owner"[1]. This "owner" is an abstract concept which is defined and managed by the smart contract to which the note belongs.

"Traditionally", in a zcash-style approach, computation of a note's nullifier requires a `nullifier_secret_key` which only the note's owner knows. Usage of such a secret key prevents outside observers from learning which note was nullified. It also means the person who created the note in the first place cannot nullify the note.

> [1] We'll discuss shortly that ["owner" is a confusing name](#Owner-is-a-confusing-name).

### Requirements

Here are the common requirements for the nullifier of a note:

1. A note must have one unique, deterministic nullifier.
2. To all except the owner of the note: The tx which nullifies a note should not be linkable to the tx which created the note.

That's about it, actually. But there are a few consequences:

- No one but the "owner" can compute the nullifier of a note.
- The creator of a note cannot compute its nullifier (unless they create a note for themselves).
- Computation of a nullifier requires a secret known only to the "owner". This secret must be provably linkable to the "owner".
- An unambiguous notion of a note "owner" must be conveyed, somehow, by the contract in which the note is defined.

We'll start to consider some compromises on requirement (2) as we progress through this doc, because we'll want other people to be able to nullify notes on behalf of a note "owner".

> Why do I keep putting "owner" in quotes? An "owner" is a bit of an ambiguous and wobbly term. See ["Owner" is a confusing name](#Owner-is-a-confusing-name) later on.

---

## In this doc

In this doc, we're concerned with enabling _entities other than the "owner"_ to nullify a note, so most of our focus is on:

- Who is the entity (or entities) who may nullify the note?
- When are these entities decided upon, and by whom?
- How are these entities conveyed?
    - Mapping key? Note preimage? Hard-coded admin? Stored somewhere? Knowledge of something?
- Under what circumstances may the note be nullified?
- How should the nullification secret be designed?
- How should the _nullifier_ be designed?
- How does the creator of the note safely share the nullification secret with those who may nullify the note?
- What are the tradeoffs of different approaches?

We explore two patterns in particular: "transferFrom" and "Escrow", to be discussed shortly.

The crux of the difficulty is: **How do we derive and share nullification secrets?**

---

## Humans vs "Dumb smart contracts"

In Aztec, there are no EOAs; users are represented by an Account Contract, and signing keys are abstracted. But there's still a distinction between contracts that represent an end user (Account Contracts) and smart contracts which don't represent a user and simply run code.

These distinctions aren't apparent to other Aztec smart contracts, but are apparent to us as observant humans and as designers of private apps on Aztec.

A key distinction is: it's very easy for a human to possess a secret[^1]; it's very hard for a "dumb smart contract" to possess a secret. For want of a better name[^2], we refer to contracts which don't directly represent a human as "dumb smart contracts". We'll see later that we can roughly emulate "knowledge of a secret" for a dumb smart contract through MPC, but it's not perfect.

Also, a dumb smart contract is not backed by any offchain compute, and so cannot execute a transaction. A human (or a bot, or a bot within a TEE if you're feeling heretical) has access to offchain compute: a place to execute transactions and to store secrets.

[^1]: The idea that "only humans can possess secrets" is quite simplistic. A TEE could hold a secret. A group of people could generate a secret through MPC. When indistinguishability obfuscation ("iO") matures, a smart contract _will_ be able to possess a secret. There are tradeoffs and complications.

[^2]: Please do suggest a better name.

---

## "Owner" is a confusing name

Within the scope of an Aztec private function, there are several kinds of entity that might be colloquially referred-to as "owner". Let's disambiguate them here, and try not to use "owner" again.

> Edit after writing: "owner" is still used all over the place - I am sorry. This is still a useful section, as these more-exact terms are also used below.

1. "**Executor**": the person whose machine is executing the functions of a tx. Therefore, the person who needs to have access to whatever secrets are needed in order to execute the tx.
    - Notice: an Executor cannot be a dumb smart contract; it's a role that transcends contracts and the blockchain; it's a real-world machine (or the human who owns the machine).
2. "**Caller**": the person or dumb smart contract that is calling the function. A.k.a. `msg_sender`.
    - Just like Ethereum smart contracts, an app developer is responsible for writing custom access controls to restrict who is allowed to call a particular function, and hence who is allowed to access certain state or notes. Restrictions in Ethereum are usually based on "Who is `msg.sender`?". In Aztec, restrictions can also be based on what secrets the caller knows, so we have the below notions of "Authorizer" and "Nelly". See Appx A for much more discussion around this.
3. "**`owner` conveyed through a mapping-key**": Sometimes state is stored within a mapping, where some address is the mapping key. Depending on the app-specific interpretation, the mapping key can sometimes be referred-to as the "owner" of the state at that mapping key.
4. "**`owner` field inside a note**": Sometimes an address is included in the preimage of a note, to convey some sense of ownership, or to convey who may nullify the note. Often this is not needed if a mapping-key already contains that exact same address.
5. "**Authorizer**": Sometimes there's an entity (or multiple entities) whose signature permits access to a function, or to edit a particular private state variable. See authwits. Let's just call this entity an "Authorizer".
6. "**Admin**": Sometimes there's a contract admin, who's the only entity allowed to touch certain state, or call certain functions.
7. "**Nelly**": The entity (or possibly multiple entities) who knows the nullification secret that can generate the nullifier for some note.
8. "**Note-Creation Executor**": The "Executor" of the tx which _created_ the note in the first place. (Not to be confused with the `msg_sender` of the function which created the note, which is why we are not simply calling them the "Creator", because that has been confusing). As the Note-Creation Executor of a note, they know certain information that might need to be shared with the future Nelly of the note. E.g. they might need to share a `note_secret`.

> **Nelly**
> I am going to refer to "the person (or people) who know the nullification secret that can generate the nullifier for a note" as "Nelly", because:
> - "owner" is not necessarily a correct name, depending on the app's custom logic;
> - Nullifier is already taken, and shouldn't be overloaded to also mean a person;
> - Nullifioorrrr is a strong candidate, but can be confused with "Nullifier" when said out loud;
> - Nullificator was also a strong candidate, but it's a lot of syllables.
> - Nelly. It's a name. It's short. Your brain will get used to it, trust me. Plus when I was little, my auntie had a [pointer](https://en.wikipedia.org/wiki/Pointer_(dog_breed)) called Nelly, who was the best most playful dog ever. And Nelly's dead now, so you can't argue with me and suggest an alternative because that would be insensitive.

Oftentimes, the same entity will fulfill multiple of these roles in a single function call. Other times, these roles will each be taken by a different entity, in the context of a function call.

---

## Who? Defining example entities

Many examples in this doc use these names in this way. I figure I should shoehorn this section in here.

- **Alice**:
    - In `transferFrom`: the person who owns the note.
    - In `Escrow`: the person who escrowed the note with a dumb smart contract in the first place.
- **Bob**:
    - A very specific person. Useful for exploring scenarios where Alice grants "transferFrom" permission to _only one person_ (Bob), or where _only one person_ (Bob) may later nullify an escrowed note. Bob might need to be decided in advance of an escrow, or perhaps Alice wants to decide on Bob later.
- **Greg**:
    - Because it looks a bit like "Group". One of many possible people. A.k.a. "Someone". Useful for exploring scenarios where Alice grants "transferFrom" permission to _many people_, or where _one of many people_ may later nullify an escrowed note. This "Someone" might belong to a closed group with whom Alice can communicate. Alice might even be part of this group, and so could also occasionally play the role of Greg. The group might need to be decided in advance of an escrow, or perhaps Alice wants to decide on the group later.
- **`Foo`**:
    - For "Escrow", `Foo` is the dumb smart contract which owns the state.
- **`Bar`**:
    - Some dumb smart contract.

---

## TransferFrom vs Escrow

Internally, we've colloquially referred to two scenarios where one might need to nullify someone else's notes: "transferFrom" and "escrow":

**"transferFrom":** Alice owns a note. She wants someone else (a human or some dumb smart contract) to be able to also spend her note under certain circumstances.

**"Escrow":** A note is transferred into the custody of a dumb smart contract, `Foo`. I.e. the note is "escrowed". Now `Foo` "owns" the note. But a dumb smart contract cannot possess a secret key easily[^1]. Later, we want to enable a Caller (it could be Alice, or other users, or other dumb smart contracts, or `Foo` itself) to nullify the note.

![image](https://hackmd.io/_uploads/rJUjtJJLkg.png =250x) *We want someone else to be able to nullify Alice's note.* ![image](https://hackmd.io/_uploads/HkP9KkkIyl.png =400x) *We want someone (maybe even `Foo` itself) to be able to nullify `Foo`'s note. How can someone do this, when `Foo` is a dumb smart contract which doesn't possess secret keys?* In this doc, we'll often explore a more-granular re-framing: We'll consider tuples of (Creator, Owner, Executor, Caller): | Note- Creation Executor | Owner | Executor | Caller | What we've been categorising it as | | ------------------ | ----- | -------- | ------ | ---------------------------------- | | Any | Alice | Alice only | Alice | Transfer | | Any | Alice | Bob | Bob | TransferFrom | | Any | Alice | Any | `Foo` | TransferFrom | | Alice | `Foo` | Alice only | `Foo` | (Transfer out of) Escrow (Singleplayer) | | Alice | `Foo` | Alice only | Alice | (TransferFrom out of) Escrow (Singleplayer) | | Alice | `Foo` | Bob | `Foo` | (Transfer out of) Escrow | | Alice | `Foo` | Bob | Bob | (TransferFrom out of) Escrow | | Alice | `Foo` | Bob | `Bar` | (TransferFrom out of) Escrow | If we were in Ethereum-land, where everything is public (or even Aztec public-land), the "Owner" column wouldn't make a difference: Regardless of whether the owner is a human or dumb smart contract, the "Owner == Caller" case would be considered a `transfer`, and all other cases a `transferFrom`. It's the private-land setting -- where nullification secrets are needed -- that makes the nullification of escrowed notes a much harder, distinct problem from nullification of a human user's notes. See [Escrow](#Escrow). --- ## How should we compute the nullifier? That's the question this doc explores. And it depends on what the app developer is building. Here are some basic nullification primitives: > We shoehorn this section in here, because the next sections will use some of these terms. Please see Appx B for much more detail, and for further nullification options. ```rust // The way in which note_hash is computed isn't relevant to us, // so we gloss over it. // Zcash-style nullifier = hash(note_hash, nullifier_secret_key); // Note-secret-style nullifier = hash(note_hash, some_one_time_note_secret); // Plume-style // Note: plume nullification does not actually require `sk` to // be passed into a circuit. // See the plume section for details. nullifier = sk * hash_to_curve(note_hash, Pk); // That `*` is elliptic curve scalar multiplication ``` --- ## TransferFrom In private-land, transferring another human actor's balance is much easier than the Escrow case (discussed after this) of transferring a dumb smart contract's balance. I'll leave it up to the [Summaries](#Summaries) section of this doc to outline transferFrom options, through pretty tables. --- ## Escrow What does it mean for a dumb smart contract to be a note "owner"? In Ethereum, a dumb smart contract `Foo` can "own" a token balance in some token contract. Using terminology from the above section, they are understood to be the "owner" because `Foo`'s address will be the "`owner` conveyed through a mapping-key", when accessing `balances[Foo]`. To spend some of `Foo`'s balance (in Ethereum, still), there are two options: `transfer` and `transferFrom`; the exact same functions that can be called to spend an EoA's balance. Both functions contain a simple check to see who the Caller (`msg.sender`) is, to check for authorisation to edit the state `balances[Foo]`. 
> With a transfer, `balances[msg.sender]` may be edited by the Caller. > With a transferFrom, `balances[from]` may be edited as long as the Caller has sufficient `allowances[msg.sender]`. > There's also a nicer Permit flow for `transferFrom`, where a signature from the owner (`from`) is required. Notice then, that in Ethereum, it doesn't matter who triggered the tx or who executed the tx; it only matters who the Caller is (and in the case of the Permit flow, it matters who the Authorizer is). > We discuss detailed approaches for authorising state access in private-land in Appx A. I recommend reading that before coming back here. In Aztec private-land, a nullification secret is needed to modify private state (i.e. to nullify a note). It's this pesky nullification secret that makes it difficult to conceptualise the notion of `Foo` "owning" private state, because it's difficult for a dumb smart contract to possess a secret (as briefly discussed [earlier](Humans-vs-Dumb-smart-contracts)). ### Approaches What are our options, if we want to emulate `Foo` "owning" a note? These are the options I can think of. I list them with brief explanations, then we go into deeper explanations with illustrative pseudocode. 1. **Approach 1**: A "Singleplayer" special case: For use cases where the "Note-Creation Executor" (the entity who executed the tx which created the note) is the only entity who will ever want to nullify the note in future. The Note-Creation Executor specifies and retains some kind of nullification secret, and hence effectively retains ownership of the note, in the sense of being the Nelly. The function to spend the note could still give `Foo` a sense of ownership, e.g. by checking that `Foo` is the `msg_sender`. a. Use the "note-secret-style" nullification scheme, where the note contains a `note_secret` field. b. Use zcash-style or plume-style nullification, but redesign the note to contain a new `nelly` field. 2. **Approach 2:** The "Note-Creation Executor" (the entity who executed the tx which created the note) specifies and retains some kind of nullification secret, and later selectively shares this secret with an Executor who wishes to execute some tx which needs to nullify `Foo`'s note. a. Share a so-called `note_secret`, if using the "note-secret-style" nullification scheme. b. Use plume-style nullification, but redesign the note to contain a new `nelly` field. The nullifier and plume proof would then need to be shared with the would-be Executor. 3. **Approach 3:** A group of people with a shared public key are used to represent `Foo`, and collectively control some kind of shared secret key. a. Shared Account Contract b. Threshold Plume c. Other MPC schemes d. FHE coprocessor ### Discussing "Approach 1" This is clearly a very special case of "Escrow", since the only person who can nullify the note is the person who put it there. #### Example 1a - note-secret-style nullification Alice creates a note, "owned" by `Foo` (at least, nominally owned as the "`owner` conveyed through a mapping key"): ```rust note = { slot(balances[Foo]), note_secret, value: 10 }; ``` Some time later, **only Alice** can execute a `transfer` function, when `msg_sender = Foo`. ```rust #[private] fn transfer(to: AztecAddress, amount: u64) { // Notice: to access `balances[Foo]`, the msg_sender must be `Foo`. note = balances[msg_sender].get_note(); note_hash = note.hash(); // Alice can furnish `note_secret`: nullifier = hash(note_hash, note_secret); // Creation of a new note owned by `to` is not shown. 
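    // (A hedged sketch -- not from the original -- of what that omitted step
    // might look like, assuming the recipient's note uses the same
    // note-secret-style struct and `to` has been given a fresh secret:)
    //
    //   new_note = { slot(balances[to]), note_secret: fresh_note_secret, value: amount };
    //   balances[to].insert(new_note);
    //
    // Any "change" back to `Foo` would be a similar, additional note.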
}
```

#### Example 1b - zcash-style nullification

It's not as clean a note design, but putting it here for completeness. One benefit is that Alice doesn't need to keep track of one-time-use note secrets, and can instead just use her `nullifier_secret_key`; but that's not a very strong argument, because she'll need to keep track of the preimage of the note (including its randomness) anyway.

Alice creates a note, "owned" by `Foo` (at least, nominally owned as the "`owner` conveyed through a mapping key"). But the note is also designed to allow a separate Nelly address to be specified:

```rust
note = { slot(balances[Foo]), nelly: alice_address, value: 10 };
```

Some time later, **only Alice** can execute a `transfer` function, when `msg_sender = Foo`, but using her own `nullifier_secret_key`.

```rust
#[private]
fn transfer(to: AztecAddress, amount: u64) {
    // Notice: to access `balances[Foo]`, the msg_sender must be `Foo`.
    note = balances[msg_sender].get_note();
    note_hash = note.hash();

    let nelly_address = note.nelly;
    let nelly_nullifier_secret_key = unsafe { oracle.get_nsk(nelly_address); }
    assert_nullifier_secret_key_belongs_to_address(nelly_address, nelly_nullifier_secret_key);

    // Alice can furnish her `nullifier_secret_key`:
    nullifier = hash(note_hash, nelly_nullifier_secret_key);

    // Creation of a new note owned by `to` is not shown.
}
```

> The same example could be applied to Plume-style nullification instead of zcash-style nullification, if you wanted.

#### Example 1c - Singleplayer TransferFrom out of Escrow functionality

Let's try to achieve Permit-style transferFrom in this "Singleplayer Escrow" setting.

In the case of a transferFrom, the Caller is not `Foo`, but someone else, so we can't give `Foo` gatekeeping abilities by checking "Is the `msg_sender` `Foo`?", because `Foo` will not be the `msg_sender`.

In Permit flows, the "owner" provides a signature to authorise some other Caller to access their state. But in these Approaches 1 & 2, `Foo` does not possess a secret. In the section '"Owner" is a confusing name', we defined an "Authorizer" entity who could fulfil the role of authorizing access to a state through a signature. We could specify such an entity inside the note.

In example 1b, we already specified a "Nelly" entity inside the note, whose `nullifier_secret_key` can be used to nullify the note. In Appx A, we discuss how, technically, a "Nelly" is all you need to gatekeep nullification of private state, so we could technically have just a "Nelly" with no authorization signature required. We do go on to say in Appx A that it provides much stronger security to demand a signature from a more secure key than a `nullifier_secret_key`; an "Authorizer", who could be specified in a new "Authorizer" field in the note struct definition, or as the same address as the "Nelly".

Let's just give an example, and you can play around with permutations if you don't like it.

```rust
note = { slot(balances[Foo]), nelly_and_authorizer: alice_address, value: 10 };
```

Some time later, only Alice can execute a `transferFrom` function:

```rust
#[private]
fn transferFrom(from: AztecAddress, to: AztecAddress, amount: u64) {
    // In our examples, `from = Foo`
    note = balances[from].get_note();
    note_hash = note.hash();

    let nelly = note.nelly_and_authorizer;
    let authorizer = nelly;

    // We can check a dedicated signature through an authwit.
    // See Appx A to determine whether you think this is necessary for you.
    // I don't actually remember authwit syntax, so let's just say this:
    address(authorizer)
        .call("is_valid", ...some details about the function call, msg_sender and args);
    // If that call succeeds, then we know that msg_sender
    // is authorised to call this transferFrom fn with these args.

    // Now nullify:
    // (This example does zcash-style nullification)
    let nelly_nullifier_secret_key = unsafe { oracle.get_nsk(nelly); }
    assert_nullifier_secret_key_belongs_to_address(nelly, nelly_nullifier_secret_key);

    // Alice can furnish her `nullifier_secret_key`:
    nullifier = hash(note_hash, nelly_nullifier_secret_key);

    // Creation of a new note owned by `to` is not shown.
}
```

### Discussing "Approach 2"

Clearly, this approach kind-of violates one of the requirements of a nullifier; that the creator of the note cannot derive the nullifier and cannot see when the note is nullified.

With this Approach 2, the Note-Creation Executor is effectively retaining ownership of the new note, since their knowledge of some nullification secret is gatekeeping access to the note (see Appx A). So it's not really `Foo` who autonomously owns the note at all. Even if the smart contract stores the note in a mapping `balances[Foo]`, this "`owner` conveyed through a mapping key" is just syntactic ownership; the effective owner is the Nelly; the Note-Creation Executor.

This approach is of limited usefulness. It is useful for apps where all users can communicate with each other interactively, such as a closed game amongst friends; or where all communication can be facilitated by a central party. It'll require the Note-Creation Executor to somehow know who would be interested in learning about the existence of the note, and to then know how to contact them to securely notify them of its existence. Whether the nullification secret is also shared at this stage is up to the developer. If it's not shared, it would then require anyone who wants to actually execute a tx to request and receive the nullification secret from the Note-Creation Executor.

Given the non-interactivity that smart contract developers are used to, the offchain communication is a bit cumbersome, but it is nice and simple!

#### Example 2a - note-secret-style nullification

Alice creates a note, "owned" by `Foo` (at least, nominally owned as the "`owner` conveyed through a mapping key"). The note struct is designed to contain a `note_secret` field.

```rust
note = { slot(balances[Foo]), note_secret, value: 10 };
```

Alice shares the `note_secret` with a group of >= 1 people. Call one of those people `Greg`. Then `Greg` can execute a `transfer` function, when `msg_sender = Foo`.

```rust
#[private]
fn transfer(to: AztecAddress, amount: u64) {
    // Notice: to access `balances[Foo]`, the msg_sender must be `Foo`.
    note = balances[msg_sender].get_note();
    note_hash = note.hash();

    // Greg can furnish `note_secret`:
    nullifier = hash(note_hash, note_secret);

    // Creation of a new note owned by `to` is not shown.
}
```

> We can also do a transferFrom with this approach; discussed shortly.

#### Example 2b - plume-style nullification

Alice creates a note, "owned" by Foo (at least, nominally owned as the "owner conveyed through a mapping key"). But the note is also designed to allow a separate Nelly address to be specified:

```rust
note = { slot(balances[Foo]), nelly: alice_address, value: 10 };
```

> Recall: this extra `nelly` address is only needed because `Foo` does not possess a secret in this Approach 2.

If we were not considering the "Escrow" case, and we were just transferring or transferFrom-ing a human's balance, the "`owner` conveyed through a mapping-key" would convey who the `nelly` is already, without having to add an extra field to the note struct. Someone other than Alice, Greg, wants to nullify this note. How Greg learns about the note's existence in the first place is a problem for the app developer to think about. With Plume nullification, Alice doesn't need to divulge her `nullifier_secret_key` to Greg; she can just compute and share the nullifier for this one note, along with a Plume proof of the nullifier's correctness. ```rust #[private] fn transferFrom(from: AztecAddress, to: AztecAddress, amount: u64) { // In our examples, `from = Foo` note = balances[from].get_note(); note_hash = note.hash(); nelly = note.nelly; // Note: as discussed in Appx A, we don't necessarily need to use // the _nullifier_ public key; we could instead use the address // public key directly. There are tradeoffs. nelly_nullifier_public_key = get_npk(nelly); // Now nullify, using plume: plume_proof = unsafe { oracle.get_plume_proof(note_hash, nelly_nullifier_public_key); } verify_plume_proof(note_hash, nelly_nullifier_public_key, plume_proof); { nullifier } = plume_proof; // Creation of a new note owned by `to` is not shown. } ``` #### Example 2c - TransferFrom out of Escrow functionality See example 1c, but: zcash-style nullification doesn't work unless doing "Singleplayer" stuff as described by Approach 1, because to share Alice's personal `nullifier_secret_key` is unacceptable. As per example 1c, the developer of the note struct could design a new "Authorizer" address field inside the note, and the `transferFrom` function could demand a signature or authwit from this address. Nullification can be done as per 2a or 2b. ### Discussing "Approach 3" _"A group of people with a shared public key are used to represent `Foo`, and collectively control some kind of shared secret key."_" Here, we're in the realm of multi-party computation. #### MPC `Foo` can't possess secrets, and so ordinarily when declaring the address of a dumb smart contract instance, the `nullifier_secret_key` field defaults to being the useless and unknown discrete log of some "null" generator point. But under certain circumstances, the developer of `Foo` might be comfortable having `Foo`'s nullification power be held by a collection of people, all of whom know a secret-share of `Foo`'s `nullifier_secret_key`. If we all squint a little, and we all relax, we might then agree to say that `Foo` _"possesses"_ a secret key. In reality, it's the group of people who possess a shared secret. Let's call this group "Team Greg". Under this framing, `Foo` effectively becomes some kind of shared account contract, whose secret keys are secret-shares held by the members of Team Greg. But how can `Greg` (a member of Team Greg) nullify `Foo`'s note, without learning `Foo`'s entire `nullifier_secret_key`? Well, first we discuss a scheme where Greg _does_ need to know `Foo`'s entire `nullifier_secret_key`, because it can be useful and acceptable in some cases. ##### Shared Account Contract See Appx B for details. A group of people come together to generate a shared secret, with a corresponding shared public key. They can even go further and use that shared public key (and other shared public keys) to define an Aztec Address, along with some contract bytecode. Here, we're considering that the contract bytecode is `Foo`'s bytecode. 
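As a rough, hedged illustration (the real details live in Appx B; the arithmetic below describes a naive scheme, not necessarily the one Appx B uses):

```rust
// Illustrative only -- a naive way Team Greg could form a shared keypair.
// Each of the n members samples a share and privately shares it with the group:
//   sk_i  <-- random scalar, shared amongst the members
//
// shared_nullifier_secret_key = sk_1 + sk_2 + ... + sk_n
// shared_nullifier_public_key = shared_nullifier_secret_key * G
//
// The shared public key (and any other shared public keys) can then be baked
// into the preimage of `Foo`'s address at deployment time.
// Note: in this naive version, every member learns the full shared secret;
// see Threshold Plume (below) for a way to avoid that.
```
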

Alice creates a note, owned by `Foo` (that is, owned by Team Greg's shared public key). There are two ways this note could be defined:

If the shared `nullifier_secret_key` is baked into the address of `Foo` when `Foo` is deployed:

```rust
note = { slot(balances[Foo]), value: 10 };
```

```rust
#[private]
fn transfer(to: AztecAddress, amount: u64) {
    note = balances[msg_sender].get_note();
    note_hash = note.hash();

    // Greg injects the nullifier_secret_key of `Foo`:
    nullifier_secret_key = unsafe { oracle.get_nullifier_secret_key_from_address(msg_sender) };
    // Prove existence of this nullifier_public_key in Foo's address:
    nullifier_public_key = nullifier_secret_key * G;
    assert_correct_nullifier_public_key(msg_sender, nullifier_public_key);

    nullifier = hash(note_hash, nullifier_secret_key);

    // Creation of a new note owned by `to` is not shown.
}
```

If the shared `nullifier_secret_key` is _not_ baked into the address of `Foo` when `Foo` is deployed -- for example if Team Greg can change over time, or if Team Greg can be decided by Alice when the note is created -- then the note will need to contain some public identifier of Team Greg as the note's "Nelly", in addition to `Foo` being used as the note's "`owner` conveyed through a mapping-key".

```rust
note = { slot(balances[Foo]), nelly: team_greg_public_key, value: 10 };
```

```rust
#[private]
fn transfer(to: AztecAddress, amount: u64) {
    note = balances[msg_sender].get_note();
    note_hash = note.hash();

    { nelly: team_greg_public_key } = note;

    // Greg injects the nullifier_secret_key:
    nullifier_secret_key = unsafe { oracle.get_nullifier_secret_key(team_greg_public_key) };
    // Prove correctness of this nullifier_secret_key:
    assert(team_greg_public_key == nullifier_secret_key * G);

    nullifier = hash(note_hash, nullifier_secret_key);

    // Creation of a new note owned by `to` is not shown.
}
```

##### Threshold Plume

> See here: https://hackmd.io/sFVGNA4PRvKh5FnRkzFPiA and Appx B.
> Bias warning: Mike - the author of this doc - also wrote that Threshold Plume idea. Beware of being misled. But also look how nice this is! (See what I did there? ;) )

We can avoid `Greg` learning what `Team Greg`'s shared secret is (and also avoid passing a secret into the circuit), by using Threshold Plume.

With Threshold Plume, a group of `n` people generate a shared public key, and each participant retains a secret-share. `t` of `n` people from the group can then compute nullifiers for notes which are "owned" by the group's shared public key, without any of them learning the shared secret key.

Again, the shared public key used for Threshold Plume might be baked-into `Foo`'s address as a `nullifier_public_key` at the time of deployment, or it might be a separate public key that Alice must provide as a "Nelly" field in the note. We'll look at the former case, just because it's cleaner and results in a `transfer` function which is identical to the simpler, non-escrow setting of a person transferring their own notes. For the latter case, we already have examples above of expanding a note definition with a `nelly` field.

```rust
note = { slot(balances[Foo]), value: 10 };
```

```rust
#[private]
fn transfer(to: AztecAddress, amount: u64) {
    // Assume `msg_sender` is `Foo`; the address of the collective group
    // known as "Team Greg".
    note = balances[msg_sender].get_note();
    note_hash = note.hash();

    // Note: with Plume, the nullifier keypair doesn't necessarily
    // need to be used. The address keypair could instead be used,
    // resulting in fewer derivation constraints.
    // There are security tradeoffs, though. See Appx A.
    team_greg_nullifier_public_key = get_nullifier_public_key_for_address(msg_sender);

    // Through MPC, Greg communicates with a threshold t members of Team Greg,
    // who collectively generate a plume_proof.
    // Much like threshold Schnorr signatures, this plume_proof can be
    // generated without anyone (incl. Greg) learning the shared secret.
    plume_proof = unsafe { oracle.get_plume_proof(note_hash, team_greg_nullifier_public_key); }
    verify_plume_proof(note_hash, team_greg_nullifier_public_key, plume_proof);
    { nullifier } = plume_proof;

    // Creation of a new note owned by `to` is not shown.
}
```

Notice that this smart contract looks _identical_ to a regular (non-threshold) Plume nullification of a note. I think that's very nice. Very nice indeed. The same function for the transferring of notes, regardless of whether they're owned by a single human or a dumb smart contract (via a group of n humans)? Phwoar. Please do criticise it against other approaches, though.

##### Other MPC Schemes

There are probably a lot of other underexplored ways that we can emulate `Foo` "owning" a note, when in reality the nullification of its notes is managed through a group of users.

MPC Co-SNARKS, like those being developed by [Taceo](https://github.com/TaceoLabs/co-snarks), might give some powerful ways to solve this problem. Co-snarks enable snarks to be generated by a collection of untrusted servers, without those servers being able to decode the witness data. There are some security and collusion assumptions, so developers should be careful. But if comfortable with the security assumptions, the co-snark executors might be able to hold a shared private key for an app smart contract. If anyone wants to execute a tx to _nullify_ any of the notes which are escrowed with the app smart contract, they'd need to make some kind of http request to all of the co-snark servers. If sufficient servers like the request, they could run the co-snark MPC protocol to then execute the smart contract's private function circuit and return a proof.

This seems similar to what Threshold Plume achieves. I'd say Threshold Plume is a specific, lightweight scheme focussed on nullification, and co-snarks are a very powerful, general-purpose hammer that can prove anything. It's a bit like comparing Schnorr signatures vs Snarks: one is very specific and lightweight; the other is a general-purpose beast.

##### FHE Co-processor

See Appx B. This is an underexplored area, and I can't remember all the details.

##### Indistinguishability Obfuscation

Not practical technology yet, but if people can get this to work, it enables a circuit to "possess" a secret, in some sense.

##### TEEs

Sometimes considered "Heresy!" by cryptographers, but they're worth a mention. If a TEE (Trusted Execution Environment) is assumed "secure", then it can generate Aztec secret keys that no one knows, and it can then act like a "human" on the network.

The design patterns of TEEs might be similar to Co-Snarks. Perhaps there are patterns where an app smart contract's secrets can be held by a TEE, and hence notes can be escrowed with the smart contract in a more natural way. If anyone wants to execute a tx to _nullify_ any of the notes which are escrowed with the app smart contract, they'd need to make some kind of http request to a server where the TEE lives. If it likes the request, the TEE could then execute the smart contract's private function circuit (inside the TEE) and return a proof.

This might be complicated by Aztec's folding proving system, though, since entire witnesses for earlier function calls in a tx might be leaked to the TEE, which might leak some of the user's secrets (from other apps) to the TEE. Who knows what the TEE might do with those secrets!?

---

## Summaries

Lots of attempts to tabulate info, in a way that is hopefully useful!

> Below, if you don't know a particular nullification scheme, you can find it explained in detail in Appx B.

### Desirable transferFrom features

Assessment criteria for a good token note nullification scheme:

| Criteria | Explanation |
| --- | --- |
| Contents of the note stay private from outside observers 🔎 | Retaining privacy gud. |
| No secret key in circuit 🔑 | Ideally, a secret key never enters a circuit. |
| No sharing of personal secret keys 🔑 | I do not want to share my secret key with someone, to enable them to spend my note(s). See Appx A for a table of which keys can safely be shared with whom (spoiler: don't share user keys!) |
| No on-chain setup step ⚙️ | Ideally, I do not have to do initial "setup" transactions, so that a user can then spend my note(s). |
| Avoids split balance 💵 💵 | Ideally, I don't want to "move" some of my notes into an inconvenient state where I can't then spend them in the same way as my other notes. Think of ERC20 approvals; I can still spend my balance even if it's also allocated in any counterparty's `allowance`, in the same way as I can spend the rest of my balance. That's trickier to achieve in a UTXO model. <br/> (Related to "no on-chain setup step"). |
| Requires one note type 1️⃣ | I don't want to convert from one token note struct to some other struct, in order for my balance to be spendable by someone else.<br/>(Related to "no on-chain setup step" and "avoids split balance") |
| Supports > 1 counterparty 🧑‍🤝‍🧑🧑‍🤝‍🧑 | Ideally a scheme works the same whether I want one or many counterparties to be able to spend my note(s). |
| Pretty contract interface 🖼️ | Doesn't introduce lots of extra functions or ugly functions in the token contract interface. (This isn't a dealbreaker, if we find it's the only way). |
| Engineering effort | Low is good |

### Comparison of nullification schemes against desirable `transferFrom` features

**I LIKE THIS TABLE, BUT DON'T USE THIS TABLE ALONE TO MAKE DECISIONS, BECAUSE IT DOESN'T CONSIDER THE 'ESCROWING' USE CASE, AND THE MPC VARIANTS OF THESE SCHEMES, THAT MIGHT BE NEEDED!**

The rows are named so that a "tick" (✅) always means "something good", so sometimes the wording is strangely negated.

Notice that we don't include the MPC-style nullification schemes here, because they're not needed for `transferFrom` functionality. When we come to look at Escrow, we'll need those. If you want to design a contract which uses just one approach for `transferFrom` and `escrow` use cases, you'll need to do some further thinking!

| Nullification methods --> | Zcash- style | Note Secret | Shared Account Contract | Plume | Un- shield | |:--------------------------------------------------- | ------------ |:----------- |:----------------------- |:------ | ---------- | | Contents of the note stay private from observers 🔎 | ✅ | ✅ | ✅ | ✅ | ❌ | | No secret key in circuit 🔑 | ❌ | ✅[2] | ❌ | ✅ | ❌ | | No sharing of personal secret keys 🔑 | ❌ | ✅ | ✅ [5] | ✅ | ✅ | | No on-chain setup step ⚙️ | - [1] | ❌[3] | ❌[3] | ✅ | ❌[3] | | Avoids split balance 💵 💵 | - [1] | ❌ | ❌ | ✅ | ❌ | | Requires one note type 1️⃣ | - [1] | ❌ | ✅ | ✅ | ✅ | | Supports > 1 counterparty 🧑‍🤝‍🧑🧑‍🤝‍🧑 | - [1] | ✅ | ✅ | ✅ | ✅ | | Pretty `transferFrom` interface 🖼️ | - [1] | ❌ | ✅ | ✅ | ✅ | | Engineering effort | - [1] | Mid | Higher | Higher | Simple | | Quantum privacy? [4] | ? | ? | ? | ? | ? | > Notes: > Using zcash-style nullification is a non-starter, given the need to share secret keys; hence the dashes in the table. [1] Since zcash-style nullification of someone else's note would require their `nullifier_secret_key` to be shared, we consider it not appropriate or applicable, and so we don't consider the rest of the rows. [2] Whilst nullification of this kind of note doesn't require a secret key -- only a one-time-use "note secret" -- the "ordinary" notes in this scenario would still be zcash-style notes, which do require secret keys in the circuit. [3] On-chain setup steps - a reminder of what they are: - Note secret: - The owner of the note must convert their zcash-style note into a "nullification-secret-style" note, which can then be nullified by someone else. - Shared Account Contract - The owner of the note must create a new, shared acount contract, and transfer their note to that contract's ownership. - Note: interestingly, the shared account contract - if it only contains private functions - doesn't need to be "deployed" via the contract deployment flow; it can instead be executed on-the-fly. This would save the user proving time, and data publication costs. - Unshielding: - The act of unshielding is the extra setup step. [4] Plume does have some caveats relating to quantum privacy, but so does the rest of the Aztec system, at the moment. Aztec's current design has all user addresses as publicly-known elliptic curve public keys. Ciphertexts are broadcast alongside Ephemeral Public Keys. Aztec will continue to evolve and improve, especially after mainnet launch, and undoubtedly post-quantum improvements will be on that roadmap. I.O.U. a separate, dedicated doc on quantum privacy. [5] The secret key that is shared between the creators of a "Shared Account Contract" is not a _personal_ secret key. If the Shared Account Contract is coupled with zcash-style nullification (as is assumed in Appendix B), then this shared secret key would be known to all who share the Shared Account Contract. For a secret sharing scheme where no one learns the nullification secret, Threshold Plume can be used, or some other MPC protocol. ### Compatibility of various nullification schemes #### TransferFrom The owner of the note is Alice. Therefore the nullifier is generated from some secret that Alice knows, and that Alice might choose to share with Bob or a group. Rows are ordered from "most specific use case" to "most general use case". So if all you wanted was for Alice to be able to spend her own note, then the options of row 1 are available to a token contract developer. 

But if you also want Greg to be able to execute a tx which calls the token contract from `Bar`, to spend Alice's note, the bottom row is probably your only set of options.

Rows 1-2 explore use cases where the app developer wants _only_ for Alice to ever be able to _execute_ the nullification of her own note, in which case usage of zcash-style nullifiers is possible, since Alice's nullifier secret key wouldn't need to be shared with anyone else. This is sometimes referred to as "Singleplayer" nullification. Row 2 is similar to row 1, except an authwit might improve security (see Appx A), since the Caller is not Alice.

The final rows are all very similar: It is not Alice who is executing, and so nullification using Alice's personal secret key(s) is not acceptable - hence zcash nullifiers are not compatible. Given the sheer number of ticks (✅), you might think any of these schemes will do, but there are significant tradeoffs. See later tables in this section, which explore which of these schemes have certain desirable features.

> ✅ - Compatible, but with tradeoffs that are considered later.
> ❌ - Incompatible.
> 📞 - Requires an offchain private messaging channel, or usage of Aztec's encrypted logs as a bulletin board.
> 🔏 - Authwit should be considered, to give extra security to Alice's state, beyond knowledge of the nullification secret. See Appx A for much more discussion on this.

| Owner | Executor | msg.sender | Zcash | Note secret | Plume | Shared Account Contract | Threshold Plume [2] | MPC Cosnark [2][4] | Unshield [3] |
| ----- | -------- | ---------- | ----- | ----------- | ----- | ----------------------- | ------------------- | ------------------ | ------------ |
| Alice | Alice only | Alice only | ✅ | ✅ | ✅ | ✅📞 | ✅📞 | ? | ✅ |
| Alice | Alice only | `Bar` 🔏 | ✅ | ✅ | ✅ | ✅📞 | ✅📞 | ? | ✅ |
| Alice | Bob | Bob 🔏 | ❌ [1] | ✅📞 | ✅📞 | ✅📞 | ✅📞 | ? | ✅ |
| Alice | Alice / Bob | `Bar` 🔏 | ❌ [1] | ✅📞 | ✅📞 | ✅📞 | ✅📞 | ? | ✅ |

> [1] Bob doesn't have Alice's nullifier secret key (nor should he be given it).
> [2] Probably unnecessarily complex for transferFrom flows.
> [3] Unshielding is kind of cheating, since it's so leaky. This column is really only here for completeness. Remember, in this flow, Alice first unshields the note (which can be done with a basic zcash-style nullifier), and then once the data is public, anyone can act on it just like in the Ethereum paradigm.
> [4] More thinking needed to understand the applications of cosnarks.

#### Escrow

The owner of the note is a dumb smart contract called `Foo`. Alice is the person who created the note for `Foo`.

Rows are ordered from "most specific use case" to "most general use case". So if all you wanted was for Alice to be able to escrow a note, and later have Alice be able to nullify it, then the options of row 1 are available to a token contract developer. But if you also want Greg to be able to execute a tx which calls the token contract from `Bar`, to spend `Foo`'s note, the bottom row is probably your only set of options.

Rows 1-3 explore use cases where the app developer wants _only_ for Alice to ever be able to _execute_ the nullification of `Foo`'s note. In these cases, usage of zcash-style nullifiers is possible, since Alice's nullifier secret key wouldn't need to be shared with anyone else. This is sometimes referred to as "Singleplayer" nullification.
It's not really like escrow at all: despite `Foo` nominally "owning" the note (according to whatever custom definition of "ownership" the smart contract uses), Alice has effectively retained ownership due to her sole ability to nullify it. | Note- Creation Executor | Mapping- Key owner | Nelly[0] | Executor | msg. sender | Desc. | Zcash | Note secret | Plume | Shared Account Contract | Threshold Plume | MPC Cosnark | Un- shield[1] | | ----------------------- | ------------------ | --------- | --------------------------------------- | ----------- | --------------------------------------- | ----- | ----------- | ----- | ----------------------- | --------------- | ----------- | ------------- | | Alice | `Foo` | Alice | Alice only | `Foo` only | Approach 1: Singleplayer `transfer` | ✅ | ✅ | ✅ | - [2] | - [2] | - [2] | - | | Alice | `Foo` | Alice | Alice only | Alice only | Approach 1: Singleplayer `transferFrom` | ✅ | ✅ | ✅ | - [2] | - [2] | - [2] | - | | Alice | `Foo` | Alice | Alice only | `Bar` | Approach 1: Singleplayer `transferFrom` | ✅ | ✅ | ✅ | - [2] | - [2] | - [2] | - | | Alice | `Foo` | Alice/Bob | Bob | `Foo` | Approach 2: `transfer` | ❌ | ✅📞 | ✅📞 | - [2] | - [2] | - [2] | - | | Alice | `Foo` | Alice/Bob | Bob | Bob | Approach 2: `transferFrom` | ❌ | ✅📞 | ✅📞 | - [2] | - [2] | - [2] | - | | Alice | `Foo` | Alice/Bob | Bob | `Bar` | Approach 2: `transferFrom` | ❌ | ✅📞 | ✅📞 | - [2] | - [2] | - [2] | - | | Any | `Foo` | Team Greg | User who can make requests to Team Greg | `Foo` | Approach 3: `transfer` | ❌ | ❌ | ❌ | ✅📞 | ✅📞 | ✅📞 | - | | Any | `Foo` | Team Greg | User who can make requests to Team Greg | Any | Approach 3: `transferFrom` | ❌ | ❌ | ❌ | ✅📞 | ✅📞 | ✅📞 | - | | Any | `Foo` | Team Greg | User who can make requests to Team Greg | `Bar` | Approach 3: `transferFrom` | ❌ | ❌ | ❌ | ✅📞 | ✅📞 | ✅📞 | - | [0] The Nelly (the entity who knows the nullification secret for the note) needs to share the secret with the Executor, somehow, or _be_ the Executor. We've seen approaches where the nelly is implied by the mapping-key owner, and approaches where the nelly needs to be specified as a dedicated field of the note struct; and approaches where the nelly isn't specified anywhere, and knowledge of the nullifier secret enables nullification without further checks on who it is that's actually furnishing the nullifier. [1] I've nulled this "Unshield" column, because if we're in the case where a note has been escrowed with a dumb smart contract, in private, then we're probably already past the point of discarding the "unshield" approach due to lack of privacy. If the unshield approach had been chosen already, then escrow would have been done in public. [2] Possible, but probably overkill for this row's use case, alone. ## Q's that didn't fit anywhere else **Do all these schemes require Alice to know Bob and Bob to know Alice?** All the above note nullification schemes require some secret nullification data to be given to the eventual nelly. _And_ of course, to nullify a note in the first place, the Executor needs to know the private preimage data of that note, somehow! Alice needs to know Bob (e.g. his address, or some other contact details), so that she may share some secret nullification data with him. Bob does not necessarily need to know Alice: - For example, Alice could send an encryption of the secret information to a bulletin board, which Bob could brute-force-discover. This way, Bob wouldn't necessarily discover who sent him this secret information. 
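As a hedged sketch of that bulletin-board idea (the function names below are illustrative placeholders, not Aztec's actual log API):

```rust
// Alice encrypts the note preimage and its nullification secret to Bob's
// public key, and posts the ciphertext somewhere public (e.g. an encrypted
// log used as a bulletin board):
ciphertext = encrypt_to(bob_public_key, (note_preimage, note_secret));
post_to_bulletin_board(ciphertext);

// Bob trial-decrypts everything on the board; only entries addressed to him
// decrypt successfully. He learns the note and its secret, but not
// (necessarily) who posted it.
for entry in bulletin_board {
    if let Some((note_preimage, note_secret)) = try_decrypt(bob_secret_key, entry) {
        // Bob can now nullify the note, whenever the app's logic permits it.
    }
}
```
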
Of course, it's nice and simple for an app if Alice and Bob _do_ have a communication channel between them to share these secrets more directly.

**Can Alice just share the secret nullification data with the world, enabling anyone to spend it?**

Technically, yes, Alice could share this data with the world, but that's very leaky. The logic of the app could still include further conditions to restrict who exactly may spend the note. BUT, at the time the note is spent, the whole world would be able to see that it has been spent, because they would all know its nullifier. It kind of defeats the purpose.

**Revocation: How does Alice revoke her "transferFrom approval"?**

In all cases, Alice would need to do a transaction to revoke Bob's permission to spend her note. Apps would need to provide Alice with a cancellation flow, giving her the ability to nullify her note and create a replacement note that only she can nullify. In some cases it could be a race between Alice and Bob, to see who revokes or spends the note (respectively) first.

## Closing comments

These comments relate to designing a token standard, specifically.

Designing a Token contract standard for Aztec will be tricky! Considerations include:

- Note design
- **Nullification scheme**
- **State access authorisation**
- **TransferFrom patterns**
- **Escrow patterns**
- Partial Note patterns
- Log delivery
- Choice of encryption scheme (or how to abstract this away from the standard)
- Enabling constrained and unconstrained log delivery flows.
- Tagging scheme
- Log discovery, through unconstrained functions
- Transferring between public and private-land
- Bridging between L2 and L1
- Enabling encrypted backups of a user's transaction activity, for their records.

Items in **bold** were discussed in this doc.

### Composability is essential!

A token contract can be called by other contracts in many different contexts. It's the calling contract's preferences that are important, and the token contract has to have functions which cater to those preferences. E.g. whether to constrain encryption; whether the owner is a human or a dumb smart contract; whether the dumb smart contract owner is represented by a shared account contract, or a threshold plume committee, or an FHE committee; whether partial note patterns are needed, etc.

I anticipate that the function interface for a private token standard might be quite large, compared to ERC20 contracts, and I'd encourage developers to ignore ERC20 for maximum freedom to innovate!

---

## Appendix A - Aztec addresses and keys

A rigorous doc is in the works, but here's a summary.

Here's a lazy screenshot of a slide, which shows the data that's contained within the preimage of an Aztec Address:

![image](https://hackmd.io/_uploads/B1onuh9LJx.png)
_The data contained within the preimage of an Aztec address_.

### Account Contracts

Every Aztec address 'contains' contract code. In the diagram, you can see the big left branch is a load of information which defines the functions and starting state of the contract.

What's an account contract? Aztec embraces account abstraction, where every user is represented by an Aztec smart contract. If a user wants to interact with the network, they need to call an "entrypoint" function of some contract. Often, the entrypoint function they call will be a function in their own account contract. (I say "often", because sometimes a user might directly call an app's entrypoint function; this is advanced and not discussed in this doc).

Assuming here that the user wants to call their own account contract entrypoint function, then it will contain some kind of authentication checks, to ensure the caller is indeed the owner of the account contract. The function might verify a signature against some stored public key that belongs to the user. The public key can be from any curve; whoever writes the user's account contract can choose. The function might hash a password like `Hunter2`. The function might check that you can solve a sudoku. The function might do Google auth against your Gmail account. The function might check your passport using zkPassport. The function might ask you to write out a self-deprecating sentence about yourself, just so you feel humiliated. You get the idea.

Crucially, notice that there are no protocol-mandated "signing keys" in Aztec. It's completely abstract. In the case of the sudoku, there are no keys at all. You might see us lazily say "signing keys" from time to time, when we lazily assume the account contract will implement some sort of signature verification scheme. You should tell us off. Maybe "entrypoint authentication keys" is a better term, but it's still not quite abstract enough a term. "entrypoint authentication secret"?

Essentially, if you can provide args which pass an account contract's checks, then you can make contract calls from that contract. That is, you can adopt that account contract's `address` as `msg_sender` when making a call to the next function. That is, the next function will see `msg_sender` as the address of the calling account contract. The function then might make edits to some state variables which "belong" to `msg_sender` (the account contract).

This is powerful. If you satisfy the abstract checks of an account contract, you can impersonate that account contract's address on the network, and modify state which "belongs" to it in other contracts. Given the dangerous consequences of leaking the user's "entrypoint authentication secret" (ooh, it's sticking!), we anticipate that users will want their EAS to be stored offline in a hardware wallet.

### Keys

The right-hand branch of the above diagram is the `keys_hash`; a commitment to several keypairs. Whilst "signing keys" (sorry) are abstract, we felt the need to provide some extra keys at the protocol level, which app developers can always rely on existing for all users:

#### Nullifier keypair

You'll see in the rest of this doc that a common way to compute a nullifier for a note is to use the note owner's nullifier secret key. We call this "zcash-style nullification". Actually, let's just repeat it here for convenience:

```rust
nullifier = hash(note_hash, nullifier_secret_key);
```

For an app to _prove_ the link between the owner of a note (an Address) and this `nullifier_secret_key`, we've baked the "master nullifier public key" into the preimage of the user's address (`Npk_m` in that diagram above). This way, all apps can consistently prove a link between the owner and the nullifier secret key for all users[1].

**Q**: But why do we need a _separate_ `nullifier_secret_key`, if the user already has a ~~signing key~~ "Entrypoint Authentication Secret" associated with their account contract, and hence associated with their address?

- **A**: Security. The "zcash-style" nullification scheme described requires the user's `nullifier_secret_key` to be passed into a circuit (i.e. into a less-secure environment than a hardware wallet).
We've already discussed that many users will want their entrypoint authentication secret to be stored in a hardware wallet, and so the security of a "signing secret key" is incompatible with a zcash-style nullifier secret key. - Spoiler: in the rest of this doc, we'll see nullification schemes that don't need a `nullifier_secret_key` to be used inside a circuit. For apps which adopt such schemes, the nullifier keypair might not be used. But since _some_ apps might wish to do zcash-style nullification (or other styles of nullification that make use of this nullifier keypair), the protocol bakes a nullifier keypair into the preimage of every address. - **A**: Consistent derivation path from a user secret to a note owner. As we described above, entrypoint authentication is abstract, so there's no consistent derivation path from an entrypoint authentication secret to the address of a note owner. And without a consistent derivation path, a smart contract can't write constraints to prove a relationship between an entrypoint secret and a note owner's address. The enshrined derivation path from a nullifier secret key to a user's address is consistent between all users, so apps can rely on it when designing nullification logic. That last bullet point is a bit misleading. We _do_ actually have a somewhat consistent way for an app contract to link an entrypoint authentication secret with a note owner's address: [AuthWits](https://docs.aztec.network/aztec/concepts/accounts/authwit)[2]. Authwits are like the [ERC-2612](https://eips.ethereum.org/EIPS/eip-2612) permit flow. An authwit is signed permission from a user, granted to another contract to make a function call on the user's behalf. E.g. "I am Alice and I permit the contract `Bob` to call the `transferFrom` function of contract `Foo` with args `{ msg_sender: Bob, from: Alice, amount: 100 }`". In this example, `Foo` wants to check with Alice whether `Bob` is allowed to touch Alice's state. I.e. whether `Bob` is allowed to nullify Alice's note(s). `Foo` can call a standardised `is_valid` function on Alice's account contract. Within this function, Alice's custom "entrypoint authentication" checks can happen. E.g. if Alice's account contract implements ECDSA signatures, then `Bob` can provide a signature -- signed by Alice and given to Bob offchain -- as an oracle input to the `is_valid` function. If the `is_valid` call returns `false`, then `Foo` will revert; otherwise the rest of `Foo` will execute. **Q**: So if an authwit is a consistent way for an app contract to link a user's "entrypoint authentication secret" to a note owner's address, why can't we use authwits for nullification? **A**: It's because an `is_valid` call only gives a boolean response back to the app: "Yes, this function call may proceed", or "No, please revert this function call". An `is_valid` call doesn't provide any data which can be used to compute a nullifier for a particular note. This is natural: with account abstraction, there is no consistent data that can be gleaned from the abstracted validation of some "entrypoint authentication" function. A nullifier needs to be computed from some deterministic secret, and the authwit flow does not (and cannot) provide a deterministic secret to an app contract. **Q**: Wait, why can't the account contract return a nullifier back to `Foo`? Why can't we standardise a `get_nullifier(note)` function for all account contracts, to enable this? 
**A**: Several reasons:

- **A**: An account contract **cannot be trusted** to compute certain kinds of nullifier, e.g. zcash nullifiers. For such nullifiers, a call to an account contract doesn't make sense. A malicious account contract could just return a random field and say "trust me bro, this is the nullifier for this note". Bad. Dangerous. It is ultimately an _app's_ job to validate the correctness of a nullifier: either by computing the nullifier directly (e.g. zcash-style); or by verifying a proof that the nullifier is correct (e.g. plume-style).
- **A**: Also, it would be very time-inefficient to make a function call for every nullifier computation, since each function call in Aztec results in an extra Kernel recursion.
- **A**: You'll see in the rest of this doc that there are lots of ways to compute a nullifier, with various tradeoffs. Notes can also be customised in any way the app developer wants, so there's no "one size fits all" note type that can be passed to an account contract. Basically, there's too much configurability with notes and nullifiers for a standardised interface to be practical.

**Q**: You mentioned that we want to minimise extra function calls (to keep proving times low), so why are authwits (calls to `is_valid`) standardised and used?

**A**: Good point! It's because we think permission to edit state should be 'gatekept' by the user's "entrypoint authentication" keys, which can be considered "more secure", because they can live in cold storage. If an app developer doesn't like the slow-down of an extra function call, they _could_ choose to _not_ make that authwit call. Instead, the dev could just go ahead and allow the note to be nullified without any permission from the user's entrypoint auth key. I.e. "knowledge of the `nullifier_secret_key` alone is sufficient to nullify this note". Clearly, this is less secure, because the nullifier secret key lives in "hot" storage.

Also, in the rest of this doc we'll explore nullification scenarios where multiple people are given knowledge of a nullifier, or of some kind of alternative nullifier secret (not the user's nullifier secret key). These scenarios increase the risk of the nullifier secret leaking, so it might be prudent for the app dev to also check for some additional authorisation from the note owner, such as: a signature using their address keypair; or a signature using their nullifier keypair; or a good old-fashioned authwit.

[1] We said all "users". But what if the owner of the note is not a human user? What if the entity is a dumb smart contract (to be defined later)? Then it might not have an entrypoint authentication secret, and it might not possess a `nullifier_secret_key`. That's something this doc explores!

[2] Perhaps the term "function call authorisation signature", or "signed permission to call a function", is more descriptive, but "authwit" seems to have stuck.

#### Ivpk and Ovpk

They're not important for this doc. They're used for encrypting and decrypting in various situations.

#### Address Keypair

See the diagram above. An address is actually the x-coordinate of the `Address` public key. The corresponding address secret key is `address_secret_key = pre_address + ivsk`. (In the diagram above, `h = pre_address`; we couldn't think of a better name).

Why can't we use the `address_secret_key` in place of the `nullifier_secret_key`? Separation of concerns, mainly, but also because we can technically give greater security to the `address_secret_key` than the `nullifier_secret_key`.
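Spelling the relations just described out in the same notation as the Plume sections below (this is purely a restatement, assuming the usual "public key = secret key · G" relationship):

$address\_sk = pre\_address + ivsk$

$AddressPk = address\_sk \cdot G$

$address = AddressPk.x$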
The `address_secret_key` is effectively the viewing key that enables a recipient to decrypt logs. This viewing key can technically be stored in an enclave, and never needs to be used inside a circuit.

- Spoiler: with schemes like Plume, the `address_secret_key` _could_ be used as a replacement for the `nullifier_secret_key`, because with Plume, computation of the nullifier happens outside of any circuit, through a Chaum-Pedersen signature.

### Which Aztec secret keys can be shared?

| Thing | Can leave its enclave?[3][4] | Can pass into a user's own account contract? | Can pass into an app contract? | Can share with other users? | Can use in a trusted protocol circuit (e.g. Kernel circuit)? |
|:--- |:--- |:--- |:--- |:--- |:--- |
| Entrypoint authentication secret</br>(a.k.a. "signing key") | ⚠️ [1] | ⚠️ [1] | ❌ | ❌ | n/a - no protocol circuits understand this key. |
| `address_sk`</br>(`= (pre_address + ivsk_m)`) | ⚠️ [5] | ❌ | ❌ | ⚠️ [5] | n/a - '' |
| `ivsk_m` & `ovsk_m` | '' | '' | '' | '' | '' |
| `nsk_m`</br>(Master nullifier secret key) | ✅⚠️ [6] | ❌ [2] | ❌ | ❌ | ✅ |
| `nsk_app`</br>(App-siloed nullifier secret key) | ✅ [6] | ✅ | ✅ | ❌ | n/a - '' |
| Plume nullifier | n/a - it never lived there. | ✅ | ✅ | ✅⚠️ [7] | n/a - '' |
| Note secret | n/a - '' | ✅ | ✅ | ✅⚠️ [7][8] | n/a - '' |

✅ means "Yes". ❌ means "Never". ⚠️ means "Beware". '' means "same as the cell above".

[1] Ideally not. It depends on the account contract's authentication scheme, and the user's security preferences.

[2] Technically possible, but highly inadvisable.

[3] Software or hardware enclave, depending on the key.

[4] Enclaves can perform certain cryptographic primitive operations on secret keys, such as hashing or scalar multiplication.

[5] Only if the user _chooses_ to share their `address_sk` with a _very_ trusted 3rd party.

[6] Whilst zcash-style nullification requires these keys to leave the enclave and be passed into a circuit, other nullification schemes like Plume do not require the `nsk_m` nor the `nsk_app` to ever leave the enclave. Therefore, if an app developer uses Plume, there's arguably no difference between using the `address_sk` and using the "nullifier keys" for plume signing. Whether usage of the `address_sk` is appropriate for nullification depends on whether users would prefer to store their `address_sk` in a hardware wallet. Practically speaking, a user's PXE will make frequent calls to the `address_sk` enclave to derive shared secrets when decrypting, so the `address_sk` will likely always live in the same place as the `nsk_m`.

[7] Clearly, knowledge of a user's nullifier -- once it is used -- leaks that the user's note has been spent, so the user needs to be very selective and only give out nullifiers to users who may `transferFrom` their balance.

[8] The creator of the note will know this secret too, which somewhat violates one of the [requirements](#Requirements) of nullification.

### Gatekeeping state access

Above, we talked a lot about how an app developer can enable the owner of a note to authorise state access. We can summarise the checks that can be included in the body of the function:

- **Checks against `msg_sender`**.
  - E.g. "Is the caller the 'owner' of the note?" / "Is the caller on a whitelist / blacklist?" / "Is the caller an admin?".
- **Make an authwit call** to the note "owner's" account contract, by calling `is_valid` and passing details of: the function being called; the args to the function; the `msg_sender`. Receive a boolean response which indicates approval from the account contract.
  - E.g. if the `msg_sender` is not the owner of the note, the previous check against `msg_sender` isn't appropriate. E.g. a `transferFrom`.
- **Gatekeep with proof of knowledge of some nullifier secret**. As discussed above, this alone is often less secure.
  - E.g. for zcash-style nullifiers, knowledge of the user's `nullifier_secret_key` could be considered sufficient.
  - E.g. for Plume nullifiers, knowledge of the nullifier (and plume proof) could be considered sufficient.
- **Gatekeep with a signature**.
  - E.g. a signature using the "owner's" `nullifier_secret_key`.
  - E.g. a signature using the "owner's" `address_secret_key`.
  - E.g. a signature using some pre-registered app-specific keypair.

Here's an incomplete summary. You can see that the case where Alice (a person) owns the note ("transferFrom") is much easier than the case where a smart contract `Foo` owns the note ("escrow").

| Owner | Executor | Caller | Proof of knowledge of nullifier secret | Check vs `msg_sender` | Authwit | Sig |
| ----- | -------- |:------ |:--- |:--- |:--- |:--- |
| Alice | Alice | Alice | Essential | Extra security: knowledge of "more secure" entrypoint auth secret. | Unnecessary, if doing a `msg_sender` check;</br>Extra security, otherwise. | Unnecessary, if doing a `msg_sender` or authwit check;</br>Extra security, otherwise. |
| Alice | Alice | `Foo` | Essential | n/a [1]. `Foo` does not own the note. | Extra security: knowledge of "more secure" entrypoint auth secret. | '' |
| Alice | Bob | Bob | Essential | n/a [1]. '' | '' | '' |
| Alice | Bob | `Foo` | Essential | n/a [1]. '' | '' | '' |
| `Foo` | Bob | `Foo` | ? Can owner `Foo` (or executor Bob) even possess a nullifier secret? Explored below. | Very useful. Possibly essential. The fact that `Foo` is making the call can be considered authorisation. | Unnecessary if doing a `msg_sender` check.</br>Maybe not possible/practical, since how does `Foo` possess an authwit secret key? Explored below. | Unnecessary if doing a `msg_sender` or authwit check.</br>Maybe not possible/practical, since how does `Foo` possess a secret key? Explored below. |
| `Foo` | Bob | Bob | ? '' | n/a [1]. '' | Maybe not possible/practical, since how does `Foo` possess a secret key? Explored below. | '' |
| `Foo` | Bob | `Bar` | ? '' | n/a [1]. '' | '' | '' |

[1] Not applicable for "permit" flows. Still applicable for ERC20 "approval" flows, if checking an `allowances[msg_sender]` mapping.

---

## Appendix B - Nullification Schemes

### Zcash-style nullifiers

Not quite how zcash does it, but heavily inspired by zcash, so the name stuck.

#### Tl;dr

The note contains the nelly's address. Knowledge of that address' `nsk_app` (app-siloed nullifier secret key) is required to nullify the note.

#### Note design

Here, the note conveys the nelly_address.
This can be done in 2 ways:

- The note _contains_ the nelly_address: `my_note = { ...stuff, nelly_address }`
- The note is _mapped to_ by the nelly_address: `note = my_mapping.at(nelly_address).get_notes(...)`

#### Nullifier design

`nullifier = hash(note_hash, nelly_nsk_app)`

where `nelly_nsk_app` is the nelly's app-siloed nullifier secret key.

To nullify the note, the nelly's secret key must be provided to the circuit. This makes "zcash-style" nullifiers unsuitable for `transferFrom` use cases, because a user should never share their nullifier secret key(s).

#### TransferFrom Flow

- Alice is the owner of a note. Hers is the nelly_address.
- Alice wants to enable Bob to `transferFrom` her note.
- Alice would have to give Bob her `nullifier_secret_key`, so that Bob may compute the nullifier within a circuit.
- :warning: That's bad. Dead end. Approach not viable.

#### Pseudocode

```rust!
fn compute_nullifier(note: Note, nullifier_secret_key: Field) -> Field {
    // Prove that the provided secret key belongs to the note's owner (the nelly):
    let owner_address: AztecAddress = note.owner_address;
    assert(owner_address == derive_address_from_nsk(nullifier_secret_key));

    let note_hash: Field = hash(note);
    let nullifier: Field = hash(note_hash, nullifier_secret_key);
    nullifier
}
```

---

### Note Secret

#### Tl;dr

The note contains a random secret. Knowledge of the random secret is enough to nullify the note.

#### Note design

`note = { ...stuff, note_secret }`

> Note: The `note_secret` is unrelated to a user's nullifier keypair; it's a completely independent, randomly-generated secret which only exists to nullify this one note.

#### Nullifier design

`nullifier = hash(note_hash, note_secret)`

Whoever knows the `note_secret` can nullify the `note`.

#### TransferFrom Flow

- Alice is the owner of a **zcash-style** note.
- Alice wants to enable Bob to `transferFrom` her note.
- Alice first converts her **zcash-style** note [1] into one of these **"nullification-secret-style"** notes, where `note = { ...stuff, note_secret }`.
- Alice privately shares the `note_secret` with Bob.
- Bob can now "spend" Alice's note.

> [1] Note: There are probably other kinds of notes that would work, but the zcash-style note is easiest to conceptualise.

#### Discussion

Q: Why does the contract need two kinds of notes? Why can't the contract just have "note-secret-style" notes?

A: **It would violate a requirement of nullifiers: the creator of a note (when they are not the intended owner of that note) would know the nullifier for that note**. Suppose Charlie sent Alice a note. If the default note type in the contract is `note = { ...stuff, note_secret_hash }`, then _Charlie_ knows the `note_secret` for Alice's note, meaning any Sender could spend all of their Recipients' notes at some later time!

It's quite ugly for the app to have to handle two kinds of notes: zcash-style notes that the true owner can spend, and "secret-hash-style" notes that can be "transferred-from" by other people. The need for Alice to convert from one kind of note to the other is also ugly. It's also a security risk: passing a secret between different people could quite easily leak the secret.

#### Pseudocode

```rust!
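// In this sketch, the note stores `note_secret_hash = hash(note_secret)` rather than
// the raw secret; anyone who can present the preimage `note_secret` may nullify it: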
fn compute_nullifier(note: Note, note_secret: Field) -> Field {
    // The note commits to the secret via its hash; check the preimage:
    let note_secret_hash: Field = note.note_secret_hash;
    assert(note_secret_hash == hash(note_secret));

    let note_hash: Field = hash(note);
    let nullifier: Field = hash(note_hash, note_secret);
    nullifier
}
```

---

### Plume

#### Tl;dr

> Plume (see https://hackmd.io/@aztec-network/BJkQqJE50?type=view, https://eprint.iacr.org/2022/1255) is a scheme where a user can provide a lightweight Chaum-Pedersen-like proof to a circuit, along with a nullifier, and the circuit can be convinced of the correctness of the nullifier without being given the user's nullifier secret key.

Alice generates a plume proof & nullifier and shares this tuple with Bob. Bob can then spend Alice's note using the plume proof and nullifier.

#### Note design

Similarly to the zcash-style note example above, the note conveys the nelly_address. This can be done in 2 ways:

- The note _contains_ the nelly_address: `my_note = { ...stuff, nelly_address }`
- The note is _mapped to_ by the nelly_address: `note = my_mapping.at(nelly_address).get_notes(...)`

#### Nullifier design

$Npk = nsk \cdot G$

$H := hashToCurve(m, Npk)$ (for our purposes, $m$ is a `note_hash`)

$Nullifier = nsk \cdot H$

See https://hackmd.io/@aztec-network/BJkQqJE50?type=view for how the correctness of this nullifier is proven to a circuit, without leaking the nsk.

#### TransferFrom Flow

- Alice is the owner of a note (where the note is designed to be nullified with Plume nullifiers).
- Alice wants to enable Bob to `transferFrom` her note.
- Alice gives Bob the plume proof and nullifier; note that this doesn't leak Alice's nullifier secret key.
- Bob uses this plume proof and nullifier to spend Alice's note.

#### Discussion

This requires the note struct to adopt the plume scheme for `compute_nullifier`.

Interestingly, Alice's _nullifier_ keypair doesn't need to be the keypair that is used for plume. Indeed, the nullifier keypair only arose as a distinct keypair for zcash-style nullifiers, for security reasons. TODO: explore whether it's safe for the user's address keypair to be used instead.

It's raised in the plume paper that plume does not provide quantum secrecy of the tx graph if the Npk is known to the world. In Aztec, we could slightly tweak the design of the Aztec Address, and of the contract deployment process, so that Npk is not leaked, but is instead hashed into the derivation of the Aztec Address (e.g. by not leaking the preimage of the keys_hash, or by nesting the Npk deeper into the preimage of the keys_hash). However, the Npk would still need to be revealed to a contract in order to prove that it relates to the address, so if the contract is malicious, it could leak the Npk. Strange, isn't it, how we've moved from needing to protect the secret key, to now also needing to protect the public key from leaking, to ensure post-quantum security.

#### Pseudocode

See here: https://github.com/iAmMichaelConnor/noir-pumpkin-plume/blob/e681fe3d57e5294fda36045d92f04824c54abf49/crates/use/src/main.nr#L8

---

### Liam Plume

"PLiume"?

#### Tl;dr

Liam Eagan wrote a comment on my Plume hackmd. It's an optimisation of plume to avoid hashing-to-curve. Very cool idea.

Here's the idea: instead of hashing-to-curve in Plume, which costs quite a lot of constraints, the Sender (who creates the note in the first place) chooses the note randomness in such a way that `hash(m, Npk)` equals a valid x-coordinate of a Grumpkin point. (Recall: `m` is the `note_hash` for our purposes). The circuit that the Sender executes could ensure the point is on the curve, roughly as in the sketch below.
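As a rough sketch of that sender-side check (all names here are illustrative, and we assume Grumpkin's curve equation $y^2 = x^3 - 17$), the Sender's circuit could take a candidate y-coordinate as a private witness:

```rust
// Sketch only (hypothetical names). The Sender proves that h_x = hash(note_hash, Npk)
// is the x-coordinate of some Grumpkin point, by supplying a y-coordinate as a witness.
fn assert_note_is_liam_plume_friendly(note_hash: Field, npk: Point, y_witness: Field) {
    let h_x: Field = hash(note_hash, npk.x, npk.y); // poseidon2 in practice
    // Assuming Grumpkin is y^2 = x^3 - 17: if some y satisfies this, (h_x, y) is on the curve.
    assert(y_witness * y_witness == h_x * h_x * h_x - 17);
}
```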
This lets us use poseidon2 as the `hash`, instead of a more-expensive hash-to-curve hash.

#### Note design

Similar to the Plume section. The Sender must also ensure that the note's `randomness` results in a `note_hash` where `hash(note_hash, Npk)` is an x-coordinate of a Grumpkin point.

#### Nullifier design

$Npk = nsk \cdot G$

$H := hash(m, Npk)$ (note: this can be poseidon2, instead of hash-to-curve)

$Nullifier = nsk \cdot H$

See https://hackmd.io/@aztec-network/BJkQqJE50?type=view for how the correctness of this nullifier is proven to a circuit, without leaking the nsk.

#### Discussion

This requires the note struct to adopt the liam-plume scheme for `compute_nullifier`. This approach is perhaps a bit invasive, as the sender must perform an extra check, and so it might make aztec.nr library functions ugly. It's also worth noting that the predominant cost of regular plume nullifier verification is actually now scalar multiplications, rather than hash-to-curve.

Linked to the discussion in the "Plume" section: to improve post-quantum secrecy, we wouldn't want the Sender to know `Npk`, so we could alter the derivation of `H` to be `hash(m, address)`. But actually, this doesn't help, because everyone knows `address`, so they could still derive `H`. And if they can break the DL of the Nullifier, they can then learn the nsk.

#### TransferFrom Flow

- Some Sender sends Alice a note, ensuring that `hash(m, alice_npk)` is an x-coord of a point on the curve.
- Alice is the owner of a note (where the note is designed to be nullified with Liam Plume nullifiers).
- Alice wants to enable Bob to `transferFrom` her note.
- Alice gives Bob the liam-plume proof and nullifier; note that this doesn't leak Alice's nullifier secret key.
- Bob uses this plume proof and nullifier to spend Alice's note.

#### Pseudocode

See here: https://github.com/iAmMichaelConnor/noir-pumpkin-plume/blob/e681fe3d57e5294fda36045d92f04824c54abf49/crates/use/src/main.nr#L8

---

### Nullify notes with a zk-snark

#### Tl;dr

This is like using a hammer to crack a nut. But for completeness, we include it here.

An app circuit could be designed to verify a zk-snark which _proves_ correctness of a note's nullifier. "I own a note. Here is a zk-snark, with the nullifier and note hash as public inputs. The zk-snark proves that this nullifier is indeed this note's nullifier."

#### Discussion

Whilst Plume is a lightweight algebraic proof of the correctness of a nullifier, this section's approach would be a heavyweight proof of the correctness of a nullifier. The owner of the note can share this zk-snark with a would-be nelly, so that the nelly may later nullify the note.

It might sound silly, but...

- It enables some cool features:
  - The owner of the note doesn't need to leak the _entire_ contents of the note to the nelly; just a select few fields of the note's preimage. In all other cases (above), the nelly must be given the full preimage of the note, so that they may satisfy some circuit checks. If instead a snark is given to the nelly, where the public inputs divulge only a selection of information, the note doesn't need to be fully opened to the nelly.
    - Having said that, in the above cases, the note could be designed with nested hashes, so that certain information isn't leaked to any nelly.
  - The zk-snark could be designed so that it can only be used by _one specific person_. So if this secret zk-snark is leaked, it wouldn't be usable.
    - Having said that, these kinds of constraints could also be included in apps which adopt the above approaches too.
- It won't actually be _that many_ constraints:
  - Aztec contracts will be able to piggy-back on the MegaHonk IVC scheme, and provide UltraHonk proofs to be verified within an Aztec contract. This will be tens of thousands of constraints (to be measured), which is quite reasonable versus the naive approach of performing pairing operations in-circuit.

---

### Shared Note Secret

Not fleshed out, but there are potentially some modifications to the above "Note Secret" approach where the note secret is derived through some MPC secret-sharing scheme, enabling one of many people to nullify a note. Other schemes below also achieve this property.

---

### Shared Account Contract

#### Tl;dr

Alice creates a brand new "shared" Account Contract and corresponding Aztec Address, whose keys are derived from all of Alice, Bob and Charlie's keys. Alice's notes can be made "transfer-from-able" by first transferring them to the Shared Account Contract.

#### Note design

Doesn't matter.

#### Nullifier design

Doesn't matter.

#### TransferFrom Flow

- Alice is the owner of a note.
- Alice wants to enable Bob to `transferFrom` her note.
- Alice creates a Shared Account Contract, whose keys are derived from Alice's and Bob's.
- Alice first transfers her note to the Shared Account Contract's ownership.
- To "`transferFrom`", Bob uses the Shared Account Contract and does a regular `transfer` of the note.

#### Discussion

#### Pseudocode

---

### Unshield the note

#### Tl;dr

Alice transfers the note to her public balance. Alice can then do a "traditional" public `transferFrom`, by approving Bob as a spender of this "allowance".

#### Note design

Doesn't matter.

#### Nullifier design

Doesn't matter.

#### TransferFrom Flow

- Alice is the owner of a note.
- Alice wants to enable Bob to `transferFrom` her note.
- Alice "unshields" the note. More recently, we call this 'transfer_to_public'.
- Now Alice has a public token balance. She can approve Bob to spend an allowance of her balance.
- Bob calls `transferFrom` on the allowance.

#### Discussion

Unshielding a note to public clearly leaks the contents of the note, so only user identities are kept private in this approach.

---

### Threshold Plume

I had an idea...

#### Tl;dr

A plume nullifier is effectively a Chaum-Pedersen proof of discrete log equality, which (if you squint) is like two Schnorr signatures. A threshold Schnorr signature can be created through an MPC protocol, such as FROST. There, `n` people are given secret shares, and `t` of those people need to come together to sign a message.

We can apply the ideas of threshold Schnorr signatures to Plume: `n` people can be given secret shares, and `t` of those people need to come together to generate the nullifier for a given note. Interestingly, _nobody ever learns the shared secret key_ (unless some number of the group colludes).

> Edit after I first thought of the idea and wrote that (^^^): It seems to work! No one's actually checked my maths yet, though: https://hackmd.io/sFVGNA4PRvKh5FnRkzFPiA

#### Note design

Similarly to the zcash-style note example above, the note conveys the nelly_address.
This can be done in 2 ways:

- The note _contains_ the nelly_address: `my_note = { ...stuff, nelly_address }`
- The note is _mapped to_ by the nelly_address: `note = my_mapping.at(nelly_address).get_notes(...)`

#### Nullifier design

From the point of view of the Aztec smart contract, nullifier verification is the same as normal Plume. This is fantastic! The differences are lots of MPC keygen effort, and MPC signing effort. Clearly, generating the Threshold Plume nullifier proof will require lots of off-chain communication to run the MPC protocol, and will require at least `t` people to be available to sign.

See https://hackmd.io/sFVGNA4PRvKh5FnRkzFPiA

#### TransferFrom Flow

Use Plume for transferFrom. Note that the smart contract function is identical between Plume and Threshold Plume; the only difference is how the specified public key is generated, and how the plume proof is generated off-chain.

#### Escrow Flow

- `Foo` "owns" a note, but the nelly is specified as the group public key (of the threshold committee).
- Bob wants to nullify the note.
- He requests that at least `t` members of the committee compute the plume nullifier (and plume proof) of the note. Interestingly, the committee members do NOT necessarily need to learn the contents of the note. That's very nice.
- `>= t` members compute the Threshold Plume nullifier and proof, and return it to Bob.
- Bob calls the transfer or transferFrom function as part of his tx.

#### Discussion

INTERESTING EXTRA CONSIDERATION: The group of `n` don't necessarily need to know the contents of the note (depending on the use case), if the owner is conveyed by a mapping key.

INTERESTING: If the owner is a user, the note can technically be nullified without the owner knowing! This unlocks use cases where the owner might not _want_ the note to be nullified, but the logic of the contract permits the note to be nullified, such as the negative reputation example that Vitalik once blogged about.

This requires the note struct to adopt the plume scheme for `compute_nullifier`. Also see the above [Plume](#Plume) section for more considerations.

#### Pseudocode

See the above [Plume](#Plume) section for a vanilla Plume code example. There's no example code for the "MPC" stuff, though.

---

### FHE Coprocessor

Our man @Maddiaa0 co-wrote a proof-of-concept FHE Coprocessor for Aztec contracts here: https://x.com/Maddiaa0/status/1727744885967491253.

#### Flow

- The user sends some encrypted information about some state they own.
- The FHE Coprocessor operates on the encrypted state and returns an encrypted result.
- Proving correctness of the FHE Coprocessor's actions is hard. It's an interesting area of research in the wild.
- More likely at the moment (given current technology), the coprocessor would act as a trusted oracle to the contract.
- In a private function, the user decrypts the result and stores some state.

---