tags: aztec3-speccing-book

PROPOSAL
Author: Zac

[OLD] Communication Abstractions (L1<>L2, public<>private). Take 2

N.B. many ideas here drawn from Mike's writeup from April (https://github.com/AztecProtocol/aztec2-internal/blob/3.0/markdown/specs/aztec3/src/architecture/contracts/l1-calls.md)

Organisation of Document

This doc is split into 3 parts

Part One describes the design goals of this spec, the restrictions we're working under and presents a high-level design with rationalisations provided for the design choices made (e.g. "what" and "why", but not "how")

Part Two lists worked examples of implementing some Aztec Connect functionality

Part Three describes a more detailed technical specification with no rationalisations (e.g. "how", but not "what" or "why")

Part 1: Design Goals and High-Level Overview


Objectives

What is the minimal-complexity mechanism to implement L1<>L2 comms?

  • Enable internal L2 functions to 'call' L1 functions
  • Enable L1 functions to 'call' L2 functions
  • Enable a method of public<>private function communication that does not encounter race conditions

High-level Overview

All communication that crosses the L2<>L1 boundary is enabled via message passing.

  • L2 contracts are linked to L1 portal contracts
  • An L2 contract can send messages to its portal contract
  • An L1 portal contract can send messages to its linked L2 contract

Messages can be used to compose complex sequences of function calls in a failure-tolerant manner.

Data Structures, Communication Channels

The following describes all possible interaction primitives between public/private functions, L1 contracts and L2 databases.

(from architecture drawings)

N.B. a unilateral function call is one with no return parameters. i.e. a function can make a unilateral call but cannot perform work on the results of the call.

What are the fundamental restrictions we have to work with?

  1. L1 contracts and L2 functions cannot read from shared state (L2 state is stored in snark-friendly Merkle trees, L1 state is stored in snark-unfriendly Merkle-patricia trees)
  2. In an Aztec block, transactions are sequenced as [private calls -> public calls -> L1 calls]
  3. L1<>L2 writes can only be acted on in a future block
  4. L1 lacks access to SNARK-friendly primitives

Communication across domain boundaries is asynchronous with long latency times between triggering a call and acting on the response (e.g. up to 10 minutes, possibly much more depending on design decisions).

This doc follows the following heuristic/assumption:

L1 contracts, private functions and public functions are separated domains: all communication is unilateral

This doc defines no logic at the protocol level that requires callbacks or responses as a result of a message being sent across the public/private/L1 boundary.

These abstractions can be built at a higher level by programming them as contract logic.

Messaging abstraction layer

The following image isolates the primitives from fig.1 to enable L2<>L1 communication.

All write operations take at least 1 block to process.
e.g. if an L2 transaction triggers an L1 function and that L1 function writes a message, the message cannot be read by an L2 function until the subsequent block.

Message Boxes

We introduce a "message box" database abstraction:

The goal is for L2->L1 messages and L1->L2 messages to be treated symmetrically.

The L1 messageBox is represented via a Solidity mapping in the rollup contract's storage.

The L2 message box is represented via an append-only Merkle tree + nullifier tree.

However, the interfaces of the two message boxes are similar: the set of actions one can perform on a message is identical for both.

Consuming messages

For both L1/L2 messages, a message can either be validated or consumed.

A validate operation asserts that a message exists, leaving it in place.

A consume operation asserts that the message exists, then deletes it.
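Under these definitions, the shared message-box interface can be sketched as follows (a minimal Python sketch; class and method names are hypothetical, and the backing store is a plain set rather than a Solidity mapping or Merkle/nullifier tree):

```python
class MessageBox:
    """Sketch of the shared message-box interface.

    Both the L1 mapping and the L2 tree expose the same two actions:
    validate (assert existence) and consume (assert existence, then delete).
    """

    def __init__(self):
        self._messages = set()

    def insert(self, message: bytes) -> None:
        self._messages.add(message)

    def validate(self, message: bytes) -> None:
        # Assert that the message exists; the message is left in place.
        if message not in self._messages:
            raise AssertionError("message does not exist")

    def consume(self, message: bytes) -> None:
        # Assert existence, then delete so the message cannot be replayed.
        self.validate(message)
        self._messages.remove(message)
```

Note that a second consume of the same message fails, which is what gives consuming its replay protection.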

Q: What is in a "message"?

A message is a tuple of the following:

  1. destinationAddress
  2. messageData

For L2->L1 messages, destinationAddress is the address of the L1 portal contract that is linked to the L2 contract that created the message.

The destinationAddress is defined by the Kernel circuit, not the function circuit (i.e. an L2 contract can only send messages to its linked portal contract)

For L1->L2 messages, destinationAddress is the address of the L2 contract that is linked to the L1 portal contract that created the message.

The destinationAddress is defined by the rollup smart contract (i.e. an L1 portal contract can only send messages to its linked L2 contract)

The contents of messageData are undefined at the protocol level; the only constraint is a maximum size of NUM_BYTES_PER_LEAF. Larger payloads require multiple messages.

Emulating function calls via messages

The intended behaviour is for messages to represent instructions to execute L2/L1 functions.

This can be achieved by formatting the message payload to contain a hash of the following:

  1. Hash of function parameters (function signature + calldata)
  2. (optional) senderAddress

The senderAddress is used if the function call must come from a designated address.

This is useful if a transaction writes multiple messages into the message box, where the associated functions must be executed in a specific order.
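The payload format above can be sketched as follows (an illustrative Python sketch: the exact encoding and hash layout are not fixed by this spec, and the function/parameter names are hypothetical):

```python
import hashlib


def message_payload(function_signature: str, calldata: bytes,
                    sender_address: bytes = b"") -> bytes:
    """Sketch of a message payload committing to a function call.

    The payload is a hash of (1) the hash of the function parameters
    (signature + calldata) and (2) an optional senderAddress, used when
    the call must come from a designated address.
    """
    params_hash = hashlib.sha256(function_signature.encode() + calldata).digest()
    # senderAddress is appended only when a designated caller is required.
    return hashlib.sha256(params_hash + sender_address).digest()
```

Because the sender is hashed into the payload, two otherwise identical calls with different designated senders produce distinct messages, which is what allows ordering-sensitive multi-message flows.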

Handling Errors

Error handling is delegated to the contract developer.

If a message triggers a function that has a failure case, this can be supported in one of two ways:

  1. revert the transaction. This will prevent the message from being consumed. The transaction can be re-tried until successful
  2. write a failure message into the L2/L1 message box, which instructs the L2/L1 component of the contract to unwind the transaction

Part 2: Worked Examples

Uniswap

Tx1: Triggering the swap from L2

Tx2: Executing swap on L1

Notes:

When calling consumeMessage, the portal contract derives the message data. For example, the typical pattern could produce a message which is a SHA256 hash of:

  1. SHA256(calldata)
  2. address of entity calling portal contract (if required)

In the above example, some messages do not specify a "from" parameter. These messages are linked to functions that can be called by any entity (e.g. the swap function could be designed to be called by a bot, paying the bot some ETH to incentivize it to generate the transaction)

Tx3: Process swap result on L2

Notes:

  • If the tx fails in an unintended way (e.g. out of gas), the L1 tx will be reverted and no messages are consumed, i.e. the tx can be attempted again

  • Only the UniPortal contract can trigger DaiPortal's "deposit", because the message specifies UniPortal as the depositor. This enables tx composability.


Part 3: Technical Specification

Data Structure Definitions

Message Leaf

Added into append-only data tree. A message leaf is a hash of the following:

| name | type | description |
| --- | --- | --- |
| contractAddress | address | L2 address of the contract the portal is linked to |
| messageHash | field | SHA256 hash of a byte buffer of size NUM_BYTES_PER_LEAF |

messageHash = SHA256(messageData). Hash performed by L1 contract.

messageData is a buffer of size NUM_BYTES_PER_LEAF.

The message leaf spec does not require that messages be unique. Enforcing uniqueness is left to the portal contract if it desires this property (e.g. the portal contract can track a nonce).
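The leaf construction above can be sketched as follows (a Python sketch; the leaf hash is modeled here as SHA256 for illustration, although the protocol may use a snark-friendly hash for the leaf itself, and the constant value is illustrative):

```python
import hashlib

NUM_BYTES_PER_LEAF = 64  # illustrative value only


def message_leaf(contract_address: bytes, message_data: bytes) -> bytes:
    """Sketch of a message leaf for the append-only data tree."""
    assert len(message_data) <= NUM_BYTES_PER_LEAF
    # messageHash = SHA256(messageData); this hash is performed by the L1 contract.
    message_hash = hashlib.sha256(message_data).digest()
    # The leaf commits to the linked L2 contract address and the message hash.
    return hashlib.sha256(contract_address + message_hash).digest()
```

The leaf is deterministic in (contractAddress, messageData), which is why duplicate messages collide unless the portal contract mixes in a nonce.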

Messagebox Queue

A dynamic array with max size MAX_L1_CALLSTACK_DEPTH

Each call item contains:

| name | type | description |
| --- | --- | --- |
| portalAddress | u32 | used to define message target |
| chainId | u32 | (needed if we want to go multichain) |
| message | sharedBuffer | message to be recorded |

Kernel-Circuit-Logic

Creating L2->L1 Messages

The public inputs of a user proof will contain a dynamic array of messages to be added, of maximum size MAX_MESSAGESTACK_DEPTH.

The portalAddress parameter is supplied by the Kernel circuit and is stored in the circuit verification key.

The Kernel circuit will perform the following when processing a transaction:

  • Iterate over contract's outbound message array and push each item onto the message stack (adding in portalAddress)
  • Validate there is no message stack overflow
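The two kernel-circuit steps above can be sketched as follows (a Python sketch of the circuit logic; the function name and constant value are illustrative):

```python
MAX_MESSAGESTACK_DEPTH = 4  # illustrative value only


def push_outbound_messages(message_stack, outbound_messages, portal_address):
    """Sketch of the kernel circuit processing a contract's outbound messages."""
    for message in outbound_messages:
        # Validate there is no message stack overflow.
        if len(message_stack) >= MAX_MESSAGESTACK_DEPTH:
            raise AssertionError("message stack overflow")
        # Push each item onto the message stack, adding in portalAddress.
        message_stack.append((portal_address, message))
    return message_stack
```

Because portalAddress is supplied by the kernel (from the verification key) rather than by the function circuit, a contract cannot forge a message to someone else's portal.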

Nullifying L1->L2 messages

Nullifier logic is identical to handling regular state nullifiers.

Contract Logic

Define the following storage vars:

  • pendingMessageQueue: dynamic array of messages (FIFO queue)
  • messageQueue: dynamic array of messages (FIFO queue)

addMessage(bytes memory message)

(function has no re-entrancy guard)

  1. Validate msg.sender is a portal contract
  2. Look up portalAddress that maps to msg.sender
  3. Push tuple of (message, portalAddress) into a FIFO pendingMessageQueue

processRollup

processing messages

  1. Validate the rollup circuit has added the leading MAX_MESSAGES_PROCESSED_PER_ROLLUP messages from messageQueue into the data tree
  2. Pop processed messages off of messageQueue
  3. Push pendingMessageQueue onto messageQueue
  4. Clear pendingMessageQueue
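The addMessage / processRollup queue rotation above can be sketched as follows (a Python stand-in for the Solidity contract; names and the constant value are illustrative, and portal validation is reduced to a registry lookup):

```python
class RollupContract:
    """Sketch of the two-queue message flow.

    Messages land in pendingMessageQueue and are only promoted to
    messageQueue when a rollup is processed, so a message written in
    block N is acted on in block N+1 at the earliest.
    """

    MAX_MESSAGES_PROCESSED_PER_ROLLUP = 2  # illustrative value only

    def __init__(self, portal_registry):
        self.portal_registry = portal_registry  # msg.sender -> portalAddress
        self.pending_message_queue = []
        self.message_queue = []

    def add_message(self, msg_sender, message):
        # 1-2. Validate msg.sender is a portal contract; look up its portalAddress.
        portal_address = self.portal_registry[msg_sender]
        # 3. Push the (message, portalAddress) tuple onto the pending FIFO queue.
        self.pending_message_queue.append((message, portal_address))

    def process_rollup(self):
        # 1-2. Pop the leading messages (assumed added to the data tree by the rollup).
        n = self.MAX_MESSAGES_PROCESSED_PER_ROLLUP
        processed = self.message_queue[:n]
        self.message_queue = self.message_queue[n:]
        # 3-4. Promote pending messages and clear the pending queue.
        self.message_queue.extend(self.pending_message_queue)
        self.pending_message_queue = []
        return processed
```

The two-queue rotation is what enforces restriction 3 above: a message added during block N sits in pendingMessageQueue until processRollup runs, so no rollup can consume a message written in its own block.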

processing L2->L1 messagebox writes

Iterate over messageStack provided by rollup public inputs.

Use mapping(address => bytes) messageBox to log messages.

For each entry, messageBox[entry.portalAddress] = entry.message (TODO: handle duplicate messages)

MessageBox Logic

function consumeMessage(bytes message) public

If messageBox[msg.sender] contains message, delete the message from the messageBox; otherwise throw an error.

function assertMessageExists(bytes message) public

If messageBox[msg.sender] does not contain message, throw an error.
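The two functions above can be sketched as follows (a Python stand-in for the Solidity messageBox mapping; the set-valued entries are an assumption, pending the TODO on duplicate-message handling):

```python
class L1MessageBox:
    """Sketch of the L1 messageBox, keyed by the calling portal address."""

    def __init__(self):
        self._box = {}  # portalAddress -> set of messages

    def write(self, portal_address, message):
        # Called during processRollup's L2->L1 messagebox writes.
        self._box.setdefault(portal_address, set()).add(message)

    def consume_message(self, msg_sender, message):
        # Delete the message if present, otherwise throw.
        self.assert_message_exists(msg_sender, message)
        self._box[msg_sender].remove(message)

    def assert_message_exists(self, msg_sender, message):
        if message not in self._box.get(msg_sender, set()):
            raise AssertionError("message does not exist for caller")
```

Keying lookups on msg.sender mirrors the messageBox[msg.sender] access pattern: only the portal contract a message was addressed to can validate or consume it.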

Rollup Circuit Logic

Rollup contract actions:

L2->L1 messages

Concatenate all kernel-circuit L1 message stacks into a monolithic L1 messageStack.

The monolithic messageStack has max size MAX_ROLLUP_L1_MESSAGESTACK_DEPTH.

The total calldata across the monolithic messageStack is capped at MAX_ROLLUP_L1_MESSAGESTACK_BYTES.

The contents of messageStack are assigned as public inputs of the rollup circuit.

L1->L2 Messages

For each message in L1's messageQueue array, perform the following:

  1. Provide a contract leaf from the contract tree
  2. Validate contract.portalId == message.portalId
  3. Compute message leaf H(messageSeparator, contract.contractAddress, message.messageHash)
  4. Add leaf into message tree
  5. Extract message.to, message.value. If nonzero, credit balances[to] += value

Output SHA256(messageQueue) to attest to the messages added into the message tree.
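The per-message steps above can be sketched as follows (a Python sketch; H is modeled as SHA256, the separator value is hypothetical, the contract tree is reduced to a portalId lookup, and the to/value balance-crediting step is omitted):

```python
import hashlib

MESSAGE_SEPARATOR = b"\x01"  # hypothetical domain separator


def process_l1_to_l2_messages(message_queue, contract_leaves, message_tree):
    """Sketch of the rollup circuit processing L1->L2 messages.

    message_queue: list of {"portalId": ..., "messageHash": ...} records.
    contract_leaves: portalId -> {"contractAddress": ...} records,
    standing in for leaves provided from the contract tree.
    """
    for message in message_queue:
        # 1-2. Provide the contract leaf; the lookup by portalId stands in for
        # validating contract.portalId == message.portalId.
        contract = contract_leaves[message["portalId"]]
        # 3. Compute the message leaf H(messageSeparator, contractAddress, messageHash).
        leaf = hashlib.sha256(
            MESSAGE_SEPARATOR + contract["contractAddress"] + message["messageHash"]
        ).digest()
        # 4. Add the leaf into the message tree (modeled as an append-only list).
        message_tree.append(leaf)
    # Output SHA256(messageQueue) to attest to the messages added.
    return hashlib.sha256(b"".join(m["messageHash"] for m in message_queue)).digest()
```

The final attestation is what lets the L1 rollup contract cheaply check (with its snark-unfriendly primitives) that the circuit inserted exactly the messages popped from messageQueue.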
