aztec3-speccing-book
PROPOSAL
Author: Zac
N.B. many ideas here are drawn from Mike's writeup from April (https://github.com/AztecProtocol/aztec2-internal/blob/3.0/markdown/specs/aztec3/src/architecture/contracts/l1-calls.md)
This doc is split into 3 parts:
Part One describes the design goals of this spec and the restrictions we're working under, and presents a high-level design with rationalisations for the design choices made (e.g. "what" and "why", but not "how")
Part Two lists worked examples of implementing some Aztec Connect functionality
Part Three describes a more detailed technical specification with no rationalisations (e.g. "how", but not "what" or "why")
What is the minimal-complexity mechanism to implement L1<>L2 comms?
All communication that crosses the L2<>L1 boundary is enabled via message passing.
Messages can be used to compose complex sequences of function calls in a failure-tolerant manner.
The following describes all possible interaction primitives between public/private functions, L1 contracts and L2 databases.
N.B. a unilateral function call is one with no return parameters, i.e. a function can make a unilateral call but cannot perform work on the results of the call.
What are the fundamental restrictions we have to work with?
Communication across domain boundaries is asynchronous with long latency times between triggering a call and acting on the response (e.g. up to 10 minutes, possibly much more depending on design decisions).
This doc follows the following heuristic/assumption:
L1 contracts, private functions and public functions are separate domains; all communication between them is unilateral
This doc defines no logic at the protocol level that requires callbacks or responses as a result of a message being sent across the public/private/L1 boundary.
These abstractions can be built at a higher level by programming them as contract logic.
The following image isolates the primitives from fig.1 to enable L2<>L1 communication.
All write operations take at least 1 block to process
e.g. if an L2 function triggers an L1 function and that L1 function writes a message, the message cannot be read by an L2 function until the subsequent block.
We introduce a "message box" database abstraction:
The goal is for L2->L1 messages and L1->L2 messages to be treated symmetrically.
The L1 messageBox is represented via a Solidity mapping in the rollup's contract storage.
The L2 message box is represented via an append-only Merkle tree + nullifier tree.
However, the interface for both message boxes is similar; the set of actions one can perform on a message is identical for both:
For both L1/L2 messages, a message can either be validated or consumed.
A validate operation asserts that a message exists.
A consume operation asserts that the message exists, then deletes it.
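For concreteness, the shared interface could be sketched as follows (illustrative only; the `IMessageBox` name and exact signatures are assumptions, though `consumeMessage` and `assertMessageExists` match the L1 functions described later in this doc):

```solidity
pragma solidity ^0.8.0;

// Illustrative sketch only: the spec defines the semantics, not this ABI.
interface IMessageBox {
    // Validate: assert that `message` exists for the caller; revert otherwise.
    function assertMessageExists(bytes calldata message) external view;

    // Consume: assert that `message` exists for the caller, then delete it.
    function consumeMessage(bytes calldata message) external;
}
```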
A message is a tuple of the following:
- `destinationAddress`
- `messageData`
For L2->L1 messages, `destinationAddress` is the address of the L1 portal contract that is linked to the L2 contract that created the message. The `destinationAddress` is defined by the Kernel circuit, not the function circuit (i.e. an L2 contract can only send messages to its linked portal contract).

For L1->L2 messages, `destinationAddress` is the address of the L2 contract that is linked to the L1 portal contract that created the message. The `destinationAddress` is defined by the rollup smart contract (i.e. an L1 portal contract can only send messages to its linked L2 contract).
The contents of `messageData` are undefined at the protocol level. They are constrained to a size of NUM_BYTES_PER_LEAF; sending more data requires more messages.
The intended behaviour is for messages to represent instructions to execute L2/L1 functions.
This can be achieved by formatting the message payload to contain a hash of the following:
- `senderAddress`

The `senderAddress` is used if the function call must come from a designated address. This is useful if a transaction writes multiple messages into the message box, where the associated functions must be executed in a specific order.
Error handling is delegated to the contract developer.
If a message triggers a function that has a failure case, this can be supported in one of 2 ways:
Notes:
When calling `consumeMessage`, the portal contract derives the message data. For example, the typical pattern could produce a message which is a SHA256 hash of:
1. SHA256(calldata)
2. the address of the entity calling the portal contract (if required)
In the above example, some messages do not specify a "from" parameter. These messages are linked to functions that can be called by any entity (e.g. the `swap` function could be designed to be called by a bot, paying the bot some Eth to incentivize it to generate the transaction).
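A minimal sketch of this pattern, assuming a hypothetical UniPortal `swap` entry point and the `IMessageBox` interface from earlier (all names and the exact encoding are illustrative assumptions; only the SHA256-of-calldata message pattern comes from the text above):

```solidity
pragma solidity ^0.8.0;

interface IMessageBox {
    function consumeMessage(bytes calldata message) external;
}

// Hypothetical portal contract sketch.
contract UniPortal {
    IMessageBox public immutable rollup;

    constructor(IMessageBox _rollup) {
        rollup = _rollup;
    }

    function swap(bytes calldata swapCalldata) external {
        // No "from" address is mixed into the message, so any entity
        // (e.g. an incentivized bot) can trigger the swap.
        bytes32 message = sha256(abi.encodePacked(sha256(swapCalldata)));
        rollup.consumeMessage(abi.encodePacked(message));
        // ... execute the swap using swapCalldata ...
    }
}
```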
Notes:
If a tx fails in an unintended way (e.g. out of gas), the L1 tx will be reverted and no messages are consumed, i.e. the tx can be attempted again.
Only the UniPortal contract can trigger the DaiPortal "deposit", because the message specifies UniPortal as the depositor. This enables tx composability.
Added into the append-only data tree. A message leaf is a hash of the following:

name | type | description |
---|---|---|
contractAddress | address | L2 address of the contract the portal is linked to |
messageHash | field | SHA256 hash of a byte buffer of size NUM_BYTES_PER_LEAF |
`messageHash = SHA256(messageData)`. The hash is performed by the L1 contract. `messageData` is a buffer of size NUM_BYTES_PER_LEAF.
The message leaf spec does not require that messages be unique. This is left to the portal contract if it desires this property (e.g. a portal contract can track a nonce).
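For illustration, the L1-side hashing step and a nonce-based uniqueness scheme could look like this sketch (the contract name and helper functions are hypothetical; padding the payload to NUM_BYTES_PER_LEAF is elided):

```solidity
pragma solidity ^0.8.0;

// Fragment of a hypothetical portal contract.
contract PortalMessageHelpers {
    uint256 private nonce; // optional: lets a portal force message uniqueness

    // messageHash = SHA256(messageData); hash performed on L1.
    function computeMessageHash(bytes memory messageData) internal pure returns (bytes32) {
        return sha256(messageData);
    }

    // Mixing an incrementing nonce into the payload makes each message unique.
    function buildUniqueMessage(bytes memory payload) internal returns (bytes memory) {
        return abi.encodePacked(payload, nonce++);
    }
}
```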
A dynamic array with max size MAX_L1_CALLSTACK_DEPTH. Each call item contains:

name | type | description |
---|---|---|
portalAddress | u32 | used to define message target |
chainId | u32 | (needed if we want to go multichain) |
message | sharedBuffer | message to be recorded |
The public inputs of a user proof will contain a dynamic array of messages to be added, of size MAX_MESSAGESTACK_DEPTH. The `portalAddress` parameter is supplied by the Kernel circuit and is stored in the circuit verification key.
The Kernel circuit will perform the following when processing a transaction: attach the sending contract's linked `portalAddress` (read from the verification key) to each message. Nullifier logic is identical to handling regular state nullifiers.
Define the following storage vars:

- `pendingMessageQueue`: dynamic array of messages (FIFO queue)
- `messageQueue`: dynamic array of messages (FIFO queue)

`addMessage(bytes memory message)` (function has no re-entrancy guard):

- validate that `msg.sender` is a portal contract
- look up the `portalAddress` that maps to `msg.sender`
- push `(message, portalAddress)` into the FIFO `pendingMessageQueue`
`processRollup`:

- add up to MAX_MESSAGES_PROCESSED_PER_ROLLUP messages from `messageQueue` into the data tree
- remove the processed messages from `messageQueue`
- push `pendingMessageQueue` onto `messageQueue`
- clear `pendingMessageQueue`
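A minimal Solidity sketch combining these storage vars and functions (the `QueuedMessage` struct, the portal registry, and the constant's value are illustrative assumptions; data-tree insertion is elided):

```solidity
pragma solidity ^0.8.0;

// Sketch of the rollup contract's message queues; illustrative only.
contract RollupMessageQueues {
    struct QueuedMessage {
        uint32 portalAddress; // compact portal id (assumed representation)
        bytes message;
    }

    uint256 constant MAX_MESSAGES_PROCESSED_PER_ROLLUP = 64; // illustrative value

    mapping(address => uint32) portalIds; // portal registry (assumed)

    QueuedMessage[] pendingMessageQueue; // written during the current block
    QueuedMessage[] messageQueue;        // consumed by the next rollup

    // No re-entrancy guard: the function only appends to a queue.
    function addMessage(bytes memory message) external {
        uint32 portalAddress = portalIds[msg.sender];
        require(portalAddress != 0, "msg.sender is not a portal contract");
        pendingMessageQueue.push(QueuedMessage(portalAddress, message));
    }

    function processRollup(/* rollup proof data elided */) external {
        // 1. Insert up to MAX_MESSAGES_PROCESSED_PER_ROLLUP messages from
        //    messageQueue into the data tree, then remove them (elided here).
        // 2. Push pendingMessageQueue onto messageQueue and clear it.
        for (uint256 i = 0; i < pendingMessageQueue.length; i++) {
            messageQueue.push(pendingMessageQueue[i]);
        }
        delete pendingMessageQueue;
    }
}
```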
Iterate over the `messageStack` provided by the rollup public inputs. Use a `mapping(address => bytes) messageBox` to log messages. For each entry, set `messageBox[entry.portalAddress] = entry.message` (TODO: handle duplicate messages).
`function consumeMessage(bytes message) public`: if `messageBox[msg.sender]` contains `message`, delete the message from `messageBox`; otherwise throw an error.

`function assertMessageExists(bytes message) public`: if `messageBox[msg.sender]` does not contain `message`, throw an error.
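A sketch of the message box logic, with one deliberate deviation: the spec's `mapping(address => bytes)` holds a single message per portal, so this sketch keys messages by hash instead (an assumption that also sidesteps the duplicate-message TODO):

```solidity
pragma solidity ^0.8.0;

// Illustrative message box; keying by message hash is an assumption.
contract L1MessageBox {
    // messageBox[portalAddress][messageHash] => message exists?
    mapping(address => mapping(bytes32 => bool)) private messageBox;

    // Called by the rollup contract while iterating the messageStack.
    function logMessage(address portalAddress, bytes memory message) internal {
        messageBox[portalAddress][keccak256(message)] = true;
    }

    function consumeMessage(bytes memory message) public {
        bytes32 key = keccak256(message);
        require(messageBox[msg.sender][key], "message does not exist");
        delete messageBox[msg.sender][key];
    }

    function assertMessageExists(bytes memory message) public view {
        require(messageBox[msg.sender][keccak256(message)], "message does not exist");
    }
}
```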
Rollup contract actions:
Concatenate all kernel circuit L1 message stacks into a monolithic L1 `messageStack`.
The monolithic `messageStack` has max size MAX_ROLLUP_L1_MESSAGESTACK_DEPTH.
The summed calldata of the monolithic `messageStack` is capped at MAX_ROLLUP_L1_MESSAGESTACK_BYTES.
The contents of `messageStack` are assigned as public inputs of the rollup circuit.
For each message in L1's `messageQueue` array, perform the following:

- extract the `contract` leaf from the contract tree
- validate that `contract.portalId == message.portalId`
- compute the message leaf `H(messageSeparator, contract.contractAddress, message.messageHash)` and insert it into the message tree
- extract `message.to` and `message.value`; if nonzero, credit `balances[to] += value`

Output `SHA256(messageQueue)` to attest to the messages added into the message tree.
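On the L1 side, the rollup contract could recompute the same hash over the message queue it supplied and compare it against this public output (a sketch; the encoding of `messageQueue` as an array of message hashes is an assumption):

```solidity
pragma solidity ^0.8.0;

// Fragment of a hypothetical rollup contract.
contract RollupAttestation {
    // Recompute SHA256 over the message hashes fed into the circuit and
    // compare with the circuit's public output.
    function verifyMessageAttestation(
        bytes32[] memory messageHashes, // hashes of the messages fed to the circuit
        bytes32 circuitOutput           // SHA256(messageQueue) from the public inputs
    ) internal pure {
        bytes32 recomputed = sha256(abi.encodePacked(messageHashes));
        require(recomputed == circuitOutput, "message queue attestation mismatch");
    }
}
```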