The Sequencer and Prover coordination mechanism will be put to the test during the upcoming Sequencer & Prover Testnet: a permissioned Aztec rollup deployment where multiple sequencers coordinate with provers to build blocks, outsource proof production, and advance the Aztec chain. The S&P Testnet launches in the first week of November.
Goals of sharing this:
Both sequencing and proving in the Aztec Network are intended to be fully decentralized. We expect sequencers to submit blocks to L1 every ~36 seconds, and provers to prove batches of 32 blocks to finalize the L2 chain.
Sequencers will be chosen via a random election, while provers will be selected by sequencers via an out-of-protocol coordination mechanism. The plan is to steadily scale throughput to 1 TPS (i.e. 36-tx blocks posted every 36s), and we expect provers to be able to prove at 1 TPS throughput by mid-November.
The proposers in the first slots of epoch N+1 will accept quotes from provers to prove epoch N. The winning prover will have until the end of epoch N+1 to produce and submit the proof to L1. If a quote is accepted in slot 1 of epoch N+1, the prover will have 18.6 minutes to compute the proof and land it on L1; if a quote is accepted in slot 13, the prover will only have 11.4 minutes.
At 1 TPS, each epoch contains 1,152 txs. Based on these timeliness requirements and depending on proof computational complexity, we expect up to ~1,000 prover agents to be needed at 1 TPS.
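As a quick sanity check on those numbers, here is a small sketch of the arithmetic (the 36s slots, 32-slot epochs and 36-tx blocks are simply the figures quoted above, not protocol constants read from code):

// Back-of-the-envelope arithmetic for the figures quoted above.
const SLOT_SECONDS = 36;   // one L2 block every ~36 seconds
const EPOCH_SLOTS = 32;    // an epoch is 32 slots
const TXS_PER_BLOCK = 36;  // ~1 TPS target: 36 txs per 36-second block

const txsPerEpoch = EPOCH_SLOTS * TXS_PER_BLOCK; // 1,152 txs per epoch

// If a quote is accepted in slot `slot` of epoch N+1, the proof must land before epoch N+1 ends.
const provingWindowMinutes = (slot: number) => ((EPOCH_SLOTS - slot) * SLOT_SECONDS) / 60;

console.log(txsPerEpoch);              // 1152
console.log(provingWindowMinutes(1));  // 18.6 minutes
console.log(provingWindowMinutes(13)); // 11.4 minutes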
For more information on the block production mechanism, please see Aztec’s RFC on block production.
Proposers run RFQs to obtain quotes from provers. Quotes are binding promises from provers to prove an entire epoch. The exact channel over which provers send quotes to proposers is NOT enshrined by the protocol.
However, Aztec Nodes will support two optional mechanisms that provers can use to submit quotes to proposers.
To send a quote via the p2p network, do not set the environment variable PROVER_COORDINATION_NODE_URL and make sure that P2P_ENABLED is set to true.
Note: For S&P Testnet, please make sure that you are gossiping quotes via the p2p network. Set P2P_ENABLED to true and do not use PROVER_COORDINATION_NODE_URL.
struct EpochProofQuote {
  Signature signature;    // prover's signature binding this quote
  address prover;         // address of the prover offering the quote
  uint256 epochToProve;   // epoch number the prover commits to proving
  uint256 validUntilSlot; // slot until which the quote remains valid for acceptance
  uint256 bondAmount;     // bond the prover puts up from its escrow deposit
  address rollupAddress;  // rollup contract the quote applies to
  uint32 basisPointFee;   // fee requested by the prover, in basis points
}
To accomplish this coordination through the Aztec node software, we extend both the P2PClient and the ProverNode. The P2PClient will be extended with:
class P2PClient {
  // ...
  public async addEpochProofQuote(quote: EpochProofQuote): Promise<void> {
    // add the quote to the quote memory pool
    this.epochProofQuotePool.addQuote(quote);
    // propagate the quote via the p2p network
    this.broadcastEpochProofQuote(quote);
  }
}
This is called by the Prover Node inside ProverNode.sendEpochProofQuote() after it detects an epoch has ended.
As for the Prover Node, we add QuoteProvider and BondManager interfaces, plus an EpochMonitor that sits on the main start loop of the Prover Node. The EpochMonitor fetches the most recently completed epochs and checks whether the proposer has accepted an EpochProofQuote. If no quote has been accepted yet, the EpochMonitor will call on the BondManager and QuoteProvider to provide a valid quote. If the detected claim belongs to this prover, the monitor will kick off handleClaim() to create proving jobs.
interface BondManager {
  // Ensure at least `amount` is deposited in the escrow contract, topping up if needed.
  ensureBond(amount: number): Promise<void>;
}

interface QuoteProvider {
  // Return a quote for the given epoch, or undefined to skip quoting it.
  getQuote(epoch: number): Promise<EpochProofQuote | undefined>;
}
When the prover node first starts up, it will call BondManager.ensureBond to ensure it has the minimum deposit amount PROVER_MINIMUM_ESCROW_AMOUNT deposited in the escrow contract. If it does not, it will top up to the target deposit amount PROVER_TARGET_ESCROW_AMOUNT. Both PROVER_MINIMUM_ESCROW_AMOUNT and PROVER_TARGET_ESCROW_AMOUNT are customizable env variables.
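As a rough illustration of that startup behaviour, a minimal BondManager could look like the following. The Escrow interface and the env var parsing here are assumptions made for the sketch, not the actual Aztec escrow bindings:

// Illustrative only: `Escrow` is a hypothetical stand-in for the real escrow contract binding.
interface Escrow {
  depositOf(prover: string): Promise<number>;
  deposit(amount: number): Promise<void>;
}

const MINIMUM = Number(process.env.PROVER_MINIMUM_ESCROW_AMOUNT ?? 0);
const TARGET = Number(process.env.PROVER_TARGET_ESCROW_AMOUNT ?? 0);

class SimpleBondManager {
  constructor(private escrow: Escrow, private prover: string) {}

  // Ensure at least `amount` is escrowed; if it is not, top up to the target deposit amount.
  async ensureBond(amount: number): Promise<void> {
    const current = await this.escrow.depositOf(this.prover);
    if (current < amount) {
      await this.escrow.deposit(TARGET - current);
    }
  }
}

// On startup the prover node would do something along the lines of:
// await new SimpleBondManager(escrow, proverAddress).ensureBond(MINIMUM);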
The EpochMonitor will then get the last completed, unproven epoch and will call on the QuoteProvider to generate a quote if the epoch has not yet been claimed by any prover. The QuoteProvider is given all the blocks in the unproven epoch so it can perform custom logic to determine the quote parameters, e.g. bondAmount and basisPointFee.
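Putting the pieces together, a single iteration of that monitor might look roughly like this. The helper names (getLastCompletedUnprovenEpoch, getProofClaimFor, sendEpochProofQuote, handleClaim) are illustrative; only BondManager, QuoteProvider and EpochProofQuote refer to the types shown earlier:

// Sketch of one EpochMonitor iteration. The `deps` helpers are hypothetical; BondManager,
// QuoteProvider and EpochProofQuote are the types introduced above.
async function monitorTick(deps: {
  getLastCompletedUnprovenEpoch(): Promise<number | undefined>;
  getProofClaimFor(epoch: number): Promise<{ prover: string } | undefined>;
  bondManager: BondManager;
  quoteProvider: QuoteProvider;
  sendEpochProofQuote(quote: EpochProofQuote): Promise<void>;
  handleClaim(epoch: number): Promise<void>;
  myAddress: string;
}): Promise<void> {
  const epoch = await deps.getLastCompletedUnprovenEpoch();
  if (epoch === undefined) return;

  const claim = await deps.getProofClaimFor(epoch);
  if (!claim) {
    // No accepted quote yet: make sure the bond is escrowed, then offer a quote.
    await deps.bondManager.ensureBond(Number(process.env.PROVER_MINIMUM_ESCROW_AMOUNT ?? 0));
    const quote = await deps.quoteProvider.getQuote(epoch);
    if (quote) await deps.sendEpochProofQuote(quote);
  } else if (claim.prover === deps.myAddress) {
    // Our quote was accepted: kick off proving jobs for the epoch.
    await deps.handleClaim(epoch);
  }
}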
Alternatively, the quote provider can issue an HTTP POST to a configurable QUOTE_PROVIDER_URL to get the quote. The request body is JSON-encoded and contains the following fields:

- epochNumber: The epoch number to prove
- fromBlock: The first block number of the epoch to prove
- toBlock: The last block number (inclusive) of the epoch to prove
- txCount: The total number of txs in the epoch
- totalFees: The accumulated total fees across all txs in the epoch

The response is also expected in JSON and to contain basisPointFee and bondAmount fields. Optionally, the response can include a validUntilSlot parameter which specifies for how many slots the quote remains valid. For example, an EpochProofQuote with parameters epochToProve=100 and validUntilSlot=5 means that any of the first 5 proposers in epoch 101 can "claim" this quote.
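To make the contract concrete, here is a minimal sketch of a remote quote provider that speaks this request/response shape. The pricing logic, port and values below are made up for illustration:

import { createServer } from 'http';

// Minimal remote quote provider sketch: the prover node POSTs epoch details here
// (when QUOTE_PROVIDER_URL points at this service) and expects bondAmount and
// basisPointFee back. The pricing below is arbitrary example logic.
createServer((req, res) => {
  let body = '';
  req.on('data', chunk => (body += chunk));
  req.on('end', () => {
    // The request also carries epochNumber, fromBlock, toBlock and totalFees.
    const { txCount } = JSON.parse(body);
    const quote = {
      bondAmount: '100000',                      // example value; units depend on the deployment
      basisPointFee: txCount > 1000 ? 300 : 500, // e.g. charge fewer bps for larger epochs
      validUntilSlot: 5,                         // optional: claimable by the first 5 proposers
    };
    res.setHeader('content-type', 'application/json');
    res.end(JSON.stringify(quote));
  });
}).listen(8153);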
If no QUOTE_PROVIDER_URL is passed to the Prover Node, then a SimpleQuoteProvider is used, which always returns the same basisPointFee and bondAmount as set in the QUOTE_PROVIDER_BASIS_POINT_FEE and QUOTE_PROVIDER_BOND_AMOUNT environment variables.
Warning
If the remote QuoteProvider does not return a bondAmount or a basisPointFee, the Prover Node will neither generate nor submit a quote to the proposer.
Separately, the Prover Node needs a watcher on L1 to detect if its quote has been selected. To this end, the L1Publisher will be extended with a new method:
interface L1Publisher {
getProofClaim(): Promise<EpochProofClaim>;
}
The Prover Node will call this method at least once per L2 slot to check for unclaimed epochs or to see whether its quotes have been accepted. You can update the polling interval using the environment variable PROVER_NODE_POLLING_INTERVAL_MS.
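For illustration, that watcher can be as simple as the following loop. The shape of the returned claim object (prover, epochToProve) is an assumption here; only getProofClaim() and the polling env var come from the text above:

// Illustrative polling loop around L1Publisher.getProofClaim(). Field names on the claim
// object are assumed for the sketch.
function watchProofClaims(
  l1Publisher: { getProofClaim(): Promise<{ epochToProve: number; prover: string } | undefined> },
  myAddress: string,
  startEpochProvingJob: (epoch: number) => Promise<void>,
) {
  const intervalMs = Number(process.env.PROVER_NODE_POLLING_INTERVAL_MS ?? 10_000);
  setInterval(async () => {
    const claim = await l1Publisher.getProofClaim();
    if (claim && claim.prover.toLowerCase() === myAddress.toLowerCase()) {
      await startEpochProvingJob(claim.epochToProve); // one of our quotes was accepted
    }
  }, intervalMs);
}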
The Orchestrator is a component of the Prover Node. It encodes the rules that govern how the tree of proofs is constructed and attempts to achieve maximum parallelism. The Orchestrator enqueues jobs to the Prover Broker and periodically checks whether any proofs have completed.
The Prover Broker sits between the Prover Node and the many Prover Agents that will be producing proofs to complete an epoch. The broker's main responsibility is to take proving jobs from the Prover Node and hand them out to agents in a robust way.
In the Aztec software, a Prover Broker is any service that implements the ProvingJobProducer and ProvingJobConsumer interfaces. Out of the box we ship a simple service that maintains a couple of queues in memory. The proving jobs are also backed up to disk so that the broker is able to recover after a crash. More advanced queueing mechanisms can be implemented as needed (e.g. Redis, Kafka); the interfaces we use are purposefully kept simple.
The built-in Prover Broker does not make any outgoing calls and it is accessible over JSON-RPC to both the Prover Node (specifically the Orchestrator) and to Prover Agents.
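As a sketch of the queueing idea (not the actual ProvingJobProducer/ProvingJobConsumer interfaces, whose exact signatures live in the Aztec codebase), an in-memory broker boils down to something like:

// Illustrative in-memory broker: the Orchestrator enqueues jobs, agents dequeue them and
// report results back. Types and method names are made up for this sketch.
type ProvingJob = { id: string; type: string; inputs: unknown };
type ProvingResult = { id: string; proof: unknown };

class InMemoryBroker {
  private queue: ProvingJob[] = [];
  private results = new Map<string, ProvingResult>();

  // Called by the Prover Node (Orchestrator) to add work.
  enqueueJob(job: ProvingJob): void {
    this.queue.push(job);
  }

  // Called by a Prover Agent to pick up the next available job, if any.
  dequeueJob(): ProvingJob | undefined {
    return this.queue.shift();
  }

  // Called by a Prover Agent once a proof has been computed.
  submitResult(result: ProvingResult): void {
    this.results.set(result.id, result);
  }

  // Polled by the Orchestrator to check whether a job has completed.
  getResult(id: string): ProvingResult | undefined {
    return this.results.get(id);
  }
}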
The Aztec client can be run as a Prover Node. In this mode, the client will automatically monitor L1 for unclaimed epochs and propose bids (i.e. EpochProofQuotes) for proving them. The Prover Node also watches L1 to see when one of its bids has been accepted by a sequencer, at which point it kicks off an epoch proving job.
The Aztec client needed to run a prover node is shipped as a docker image aztecprotocol/aztec. The image exposes the Aztec CLI as its ENTRYPOINT, which includes a start command for starting different components. You can download it directly or use the sandbox scripts, which will automatically pull the image and add the aztec shell script to your path.
Once the aztec command is available, you can run a prover node via:
aztec start --prover-node --archiver
To run a prover agent, either run aztec start --prover, or add the --prover flag to the command above to start an in-process prover.
The Aztec client is configured via environment variables, the following ones being relevant for the prover node:
- ETHEREUM_HOST: URL to an Ethereum node.
- L1_CHAIN_ID: Chain ID for the L1 Ethereum chain.
- DATA_DIRECTORY: Local folder where archive and world state data is stored.
- AZTEC_PORT: Port where the JSON-RPC APIs will be served.
- PROVER_PUBLISHER_PRIVATE_KEY: Private key used for publishing proofs to L1. Ensure it corresponds to an address with ETH to pay for gas.
- PROVER_AGENT_ENABLED: Whether to run a prover agent process on the same host running the Prover Node. We recommend setting this to false and running prover agents on separate hosts.
- P2P_ENABLED: Set to true so that your node can discover peers, receive tx data and gossip quotes to sequencers.
- PROVER_COORDINATION_NODE_URL: Send quotes via HTTP. Only used if P2P_ENABLED is false.
- BOOT_NODE_URL: The URL of the boot node for peer discovery.
- AZTEC_NODE_URL: Used by the Prover Node to fetch the L1 contract addresses if they were not manually set via env vars.

Note
For S&P Testnet, we will be providing an Ethereum host, a Boot Node URL and a specific Aztec Image. Please refer to the
The prover agent, on the other hand, relies on the following environment variables:
- PROVER_BROKER_HOST: URL to the Prover Node that acts as a proving job source.
- PROVER_AGENT_CONCURRENCY: Maximum concurrency for this given prover agent. Defaults to 1.

Both the prover node and agent also rely on the following:
- PROVER_REAL_PROOFS: Whether to generate actual proofs, as opposed to only simulating the circuit and outputting fake proofs. Set to true for the scope of the S&P Testnet.
- LOG_LEVEL: One of debug, verbose, info, warn, or error.
- LOG_JSON: Set to true to output logs in JSON format (unreleased).
- OTEL_EXPORTER_OTLP_METRICS_ENDPOINT: Optional URL for pushing telemetry data to a remote OpenTelemetry data collector.

Aztec's circuits are structured such that a single root rollup proof is produced that attests to the correct execution of a sequence of transactions throughout an epoch. An epoch is defined as 32 consecutive slots.
Proofs in Aztec are initially split into two kinds: client and server proofs.
Client proofs are generated by users in their own devices when they send a transaction, and use the ClientIVC proving system. These proofs are zero-knowledge, and prove the correct execution of smart contract functions on the client, attesting to the validity of the transaction side effects (emitted note hashes, nullifiers, encrypted logs, etc).
Server proofs are generated by provers. These are Honk proofs, and can be roughly split into 3 domains: parity circuits, per-transaction circuits (tube and AVM), and rollup circuits.
Note that generating each proof requires two steps: first simulating the circuit to generate a witness, and then computing the proof itself over that witness.
The parity circuits attest to cross-chain messaging. All L1 to L2 messages in a given block are batched and processed by individual base parity circuits, whose output is then aggregated into a root parity circuit. This process is repeated for every block in the epoch.
Each transaction submitted by a user has an associated ClientIVC proof. The first step in proving is to transform this proof into a Honk proof that can be recursively fed into other circuits. This is handled by the initial tube circuit.
Then, a transaction may have multiple public function calls (or none) that are executed by the sequencer. Each of these public executions needs to be proven using the AVM circuit. The outputs of the AVM and tube circuits are then fed into the Public Base Rollup.
If the transaction has no public function calls, then the output of the tube circuit is fed directly into the next stage of proving.
The output of each transaction is passed on to a base rollup circuit. These circuits then get aggregated into a binary tree via merge circuits, up to a block root rollup circuit that attests to the validity of an entire block.
Block root rollups are then again merged into a binary tree using block merge rollup circuits, until they are merged into a root rollup circuit that proves the validity of the entire epoch.
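The recursion described above can be summarised in a short sketch. The circuit names follow the prose; the stubbed functions, the proof type and the padding behaviour for odd counts are simplifications, not the real circuit APIs:

// Illustrative aggregation structure for one epoch. Each stub returns a label standing in
// for the proof the corresponding circuit would produce.
type Proof = string;
const proveBase = (txProof: Proof): Proof => `base(${txProof})`; // private or public base rollup
const proveMerge = (l: Proof, r: Proof): Proof => `merge(${l},${r})`;
const proveBlockRoot = (txs: Proof, parity: Proof): Proof => `blockRoot(${txs},${parity})`;
const proveBlockMerge = (l: Proof, r: Proof): Proof => `blockMerge(${l},${r})`;
const proveRoot = (blocks: Proof): Proof => `root(${blocks})`;

// Pair-wise reduce a list of proofs into a binary tree using a merge circuit.
// (The real circuits have specific padding rules; odd elements are simply carried up here.)
function reduceTree(leaves: Proof[], merge: (l: Proof, r: Proof) => Proof): Proof {
  let level = leaves;
  while (level.length > 1) {
    const next: Proof[] = [];
    for (let i = 0; i < level.length; i += 2) {
      next.push(i + 1 < level.length ? merge(level[i], level[i + 1]) : level[i]);
    }
    level = next;
  }
  return level[0];
}

function proveEpoch(blocks: { txProofs: Proof[]; rootParity: Proof }[]): Proof {
  const blockRoots = blocks.map(b =>
    proveBlockRoot(reduceTree(b.txProofs.map(proveBase), proveMerge), b.rootParity),
  );
  return proveRoot(reduceTree(blockRoots, proveBlockMerge));
}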
This is the entire set of circuits that need to be recursively proven in order to complete the proof for a given epoch.
The gate counts below fluctuate as we push more code, but are based on observations; the memory requirements, on the other hand, are estimates. Proving times depend on thread availability: efficient multi-threading is implemented for up to 16 cores, and there are additional improvements from using 32 cores instead of 16, but speed-ups are marginal beyond that.
| Circuit | # of Gates | Memory Requirements |
|---|---|---|
| Tube Circuit | 4 million | 12 GiB |
| Private Base Rollup | 2.3 million | 9 GiB |
| Public Base Rollup | 3.7 million | 12 GiB |
| Merge Rollup | 1.8 million | 9 GiB |
| Block Root | 4.4 million | 12 GiB |
| Root Rollup | 10 million | 30 GiB |