# Astria: The Shared Sequencer Network
By: [Elizabeth](https://twitter.com/elizabethereum)
Astria is developing a decentralized shared sequencer network. We have designed the network to provide rollups with fast finality, censorship resistance, composability guarantees, and decentralization at inception.
This post explores the current state of rollups, explains parts of Astria's architecture, and shows how Astria's sequencer tackles the limitations of centralized rollup operation through shared sequencing.
## 1. How Rollups Work
A rollup is a blockchain that consists of a state transition function executed over some subset of data contained in another blockchain, the L1. Non-sovereign rollups additionally implement bridging to and from the L1 by enshrining the rollup’s state transition in the L1.
### 1.1 Rollup full node architecture
A rollup needs to perform the following:
- read relevant subsets of data from the L1
- turn this data into rollup transactions
- execute these transactions to form a rollup block
The first two points are part of “rollup consensus” while the last is “rollup execution”.
![Updated Rollup](https://hackmd.io/_uploads/SkC7kApqa.png)
The main component required for rollup consensus is an L1 derivation function, which derives the transactions for a rollup block deterministically from the data on the L1. This data generally looks like calldata or blob data posted to the chain and/or events emitted from a contract. This data is extracted and turned into rollup transactions.
After the set of rollup transactions for a block has been derived, they are passed to the rollup execution layer, which executes the transactions and updates the rollup state. After execution, a new rollup state root is created.
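To make the split concrete, here is a minimal Rust sketch of the two halves; all types and function names are hypothetical, for illustration only, and not part of any rollup's actual codebase.

```rust
// Hypothetical types for illustration; not actual Astria or rollup APIs.
struct L1Block {
    /// Blobs or calldata posted to the L1 that may contain rollup data.
    data_blobs: Vec<Vec<u8>>,
}

struct RollupTx(Vec<u8>);

struct RollupState {
    state_root: [u8; 32],
}

/// "Rollup consensus": deterministically derive rollup transactions
/// from the subset of L1 data that belongs to this rollup.
fn derive_rollup_txs(l1_block: &L1Block, rollup_id: &[u8]) -> Vec<RollupTx> {
    l1_block
        .data_blobs
        .iter()
        // Keep only blobs tagged with our rollup's identifier.
        .filter(|blob| blob.starts_with(rollup_id))
        // Strip the tag; the remainder is an opaque rollup transaction.
        .map(|blob| RollupTx(blob[rollup_id.len()..].to_vec()))
        .collect()
}

/// "Rollup execution": apply the derived transactions and produce a new state root.
fn execute_block(state: &mut RollupState, txs: &[RollupTx]) -> [u8; 32] {
    for _tx in txs {
        // Rollup-specific state transition function goes here.
    }
    state.state_root // placeholder: a real node recomputes the root after execution
}
```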
Unlike L1 nodes, rollup nodes do not necessarily need to form a p2p network and communicate with each other for consensus. The chain’s consensus is provided by the L1, and all (properly-functioning) rollup nodes will deterministically derive the same state given the same L1 state.
However, rollup nodes may still wish to form a p2p network for two reasons: to gossip fast confirmations received from the sequencer, and to serve rollup light nodes.
### 1.2 Rollup sequencers
In the above section, we discussed how a rollup derives its transactions, but how does the rollup data end up on L1?
A simple solution is to have users post their rollup transactions to the L1 directly. However, this is not ideal: the economics are poor, since users have to pay for L1 inclusion on top of L2 execution, and users need to wait for L1 blocks to know whether their transaction was included.
A more cost- and time-effective solution is to use a sequencer. Rollup sequencers are akin to block producers on L1. They have the privilege of determining ordering and inclusion for rollup blocks. The sequencer collects a set of rollup transactions, batches them, optionally compresses them, and posts them to L1. Compression has the benefit of reducing the L1 inclusion cost for each rollup transaction. Sequencers also provide “pre-confirmations” or “fast confirmations”, meaning the sequencer can provide a commitment to the user that their transaction will end up in a specific rollup block before the corresponding L1 block has actually been published.
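As a rough sketch of the batching step, assuming a simple length-prefixed encoding and gzip compression via the flate2 crate (neither of which is necessarily the format any particular sequencer uses):

```rust
use std::io::Write;

use flate2::{write::GzEncoder, Compression};

/// Collect pending rollup transactions, batch and compress them, producing the
/// payload a sequencer would post to the L1. Compression amortizes L1 data costs
/// across all included transactions.
fn build_batch(pending_txs: &[Vec<u8>]) -> std::io::Result<Vec<u8>> {
    // Simple length-prefixed concatenation so the batch can be split apart again.
    let mut raw = Vec::new();
    for tx in pending_txs {
        raw.extend_from_slice(&(tx.len() as u32).to_be_bytes());
        raw.extend_from_slice(tx);
    }

    // Compress the whole batch before posting it to the L1.
    let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    encoder.write_all(&raw)?;
    encoder.finish()
}
```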
To date, sequencers have been implemented as centralized services, generally run by the rollup teams themselves. This means there is one party responsible for rollup transaction ordering and inclusion, leading to a monopoly on MEV extraction and the potential for transaction censorship. While there is always the “escape hatch” by which users post their rollup transactions directly to the L1 for inclusion on the rollup, this is a poor UX alternative.
### 1.3 Decentralized sequencers
A decentralized sequencer network is an alternative to the centralized service, where multiple sequencer nodes each have the ability to propose a batch of rollup transactions.
The flow would then look like:
- a proposer proposes some transaction batch, or commitment to some transaction batch
- the network reaches consensus over this batch
- the transaction batch data is made available to rollup nodes
A simple solution is to have a permissioned set, where only batches signed by a key from the chosen set are allowed to be derived into rollup blocks. To make this permissionless, we need to allow anyone to join the sequencer node set - e.g. through proof-of-stake. By putting up some stake, anyone can join the sequencer network as a consensus node, and have the privilege of proposing transaction batches.
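As a sketch of the permissioned variant, a rollup node might gate batches as follows; the types are hypothetical and the signature check is left as a placeholder.

```rust
use std::collections::HashSet;

/// A batch of rollup transactions signed by a sequencer proposer.
/// Hypothetical structure, for illustration only.
struct SignedBatch {
    proposer_pubkey: Vec<u8>,
    payload: Vec<u8>,
    signature: Vec<u8>,
}

/// Accept a batch only if it was signed by a member of the allowed proposer set.
/// In a proof-of-stake design, `allowed_proposers` would be derived from the
/// staked validator set rather than a fixed whitelist.
fn is_acceptable(batch: &SignedBatch, allowed_proposers: &HashSet<Vec<u8>>) -> bool {
    allowed_proposers.contains(&batch.proposer_pubkey)
        && verify_signature(&batch.proposer_pubkey, &batch.payload, &batch.signature)
}

/// Placeholder for a real signature check (e.g. Ed25519); rejecting everything here
/// keeps the sketch self-contained without pulling in a crypto library.
fn verify_signature(_pubkey: &[u8], _msg: &[u8], _sig: &[u8]) -> bool {
    false
}
```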
There are a few options regarding the asset(s) used as stake on the sequencer network. The first is to create a native asset on the sequencer network to be used for staking, as well as for transaction fees. The second is to use the asset of an underlying data availability layer for staking and transaction fees.
### 1.4 Shared sequencers
![Decentralized Shared Sequencer](https://hackmd.io/_uploads/HJd8N0T56.png)
Currently, sequencers are implemented for one specific rollup. Instead of this, we can have a sequencer batch transactions for many rollups. With data compression, this allows for greater cost savings when posting data to L1. A sequencer network that is decentralized and shared incentivizes actors from multiple rollup ecosystems to potentially act as validators on the sequencer network.
Current sequencer implementations generally also execute the transactions they sequence. A sequencer that sequences for many rollups can either execute the transactions for each rollup it sequences for, or execute no transactions at all and remain ignorant of the rollup state machines it’s sequencing for (known as lazy sequencing).
The downside to executing the transactions for each rollup is that it’s much more difficult to add new rollups to the shared sequencer, as the sequencer needs to be forked and made aware of a new state machine. Additionally, as the state of the rollups grows, the sequencer execution time gets slower and slower (the same way state bloat affects L1s), but it’s potentially even worse over time since many states are involved.
The better solution is the second, where the shared sequencer’s function is reduced to only sequencing and not executing. In this model, the shared sequencer batches and orders generic transaction data, where the data is tagged with the rollup it’s destined for, and only after the sequencer commits to the batch is the data executed by rollup nodes.
#### Atomic cross-rollup composability
By sharing a sequencer between multiple rollups, it’s possible to have some level of atomic cross-rollup composability between rollups that was not previously possible. For instance, we can guarantee that two transactions for different rollups are both included in the same batch. Currently, rollups are completely disjoint, with only the L1 acting as the common base layer. Any messaging between rollups would need to happen asynchronously using the L1, and the rollups would need to implement some way of handling those messages and deriving state from them.
If we had a shared sequencer that was aware of the state machine of each rollup it sequenced, then we would be able to guarantee atomic cross-rollup execution -- for instance, an atomic bridge between rollup A and rollup B could be implemented where a lock transaction on rollup A and a mint on rollup B are batched and executed together. Since the sequencer is able to simulate the results of execution before committing to the batch, it will only include the lock/mint duo if it can guarantee it will succeed on both chains. Otherwise, it discards both transactions. However, as stated above, having execution on the sequencer layer is not ideal.
If the shared sequencer does not execute transactions and only sequences, we can no longer guarantee atomic cross-rollup execution, only atomic cross-rollup inclusion, which is a much weaker guarantee. This means we can guarantee that a tx for rollup A and a tx for rollup B end up in the next sequencer batch together, but not that they’re both executed successfully -- the tx on rollup A might revert, for instance. With this, we can no longer implement atomic bridging between rollups, or synchronous messaging.
There are potential workarounds for this, for example, having cross-rollup block builders that are aware of the state machines of multiple rollups, plus having top-of-block guarantees on the sequencer. Then, the block builder can execute our lock/mint duo on its respective rollups, and determine if they will succeed if placed at the top of the next block (i.e. there are no other transactions that come before them, thus no state changes that could potentially cause one to revert). If they both succeed if placed at the top of the next block, the block builder can then submit the duo as an atomic bundle to the sequencer, and say “only include this bundle if it’s at the top of the block”. Because it is known when the transactions execute on their respective rollups, they are guaranteed to succeed.
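A hedged sketch of that builder flow, with purely hypothetical types standing in for real rollup simulation and sequencer APIs:

```rust
/// A cross-rollup bundle: one transaction per rollup, to be included atomically.
/// These types are hypothetical, sketching the idea rather than a real protocol.
struct Bundle {
    rollup_a_tx: Vec<u8>,
    rollup_b_tx: Vec<u8>,
    /// The builder asks the sequencer to include this bundle only if it can be
    /// placed at the very top of the block, so the simulated state still holds.
    top_of_block_only: bool,
}

/// The builder runs full nodes for both rollups and can simulate execution against
/// their latest state; the closures here stand in for real VM simulation calls.
fn try_build_bundle(
    lock_tx: Vec<u8>,
    mint_tx: Vec<u8>,
    simulate_on_a: impl Fn(&[u8]) -> bool,
    simulate_on_b: impl Fn(&[u8]) -> bool,
) -> Option<Bundle> {
    // Only submit the pair if both legs succeed when executed first in their blocks.
    if simulate_on_a(lock_tx.as_slice()) && simulate_on_b(mint_tx.as_slice()) {
        Some(Bundle {
            rollup_a_tx: lock_tx,
            rollup_b_tx: mint_tx,
            top_of_block_only: true,
        })
    } else {
        None
    }
}
```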
However, this requires design at the protocol level regarding proposer-builder separation and top-of-block auctions. It also puts a limitation on how many atomic bundles can be executed per sequencer batch. As the sequencer doesn’t know the state of the rollups, how can it know how many bundles touching the same few rollups it can safely include (unless we severely constrain the sequencer to, say, only one top-of-block transaction per rollup)? It would require some sort of proof on the builder’s end, or an economic guarantee that far outweighs the cost of having part of an atomic bundle revert. However, there is probably no sufficient economic guarantee; for instance, in the case of an atomic bridging lock/mint, if the builder’s penalty for submitting a partially-reverted bundle is lower than what it can gain from a failed lock and successful mint, then it’s clear which it will do.
Overall, atomic cross-rollup execution has not been successfully designed for any shared sequencer (yet). More research is needed!
### 1.5 Rollup light nodes
Rollup light nodes, like normal L1 light nodes, follow the consensus of the main chain and verify and sync its headers, without executing the full transaction data of the chain. A rollup light node needs to do a few more things to verify headers than an L1 light node.
A rollup light node needs to:
- implement an L1 consensus light client
- implement an L2 consensus light client
- ensure that the transaction data for each L2 block was published
A light node of a rollup that uses a shared sequencer needs to verify the consensus of the sequencer chain, as the sequencer acts as the equivalent of the L1 - i.e. it’s where transaction inclusion and ordering is finalized. It needs to follow the headers and verify consensus of the sequencer chain. Since light nodes don’t store the blockchain state, to verify if a rollup transaction was included in some rollup block X, the light node first needs a Merkle proof that the rollup transaction was included in the transactions/data root of some sequencer chain header. Then, if it follows that rollup block X was derived from sequencer block Y (using the rollup derivation function), it knows that the transaction should be included in rollup block X. The light node also needs to check that the block data was published, which it can do via data availability sampling (for example, when using Celestia).
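For illustration, the inclusion-proof check might look like the following sketch (SHA-256 via the sha2 crate; real chains add domain separation and their own leaf encodings, so treat the details as assumptions):

```rust
use sha2::{Digest, Sha256};

/// One step of a Merkle inclusion proof: the sibling hash and whether it sits
/// on the left of the current node. Layout and hashing are illustrative only.
struct ProofStep {
    sibling: [u8; 32],
    sibling_is_left: bool,
}

fn hash_leaf(data: &[u8]) -> [u8; 32] {
    Sha256::digest(data).into()
}

fn hash_pair(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(left);
    h.update(right);
    h.finalize().into()
}

/// Verify that `tx_bytes` is included under `data_root` (e.g. the transactions/data
/// root in a sequencer chain header that the light node has already verified).
fn verify_inclusion(tx_bytes: &[u8], proof: &[ProofStep], data_root: &[u8; 32]) -> bool {
    let mut node = hash_leaf(tx_bytes);
    for step in proof {
        node = if step.sibling_is_left {
            hash_pair(&step.sibling, &node)
        } else {
            hash_pair(&node, &step.sibling)
        };
    }
    &node == data_root
}
```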
## 2. The Astria Stack
![Component Diagram](https://hackmd.io/_uploads/HJHjE0Tca.png)
Astria is building a decentralized sequencer network that can be shared by many rollups.
At a high level, the Astria stack performs the following functions:
- sequences arbitrary data for usage by multiple rollups
- makes this data available to rollup nodes
- allows rollup nodes to easily fetch and verify sequenced data
The first two are mandatory, while the last is implemented more for the developer experience, allowing rollup developers to focus only on the rollup-specific application logic, as opposed to the other aspects such as rollup consensus.
The first requirement (sequencing of arbitrary data for rollups) is implemented by the Astria sequencer network, a PoS network of sequencer nodes that use CometBFT for consensus. The sequencer network comes to consensus on the ordering and inclusion of rollup transactions of the form (rollup_id, tx_bytes). The rollup_id can be any arbitrary string; it’s used only by rollup nodes to determine which data is for them. The second (making data available to rollup nodes) is achieved by publishing the sequenced data via Celestia.
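A minimal sketch of that transaction shape and of the per-rollup grouping a lazy sequencer performs; field and function names are illustrative rather than Astria's actual wire format:

```rust
use std::collections::HashMap;

/// A sequencer-level transaction as described above: an arbitrary rollup identifier
/// plus opaque transaction bytes. Field names are illustrative only.
struct SequenceAction {
    /// Identifies which rollup the data is destined for; the sequencer never
    /// interprets it beyond grouping data per rollup.
    rollup_id: Vec<u8>,
    /// Opaque bytes; only the rollup's own execution node knows how to decode them.
    tx_bytes: Vec<u8>,
}

/// Group a block's sequenced actions by rollup, which is all a lazy sequencer
/// needs to do before committing to per-rollup data.
fn group_by_rollup(actions: Vec<SequenceAction>) -> HashMap<Vec<u8>, Vec<Vec<u8>>> {
    let mut grouped: HashMap<Vec<u8>, Vec<Vec<u8>>> = HashMap::new();
    for action in actions {
        grouped.entry(action.rollup_id).or_default().push(action.tx_bytes);
    }
    grouped
}
```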
The third (allowing rollup nodes to easily fetch and verify sequenced data) is achieved by the Astria “Conductor”, which works similarly to existing rollup “consensus nodes”, such as op-node within the OP Stack. The conductor obtains the sequenced data, verifies it, and derives the transactions for a specific rollup, all while remaining agnostic to the transaction format and state transition function of the rollup execution node. It then passes the derived transactions to the rollup execution node for processing.
The Astria sequencer network is a lazy sequencer. This means that data is sequenced and committed, but execution is delayed until necessary, and is left to the rollup. This decouples the execution logic from the consensus logic, removing consensus bottlenecks and allowing for more flexibility for rollup developers. For example, since the data is executed lazily, a rollup may choose to have 2 rollup blocks per Astria block, or 1 rollup block per Astria block. The rollup’s consensus and execution logic is not enshrined in the sequencer.
### 2.1 The Astria shared sequencer network
The Astria sequencer network uses CometBFT (formerly Tendermint) as its consensus algorithm. At a minimum, the sequencer network needs to implement a decentralized leader selection algorithm which rotates between proposers. Ideally, it is also able to provide single-slot (“fast”) finality, which prevents forks from occurring, allowing for simplified chain derivation logic on the rollups which use it. CometBFT is able to provide both of these. Additionally, CometBFT is battle-tested and has been used in production, on many chains, for years. It allows blockchain applications (application, in this case, meaning the chain’s execution logic) which implement ABCI (Application Blockchain Interface) to easily use it as their consensus layer. Additionally, CometBFT-enabled chains have the ability to support IBC (inter-blockchain communication), meaning they have the potential to bridge between many other chains.
Astria sequencer’s execution logic is implemented as a CometBFT application, written in Rust. The application logic allows for three main functions:
- sequencing of rollup data
- value transfers
- validator set changes
During the consensus phase of the sequencer network, the proposer decides on the transactions for the block and creates a commitment to the rollup data for each rollup_id. This is then verified and only finalized if agreed upon by the majority of other nodes. This allows for rollups to only pull the data necessary for their rollup, checking that the commitment to it matches what was finalized by the sequencer chain, without needing to pull the entire sequencer block’s data.
The sequencer network also needs to make its block data available to rollup nodes. During the consensus process, the block data needs to be made available to all validators, so that they can verify and vote on the rollup data commitments (ie. they need to ensure the commitments match the block data). After the consensus process, the data is made more widely available via a data availability layer. Celestia is used for DA, as it supports data availability sampling via light nodes, meaning a light node is able to check if data was made available.
### 2.2 The Astria Conductor
![Updated Conductor](https://hackmd.io/_uploads/r1ez1CTqa.png)
The Conductor can be thought of as the consensus implementation of a rollup full node, similar to op-node within the OP Stack. The conductor runs as one half of a rollup full node, where the other half is the execution engine. Its role is to connect the sequencer and DA layers to the rollup execution layer by deriving the relevant rollup transactions for each block and passing them to the execution layer.
The conductor’s flow looks like:
- for each sequencer block, parse the relevant rollup data it needs
- validate the batch of rollup data; this includes verifying that the corresponding sequencer block was finalized, as well as verifying that the rollup data it extracted is complete (there are no rollup transactions missing), correct (there are no rollup transactions in the batch that shouldn’t be), and properly-ordered (the ordering of the rollup transactions matches what was finalized by the sequencer chain). It is able to do this by verifying the data against the rollup data commitment included in the sequencer block.
- once it has validated the rollup data, it turns it into a list of transactions, which are passed to the execution engine.
Note that the conductor, like the sequencer, is agnostic to the rollup’s transaction format and execution engine; it simply treats transactions as an arbitrary byte array.
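Conceptually, what the conductor needs from the rollup execution layer is a small interface: opaque transaction bytes in, a new block out. The trait below is a hypothetical sketch of that shape, not Astria's actual execution API.

```rust
/// The interface the conductor needs from a rollup execution engine, sketched as a
/// Rust trait. Method names are hypothetical; the point is that the conductor only
/// hands over opaque bytes and receives back block/state identifiers.
trait ExecutionEngine {
    type Error;

    /// Execute a batch of derived, still-opaque rollup transactions on top of the
    /// given parent block, returning the new rollup block hash / state root.
    fn execute_block(
        &mut self,
        parent_block_hash: [u8; 32],
        txs: Vec<Vec<u8>>,
    ) -> Result<[u8; 32], Self::Error>;

    /// Mark a previously executed block as final once the corresponding
    /// sequencer (and/or DA) block is final.
    fn finalize_block(&mut self, block_hash: [u8; 32]) -> Result<(), Self::Error>;
}
```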
#### Data verification
The conductor needs to verify the data it receives before passing it to the rollup. Specifically, it needs to verify that the data is actually what was sequenced by the sequencer in the correct order, that there is no data missing, and that there is no additional data added. The conductor wishes to do this without pulling the entire set of sequenced data or the entire sequencer block, as this is additional data it doesn’t need.
This is achieved by placing a commitment to the entire set of sequenced data in every sequencer block. This commitment is the Merkle root of a tree where each leaf is a commitment to the data for one specific rollup. This commitment is checked by every validator node on the sequencer network, thus has the majority of stake backing it. Then, the conductor can verify the rollup data received is correct by validating:
- the commitment to the entire set of sequenced data was committed by the network, via the set of validator signatures on the block
- the commitment to its specific rollup data is included in that, via a merkle proof of inclusion
- the rollup data it receives corresponds to the commitment for that rollup data, via recalculating the commitment
After all of these steps are done, the conductor can be certain that the rollup data is actually what was sequenced.
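As an illustration of the last check in the list above, a conductor could recompute the per-rollup commitment from the data it received and compare it to the committed value; the leaf encoding and tree construction here are assumptions for the sketch, not Astria's exact scheme.

```rust
use sha2::{Digest, Sha256};

/// Recompute the commitment to one rollup's data as a Merkle root over its
/// transactions. The exact encoding is an assumption; what matters is that the
/// conductor can rebuild the commitment from the data it actually received.
fn rollup_data_commitment(txs: &[Vec<u8>]) -> [u8; 32] {
    let mut leaves: Vec<[u8; 32]> = txs.iter().map(|tx| Sha256::digest(tx).into()).collect();
    if leaves.is_empty() {
        return Sha256::digest(b"").into();
    }
    // Fold pairs of nodes until a single root remains (duplicating the last odd node).
    while leaves.len() > 1 {
        leaves = leaves
            .chunks(2)
            .map(|pair| {
                let mut h = Sha256::new();
                h.update(pair[0]);
                h.update(*pair.last().unwrap());
                h.finalize().into()
            })
            .collect();
    }
    leaves[0]
}

/// The last check above: the data the conductor received must hash to the
/// per-rollup commitment that was proven (via a Merkle inclusion proof) to be part
/// of the block-wide commitment signed by the validator set.
fn rollup_data_matches(received_txs: &[Vec<u8>], committed: &[u8; 32]) -> bool {
    &rollup_data_commitment(received_txs) == committed
}
```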
There is an additional verification step needed: since the conductor doesn’t pull an entire block’s data, there needs to be consensus over what rollup IDs were sequenced in a sequencer block. It’s possible that a sequencer node doesn’t advertise that a rollup ID was included (i.e. by not posting it to DA). Then, the conductor would think that it simply had no sequenced data in a certain block when it actually did.
This is fixed by having a commitment to all the rollup IDs sequenced inside each sequencer block, which is checked and voted on by every validator. Then, the conductor receives a list of all the rollup IDs sequenced in a block and verifies that against the rollup IDs commitment to be certain as to whether a block had data for it or not.
### 2.3 Bridging on Astria
#### Fee payments
With a shared sequencer, transaction data touches three different chains: the sequencer chain, the data availability chain, and the rollup chain. Each of these requires a fee payment for DoS prevention. If each chain requires a different token for fee payment, this causes a poor UX. Many rollups built on Ethereum allow for bridged ETH to be used to pay fees, alleviating UX concerns, as users only need to obtain one, widely-available token (ETH).
We can do something similar with the sequencer network. Assuming the data availability network’s token is the most easily accessible and established, we can bridge the DA token to the sequencer network to use as fee payment, and also bridge the DA token to the rollup via the usual rollup bridging (deposit derivation) method. In our case, this means bridging TIA (Celestia) to Astria via IBC, and allowing it to be used for fee payments. Then, a rollup can optionally choose to accept (IBC-)TIA as a token for fee payments as well.
#### Rollup bridging
To bridge tokens to a rollup built on Astria, the rollup needs to add the ability to derive deposit transactions from the sequencer or DA network. In general, rollup deposits work as follows:
- on the L1, tokens are transferred to some escrow account/contract.
- the rollup consensus node, which derives the L2 transactions from L1 data, sees these deposits, and includes a corresponding “deposit” transaction in the next L2 block, which is a distinct transaction type.
- the L2 node executes these deposit transactions, minting synthetic funds on the L2 to the respective account.
A rollup on Astria would have to implement something like this to bridge from the sequencer/DA to it.
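A hedged sketch of that derivation step, with hypothetical types (the memo-based recipient field and transaction shapes are illustrative only):

```rust
/// A transfer observed on the sequencer (or DA) chain into the rollup's escrow
/// account. Hypothetical shape for illustration.
struct EscrowTransfer {
    from: [u8; 20],
    amount: u128,
    /// The rollup address that should be credited, carried in the transfer's memo.
    rollup_recipient: [u8; 20],
}

/// A distinct rollup transaction type that mints synthetic funds when executed,
/// alongside ordinary user transactions.
enum DerivedTx {
    Deposit { recipient: [u8; 20], amount: u128 },
    User(Vec<u8>),
}

/// Derivation step: every escrow transfer seen in the sequencer block becomes a
/// deposit transaction, prepended to the rollup block's user transactions.
fn derive_block_txs(transfers: &[EscrowTransfer], user_txs: Vec<Vec<u8>>) -> Vec<DerivedTx> {
    let deposits = transfers.iter().map(|t| DerivedTx::Deposit {
        recipient: t.rollup_recipient,
        amount: t.amount,
    });
    deposits.chain(user_txs.into_iter().map(DerivedTx::User)).collect()
}
```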
Bridging back to the L1 requires proving rollup block state roots on the L1. This is the “optimistic” or “ZK” part of the rollup. To implement bridging from a rollup on Astria back to the sequencer or DA network, we need to add functionality to verify rollup state roots on them. On the sequencer, this would mean enshrining some sort of state root verification actor in the sequencer’s state machine, which could be instantiated by the rollup and used to bridge to the sequencer. Since we’re using Celestia as DA, which does not support any sort of programmability, the only way to bridge from the rollup to Celestia would be first through the sequencer, then back to Celestia through IBC.
#### IBC
IBC (inter-blockchain communication) is a standard used by Cosmos/CometBFT chains to pass messages between each other in a trust-minimized way. Since Astria uses CometBFT, it is IBC-enabled and able to bridge tokens to and from other IBC-enabled chains. For more details about IBC, see the spec.
Celestia is also IBC-enabled, and we expect to bridge TIA from Celestia to Astria for fee payments.
### 2.4 The Astria Composer
To use a rollup on Astria, users need to somehow submit rollup transactions as sequencer transactions to the sequencer network. However, this requires having access to two different nodes (rollup and sequencer), as well as having tokens for fees on both networks. This is not ideal for UX.
A solution to this is to have a dedicated service that wraps rollup transactions as a sequencer transaction and submits them on a rollup user’s behalf. In the Astria stack, we have a service called the composer that performs this function. The composer only needs sequencer tokens, while the user only needs rollup tokens. The user submits transactions as normal to a rollup node. The composer picks up these transactions and ensures they are sequenced.
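A minimal sketch of the wrapping step, with illustrative names rather than the composer's actual types:

```rust
/// What the composer submits to the sequencer on a user's behalf: the user's raw
/// rollup transaction wrapped with the rollup's identifier. Names are illustrative.
struct WrappedTx {
    rollup_id: Vec<u8>,
    tx_bytes: Vec<u8>,
}

/// The composer watches the rollup's mempool, wraps each pending transaction with
/// the rollup identifier, and submits the result to the sequencer, paying the
/// sequencer-side fee itself.
fn wrap_pending_txs(rollup_id: &[u8], pending: Vec<Vec<u8>>) -> Vec<WrappedTx> {
    pending
        .into_iter()
        .map(|tx_bytes| WrappedTx {
            rollup_id: rollup_id.to_vec(),
            tx_bytes,
        })
        .collect()
}
```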
The incentive for the composer is the ability to make bundles of rollup transactions, ensuring they are included in a specific order. In this regard, the composer can be thought of as a MEV searcher.
The current implementation of the composer is naive and doesn’t perform any searching or bundling, although it could be extended to do so. Our hope is that the Composer serves as a starting point for searchers and block builders interested in collecting end user order flow for one or more rollups and submitting it to the shared sequencer as bundles or blocks.
## Parting Thoughts
The shared sequencer is currently running on a public devnet ([dusk-3](https://docs.astria.org/docs/dusk-faq/information/)), with a decentralized testnet planned in February 2024.
We are committed to building alongside and for teams who share the vision of decentralized, permissionless, and censorship-resistant networks. To learn how to start building on the shared sequencer, reach out or visit our [docs](https://docs.astria.org/).
Follow along on [Twitter](https://twitter.com/AstriaOrg) and [Discord](https://discord.gg/6stfbwymA2) to join the discussion.