# Cross Rollup Forced Transactions - Introduction
*Special thanks to Kobi Gurkan*
Cross rollup forced transactions are a crucial part of scaling. L2s do not solve the problem, as forced txs have to go through L1, and L1 is bottlenecked by execution. L3s partially ease this problem: forcing txs on an L2 is much cheaper than on L1, so L3s are a good intermediate solution. Unfortunately, L2 execution is still a bottleneck.
The path for cross rollup messaging is state proof bridges, as described for example [here](https://era.zksync.io/docs/dev/fundamentals/hyperscaling.html) or [here](https://vitalik.ca/general/2022/09/17/layer_3.html). Ideally we would make these state proof bridges forcible.
The way to do this is to utilise the DA layer, as described in the [Celestia paper](https://arxiv.org/abs/1905.09274): X-RU txs can be recorded on and read from the DA layer. By adding ZKPs (as [Sovereign Labs](https://github.com/Sovereign-Labs/sovereign/blob/main/core/specs/overview.md) does), the RUs can prove both that the txs they send are valid, and that these are consumed in the receiving RU.
Ethereum already has plans to add a [DA layer](https://twitter.com/VitalikButerin/status/1588669782471368704/photo/1). This does not yet enable the last component of the Celestia vision: scalable X-RU forced transactions. To make this work securely and with the best possible interoperability, we need a new architecture in the current rollup-centric roadmap. This architecture needs to combine the DA layer, the shared proof, the state of the RUs, the X-RU txs, and the consensus mechanism that secures it, all in a scalable way.
Why do we need cheap forced transactions?
- First and foremost, cheap forced txs are a security question: if a user has a small amount of funds on an L2, but can only access it via an expensive forced tx on L1, then exiting will not be worth it for them. The user has effectively lost their funds.
- The alternative approach to forced transactions is censorship resistance of the L2s themselves. This is a good intermediate step: it means making the L2's sequencer set and the locked stake large, so that we can trust at least some of the L2's sequencers not to censor our tx. However, this is expensive, as it requires lots of capital and multiple sequencers. It also penalises new rollups, which will find it hard to bootstrap a trusted sequencer set.
- Finally, in this context forced transactions are equivalent to atomicity: a tx on one chain implies a tx on another chain, so the RUs are no longer fragmented. This makes it also a UX problem for users. With cross rollup forced transactions, the UX of bridging will be great: cheap, fast, atomic, and secure.
## Other RU ecosystem architectures
### L2s in a shared prover
Multiple L2s inside a shared prover (e.g. [StarkExes](https://docs.starkware.co/starkex/architecture/solution-architecture.html)) are the simplest architecture for organising multiple RUs. Here a single RU's blocks are recursively proven over a certain time frame, and then these proofs are aggregated across rollups into a shared proof, which is published on L1. Forced transactions still have to go through L1, which is expensive. It is also slow: the txs can only be consumed once the shared proof is published on L1.
### L2 and L3 method
Here the situation is similar to the previous one, just pushed up one layer, but this leads to big differences. Instead of verifying proofs on L1, we verify them on an L2, either with or without a shared proof. This makes censorship resistance much cheaper, as forcing txs through the L2 is cheaper than through L1 (especially given state diffs: if a tx is consumed before L1 settlement, the L2's state diff does not have to include it). It is also faster, as the L3s can settle on the L2 more often than the L2 settles on L1. It is ultimately not scalable, however, as the L2's execution will also become a bottleneck.
### Comparison to Celestia and Sovereign Labs
The combination of Celestia and [Sovereign Labs](https://github.com/Sovereign-Labs/sovereign/blob/main/core/specs/overview.md) is almost what we need. Given Celestia (a scalable DA layer), the Sovereign SDK allows RUs to use the DA layer for sequencing (via [Namespaced Merkle Trees](https://celestia.org/glossary/namespaced-merkle-tree/)), to scan the DA layer for these txs (including X-RU txs), to prove with a ZKP that this scanning happened, and to prove with a further ZKP the new state of the RU based on these txs. The RUs can then verify each other's proofs (perhaps aggregating them for efficiency), and based on these verify and process the X-RU txs.
Here the problem is not the scalability of the X-RU txs, but the interoperability and compatibility of the systems. ([As I understand](https://github.com/Sovereign-Labs/sovereign/tree/main)), the Sovereign SDK does not plan to have a standard VM or a standard prover system; it merely provides a framework for these. Furthermore, its rollups do not have to share a permanent global state (an L1), or the corresponding shared proof validating that common state. Without these, [hyperbridging](https://era.zksync.io/docs/dev/fundamentals/hyperscaling.html#logical-state-partitions) does not work, as the whole idea is that a single VM and proof system can be verified in a shared proof, and the shared proof attests to the common state, which can be accessed from an enshrined position (the L1). This in turn means that everybody can easily verify everybody else. Without these, the Sovereign SDK will be a patchwork of bridges based on which RU trusts which other proof system, instead of a unified whole.
With this in mind, our architecture will focus on a single set of trust assumptions: a single standard VM, a single proof system, and a single enshrined shared proof. This leads to other differences, such as the separation of X-RU txs from normal txs: we will not have to use the DA layer for sequencing, and we will not have to prove that we scanned the DA layer.
## Technical overview
### Technical details high level summary
- DA layer, with its DA commitment
- Shared State Commitment (ShSC): records the state commitment of each RU
- Shared Tx Commitment (ShTxC): records the X-RU txs
- Shared Proof (ShP): proves the current state of the system
- Consensus mechanism: verifies the shared proof and performs data availability sampling (DAS) on the DA layer
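As a rough illustration of the two shared commitments, here is a toy sketch in Python. All names are illustrative, and SHA-256 stands in for whatever circuit-friendly hash and commitment scheme a real system would use; the point is only that the ShSC commits to one state root per RU and the ShTxC to the set of X-RU txs.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Toy hash standing in for a circuit-friendly hash."""
    acc = hashlib.sha256()
    for p in parts:
        acc.update(p)
    return acc.digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Binary Merkle root, padding odd layers with a zero leaf."""
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(b"\x00" * 32)
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

# ShSC: one leaf per rollup, committing to that RU's state root.
ru_state_roots = [h(b"ru0-state"), h(b"ru1-state"), h(b"ru2-state")]
shsc = merkle_root(ru_state_roots)

# ShTxC: one leaf per pending X-RU tx (sender RU, receiver RU, payload).
xru_txs = [h(b"0->1:transfer"), h(b"2->0:call")]
shtxc = merkle_root(xru_txs)
```

The shared proof then only needs to open and update these roots, rather than carry the full state of every RU.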
An RU running a zkEVM with its own PoS consensus mechanism creates some blocks and proves them.
This proof contains as public inputs the state diffs and the transactions sent to and received from other L3s, all combined in a DA blob (unlike the traditional L1/L2 architecture, where only the state diffs are in the DA blob). This blob is dumped onto the DA layer, and other RUs will be able to read the sent txs from it.
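A hypothetical layout of such a blob, sketched in Python (the field names and serialisation are assumptions, not a specified format):

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class RollupBlob:
    """Illustrative layout of one RU's DA blob."""
    state_diffs: list[str]   # state changes since the last blob
    sent_txs: list[str]      # X-RU txs addressed to other L3s
    received_txs: list[str]  # X-RU txs consumed from other L3s
    proof: str               # the RU's validity proof (opaque here)

    def commitment(self) -> bytes:
        # Commit to the full blob, so other RUs can read sent_txs
        # from the DA layer and check them against this commitment.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).digest()

blob = RollupBlob(
    state_diffs=["acct5: 10 -> 7"],
    sent_txs=["to RU2: pay 3"],
    received_txs=["from RU0: pay 1"],
    proof="zkp-bytes-placeholder",
)
```

Note that, unlike today's L2 blobs, the sent and received X-RU txs are first-class parts of the published data.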
At this point the consensus mechanism of the DA layer accepts a block containing the blob.
The Shared Proof can now be constructed. For each RU, the Shared Proof verifies a smart contract, similarly to how an L2's proof is verified on L1 today. To make this scalable, the smart contract has no permanent state; its state is stored in the Shared State Commitment and Shared Tx Commitment, which correspond to the current L2 state and the L2<>L1 txs stored on L1. When the smart contract is proven, the used DA commitment, ShTxC, and ShSC are public inputs, and are opened inside the smart contract for the RU.
This smart contract opens the DA commitment at the DA blob (thus verifying, for each rollup, that the blob is in the DA layer) and verifies the proof contained in the blob. It also outputs (as a public input) a commitment to the DA blob. Finally, it opens the ShSC and ShTxC, and updates the RU's current state and its consumed and sent txs accordingly. A new commitment for the state and txs is an output (public input) of the proof, alongside the initial shared commitments and the blob commitment.
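The per-RU step can be sketched as a stateless function. This is a toy model: the DA commitment is modelled as a plain set of blob hashes, and the in-circuit proof verification is elided, but the inputs and outputs mirror the public inputs described above.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    acc = hashlib.sha256()
    for p in parts:
        acc.update(p)
    return acc.digest()

def verify_ru_step(da_blob_commitments: set[bytes],
                   blob: bytes,
                   old_state: bytes) -> tuple[bytes, bytes]:
    """
    Stateless per-RU check (illustrative): confirm the RU's blob is
    covered by the DA commitment, verify the proof it carries, and
    derive the RU's new state commitment plus the blob commitment.
    """
    blob_commitment = h(blob)
    # 1. Toy stand-in for opening the real DA commitment at this blob
    #    (in practice a KZG/Merkle opening proved in-circuit).
    assert blob_commitment in da_blob_commitments, "blob not in DA layer"
    # 2. The blob's validity proof would be verified here (elided).
    # 3. New state commitment: old state advanced by the blob's diffs.
    new_state = h(old_state, blob)
    return new_state, blob_commitment
```

The function touches no permanent storage; everything it needs arrives as (openings of) the shared commitments, which is what makes the scheme scalable.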
Then the proofs of the individual RUs are aggregated. The initial DA, state, and tx commitments are checked to be the same in each recursive proof, and the SC, TxC, and the commitment to the data blob of each RU are aggregated across the recursive proofs. The result is a single proof showing that the used DA commitment contains blobs that update the SC and TxC to their new values. To make the shared proof censorship resistant, the combination of the used DA blobs must match the original DA commitment (this replaces scanning the DA layer, as it amounts to an implicit scan).
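The aggregation step's consistency checks can be sketched as follows. Again a toy model: each RU proof is reduced to its public-input tuple, and the DA commitment to a set of blob commitments; the tuple shape is an assumption for illustration.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    acc = hashlib.sha256()
    for p in parts:
        acc.update(p)
    return acc.digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    layer = list(leaves)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(b"\x00" * 32)
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def aggregate_shared_proof(ru_outputs, da_blob_commitments):
    """
    Toy aggregation. Each ru_output is the public-input tuple
    (initial_shsc, initial_shtxc, new_state_leaf, blob_commitment)
    of one RU's recursive proof.
    """
    # 1. Every RU proof must start from the same shared commitments.
    initial = (ru_outputs[0][0], ru_outputs[0][1])
    assert all((o[0], o[1]) == initial for o in ru_outputs), \
        "mismatched initial shared commitments"
    # 2. Implicit scan: the blobs covered by the RU proofs must together
    #    match the DA commitment, so no RU's blob can be censored.
    assert {o[3] for o in ru_outputs} == set(da_blob_commitments), \
        "aggregated blobs do not cover the DA commitment"
    # 3. Fold the new per-RU state leaves into the new ShSC.
    return merkle_root([o[2] for o in ru_outputs])
```

Check 2 is what makes the shared proof censorship resistant: a prover who omits some RU's blob simply cannot produce a valid aggregate.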
When the shared proof is finished, the L2's consensus mechanism can verify and accept it, along with the new DA block. The shared proof always proves the contents of the previous DA block.
#### L1 settlement for backward compatibility
Up to this point, the DA layer could have been its own L1. We want it to be backward compatible with Ethereum, and we want it to be Ethereum's own DA layer, i.e. the proposal outlined in [Danksharding](https://notes.ethereum.org/@dankrad/new_sharding). This makes the system an L2, which sends proofs to L1 for settlement.
When settling to L1, the L2 can aggregate the shared proofs of its L2 blocks. This aggregated shared proof is sent to L1, outputting the used DA commitments and the new ShSC and ShTxC.
L3s will have the same right to join and exit this system as they had in the more traditional L2<>L3 architecture. We will be able to force txs from L1 to the L3s. We can also use the traditional non-atomic L2<>L3 messaging, but for this the messaging contracts need to be modified.
The resulting system has the following properties:
- Fast, cheap, and scalable censorship-resistant (i.e. atomic) messaging between L3s
- Settlement on L1 can happen less often, as the L2 allows aggregation. This makes the system cheaper.
- Security depends on the consensus mechanism.
- Backwards compatible. The solution allows an easy and secure way for existing dapps, users and funds on L1 to migrate.
We want cheap cross rollup forced txs. This is a security and UX question for users, and it makes bootstrapping new rollups easier. We can achieve it by removing the general VM from an L2 and replacing it with a specialised Shared State Commitment, Shared Tx Commitment, and a specialised Shared Proof for the L3s.
We have outlined in very broad terms the architecture of such a system. The result is scalable and also backward compatible.
I describe the technical details further [here](https://hackmd.io/@kalmanlajko/HyKxykOao).