# Very WIP
# Cross Rollup Forced Transactions - technical details
| Abbreviation | Meaning |
| -------- | -------- |
| RU | Rollup |
| X-RU | Cross Rollup |
| DAL | Data Availability Layer |
| SC | State Commitment of a single RU |
| ShSC | Shared SC, a commitment to the SCs of the different RUs |
| TxC | A commitment to the X-RU transactions of a single RU |
| ShTxC | Shared TxC, a commitment to the TxCs of the different RUs |
| ShP | Shared Proof, validates the state transition from old to new ShSC and ShTxC. Composed of BPs and RPs |
| BP | Bottom Proof, this is where the proof of a RU is verified and compared to its previous state |
| RP | Recursive Proof, the aggregation of BPs |
[The intro post](https://hackmd.io/@kalmanlajko/BkmTTADai) gives a high level overview of the system.
Here we discuss the technical details of the architecture capable of hosting scalable X-RU forced txs.
A brief summary of the system is:
- RUs post their proof, state diffs, X-RU txs on the DA layer.
- After this a shared proof is created, which verifies the proof of each RU. This shared proof attests to the validity of transition of the system from the old to the new Shared State Commitment and Shared Tx Commitment.
- To create a shared proof for a given DA block:
- for each RU a proof verifying its DA blob is created. This BP verifies that the RU's proof is correct based on the previous ShSC and ShTxC. The DA blob commitment and the RU's new State Commitment and TxC are outputs (public inputs) of the proof.
- These proofs are recursively aggregated. The used ShSC and ShTxC are verified to be the same across the different proofs, and the SC and TxC of the RUs are combined, creating the new ShSC and ShTxC. The DA blob commitments are also combined, resulting in the DA commitment, so scanning the DA block happens implicitly inside the shared proof and not the proofs of the individual RUs.
- This shared proof is then aggregated across blocks, and can be settled on L1 for backwards compatibility. Traditional L2s will be able to join the system and become L3s.
#### DA layer
The DA layer is where the proofs and the data blobs are posted. We will use [Ethereum's](https://ethresear.ch/t/2d-data-availability-with-kate-commitments/8081) [future DA layer](https://notes.ethereum.org/@dankrad/new_sharding). The DA commitment can be opened to prove that a data blob (which contains the proof and state diffs of a rollup instead of just the state diffs) is indeed stored on the DA layer. We will do this opening inside a ZKP instead of the L1, as doing it in a ZKP is scalable.
#### Shared State Commitment
The Shared State Commitment, ShSC, is an ordinary snark-friendly Merkle Tree commitment to each RU's State Commitment. Each RU can open it to access its current SC, which is what its ZKP will be compared to. The new state root is an output of the RU's proof. The RUs' new SCs are combined in the recursive proofs into the new ShSC (we can easily do this for a MT).
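As a sketch, the ShSC can be pictured as a plain binary Merkle tree over the per-RU SCs, where combining two subtree roots in the recursive proofs reproduces the full root. Everything below (SHA-256, the helper names) is illustrative; a real circuit would use a snark-friendly hash such as Poseidon.

```python
# Illustrative sketch: the ShSC as a binary Merkle tree over per-RU SCs.
import hashlib

def H(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    assert len(leaves) & (len(leaves) - 1) == 0, "power-of-two leaf count"
    level = leaves
    while len(level) > 1:
        level = [H(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Each RU's proof outputs its new SC; the recursive proofs combine sibling
# subtree roots pairwise until a single new ShSC remains.
sc = [hashlib.sha256(f"RU{i}".encode()).digest() for i in range(4)]
sh_sc = merkle_root(sc)

# Combining the two half-tree roots reproduces the full root:
left, right = merkle_root(sc[:2]), merkle_root(sc[2:])
assert H(left, right) == sh_sc
```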
The ShSC and the Shared Transaction Commitment, ShTxC (which we will discuss after the Shared Proof), together comprise the global State of the system, and it is the ShP that updates this state.
#### Shared Proof
The shared proof attests to the updates of the State (the ShSC and ShTxC) of the system. A ShP is always constructed for a DAL block, and the ShP also outputs the DAL block's commitment. When verifying a ShP, this DA commitment also has to be checked against the actual DA commitment.
The Shared Proof is constructed by first verifying the proofs of the RUs that are posted to the DA layer in the Bottom Proofs, and then aggregating these in the Recursive Proofs.
The BP is equivalent to a traditional L2's verifier smart contract on the L1. However we need to adapt it to this setting. The process is as follows.
1. We check that the proof and the state diffs are included in the DA layer by opening the DA layer's commitment. We output the DA commitment.
1. We open the ShSC and validate the RU's proof against the RU's State Commitment. We output the original ShSC and the RU's new SC.
1. We open the ShTxC and check that we consume the received transactions and add the sent transactions. This is more involved; we discuss it in the ShTxC section. We output the original ShTxC and the rollup's new TxC.
In the RPs, we verify two child proofs, each a BP or an RP:
1. We assert the two DA commitments are equal and output it for the next RP.
1. We assert the two used ShSC are equal, and output it for the next RP.
1. We also combine the two SCs and output the result. We can do this as the ShSC is a Merkle tree of the SCs.
1. We assert the two used ShTxCs are equal and output it.
1. We also combine the TxCs. This is more involved; we discuss it in the ShTxC section.
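The RP combination step above can be sketched as follows. The `ProofOutput` fields mirror the public outputs just listed; all names are illustrative, and the plain hash-based combination stands in for the real Merkle and ShTxC combination rules.

```python
# Illustrative sketch of the RP combination: both child proofs must agree on
# the DA commitment and on the old ShSC/ShTxC; the partial new commitments
# are merged. Field names are assumptions, not the real circuit layout.
from dataclasses import dataclass
import hashlib

def H(x: bytes, y: bytes) -> bytes:
    return hashlib.sha256(x + y).digest()

@dataclass
class ProofOutput:
    da_commitment: bytes   # commitment to the DA block
    old_sh_sc: bytes       # ShSC the proof was verified against
    new_sc: bytes          # (partial) new state commitment
    old_sh_txc: bytes      # ShTxC the proof was verified against
    new_txc: bytes         # (partial) new tx commitment

def recursive_combine(a: ProofOutput, b: ProofOutput) -> ProofOutput:
    # The two children must share the DA block and the old shared state.
    assert a.da_commitment == b.da_commitment
    assert a.old_sh_sc == b.old_sh_sc
    assert a.old_sh_txc == b.old_sh_txc
    return ProofOutput(
        da_commitment=a.da_commitment,
        old_sh_sc=a.old_sh_sc,
        new_sc=H(a.new_sc, b.new_sc),     # Merkle-combine the partial SCs
        old_sh_txc=a.old_sh_txc,
        new_txc=H(a.new_txc, b.new_txc),  # placeholder; see the ShTxC section
    )

a = ProofOutput(b"da", b"sc0", b"scA", b"txc0", b"txA")
b = ProofOutput(b"da", b"sc0", b"scB", b"txc0", b"txB")
c = recursive_combine(a, b)
```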
#### Shared Tx Commitment
This is the complicated part. I will outline two solutions to this, and list their respective advantages and disadvantages.
##### 1. Merkle Tree of mixed transactions that need scanning
This is the simple method. Each RU publishes their X-RU txs on the DA layer, and the commitment is a MT to these txs as leaves. These MT commitments are then further aggregated in the recursive proofs.
The problem here is that when accessing these txs, the RU has to open the commitment at every other RU to see whether it sent a tx or not. We also want atomicity, so the receiving RU needs to do this opening even if there is no tx. And it has to do this for every DA block.
This method is most similar to [Sovereign SDK](https://github.com/Sovereign-Labs/sovereign/blob/main/core/specs/overview.md). The fact that each DA block is checked corresponds to the DA scanning. The difference is that we create the ShTxC in the shared proof, as opposed to using the DA layer as the Tx commitment. This makes the system more efficient, as we do not have to scan through all the data posted to the DA layer.
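To illustrate the scanning burden of this method, the following toy count shows that every RU pays one opening per potential sender per DA block, whether or not a tx was sent. The parameters are arbitrary and the openings themselves are elided; only the counting is shown.

```python
# Illustrative count of Merkle openings under method 1: each receiving RU
# must open the per-block tx commitment at every other RU's position.
N_RUS, N_BLOCKS = 8, 100

openings = 0
for block in range(N_BLOCKS):
    for receiver in range(N_RUS):
        for sender in range(N_RUS):
            if sender != receiver:
                openings += 1   # one opening per (sender, receiver) pair

# Every RU pays O(N) openings per DA block, O(N * blocks) overall.
assert openings == N_BLOCKS * N_RUS * (N_RUS - 1)
```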
##### 2. 2d KZG commitment of organised transactions
The second alternative is that the ShTxC stores the pending txs. This means that not every DA block's ShTxC needs to be opened and scanned by the RU; only a single ShTxC needs to be opened when submitting a proof.
The basic requirement for this is to change the structure of the ShTxC. For $N$ RUs, we will store the unconsumed txs in an $N\times N$ grid, with txs going from RU $i$ to RU $j$ in cell $(i, j)$. RUs add txs to the respective cell when sending a tx, and remove them when receiving them.
A RU is involved in a row where it sends txs and a column where it receives them. RUs should not access or modify the same cells at the same time. To coordinate this, a single ShP will either be a Sending Proof, where rows are modified, or a Receiving Proof, where columns are modified.
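The row/column discipline can be sketched with a plain in-memory grid. This is illustrative only: the real grid lives under a commitment, and the operations happen inside Sending and Receiving Proofs rather than as direct mutations.

```python
# Illustrative sketch of the N x N pending-tx grid: a Sending Proof for RU i
# appends to row i, a Receiving Proof for RU j drains column j.
N = 4
grid = {(i, j): [] for i in range(N) for j in range(N)}

def send(i: int, j: int, tx: str) -> None:
    grid[(i, j)].append(tx)          # row i modified in a Sending Proof

def receive(j: int) -> list[str]:
    out = []
    for i in range(N):               # column j drained in a Receiving Proof
        out += grid[(i, j)]
        grid[(i, j)] = []
    return out

send(0, 2, "tx-a")
send(1, 2, "tx-b")
assert receive(2) == ["tx-a", "tx-b"]
assert receive(2) == []              # each tx is consumed exactly once
```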
We can now commit to this 2D grid of txs. We could attempt to commit to it using a MT, but then either the opening sizes or the proof sizes become large; the appendix below explores this option.
There is another, more elegant method of committing to the 2D tx grid which keeps both opening sizes and proof sizes small. The **C** method avoids using KZG and BLS12_381 inside proofs. However, if we want to use Eth's DA layer we have to open KZG inside proofs anyway, as the Bottom Proofs in the ShP have to open DA blobs when verifying that the proof is on the DA layer. So if we don't want to open KZG inside ZKPs, we have to use a non-KZG based DA system.
For the alternative to the **C** method, we commit to the same 2D grid with a 2D[^1]
[KZG](https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html) commitment. This commitment can be opened from both directions for the tx sender and receiver respectively.
[^1]: The best link I found for 2D KZG: https://eprint.iacr.org/2022/621.pdf
To reiterate: for $N$ RUs, we store the unconsumed txs in an $N\times N$ grid, with txs going from RU $i$ to RU $j$ in cell $(i, j)$. RUs add txs to the respective cell when sending a tx, and remove them when receiving them.
The tx commitment can be opened in a specific row, column, or cell. For each cell, we can add sent, or remove consumed txs. We can calculate the new commitment for each cell, row or column.
Similarly to a MT, we can combine multiple rows with other rows and multiple columns with other columns to calculate the new tx commitment.
The combination of two commitments $a=[p(s)]$, $b=[q(s)]$ over two disjoint sets $S$, $R$ is $Z(S)(s)\cdot b+Z(R)(s) \cdot a$, where $Z(S)$ is the minimal polynomial that vanishes on $S$. The result is the evaluation at $s$ of $Z(S) \cdot q+ Z(R) \cdot p$, a polynomial that agrees with $p$ on $S$ and with $q$ on $R$, up to the known vanishing factors $Z(R)$ and $Z(S)$ respectively.
$$[(Z(S)\cdot q+ Z(R) \cdot p)(s)]= [Z(S)(s)\cdot q(s) + Z(R)(s) \cdot p(s)]= $$
$$ = Z(S)(s)\cdot [q(s)] + Z(R)(s) \cdot [p(s)]= Z(S)(s)\cdot b+Z(R)(s) \cdot a$$
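The identity above can be checked numerically over a toy prime field. The commitment brackets are dropped; we only verify the underlying polynomial identity on the disjoint sets $S$ and $R$ (all values are arbitrary examples).

```python
# Numeric check, over a toy field, that Z(S)*q + Z(R)*p agrees with Z(R)*p
# on S and with Z(S)*q on R (i.e. with p and q up to known vanishing factors).
P = 97  # toy field modulus

def ev(coeffs, x):          # evaluate a polynomial, lowest-degree first
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def eval_prod(points, x):   # value of the vanishing polynomial Z(points) at x
    out = 1
    for a in points:
        out = out * (x - a) % P
    return out

S, R = [1, 2], [5, 7]       # disjoint evaluation sets
p = [3, 1, 4]               # p(x) = 3 + x + 4x^2
q = [2, 7, 1]               # q(x) = 2 + 7x + x^2

def combined(x):
    return (eval_prod(S, x) * ev(q, x) + eval_prod(R, x) * ev(p, x)) % P

for x in S:                 # on S the Z(S)*q term vanishes
    assert combined(x) == eval_prod(R, x) * ev(p, x) % P
for x in R:                 # on R the Z(R)*p term vanishes
    assert combined(x) == eval_prod(S, x) * ev(q, x) % P
```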
The whole TX commitment means the commitment to the whole array, which includes every (sender, receiver) pair. Commitments to multiple rows or columns of this array are partial tx commitments.
#### Consensus mechanism
The precise details of the consensus mechanism are out of scope here. I list the main requirements.
- The first reason we need a consensus mechanism is the same as for traditional L2s: we want soft finality before L1 proof settlement. This makes L2 confirmation time better (it does not include L2 proving time, or the time it takes to aggregate proofs and settle to L1).
- The consensus mechanism also allows other L2s to join and L3s to leave the L2 in an organised manner. This means posting transactions to the L2 verifier contract on L1 (these being the traditional forced transactions). When the consensus mechanism processes these transactions, the L2s can join and the L3s can leave the system.
Organised joining and leaving is especially important because the states of the L3s are intertwined by the forced messages. The soft finality ensures that message receivers can be certain that the sender rollups do not fork after they send a message, and similarly sender rollups can be sure that receiver rollups do not fork before receiving a message.
- The third reason for the consensus mechanism is to verify the L2 DA layer with DAS (data availability sampling). The shared proof and the data blobs are ultimately settled on L1, but the L2 needs to check availability as well, since the L2 proofs are not constantly settled onto L1.
### Interacting with L1
The L1 settlement will verify the L2 consensus mechanism and the proof of the L2, which attests to the state commitment and TX commitment. The L1 will also have to receive the latest DA blobs of the rollups.
This is also where L1<>L3 messages are checked, and where the rollups can join and exit the L2.
#### L1<>L3 messaging
The easiest way of achieving L1<>L3 messaging is via the TX commitment. This would mean adding the L1 as an L3 to the TX commitment array. Then we can add the L1->L3 txs to the TX commitment, forcing rollups to consume them. Similarly, the L3->L1 txs can be added to the TX commitment. When the L2 settles to L1, it can then prove that these transactions were added to the TX commitment.
#### Connecting with old state proof bridges, L2 <> L3 messaging
The old L2s and L3s message each other through traditional state proof bridges. These are not forced txs, so we can easily execute them as normal txs in the L3s. For this to be viable, the state proof bridge smart contracts have to be modified so that they understand that L3s can now have Merkle proofs at different locations, attested to by this new kind of proof. Otherwise we can keep the rest of the infrastructure the same.
#### Joining and Exiting
This is crucial for backwards compatibility. We will want L2s to be able to join the system and become L3, and L3s to be able to exit and become their own traditional L2s.
We do this similarly to traditional L2s and L3s. The original tokens (e.g. Eth) are stored on L1, while the L2s in the associated L2 maintainer smart contract own virtual versions of the token, burning and minting it when bridging between each other. The funds can be reclaimed from this pool by any L2 in the maintainer contract. When an L2 A joins another L2 B and becomes an L3, B's smart contract receives a forced tx on L1 that modifies its state, adding A's state root to the L3 maintainer smart contract on B. The L2 maintainer smart contract on L1 has to be linked to the L3 maintainer smart contract on the L2. Then, when the L3 A wants to leave the L2 B, the L2 maintainer smart contract can receive a message from the L3 maintainer contract, and re-add A's state root.
In our case we can do the same, the difference is the L3 maintainer smart contract is more explicit, so the state of the L2 changes in a different way. Specifically, we have to modify the state commitment, and TX commitment.
#### Posting the proof to the DA layer
We want to post the proof to the DA layer, as we want to inherit its Censorship Resistance. If we didn't post proofs, then we could send the proof directly to the shared proof, but that process would have to be CR as well.
In this case the BPs would still have to open the DA commitment to prove that the state diffs are available on the DA layer.
#### Merkle Tree commitment to the 2D grid of X-RU Txs
As an alternative to the [two methods](https://hackmd.io/yt8SDVZeStWBw0mol_Lvnw#Shared-Tx-Commitment), we could attempt to combine them into a 2D MT commitment. This is possible, but it is quite involved.
As before, we commit to the 2D grid of unconsumed txs.
As before we will have sending proofs that modify rows, and receiving proofs that modify columns.
There are now two ways of committing to the X-RU Tx grid:
**A.** we either commit to the rows (corresponding to the sending proofs) first in a MT, and then to these commitments across the columns in a second MT,
**B.** or we first commit to the columns (corresponding to the receiving proofs), and then to the rows in a second MT.
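A quick sanity check that the **A** and **B** orderings really are different commitments to the same grid, which is why switching between them needs extra machinery. The hash and tree shapes are illustrative only.

```python
# Illustrative check: rows-first (A) and columns-first (B) Merkle commitments
# to the same 2D grid produce different roots.
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def root(leaves):
    while len(leaves) > 1:
        leaves = [H(leaves[i], leaves[i + 1]) for i in range(0, len(leaves), 2)]
    return leaves[0]

N = 4
cell = [[H(f"{i},{j}".encode()) for j in range(N)] for i in range(N)]

# Method A: commit each row, then commit the row commitments.
commit_a = root([root(list(cell[i])) for i in range(N)])
# Method B: commit each column, then commit the column commitments.
commit_b = root([root([cell[i][j] for i in range(N)]) for j in range(N)])

assert commit_a != commit_b   # same data, different commitments
```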
It is unwieldy to mix the A method with receiving proofs, or the B method with sending proofs. If we did this, e.g. A with receiving proofs, then in the BPs and RPs of the ShP we would be unable to output a single TxC: each proof would output the cells of a column, while the cells would have to be aggregated along rows first. We could still aggregate along columns in the RPs, but not along rows, so each proof would have $N$ tx commitment outputs. Only at the end of the process, when all the columns have been aggregated, could we aggregate along rows to obtain a single TxC.
So we have two options here: either we use only a single one of the methods and have larger proofs in the other round, or we use the appropriate commitment for each round and switch between the commitments.
We can switch between these two methods by proving the [equivalence between polynomial commitments](https://ethresear.ch/t/easy-proof-of-equivalence-between-multiple-polynomial-commitment-schemes-to-the-same-data/8188). Briefly we do this by interpreting the two $C_1, C_2$ MT commitments as polynomial commitments to some $p_1(x)$, $p_2(x)$, calculating $z=H(C_1, C_2)$, and evaluating $p_1(z)$ and $p_2(z)$. If these two values are equal, then it is very likely that the two commitments commit to the same data.
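The check can be sketched as follows. In the real scheme both commitments would be opened at $z$ with actual opening proofs; here the commitment is a stand-in hash and both polynomials are given in the clear, so this only illustrates the Fiat-Shamir shape of the check.

```python
# Illustrative sketch of the equivalence check: derive a challenge z from the
# two commitments and compare the committed polynomials' evaluations at z.
import hashlib

P = 2**61 - 1  # toy field modulus

def ev(coeffs, x):
    return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

def commit(coeffs) -> bytes:   # stand-in for a real MT/KZG commitment
    return hashlib.sha256(str(coeffs).encode()).digest()

p1 = [5, 0, 2, 9]              # same data committed under scheme 1 ...
p2 = [5, 0, 2, 9]              # ... and under scheme 2

c1, c2 = commit(p1), commit(p2)
z = int.from_bytes(hashlib.sha256(c1 + c2).digest(), "big") % P

# Equal evaluations at the derived point imply (w.h.p.) equal committed data.
assert ev(p1, z) == ev(p2, z)
```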
If we use this trick, then we can create shared proofs in two rounds, alternating between the **A** method and the **B** methods, and proving the equivalence between rounds. This way our proofs will be small in both rounds.
This method is especially good for non-KZG based DA layers, for example Celestia, the original DA layer. This is because with these DA layers we can avoid using KZG and BLS12_381 inside ZKPs. If we want to use Eth's Danksharding, however, we will have to open KZG commitments inside proofs, as the BPs have to open the DA blobs to verify that the data is on the DA layer.