# WIP
# Cross-Rollup Forced Transactions - core technical details
*Thanks to Kobi Gurkan and Luozhu Zhang.*
## Contents
- [Abbreviations](#Abbreviations)
- [Introduction](#Introduction)
- [Core Components](#Core-Components)
- [DA layer](#DA-layer)
- [Shared State Commitment](#Shared-State-Commitment)
- [Shared Proof](#Shared-Proof)
- [Shared Transaction Commitment](#Shared-Transaction-Commitment)
- [First inefficient method](#1-Merkle-Tree-of-mixed-transactions-that-need-scanning)
- [Second efficient method](#2-2d-KZG-commitment-of-organised-transactions)
- [Interacting with L1](#Interacting-with-L1)
- [Appendix](#Appendix)
## Abbreviations
| Abbreviation | Meaning |
| ------------ | ------- |
| RU | Rollup |
| X-RU | Cross-Rollup |
| DAL | Data Availability Layer |
| SC | State Commitment to a single RU |
| ShSC | Shared SC, a commitment to the SCs of the different RUs |
| TxC | A commitment to the X-RU transactions of a single RU |
| ShTxC | Shared TxC, a commitment to the TxCs of the different RUs |
| ShP | Shared Proof, validates the state transition from the old to the new ShSC and ShTxC. Composed of BPs and RPs |
| BP | Bottom Proof, where the proof of a RU is verified and compared to its previous state |
| RP | Recursive Proof, the aggregation of BPs |
## Introduction
[The intro post](https://hackmd.io/@kalmanlajko/BkmTTADai) gives a high level overview of the system.
Here we discuss the core technical details of the architecture capable of hosting scalable X-RU forced transactions. We cover the core components, and the way they can be backwards compatible.
A brief summary of the system is:
- RUs post their proofs, state diffs, and X-RU txs on the DA layer.
- After this a shared proof is created, which verifies the proof of each RU. This shared proof attests to the validity of the transition of the system from the old to the new Shared State Commitment and Shared Tx Commitment.
- To create a shared proof for a given DA block:
- for each RU a proof verifying its DA blob is created. This BP verifies that the RU's proof is correct based on the previous ShSC and ShTxC. The DA blob commitment, the RU's new State Commitment and TxC are outputs (public inputs) of the proof.
- These proofs are recursively aggregated. The used ShSC and ShTxC are verified to be the same across the different proofs, and the SC and TxC of the RUs are combined, creating the new ShSC and ShTxC. The DA blob commitments are also combined, resulting in the DA commitment, so scanning the DA block happens implicitly inside the shared proof.
- This shared proof is then aggregated across blocks, and can be settled on L1 for backwards compatibility. Traditional L2s will be able to join the system and become L3s.
## Core Components
First we cover the core components of this architecture, which enable the system to scale X-RU txs: the DA layer, the Shared Proof, the Shared State Commitment, and the Shared Transaction Commitment. Only the ShTxC is new and complicated.
#### DA layer
The DA layer is where the proofs and the data blobs are posted. We will use [Ethereum's](https://ethresear.ch/t/2d-data-availability-with-kate-commitments/8081) [future DA layer](https://notes.ethereum.org/@dankrad/new_sharding). The DA commitment can be opened to prove that a data blob (which contains the proof and state diffs of a rollup instead of just the state diffs) is indeed stored on the DA layer. We will do this opening inside a ZKP instead of the L1, as doing it in a ZKP is scalable.
We want to post the proof to the DA layer, as we want to inherit its Censorship Resistance. If we didn't post proofs, then we could still send the proofs directly to the Shared Proof, and that process would have to be CR as well.
We can also use the [equivalence trick](https://notes.ethereum.org/@dankrad/kzg_commitments_in_proofs#ZKPs-over-non-aligned-fields) between commitments to transform the KZG commitment into e.g. a Merkle Tree commitment. In this case the proof of equivalence will have to be generated alongside the DA block so that the ShP can be easily constructed.
#### Shared State Commitment
The Shared State Commitment, ShSC, is an ordinary snark-friendly Merkle Tree commitment to each RU's State Commitment. This can be opened by each RU to access their current SC, which is what their ZKP will be compared to. The new state root is an output of the RU's proof. The RUs' new SCs are combined in the recursive proofs to form the new ShSC (we can easily do this for a MT).
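A minimal sketch of the ShSC as a Merkle tree over per-RU state commitments, showing the open/verify/update operations described above. All names and the hash choice are illustrative, not part of the protocol.

```python
# Sketch of the ShSC: a Merkle tree whose leaves are the RUs' SCs.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_root(leaves):
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        if len(layer) % 2:                 # duplicate last node on odd layers
            layer.append(layer[-1])
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_open(leaves, index):
    """Return the sibling path proving leaves[index] against the root."""
    layer = [h(l) for l in leaves]
    path = []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        path.append(layer[index ^ 1])
        layer = [h(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return path

def merkle_verify(leaf, index, path, root):
    node = h(leaf)
    for sib in path:
        node = h(node, sib) if index % 2 == 0 else h(sib, node)
        index //= 2
    return node == root

# Four rollups, each with its own state commitment (SC).
scs = [b"sc-ru0", b"sc-ru1", b"sc-ru2", b"sc-ru3"]
shsc = merkle_root(scs)
assert merkle_verify(b"sc-ru2", 2, merkle_open(scs, 2), shsc)

# A rollup's proof outputs a new SC; updating that leaf yields the new ShSC.
scs[2] = b"sc-ru2-new"
new_shsc = merkle_root(scs)
assert new_shsc != shsc
```

Opening a leaf costs $O(\log N)$ hashes, which is what each RU pays to access its current SC inside a proof.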
The ShSC and the Shared Transaction Commitment, ShTxC, which we will discuss after the Shared Proof, together comprise the global State of the system, and it is the ShP that updates this state.
#### Shared Proof
The shared proof attests to the updates of the State (the ShSC and ShTxC) of the system. A ShP is always constructed for a DAL block, and the ShP also outputs the DAL block's commitment. When verifying a ShP, this DA commitment also has to be checked against the actual DA commitment.
The Shared Proof is constructed by first verifying the proofs of the RUs that are posted to the DA layer in the Bottom Proofs, and then aggregating these in the Recursive Proofs.
The BP is equivalent to a traditional L2's verifier smart contract on the L1. However, we need to adapt it to this setting. The process is as follows.
1. We check that the proof and the state diffs are included in the DA layer by opening the DA layer's commitment. We output the DA commitment.
1. We open the ShSC, and validate the RU's proof against the RU's State Commitment. We output the original ShSC and the RU's new SC.
1. We open the ShTxC, and check that we consume the received transactions and add the sent transactions. This is a bit involved; we discuss it at the ShTxC. We output the original ShTxC and the rollup's new TxC.
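The three steps above can be modelled as plain assertions over stand-in hash commitments. This is only a structural sketch: a real BP is a SNARK that verifies the RU's own SNARK, and the "verified state transition" below is a placeholder, not the actual verification logic.

```python
# Toy model of a Bottom Proof's checks and public outputs.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def bottom_proof(blob, da_leaves, shsc_leaves, shtxc_leaves, ru):
    # 1. DA inclusion: the RU's blob (proof + state diffs) is in the DA block.
    assert blob in da_leaves
    da_commitment = h(*da_leaves)
    # 2. Open the ShSC at `ru`; the RU's proof is checked against its old SC
    #    and outputs a new SC (placeholder transition below).
    old_shsc = h(*shsc_leaves)
    new_sc = h(shsc_leaves[ru], blob)       # stand-in for the verified new SC
    # 3. Open the ShTxC; consuming/adding X-RU txs is elided in this sketch.
    old_shtxc = h(*shtxc_leaves)
    new_txc = h(shtxc_leaves[ru], blob)     # stand-in for the RU's new TxC
    # Public outputs of the BP:
    return da_commitment, old_shsc, new_sc, old_shtxc, new_txc

blob = b"ru1-proof-and-diffs"
out = bottom_proof(blob, [b"other", blob], [b"sc0", b"sc1"], [b"t0", b"t1"], 1)
assert out[1] == h(b"sc0", b"sc1")          # old ShSC exposed as an output
```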
In the RPs, we verify two proofs (BPs or RPs):
1. We assert the two DA commitments are equal and output it for the next RP.
1. We assert the two used ShSCs are equal, and output it for the next RP.
1. We also combine and output the two SCs. We can do this as the ShSC is a Merkle tree of the SCs.
1. We assert the two used ShTxCs are equal and output it.
1. We also combine the TxCs. This is a bit more involved; we discuss it at the ShTxC.
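The RP's consistency checks can be sketched over the public outputs alone. Again, real RPs are recursive SNARKs; here we only model which outputs are compared and which are combined (the TxC combination is simplified to a disjoint union).

```python
# Toy model of Recursive Proof aggregation over two proof outputs.
from dataclasses import dataclass

@dataclass
class ProofOutput:
    da_commitment: bytes      # commitment to the DA block being proven
    old_shsc: bytes           # ShSC the proof was verified against
    new_scs: dict             # ru_index -> new state commitment
    old_shtxc: bytes          # ShTxC the proof was verified against
    new_txcs: dict            # ru_index -> new transaction commitment

def recursive_aggregate(a: ProofOutput, b: ProofOutput) -> ProofOutput:
    # The two proofs must refer to the same DA block ...
    assert a.da_commitment == b.da_commitment
    # ... build on the same previous ShSC ...
    assert a.old_shsc == b.old_shsc
    # ... and the same previous ShTxC.
    assert a.old_shtxc == b.old_shtxc
    # Combine the per-RU SCs and TxCs (the two sides cover disjoint RUs).
    assert not (a.new_scs.keys() & b.new_scs.keys())
    return ProofOutput(
        a.da_commitment,
        a.old_shsc,
        {**a.new_scs, **b.new_scs},
        a.old_shtxc,
        {**a.new_txcs, **b.new_txcs},
    )

bp0 = ProofOutput(b"da", b"shsc", {0: b"sc0'"}, b"shtxc", {0: b"txc0'"})
bp1 = ProofOutput(b"da", b"shsc", {1: b"sc1'"}, b"shtxc", {1: b"txc1'"})
rp = recursive_aggregate(bp0, bp1)
assert rp.new_scs == {0: b"sc0'", 1: b"sc1'"}
```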
#### Shared Transaction Commitment
This is the complicated part. I will outline two solutions to this, and list their respective advantages and disadvantages. I also outline a third solution in the [Appendix](https://hackmd.io/yt8SDVZeStWBw0mol_Lvnw?both#Appendix).
##### 1. Merkle Tree of mixed transactions that need scanning
This is the simple but inefficient method. Each RU publishes their X-RU txs on the DA layer, and the commitment is a MT to these transactions as leaves. These MT commitments are then further aggregated in the recursive proofs.
The problem here is that when accessing these transactions, the RU has to open the commitment at every other RU to read whether they sent a transaction or not. We also want atomicity, so the receiving RU needs to do this opening even if there are no transactions, in order to check that there really are none. And they have to do this for every DA block, as every block might have transactions.
This method is most similar to [Sovereign SDK](https://github.com/Sovereign-Labs/sovereign/blob/main/core/specs/overview.md). The fact that each DA block is checked corresponds to the DA scanning. The difference is that we are creating the ShTxC in the shared proof, as opposed to using the DA layer as the Tx commitment. This makes the system more efficient, as we do not have to scan through all the data posted to the DA layer.
##### 2. 2d KZG commitment of organised transactions
The efficient but complicated alternative is that the ShTxC stores all of the pending transactions. This means that not every DA block's ShTxC needs to be opened and scanned by the RU. Only the current ShTxC needs to be opened when submitting a proof, and all of the txs will be there.
The basic requirement for this is to change the structure of the ShTxC. For $N$ RUs, we will store the unconsumed transactions in an $N\times N$ grid, with transactions going from RU $i$ to RU $j$ in cell $(i, j)$, i.e. row $i$, column $j$. RUs will add transactions to the respective cell when sending a transaction, and remove them when receiving.
A RU will be involved in a row where it sends transactions and a column where it receives them. RUs should not access or modify the same cells at the same time. To coordinate this, a single ShP will either be a Sending Proof, where rows are modified, or a Receiving Proof, where columns are modified.
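A toy model of the grid discipline: row $i$ holds rollup $i$'s outgoing transactions, column $j$ the ones addressed to rollup $j$, and the two proof types touch disjoint cells. The function names are illustrative only.

```python
# N x N grid of pending X-RU transactions: grid[i][j] holds txs i -> j.
N = 3
grid = [[[] for _ in range(N)] for _ in range(N)]

def send(sender, batch):
    """Sending Proof: rollup `sender` appends its outgoing txs (one row)."""
    for receiver, tx in batch:
        grid[sender][receiver].append(tx)

def receive(receiver):
    """Receiving Proof: rollup `receiver` consumes its column and clears it."""
    consumed = []
    for sender in range(N):
        consumed.extend(grid[sender][receiver])
        grid[sender][receiver] = []
    return consumed

send(0, [(1, "tx-a"), (2, "tx-b")])
send(2, [(1, "tx-c")])
assert receive(1) == ["tx-a", "tx-c"]
assert receive(1) == []   # nothing pending: no scanning of old blocks needed
```

The second `receive(1)` shows why this structure removes scanning: the current grid already accumulates everything unconsumed, so an empty column proves there is nothing to receive.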
We can now commit to this 2D grid of transactions. We can attempt to commit to this using a MT, I outline that solution in the [Appendix](https://hackmd.io/yt8SDVZeStWBw0mol_Lvnw?both#Appendix).
The more elegant method of committing to 2D tx grid is with a 2D
[KZG](https://dankradfeist.de/ethereum/2020/06/16/kate-polynomial-commitments.html) commitment.
The advantage of the KZG commitment is that it can be opened in a specific row, column, or cell. What's more, different rows and columns can be combined arbitrarily, in either order. For each cell, we can add sent, or remove consumed transactions. We can calculate the new commitment for each cell, row or column.
The combination of two commitments $a=[p(s)]$, $b=[q(s)]$, to two disjoint sets $S$, $R$ is $Z(S)(s)\cdot b+Z(R)(s) \cdot a$, where $Z(S)$ is the minimal polynomial that vanishes on $S$. The result is the evaluation at $s$ of $Z(S) \cdot q+ Z(R) \cdot p$, a polynomial that agrees with $Z(R)\cdot p$ on $S$ and with $Z(S)\cdot q$ on $R$, i.e. it encodes $p$'s data on $S$ and $q$'s data on $R$, each up to a known, invertible scaling.
$$[(Z(S)\cdot q+ Z(R) \cdot p)(s)]= [Z(S)(s)\cdot q(s) + Z(R)(s) \cdot p(s)]= $$
$$ = Z(S)(s)\cdot [q(s)] + Z(R)(s) \cdot [p(s)]= Z(S)(s)\cdot b+Z(R)(s) \cdot a$$
For example, $S$ and $R$ can be separate rows, and $a$ and $b$ can be commitments to these rows. Then we can store $Z(S)(s)$ and $Z(R)(s)$ as constants. We can then calculate the commitment to the two rows together. We can repeat this in the next RP, except $S$ and $R$ will be two rows each instead of one. At the end of this process we get the commitment to the whole $N \times N$ grid. We could have done this in the other direction, i.e. by aggregating columns, in which case we would have gotten the same commitment at the end.
So using 2D KZG allows us to flexibly open and aggregate the commitment for Sending as well as Receiving Proofs independently.
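Because the commitment scheme is linear, the combination identity above can be checked numerically by modelling a commitment $[p(s)]$ as the bare field value $p(s)$ at a "secret" point $s$. The modulus and values below are toy stand-ins, not real KZG parameters.

```python
# Toy simulation of combining two disjoint-set KZG commitments.
P = 2**31 - 1          # toy field modulus (illustrative only)
s = 123456789          # stand-in for the trusted-setup secret

def ev(coeffs, x):
    """Evaluate a coefficient-list polynomial at x (mod P), Horner style."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def pmul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def padd(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(x + y) % P for x, y in zip(f, g)]

def vanishing(points):
    """Z(x) = prod (x - r), the minimal polynomial vanishing on the set."""
    z = [1]
    for r in points:
        z = pmul(z, [(-r) % P, 1])
    return z

S, R = [1, 2], [3, 4]                  # two disjoint evaluation sets (rows)
p, q = [5, 7, 11], [2, 0, 13]          # the two committed polynomials
a, b = ev(p, s), ev(q, s)              # modelled commitments [p(s)], [q(s)]

# Combine the commitments using only a, b and the constants Z(S)(s), Z(R)(s):
combined = (ev(vanishing(S), s) * b + ev(vanishing(R), s) * a) % P
# ... and check it equals the commitment to Z(S)*q + Z(R)*p:
full = padd(pmul(vanishing(S), q), pmul(vanishing(R), p))
assert ev(full, s) == combined
```

The key point is that `combined` is computed without knowing `p` or `q`, only their commitments and the precomputed constants, which is exactly what an RP needs.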
## Interacting with L1
The system has to be backward compatible with Ethereum. This means the DA layer will be Ethereum, and the Shared Proof will be sent to L1 for settlement.
We also need to force transactions to the RUs, send messages to the L1 from the RUs, message with traditional L2s via the non-atomic state proof bridges, and finally the RUs need to be able to join and exit the ShP to become L2s, just like traditional L3s can leave an L2.
#### Settlement
The L1 settlement will verify the ShP, which attests to the ShSC, ShTxC and the used DA commitments. The DA commitments need to be compared with the commitments on the L1.
#### L1<>L3 messaging
The easiest way of achieving L1<>L3 messaging is via the Tx Commitment. This would mean adding the L1 as an RU to the $N \times N$ Tx Commitment grid. Then we can add the L1->L3 txs to the Tx Commitment, hence forcing rollups to consume them. Similarly, the L3->L1 txs can be added to the Tx Commitment. When the ShP settles to L1, it will have to open the ShTxC to check these added transactions.
#### Connecting with old state proof bridges, L2 <> L3 messaging
We do this similarly to traditional RU<>RU messaging. This means that the original tokens (e.g. Eth) are stored on L1, while the L2s own virtual versions of this token, burning and minting it when bridging between each other. The L2s have to know which other L2s to trust, so these are kept in an L2 maintainer smart contract associated with the pooled funds. The funds can be reclaimed from this pool by any L2 in the maintainer contract.
This messaging system is not atomic; X-RU transactions are executed as normal transactions in the RUs. This means we can do the same in our RUs. For this to work the bridge smart contracts have to be modified so they understand that RUs can now be inside the ShP. We will also need the L1 state root (or an alternative) on which the state proofs will be based. This has to be added to the ShP, compared in RPs, and settled on the L1.
#### Joining and Exiting
This is crucial for backwards compatibility. We will want traditional L2s to be able to join the system, and we want RUs in our system to be able to exit and become their own traditional L2s.
This will also work in a similar way to traditional L2/L3 joining and exiting. There will be a RU maintainer smart contract on L1 and on each L2, and the contracts will know each other's addresses.
When an L2 A joins another L2 B and becomes an L3, B's maintainer contract receives a forced transaction from L1, adding A's state root to the maintainer contract on B. When the L3 A wants to leave B, the opposite process happens: the maintainer contract on B sends a message to the L1 maintainer contract, which will add A's state root.
In our Shared Proof we can do something similar, the difference is there is no L2 maintainer smart contract, but we have the ShSC and ShTxC instead. We can add and remove RUs from these.
## Appendix
#### Merkle Tree commitment to the 2D grid of X-RU Txs
As an alternative to the [two methods](https://hackmd.io/yt8SDVZeStWBw0mol_Lvnw#Shared-Tx-Commitment), we could attempt to combine them into a 2D MT commitment. This is possible, but it is quite involved.
As before, we commit to the 2D grid of unconsumed transactions.
As before, we will have sending proofs that modify rows, and receiving proofs that modify columns.
There are now two ways of committing to the X-RU Tx grid:
**A.** we either commit to the rows (corresponding to the sending proofs) first in a MT, and then to these commitments across the columns in a second MT,
**B.** or we first commit to the columns (corresponding to the receiving proofs), and then to the rows in a second MT.
It is unwieldy to mix method A with receiving proofs, or method B with sending proofs. If we did this, e.g. A with receiving proofs, then when creating the BPs for a single RU, we would be modifying the cells in a single column. But we do not want to commit to these cells together; we first want to group each cell with its row. This means that in the BP we would have to output each cell in a single column separately. When aggregating BPs in RPs, we can aggregate along rows, but we could still not aggregate along columns. We could only aggregate along columns once we have finished aggregating across the rows, i.e. once the ShP is finished. This means that each BP and RP would contain $N$ Tx Commitments, corresponding to some cells in a single row.
So we have two options here: either we use only a single one of the methods and have larger proofs in the other rounds, or we use the appropriate commitment for each round and switch between the commitment methods.
We can switch between these two methods by proving the [equivalence between polynomial commitments](https://ethresear.ch/t/easy-proof-of-equivalence-between-multiple-polynomial-commitment-schemes-to-the-same-data/8188). Briefly, we do this by interpreting the two MT commitments $C_1$, $C_2$ as polynomial commitments to some $p_1(x)$, $p_2(x)$, calculating $z=H(C_1, C_2)$, and evaluating $p_1(z)$ and $p_2(z)$. If these two values are equal, then it is very likely that the two commitments commit to the same data.
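The random-evaluation step of this trick can be sketched as follows. In the real protocol the evaluations $p_1(z)$, $p_2(z)$ come with opening proofs against $C_1$, $C_2$; here the commitments are stand-in hashes and the prover just evaluates the polynomials directly. Field size and hashing are illustrative.

```python
# Sketch of the commitment-equivalence check via a Fiat-Shamir challenge.
import hashlib

P = 2**31 - 1

def ev(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def challenge(c1: bytes, c2: bytes) -> int:
    """z = H(C1, C2): the evaluation point, derived from both commitments."""
    return int.from_bytes(hashlib.sha256(c1 + c2).digest(), "big") % P

# Two commitment schemes committing to the same underlying data.
data = [4, 8, 15, 16, 23, 42]
c1 = hashlib.sha256(b"merkle" + bytes(data)).digest()   # stand-in for C1
c2 = hashlib.sha256(b"kzg" + bytes(data)).digest()      # stand-in for C2
z = challenge(c1, c2)

p1 = list(data)
p2_good = list(data)                       # honestly commits to the same data
p2_bad = list(data); p2_bad[0] += 1        # commits to tampered data
assert ev(p1, z) == ev(p2_good, z)         # same data: evaluations agree
assert ev(p1, z) != ev(p2_bad, z)          # different data: mismatch at z
```

If the two polynomials differed anywhere, their evaluations at the random $z$ would collide only with probability about $\deg / |F|$, which is what makes the single-point check sound.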
If we use this trick, then we can create the two types of shared proofs one after the other, one Sending Proof and one Receiving Proof, alternating between methods **A** and **B**, and proving the equivalence between the commitments. This way our proofs will be small in both rounds; however, we need to do extra work by proving the equivalence.
This method is especially good for non-KZG based DA layers, for example Celestia, the original DA layer. This is because with these DA layers we can avoid using KZG and BLS12-381 inside ZKPs.

2023 edit: this construction is primarily a thought experiment showing the possibility of linear scaling, in a model where every RU is responsible for its own data. In practice the DA layer will allow the RUs access to each other's data, and has great scaling properties, even if it is not linear but $\sqrt{N}$ in the total number of light clients.

Published on **HackMD**, 9/19/2022.