# Research on fast cross-rollup messaging and settlement inside recursive proofs
I welcome all feedback, especially if it's negative!
## TLDR
By importing the L1 state root and a corresponding state proof, ZKRollups can have trustless bridges. This solution is cheap but slow, as the sending rollup first needs to settle on L1. Importing the sending rollup's block hash directly is faster, but risky.
The best of both worlds happens when the rollups settle to L1 using a shared proof aggregator mechanism. In this case messaging can be made faster by accepting block hashes once they are settled inside this aggregator, and settlement there also takes care of messaging. The remaining risk is that the proof aggregator fails.
To decrease this risk, we provide an "intermediate proof" construction. These intermediate proofs are not settled on L1, but can be used to do so in case of an emergency.
## Intro to the post
A lot has [already been written](https://vitalik.ca/general/2022/09/17/layer_3.html) about state-proof bridges that enable trustless messaging across ZKRollups. These bridges are great, as they allow cheap, trustless messaging between rollups without going through L1. For completeness, we will first describe this traditional state-proof bridging process.
We also look at the IBC-like method of importing the block hash directly from the sending rollup. This is risky, as if the sending rollup forks, reverts, or fails to settle to L1, the receiving rollup will not be able to settle.
To decrease this risk, we will look at soft finality mechanisms for rollups.
Then we briefly look at different decentralisation methods for rollups, and how they are compatible with soft finality and messaging.
Then we look at mechanisms for shared proving in a centralised aggregator. We found that it is possible to combine fast messaging and cheap settlement.
Then we look at the intermediate proof construction, which makes a decentralised proof aggregator safer and more trustworthy by providing intermediate proofs to rollups.
Then we will look at messaging again, and see how convenient the intermediate proof construction is for messaging (and not just soft finality).
## 1. Simple importation of block hashes
### 1.1 Importing of L1 state root (original)
The original messaging protocol, as described [here](https://vitalik.ca/general/2022/09/17/layer_3.html) is fairly simple: the state root of L1 can be read from the receiving rollup. Based on this state root, the receiver rollup can verify messages sent to it from any other rollup. If receiving multiple messages, the rollup can import multiple L1 state roots.
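As a sketch of that verification step, here is a minimal binary Merkle membership check; real rollups use their own tree layouts and hash functions, so the shapes below are illustrative only.

```python
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    """Hash two child nodes into their parent node."""
    return hashlib.sha256(left + right).digest()

def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Walk from the leaf to the root; `sibling_is_right` says on which
    side the sibling hash sits at each level."""
    node = hashlib.sha256(leaf).digest()
    for sibling, sibling_is_right in proof:
        node = h(node, sibling) if sibling_is_right else h(sibling, node)
    return node == root

# A sender commits a message; the receiver checks it against the imported state root.
msg = b"transfer 1 ETH from rollup A to rollup B"
other = hashlib.sha256(b"some other message").digest()
root = h(hashlib.sha256(msg).digest(), other)
assert verify_merkle_proof(msg, [(other, True)], root)
```

In the real protocol the `root` would be (derived from) the imported L1 state root, and the proof would be generated against L1's own state trie.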
Then the rollup's block and corresponding state transition need to be proven. The state transitions can be aggregated across blocks. Assuming there is a shared proof aggregator the proofs of different rollups are further aggregated, and this proof is settled on L1.
When settlement happens on L1, the proof will have to **export and settle the imported L1 state roots** alongside other proof outputs (such as the L2->L1 messages), as the validity of the state transition depends on the imported L1 state roots being correct. The exported L1 state root can be compared against the actual L1 state root.
We can simplify this settlement method. When the proofs are aggregated across blocks, it is possible to compress state diffs (as described under applicative recursion [in here](https://medium.com/starkware/recursive-starks-78f8dd401025)). Using the same method we can aggregate the different L1 state roots, as they are linked by a commitment scheme (usually a Merkle proof).
If we have a shared proof aggregator across rollups, then we can continue with this same method, as we can assume that many rollups are importing and exporting the L1 state root. This means that we can aggregate state roots across different rollups. So it makes sense to add two inputs and an output to the recursive proof which aggregates transitions of different rollups, each corresponding to an L1 state root. When aggregating two such proofs, if the two L1 state roots are linked, we can provide additional input to prove this link as well, outputting the later L1 state root (and they should be linked, as correct L1 block hashes are linked). With this method, all rollups can merge their exported state roots and settle using a single output of the total recursive proof, which can be checked on L1 to match the historical L1 state roots.
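A toy model of this merge step: each aggregated proof exports the first and last L1 state root it imported, and `linked` stands in for the in-circuit proof that one root descends from the other on L1. All names here are illustrative, not a real proof format.

```python
# Toy L1 state-root history; in reality these are roots of L1 blocks.
roots = ["r0", "r1", "r2", "r3"]

def linked(x: str, y: str) -> bool:
    """Stand-in for the in-circuit proof that root y extends root x on L1."""
    return roots.index(x) <= roots.index(y)

def merge_exports(a: tuple, b: tuple) -> tuple:
    """Merge two proofs' (first_root, last_root) exports into one range."""
    first_a, last_a = a
    first_b, last_b = b
    if not linked(last_a, first_b):
        raise ValueError("L1 state roots are not linked; cannot merge")
    return (first_a, last_b)

# Two proofs covering (r0..r1) and (r2..r3) merge into a single (r0..r3) export.
assert merge_exports(("r0", "r1"), ("r2", "r3")) == ("r0", "r3")
```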
Looking at messaging times here, the time it takes a message from rollup A to reach rollup B is the time it takes a proof of A to reach L1. This time has two components, the first is the actual proving time it takes to prove the state transition of a block. Secondly (and more importantly), even when these blocks are generated there is an aggregation time until there are enough blocks to settle to L1. Finally, the proofs of different rollups can be aggregated into a single proof.
In conclusion, messaging using the L1 state root is cheap, and easy, but quite slow. We can also **merge the exported state roots** of different rollups inside the recursive proofs. In proving, we prove a block, aggregate across blocks, and finally across rollups.
### 1.2. Importing from other rollups (IBC)
In this method, we import the state root of other rollups. For speed we import the state root after it has been finalised in the sending rollup's consensus mechanism, but before it is settled on L1. For this, we need to trust the consensus mechanism for the sending rollup.
We prove and aggregate the state transitions of the receiving rollups as before, and similarly to the first method if we imported multiple state roots from the same rollup, we can aggregate across blocks. However, here we cannot assume that different rollups all import from the same rollups, so aggregating across different rollups is not possible.
When we settle to L1, we prove that we imported a valid state root as before. However, we can only do this if the sending rollup has already settled to L1, otherwise, our state transition will be stuck pending on L1. If the sending rollup does not include the sending transaction in the settlement (i.e. it has forked), we will not be able to settle our state transition.
Note on forking: Having a DA layer commit to the transactions and a trusted consensus mechanism for the sending rollup can help here, but is ultimately not enough, as the inclusion of the sending transaction is not a question of DA but of execution. ([Taiko](https://hackmd.io/@taikolabs/HkN7GR64i#/5) has some good solutions for this, but that solution is not general for all transactions). There is a simple but crude way of handling this question. If there is a trusted proof aggregator mechanism across the different rollups, they can keep track of the blocks of the sending rollup, verify that the state transitions are valid (ideally via a proof), and commit to a choice of blocks.
Looking at messaging time here, the time it takes a message to reach another rollup is very short; however, there are considerable risks.
In conclusion, this method is **most similar to IBC**, with all its speed and risks. The difference is that eventually things are settled to L1, so the risk is temporary.
## 2. The question of soft finality
After 1.2, to avoid forking we need a mechanism that can verify and attest to the state transitions of single rollups before they are settled on L1. This amounts to a soft form of finality. This soft finality can never replace L1 settlement, but it can make it less risky to import the state root of another rollup.
Let's look at different potential mechanisms for this! We will first consider the case when we are receiving messages from (and so evaluating attestations to the state transitions of) an isolated rollup, without a shared proof aggregator. Then we will look at mechanisms within shared proof aggregators.
### 2.1 For Isolated Rollups
#### 2.1.1. With a centralised sequencer and prover
Let's start with the case of the centralised prover. A centralised prover can easily attest to the different blocks of the rollup before they are settled to L1. This attestation can happen on- or off-chain. Either way, once the proof is posted to L1, the attestation can be verified against the actual execution. If the attestation was wrong, or if there is no proof, the on-chain stake of the prover should be slashed.
The prover can offer different kinds of promises, corresponding to the stages of a transaction's lifecycle until its acceptance.
##### 2.1.1.1 Attesting to the inclusion of txs
The centralised sequencer can attest to including the transaction in a block. This does not mean the transaction will succeed. This can be done off-chain, via a simple signature scheme: the sequencer signs a simple message saying that a transaction (or a block of transactions) will be included. This can happen directly between the two communicating rollups, or even better it can happen on a non-enshrined DA layer. For Ethereum, this means Celestia, EigenDA, or similar.
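A minimal sketch of such an off-chain attestation; HMAC is only a stand-in for a real public-key signature scheme (e.g. an on-chain-registered ECDSA key), and all names here are hypothetical.

```python
import hashlib
import hmac
import json

# Stand-in for the sequencer's signing key; a real scheme would use a
# public-key signature verifiable by anyone, not a shared secret.
SEQUENCER_KEY = b"sequencer-secret"

def attest_inclusion(tx_hashes: list[str], block_number: int) -> dict:
    """Sign a promise that the listed transactions will appear in `block_number`."""
    payload = json.dumps({"block": block_number, "txs": sorted(tx_hashes)}).encode()
    sig = hmac.new(SEQUENCER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def check_attestation(att: dict) -> bool:
    """Verify the sequencer's signature over the attested payload."""
    expected = hmac.new(SEQUENCER_KEY, att["payload"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"])
```

The signed payload can later be compared against the settled block on L1; a mismatch is the evidence used for slashing.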
The attestation can also be on-chain. In this case, we have the idea of the enshrined DA layer, where transactions are attested to before they are executed. In Ethereum's DA vision, this means only a commitment to the data is accessible from the EVM.
From the point of view of execution security, these two are broadly similar, as the receiving rollup cannot be sure from the inclusion of the transaction that it will actually be executed. The advantage of enshrined DA is that it prevents forking, so users of the rollup can be sure that their transaction will be included, even if it does not guarantee execution (the tx can fail). This means a somewhat stronger form of soft finality.
In the off-chain DA case, the penalisation of the sequencer/prover can only happen once the ZKP arrives.
##### 2.1.1.2 Attesting to execution of txs
We can also commit to the execution of the transaction. Practically this means calculating the block hash after a block of transactions has been executed. This can be done off-chain, the sequencer can sign a block header, which can be verified by the receiving rollup (it can also be put on a non-enshrined DA layer). Slashing would happen on-chain later in case of a fork or incorrect execution.
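A toy model of that later on-chain check: the attested block hashes are compared with the hashes the settled proof actually produced, and any divergence triggers a slash. The dict-based representation is illustrative only.

```python
def verify_settlement(attestations: dict[int, str],
                      proven: dict[int, str],
                      stake: int) -> int:
    """attestations/proven map block number -> block hash.
    Return the prover's remaining stake after settlement: slash fully if any
    attested block either diverges from the proven chain or never settled."""
    for block, attested_hash in attestations.items():
        if proven.get(block) != attested_hash:
            return 0  # fork or wrong execution: full slash
    return stake

# Honest case: attested hashes match the settled proof, stake is kept.
assert verify_settlement({1: "a", 2: "b"}, {1: "a", 2: "b"}, 100) == 100
```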
It can also happen on-chain, with the signature pushed on-chain. This somewhat resembles IBC, with one execution layer verifying the consensus of another.
Compared to the DA method, this method's advantage is that the receiving rollup can actually verify a block hash. The disadvantage is that it does not prevent forking: if the block hash is wrong, another one will have to be proposed.
The on-chain method means a stronger form of finality as it allows some forking prevention, by providing a timeout during which only the attested proof could be delivered.
##### 2.1.1.3 Attesting to proofs of txs
We can commit to proofs of txs. This is possible, as proofs are not instantly settled on L1, but are aggregated for cheaper settlement, so we can show these complete but unsettled proofs to other rollups. When off-chain, the prover can send the receiver rollup (or a non-enshrined DA) a finished proof that is to be further extended with proofs of new blocks. Here an on-chain commitment is not possible: when the proof is sent on-chain the rollup is settled, so that would not be an attestation but the real thing.
This method would necessarily be slower than other methods, as here the proof has to be constructed.
##### 2.1.1.4 Conclusions
We saw five options here, three off-chain and two on-chain, all of them allowing slashing in case the centralised prover forks (i.e. makes a false attestation). The off-chain features are generally cheaper, but the on-chain features allow stronger penalisation via the verifier contracts. This would practically mean a timeout period until which only the attested blocks can be settled.
If we want the strongest attestation we should go for on-chain DA, on-chain execution attestations, and off-chain proofs. This is somewhat of an overkill though, as DA only provides non-forking, and we can also get non-forking from the other two. With off-chain proofs providing verifiability we can be sure that the transactions can be executed, and with on-chain attestations of execution, we can be sure to have enough time for this proof to be delivered. This would both guarantee valid execution, and prevent forking, so the strongest form of soft finality.
This is not our final question here though; that will be rollups with a decentralised shared proof aggregator.
#### 2.1.2. With a decentralised sequencer and prover
There are a lot of potential solutions for decentralising rollup sequencing and proving, a complete discussion is out of scope for this post. We will only cover major categories of options, and see how they affect messaging.
##### 2.1.2.1 Decentralising the sequencer
Broadly speaking there are two big methods for sequencing: permissionless and permissioned sequencing. [PoE](https://ethresear.ch/t/proof-of-efficiency-a-new-consensus-mechanism-for-zk-rollups/11988) for example is a permissionless system: anyone can be a sequencer and post txs to L1. Alternatively, if there is any requirement for accepting blocks, (e.g. there is at any moment an allocated sequencer, or there is another consensus mechanism for sequencing) then the sequencer role is permissioned.
Quick confirmation of the acceptance and execution of a transaction is key for the user experience on a rollup, so if sequencing is permissionless, there will need to be another "executor" role who can confirm execution of sequenced blocks. This would also be needed for cross-rollup messaging [2.1.1.2](https://hackmd.io/HbyU_j_PQna6llYlg9M95w?both#2112-Attesting-to-execution-of-txs). This means having a PoS consensus mechanism for sequencing would make more sense. ([Starkware](https://community.starknet.io/t/starknet-decentralized-protocol-introduction/2671) is doing research in this direction.)
We should note that permissionless sequencing is the most "L1-oriented" mechanism, in the sense that any L1 validator can freely recompute the rollup state from the state diffs included in proofs, start accepting L2 transactions, batch and put them on L1, and possibly even post a corresponding proof.
Being L1-oriented is a valid position; it is Ethereum that provides the decentralisation and censorship resistance, but as it does not provide soft finality, it is not ideal for users. Fortunately, it is possible to combine the two: L1-oriented mechanisms can allow free updates of rollups in case the more restrictive mechanisms fail, giving us the best of both worlds.
The question of what exact consensus mechanism to use, and how to make it L1-oriented is out of scope.
##### 2.1.2.2 Decentralising the prover
There are three main mechanisms for decentralising the prover. One is prover competition based on open access to the sequenced transactions, [PoE](https://ethresear.ch/t/proof-of-efficiency-a-new-consensus-mechanism-for-zk-rollups/11988) is a good example. The second is round-based provers, where there is a given prover for a given time frame, with [PoD](https://blog.hermez.io/introducing-proof-of-donation/#:~:text=Hermez%20is%20a%20decentralised%20layer,throughput%20than%20Ethereum%20layer%201.) as a good example. The third is [roller networks](https://scroll.mirror.xyz/nDAbJbSIJdQIWqp9kn8J0MVS4s6pYBwHmK7keidQs-k), where each proof is composed by a network of independent provers (this is possible due to recursive proofs).
There are quite a few open questions about permissionless provers, for example how to avoid duplicated proving work. There are more questions around round-based mechanisms: how will the prover be chosen, and what happens if they do not create a proof?
But the most complicated construction is the roller network. Besides asking how the provers will be chosen, and what happens if a prover does not create their proofs, we further have to ask how we can incentivise different provers to cooperate. For example, it is possible that neighbouring provers disagree on whether a proof has been created or not.
If these questions are solved we can distribute not just the proving of different blocks but with recursive proofs even the proofs of transactions. This would lead to a very open economy for proving, where even the smallest machines could participate.
### 2.2 For rollups using a shared proof aggregator
Rollups can settle to L1 via a shared proof aggregator. The reason for this is economic: it is cheaper to aggregate proofs and settle them together.
In the rollup case, we had different methods for the sequencer (permissionless, permissioned) and for the prover (permissionless, permissioned, roller network). We could have a sequencer role for the shared proof aggregator as well, for the case where the underlying rollups' provers are competitive or malicious. We will not focus on this; we will assume that the proof aggregator can simply pick consistent proofs from each rollup.
This means we can think of this as an optional layer on top of the existing two.
Sequencing -> Proving -> Proof aggregation -> L1.
#### 2.2.1 Centralised proof aggregator: how should the shared proof aggregator work?
Let's look at the simple example, when the proof aggregator is centralised. There are two traditional visions here; luckily we can combine them for the best of both worlds.
##### 2.2.1.1 Third-party proof aggregator, the original shared prover
The original shared prover was [Starkware's](https://starknet.io/docs/sharp.html). The concept is simple: different rollups create aggregated proofs of their own blocks (via applicative recursion); these proofs are collected and verified, and a proof of this verification is the result. This aggregated proof is sent to L1.
##### 2.2.1.2 L3 Settling on L2
Alternatively, we can feed the proof of the L3 into a verifier contract on the VM of the L2. This is how it is currently done on ETH (with L2s and the L1). This contract checks whether the L3 received the messages sent to it, and also receives the messages sent from the L3 to L2. This allows integration with other L2 smart contracts, and messaging with other L3s. In this case, the state diffs of the L3 will have to be sent to L1, so all of that data will be sent from the L2 to L1.
##### 2.2.1.3 Third party + L3 combination
We can combine the previous two versions. This would mean aggregating the blocks separately using applicative recursion, but to enable fast messaging the L2 could import the L3's block hash, similarly to how the L1 state root was imported previously. The L3->L2 messages could be expanded from this block hash. The block hash would have to be exported when settling. We could also aggregate these exports by applicative recursion for multiple blocks of the L2.
Then when settling on L1, we would aggregate the proofs of the different rollups in a single proof. This proof would have to compare L3's block hash exported by the L2 and the real block hash. If these are the same, then the messages imported by the L2 were the same sent by the L3, and the settlement was correct.
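A toy model of that aggregation-time comparison (the dict-based "proof outputs" are illustrative, not a real proof format):

```python
def aggregate(l2_proof: dict, l3_proof: dict) -> dict:
    """Settle an imported block hash at aggregation time: the L2's proof
    exports the L3 block hash it imported, and the aggregator compares it
    against the hash the L3's own proof actually produced."""
    imported = l2_proof["imported_l3_hash"]
    actual = l3_proof["block_hash"]
    if imported != actual:
        raise ValueError("L2 imported a forked L3 block hash; cannot settle")
    return {"settled": True, "l2": l2_proof["block_hash"], "l3": actual}

# The L2 imported the same hash the L3 produced, so the pair settles.
out = aggregate({"imported_l3_hash": "h3", "block_hash": "h2"},
                {"block_hash": "h3"})
assert out["settled"]
```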
This method has a few nice properties:
- L3->L1 messages are kept separately and not sent through the L2's VM
- proofs are aggregated for each chain using applicative recursion, the L2 does not have to verify any proofs
- messages can still be sent between the rollups quickly.
The interesting thing about this method is that the L3's proof is verified and aggregated on the same level as the L2's, so calling it an L3 no longer makes sense. So we will no longer focus on L3s, we will focus on the shared proof aggregator.
However, up to this point, this method depended on the proving being done centrally, as the two rollups have to be aware of the messages they send each other before they are settled to L1. What's more, this central party also creates soft finality by picking a fork (if there are multiple) that it will settle to L1.
This method also allows "L3" to "L3" messaging, the main difference is that an "L2" is associated with the central proof aggregator, and "L2"s should also be able to force txs on an "L3". However, txs can also be forced via L1. We cover some specific details of messaging in [3.1](https://hackmd.io/HbyU_j_PQna6llYlg9M95w?both#31-Centralised-shared-proof-aggregator).
The main purpose of this section was to realise the advantages of shared proof aggregation (compared to traditional L3's), as it can easily provide soft finality (the L3 would be required to settle on the L2 in a given time period), and facilitate quick messaging, while also providing cheap proving (applicative recursion).
#### 2.2.2 Decentralising the proof aggregator
We can decentralise this proof aggregator along the same categories as the prover for an isolated rollup. Decentralisation itself is not the main focus of this post (the quick messaging that some methods enable is), so this list might be incomplete.
##### 2.2.2.1 Permissionless proof aggregation
This is the variant of the permissionless prover for the proof aggregator. It is not ideal: permissionless aggregators do not enable soft finality, as different aggregators might prove different forks of rollups. So this is not compatible with messaging.
##### 2.2.2.2 Round-based provers - the intermediate proof construction
A round-based proof aggregator is the most similar to the centralised case: the identity of the proof aggregator is known. Every sending rollup knows where to send their current latest proof (which might be further aggregated and updated using applicative recursion). The aggregator can easily attest to these received proofs, and so to the soft finality of the sending rollup. Every receiving rollup can easily receive a confirmation and a proof that the sending rollup's state transition is valid. Finally, the aggregator can take all the rollup proofs, further aggregate them, and push them to L1. However, if this proof aggregation fails, the collapse is huge: every rollup has to revert, since a single missing proof from one of the rollups might break all the interactions (as the states of the rollups will all be "entangled"). A further downside is that aggregation of the different rollups' proofs happens only when the rollups' own proofs are finished. This means that when the aggregator fails, the reversion is not short-term, but lasts the entire slot of the aggregator.
Fortunately, we can get around this by creating intermediate checkpoint proofs, which the rollups can have access to and can use to settle on L1. Specifically, let's say blocks are created for each rollup every 5 seconds, and the common proof is settled on L1 every hour. Then instead of aggregating the $60\,\text{min} \cdot 12\,\frac{\text{blocks}}{\text{min}} = 720$ blocks of each rollup at the end of the hour, we can applicatively recurse every 5 minutes, i.e. every 60 blocks.
For the $i$-th checkpoint we can calculate the resulting proof $A_i$ from the $60 \cdot i$ blocks of some given rollup. We can aggregate these proofs across rollups to get proof $B_i$. This $B_i$ is the intermediate proof that aggregates everything up to checkpoint $i$. However, we will ideally not use this to settle to L1, so we should make it as small and quick as possible. This means that instead of outputting all the data (state diffs, messages, etc.) of every rollup in $A_i$, we should just output the block hash at the current and at the previous checkpoint, as this contains all the necessary information about our rollup. Then when aggregating the $A_i$-s we should aggregate these hashes, getting a current and a previous aggregated block hash for each intermediate recursive proof. This will also result in $B_i$ having two output hashes: one for the current global aggregated block hash, and one that links it up to the previous checkpoint's recursive proof's block hash.
The previous block hash is needed in $B_i$ so that it is easy to verify that $B_{i-1}$ is extended by $B_{i}$. It would also be great to link the block hashes of these recursive proofs together, similarly to how any chain's block hash always contains the previous block hash. This is possible to do here as well: $B_i$'s block hash will not be constructed simply from the two previous recursive proofs' block hashes ($C_i$ and $D_i$ in our diagram), but should also include $B_{i-1}$'s block hash, which we already output in $B_i$'s proof. This makes $B_{i-1}$'s block hash a bit harder to compute, as we will not be able to compute it directly from $B_{i-1}$'s underlying proofs' block hashes ($C_{i-1}$, $D_{i-1}$), but this is alright: we can simply have the necessary extra $B_{i-2}$ block hash as an input to $B_i$'s proof.
To repeat, this construction needs three inputs to $B_i$: the two underlying recursive proofs, and the block hash of $B_{i-2}$. The proof in $B_i$ computes and outputs the block hashes of $B_{i-1}$ and $B_i$. We really need this chaining of block hashes, as we will want to settle block hashes inside these recursive proofs, and when we export a previous block hash, we will link it to the current block hash.
![](https://i.imgur.com/zROWrnu.png)
*Fig. 1. A visualisation of the content of the recursive proofs (boxes), and the hashes inside*
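The hash chaining in the diagram can be sketched in a few lines. The hash function and the exact input layout are assumptions for illustration, not the real circuit:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Stand-in for the circuit's hash function."""
    return hashlib.sha256(b"".join(parts)).digest()

def b_hashes(c_prev: bytes, d_prev: bytes, b_prev2: bytes,
             c_cur: bytes, d_cur: bytes) -> tuple[bytes, bytes]:
    """Inside B_i's proof: recompute B_{i-1}'s block hash from its children
    (C_{i-1}, D_{i-1}) and the extra B_{i-2} hash input, then chain it into
    B_i's own block hash together with C_i and D_i."""
    b_prev = H(c_prev, d_prev, b_prev2)  # B_{i-1} = H(C_{i-1}, D_{i-1}, B_{i-2})
    b_cur = H(c_cur, d_cur, b_prev)      # B_i     = H(C_i, D_i, B_{i-1})
    return b_prev, b_cur
```

Verifying that $B_i$ extends $B_{i-1}$ is then just a matter of recomputing this chain from the exported hashes.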
When settling on L1, the last $B_i$'s block hash could be expanded to show the state transitions and state diffs of the underlying rollups. In case of emergency, the $B_i$ proof alongside a proof that exposes each rollup's block hash (without state transitions) would be enough. To prevent frontrunning on L1, each rollup's verifier contract would have to give a certain time period for this mechanism to have an output. This would also mean that false emergency exits (if a rollup panics) would not have to halt the aggregation. If an aggregator fails and the structured hash of block hashes is known, another aggregator can keep on aggregating, as proofs are constructed from the rollups' applicative recursion. This is also true if we forget the $A_i$ of some rollup: we can keep on aggregating in the nodes above in the tree.
###### Concluding the intermediate proof construction
Concluding this section: the rollups can have an intermediate proof that they can use to settle on L1 in case the proof aggregator fails. This intermediate proof construction also assigns "block hashes" to recursive proofs to keep them small. And as we saw, we made these hashes link to each other, as they usually do in blockchains.
##### 2.2.2.3 Shared roller network
This is a very exciting topic, but it is out of scope for this post.
## 3. Back to imports: shared proof aggregator
We saw how a centralised shared prover can generally handle messaging, and how a decentralised shared prover could be made secure via intermediate proofs. Let's look at more details.
### 3.1 Centralised shared proof aggregator
As discussed already, in a centralised shared proof aggregator, block hashes of other rollups can be directly imported, and these can be settled inside the proving mechanism, so there is no need for further settlement on L1. Specifically, each recursive proof can output the exported block hashes, and when the recursive proofs meet, these can be compared and settled.
What would happen if every rollup imported the block hash of every other rollup, say for 100 rollups? Then each rollup would have to export these 100 hashes, and each subsequent recursive proof would also have to export each hash: in the last recursive proof, when 50 rollups' proof meets the other 50's proof, one side will need to export and settle the other 50's block hashes. This is acceptable if this proof is going to be settled on L1, as that proof will probably be large anyway.
### 3.2 Round-based proof aggregator and roller network - Intermediate proofs
As we saw, we can import the block hashes of other rollups directly, but this would make the recursive proofs inside our intermediate proof tree very big. Instead, we can import the "block hashes" from the intermediate tree if they are downstream of our rollup (downstream means our rollup's hash is included in the recursive proof). We need the downstream feature because when we settle these block hashes, it is ideal if settlement does not travel further than the block hash's origin (as this means the hash can be settled at its origin).
As an example, if we import the hash $D_{i-1}$ from our diagram, then we will export this in $A_i$, and this will travel up to $D_i$. During this travel, it might have been merged into $E_{i-1}$. (If it has been merged then a similar argument will hold to the following one, except for $E_i$ and not $D_i$). So if it has not been merged into $E_{i-1}$, then $D_{i-1}$ will travel up the recursive proofs up to the $D_i$ proof, which also computes $D_{i-1}$, which means the two $D_{i-1}$ values can be compared and settled.
What's more, if we exported $D_{i-2}$, we can still settle it, as $D_{i-2}$ and $D_{i-1}$ are linked.
Note: some attentive readers might wonder what happens if the $B_{i-2}$ input was incorrect; after all, it did not have a corresponding proof. Then either the proof aggregation will not be able to handle exported block hashes (as the calculated $B_{i-1}$ will not be correct), or there were no messages to handle (in this case the incorrect $B_{i-2}$ is just random noise in the aggregated proof). Looking at this question from the rollup's direction: if our new $A_i$ aggregated rollup proof is settled in the $B_i$ global proof, then either all the exported roots were handled correctly, or we did not receive (=export) messages.
If we restrict ourselves to importing downstream block hashes of recursive proofs, then for 100 rollups $\log_2(100)<7$ block hashes will have to be exported in each recursive proof. What's more, the closer we get to $B_i$, the smaller this number gets.
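The $\log_2$ bound can be checked directly: with downstream-only imports, a recursive proof carries at most one hash per level of the aggregation tree.

```python
import math

def max_exported_hashes(num_rollups: int) -> int:
    """Upper bound on block hashes a recursive proof must carry when imports
    are restricted to downstream hashes: one per level of the binary
    aggregation tree over the rollups."""
    return math.ceil(math.log2(num_rollups))

# For 100 rollups the tree has 7 levels, so at most 7 hashes per proof.
assert max_exported_hashes(100) == 7
```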
This would make the construction of these intermediate proofs quick, enabling very fast cross-rollup messaging with great soft finality, as messages are settled in the intermediate tree. These proofs would not need L1 settlement, so we could make the checkpoints between different proofs as close as needed for messaging, and as allowed by the cost of constructing these proofs.
### 3.3 Conclusion
Concluding this section, we saw how intermediate recursive proofs at checkpoints allow rollups to import block hashes as if they were settled on L1, as these proofs (1) settle all messages between rollups, (2) can be sent as settlement to L1 for every rollup that uses the proof aggregator, and (3) can be prevented from being front-run, if there are proper locks on the L1 verifier smart contracts for each rollup. This corresponds to the strongest soft finality conditions for isolated rollups that we covered in [2.1.1.4](https://hackmd.io/HbyU_j_PQna6llYlg9M95w?both#2114-Conclusions), but here the rollup only has to import a single recursive proof and the data structure, and this allows messaging with all other rollups in the shared proof aggregator. Thus this construction allows fast and trusted messaging for rollups.
## 4 Conclusion of the post
We covered multiple topics, first how simple block hashes could be imported from L1 or from another rollup. We saw that this was either slow or risky.
Then we looked at soft finality mechanisms for isolated rollups. We found the strongest: on-chain forking prevention, and off-chain proof.
Then we briefly looked at different decentralisation methods for rollups, and how they are compatible with messaging.
Then we looked at a mechanism for shared proving in a centralised aggregator. We found that it is possible to combine fast messaging and cheap settlement.
Then we looked at the intermediate proof construction to make a decentralised aggregator more safe and trustworthy, by providing intermediate proof to rollups.
Then we went back to messaging and looked at how convenient the intermediate proof construction is for messaging as well. We also looked at the right place to import block hashes from: inside the intermediate proof tree.
As a general conclusion, we saw that the line between settlement and execution can be blurred inside a shared proof aggregator. Settlement of messages can happen before a proof is submitted on L1, and there are good guarantees for participating rollups in case another rollup or the proof aggregator (mechanism) fails. This means very fast and reliable messaging between rollups. Finally, when needed, proofs can be posted to L1.

9/19/2022
Published on **HackMD**