![](https://hackmd.io/_uploads/r1s8q3lE5.jpg)
# Intmax zkRollup (the deprecated version)
## ~The L2 mass adoption program~
Version: 0.1.0 (draft)
Author: Leona Hioki
## This paper is deprecated. Please refer to the new version:
https://eprint.iacr.org/2023/1082.pdf
## 1. Abstract
A zkRollup with extreme calldata efficiency.
It introduces a limited online assumption, client-side ZKP, and a preconsensus mechanism into the zkRollup architecture.
This architecture accommodates at least 10K token transfers per second on a single network with the ZKP technologies of 2022.
The limited online assumption and the preconsensus mechanism make the zkRollup calldata-free, and client-side ZKP aggregates additional transactions on the user side.
## 2. Background
Rollup is a scalable Layer 2 network that can inherit the general functionality and security of Ethereum Layer 1. The root of a Merkle tree represents the entire database, and asset holders can prove their asset data by inclusion proofs against the Merkle root. This root can be set as a variable on the Layer 1 smart contract to reflect the asset statuses resulting from a large number of transactions. This method is efficient because only the Merkle root, a tiny piece of data representing the entire asset state, is written to Layer 1. This was the basic idea behind Plasma.
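To make the mechanism concrete, here is a minimal Python sketch (an illustration only, not the exact tree construction or hash used by Intmax) of building a binary Merkle tree over asset leaves and verifying an inclusion proof, i.e. a leaf plus its siblings (Merkle path), against the root:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    """Compute the root of a binary Merkle tree (duplicating the last node on odd levels)."""
    level = [h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes (Merkle path) needed to prove inclusion of leaves[index]."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index ^ 1                        # the neighbour in the current pair
        proof.append((level[sibling], index % 2))  # (sibling hash, am-I-the-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    """Recompute the root from a leaf and its Merkle path; compare with the published root."""
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

leaves = [b"alice:10", b"bob:5", b"carol:7", b"dave:3"]
root = merkle_root(leaves)
assert verify_inclusion(b"bob:5", merkle_proof(leaves, 1), root)
```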
However, it is sometimes impossible to create a proof of assets when the siblings (the Merkle path) are missing. This happens even if only one piece of data that is supposed to be a leaf in the Merkle tree gets lost. This problem endangers the assets of everyone in the network and is referred to as the “Data Availability Problem.”
Rollup focuses on this problem: it enforces, by cryptographic/mathematical means, that all transaction data is stored in the cheapest space on Ethereum Layer 1, called “calldata,” so that anyone can restore the entire database of the network at any time by re-executing the L2 transactions locally. The method that uses ZKP to verify the Merkle root against the L2 transactions is called zkRollup, and the method in which verification on Layer 1 is delayed and the root is re-verified when verifiers find it to be erroneous is called Optimistic Rollup.
There are two significant costs paid to Ethereum miners as fees by the aggregator when operating a zkRollup of the specification proposed in the early days:
> (1) transaction history data storage
> (2) zkp verification
The more transactions there are, the greater the cost of (1), while the cost of (2) remains constant. Reducing the cost of (1) is the essential solution when assuming large-scale usage, and for (2) we can introduce a checkpointing approach to cut this cost.
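As a rough back-of-the-envelope illustration of why (1) dominates at scale (the per-transfer byte count and gas figures below are assumptions for illustration, not numbers from this paper): calldata cost grows linearly with the number of transactions, while the verification cost per batch stays flat.

```python
# Assumed figures, for illustration only.
CALLDATA_GAS_PER_BYTE = 16       # post-EIP-2028 price for a non-zero calldata byte
BYTES_PER_TX = 12                # assumed per-transfer calldata footprint
VERIFY_GAS_PER_BATCH = 500_000   # assumed fixed on-chain verification cost per batch

def batch_gas(num_txs: int) -> tuple[int, int]:
    """Return (cost of (1) history storage, cost of (2) verification) for one batch."""
    return num_txs * BYTES_PER_TX * CALLDATA_GAS_PER_BYTE, VERIFY_GAS_PER_BATCH

for n in (1_000, 10_000, 100_000):
    data_gas, verify_gas = batch_gas(n)
    print(f"{n:>7} txs: calldata {data_gas:>12,} gas, verification {verify_gas:,} gas")
```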
The next chapter discusses these gas cost reductions in detail.
## 3. Intmax zkRollup details
### 3-1 Intmax zkRollup Overview
This zkRollup differs from a regular zkRollup in the following ways.
Calldata costs are reduced across the board, except for the account lists.
We gain this scalability at the limited cost of adding the assumption that each user's wallet can store a certain amount of data.
Client-side ZKP aggregates many transactions into a single transaction, including cases where the recipients are many different addresses.
As a result, an unlimited number of transactions takes less than 10 bytes of calldata, and this is reduced even further if a sender transacts many times within the long finality interval enabled by the preconsensus mechanism described in 3-3.
Both costs of
> (1) transaction history data storage
> (2) zkp verification
are eliminated.
### 3-2 zkRollup with limited online assumption
In short, this is a hybrid of zkRollup and Plasma Prime. The key concepts are
> (1) immediately return the Merkle proof of tx results to users and delete the tx history data by proving this communication with zkp
> (2) use old Merkle proofs by proving that the states have not been updated
The original post: https://ethresear.ch/t/a-zkrollup-with-no-transaction-history-data-to-enable-secret-smart-contract-execution-with-calldata-efficiency/10961
***(1) transaction history data storage*** can be replaced with the following steps while maintaining the same security as conventional zkRollups.
> (1) The user sends a transfer transaction with the new Merkle root of his/her user asset storage, the root of the state-diffs, and the zkp proof of the correctness of these.
>
> (2) The aggregator returns the Merkle proof of the state-diff root proving inclusion in the zkRollup’s Merkle Root.
> (3) The user signs it and returns it to the aggregator so that the aggregator can satisfy the batch circuit. The user stores the data received from the aggregator at (2). When this step fails, the aggregator excludes the address from the address list that will be committed to Layer 1, enabling the user to merge the asset back into his/her user asset storage (refund).
> (4) The aggregator publishes the entire address list of the transaction origins of the batch on-chain.
> (5) Upon withdrawal to Ethereum Layer 1, the user submits an inclusion proof against the zkRollup Merkle root on L1, using the Merkle proof distributed by the aggregator at (2). The verification of this proof is executed by the smart contract on L1.
In addition, the withdrawer submits a non-inclusion proof of his/her address against the address lists published at (4) to prove that his/her asset state has not changed since the aggregator gave him/her the proof.
> (6) The user lets the recipient know the contents of the state-diff together with the Merkle proof given by the aggregator, so that the recipient can merge the state-diff into his/her user asset storage when using it in a new transaction.
This process does not require writing transaction data to the calldata. Only the result of the transaction and its proof exist on the user side and can be proved on-chain at any time.
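A minimal sketch of the bookkeeping this implies on the user side (the field names and layout are hypothetical, not the actual Intmax data format): the user keeps the transaction results, the aggregator-supplied Merkle proofs, and received state-diffs locally, so they can be proved on-chain at any time.

```python
from dataclasses import dataclass, field

@dataclass
class MerkleProof:
    root: bytes            # zkRollup Merkle root the proof commits to (published on L1)
    siblings: list[bytes]  # Merkle path from the state-diff leaf up to that root
    index: int             # position of the leaf

@dataclass
class StateDiff:
    sender: str
    recipient: str
    token: str
    amount: int
    salt: bytes            # random number beneath the state-diff root (see section 4)

@dataclass
class UserAssetStorage:
    """Per-user data kept locally under the limited online assumption (hypothetical layout)."""
    balances: dict[str, int] = field(default_factory=dict)   # own asset leaves, per token
    received: list[tuple[StateDiff, MerkleProof]] = field(default_factory=list)

    def merge(self, diff: StateDiff, proof: MerkleProof) -> None:
        """Merge a received state-diff (step (6)) so it can be spent in a later transaction."""
        self.received.append((diff, proof))
        self.balances[diff.token] = self.balances.get(diff.token, 0) + diff.amount
```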
“It is impossible to create a proof of asset when the siblings (Merkle Path) are missing. It happens even if only one piece of data, supposed to be a leaf in the Merkle tree, gets lost.”
This problem, which is the reason transaction data needs to be put on-chain in the zkRollups described in the previous chapter, is solved by forcing the aggregator to prove inside the zkp circuit that the siblings were passed directly to the users. In other words, we make it a constraint of the aggregator-side ZKP circuit that there are no failed distributions of that Merkle proof.
Considering that this zkRollup stores on-chain only the address lists of the many transactions in a batch, the effectiveness of the compression increases with a longer finality. The following section (3-3 Preconsensus mechanism for the zkRollup) describes how to optimize this compression and the on-chain verification cost of the zkp by safely lengthening the finality without sacrificing convenience.
The entire flow is shown in the following sequence diagram.
![](https://i.imgur.com/AZOabj7.jpg)
The cancellation of a tx, when a sender fails to be online and sign it back to the aggregator, is possible by checking the block number he/she specifies (a non-inclusion proof against the address list). The sender of the failed tx is excluded from the address list, and this is enforced by the zkp circuit.
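A minimal sketch of the rule the batch circuit enforces (plain Python standing in for zkp constraints; the names are hypothetical): an address is committed to the on-chain address list only if its owner received the Merkle proof and signed it back, so offline senders are excluded and can later be refunded.

```python
from typing import Iterable, Optional

def verify_signature(address: str, merkle_proof: bytes, signature: Optional[bytes]) -> bool:
    """Placeholder: a real system would verify the user's ECDSA/EdDSA signature
    over the Merkle proof returned by the aggregator."""
    return signature is not None

def build_address_list(batch: Iterable[dict]) -> list[str]:
    """Model of the batch-circuit constraint: an address is committed to the on-chain
    address list only if its owner signed back the Merkle proof (step (3));
    offline senders are excluded so their transactions can be cancelled and refunded."""
    address_list = []
    for tx in batch:   # tx: {'address', 'merkle_proof', 'user_signature'}
        if verify_signature(tx["address"], tx["merkle_proof"], tx["user_signature"]):
            address_list.append(tx["address"])
    return address_list
```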
### 3-3 Preconsensus mechanism for the zkRollup
The detail: https://ethresear.ch/t/a-pre-consensus-mechanism-to-secure-instant-finality-and-long-interval-in-zkrollup/8749
***(2) zkp verification***, the second of the two gas costs mentioned above, can be reduced by the following means.
> (1) Put only the batch proof data on Layer 1 as a provisional finality and do not run the zkp verification code.
> (2) To reach finality, use recursive zkp to verify multiple batches of provisional finalities at once.
Here, a provisional finality implies a transition from the previous provisional finality. If any of these batches contains an error, the circuit of the summarized finality will fail to generate a proof.
At the same time, any provisional finality with a correct proof will be included in the main finality.
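A rough sketch of this checkpointing flow, with hypothetical structures (the real check is done inside a recursive zkp circuit, not in Python): each provisional finality must chain from the previous one and carry a valid batch proof, otherwise the aggregated proof for the finality cannot be produced.

```python
from dataclasses import dataclass

@dataclass
class ProvisionalFinality:
    prev_root: bytes   # Merkle root of the previous provisional finality
    new_root: bytes    # Merkle root after this batch
    proof: bytes       # batch zkp, posted on L1 but not yet verified there

def verify_batch_proof(batch: ProvisionalFinality) -> bool:
    """Placeholder: stands in for running the batch zkp verifier locally."""
    return True

def make_finality(previous_final_root: bytes, batches: list[ProvisionalFinality]) -> bytes:
    """Model of the recursive step: the aggregated proof can only be produced if every
    batch verifies and each batch chains from the one before it."""
    root = previous_final_root
    for batch in batches:
        assert batch.prev_root == root, "batch does not chain from the previous state"
        assert verify_batch_proof(batch), "invalid provisional finality; recursion fails"
        root = batch.new_root
    return root   # the root committed by the next main finality
```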
Anyone without a full node can check for incorrect batches and proofs by running the zkp verification locally. With that verification, anyone can eliminate an incorrect batch on-chain. This also means that a user without a full node can obtain an objective finality for a Layer 2 transaction, because the user can see the correct proofs (commitments) that will be included in the next aggregated finality.
When a user does not want to wait for the finality interval to withdraw funds to L1, he/she can create the finality by the same procedure the aggregator uses.
With this checkpointing, we can shorten the 7-day verification period of Optimistic Rollup to a finality interval of a few hours, or an arbitrary time of our choice. In the zkRollup context, we can make the verification gas cost cheaper by keeping the batch interval as it is and lengthening the finality period.
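A rough illustration of the amortization (the figures below are assumed for illustration, not from this paper): running the on-chain verification of one aggregated recursive proof per finality period, instead of per batch, spreads its fixed gas cost over every batch in that period.

```python
# Assumed figures, for illustration only.
VERIFY_GAS = 500_000              # fixed gas cost of one on-chain zkp verification
BATCH_INTERVAL_MIN = 5            # a provisional finality (batch) every 5 minutes
FINALITY_INTERVAL_MIN = 6 * 60    # one aggregated recursive verification every 6 hours

batches_per_finality = FINALITY_INTERVAL_MIN // BATCH_INTERVAL_MIN   # 72 batches
gas_per_batch_without_checkpoint = VERIFY_GAS                        # verify every batch
gas_per_batch_with_checkpoint = VERIFY_GAS / batches_per_finality    # ~6,944 gas per batch

print(batches_per_finality, gas_per_batch_without_checkpoint, gas_per_batch_with_checkpoint)
```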
In addition, the solution described in 3-2 (zkRollup with limited online assumption) works very well with this preconsensus mechanism. The preconsensus mechanism can lengthen the finality interval, which helps accumulate the address list over a longer period. The aggregator can dump an intermediate address list for a provisional finality into storage with a limited retention period, like EIP4844's blob, which will vanish after a couple of months.
#### Appendix: Storage with time limit
Intmax zkRollup uses decentralized storage with a limited retention period, like EIP4844's blob, to accumulate the address list data. This helps reduce the calldata usage for the list. The preconsensus mechanism lengthens the finality, and the intermediate data stays in the blob, vanishing after the finality is done.
The security requirement is that the decentralized storage's security be no lower than that of Layer 1. Unlike the typical zkRollup architecture, where data must be stored semi-permanently, leaving all users open to a Data Availability Attack, with Intmax zkRollup user assets remain secure as long as the temporary data transfers can be completed. In addition, the risk is isolated for each user.
Compare the number of nodes required to hold data for 5 years (60 months) without failure, as in a typical zkRollup, with the number of nodes required to hold data for 3 months without failure. Here, we assume that users can get the data as long as at least one honest node guaranteeing data availability has not failed. Furthermore, as is commonly assumed in systems-engineering simulations, the time $t$ from time $0$ until a machine's first failure follows an exponential distribution.
Assume that a node has a probability $p$ of failure per unit time. The average time until a node fails is $1 / p$. Starting from time $0$, the time $t$ at which the machine first fails then follows the exponential distribution
$$f(t) = p e^{- p t}$$
The probability that a node has failed by time $T$ is given by the cumulative distribution function of this exponential distribution:
$$F(T) = 1 - e^{- p T} $$
Thus, when the number of nodes is $N$, the probability $R(T)$ that all $N$ nodes fail during the period $T$ (i.e., that the data is lost) is
$$ R(T) = (1 - e^{- p T})^{N} $$
since the nodes fail independently of one another.
Evaluating this for several combinations of $p$ and $N$ yields the following results.
![](https://i.imgur.com/N0P0J0D.png)
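The figure above can be reproduced with a short calculation like the following (the specific $p$, $T$, and $N$ values are an assumed example grid, not necessarily the ones used for the figure):

```python
import math

def loss_probability(p: float, T: float, N: int) -> float:
    """Probability that all N independent nodes fail within period T
    (exponential lifetimes with rate p): R(T) = (1 - exp(-p*T)) ** N."""
    return (1.0 - math.exp(-p * T)) ** N

# Assumed example grid; p is the failure rate per month, T is in months.
for p in (0.1, 0.3, 0.5):
    for N in (10, 50, 100):
        print(f"p={p}, N={N}: 3-month loss = {loss_probability(p, 3, N):.3e}, "
              f"60-month loss = {loss_probability(p, 60, N):.3e}")
```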
Here, we can calculate that the probability of a network of 100 nodes failing to retain data for 2 months is equal to or less than the probability of 5,000 Ethereum nodes losing Data Availability after 5 years of operation; that is, it is no less secure than Ethereum. We can also eliminate malicious data withholding by using, or paying for, VM execution on that network that changes the network's global state with the data as an input.
The above allows for an overall solution to the case of a user-side failure to pass signatures back for a Merkle root, without compromising scalability, through the use of Data Availability for a limited period of time.
This is why EIP4844's blob satisfies the security requirements.
## 4. Interoperability with other Layer 2s
In the Intmax transfer protocol, the sender of a transaction needs to convey the proof data to the recipient. We can introduce a refund time limit so that a sender can cancel a transfer when the recipient does not merge the state-diff to complete the transfer.
The random number beneath the state-diff root works as the preimage of an HTLC. An atomic swap is enabled with HTLCs on both sides of the transaction; this feature allows users to swap inside the Intmax transfer protocol and also with Layer 2s outside of Intmax.
The detailed steps are as follows.
> (1) A user A on Intmax sends a transaction to an aggregator with a random number x beneath the state-diff root, and confirms the transaction by receiving the Merkle proof of the state-diff root from the aggregator.
> (2) A user B on another L2 sets the state-diff root in an HTLC, making the publication of the random number and all the data beneath the state-diff root the transfer condition.
> (3) User A submits the random number x and all the state-diff data to the HTLC and takes the funds on that L2.
> (4) User B now knows all the data of the transfer on Intmax, so he/she can merge it into his/her user asset storage.
Introducing the refund time limit requires publishing the state-diff root data in calldata/blob. And introducing an atomic swap inside Intmax requires publishing the random number beneath the state-diff root in calldata/blob to prevent withholding of the preimage in the atomic swap protocol.
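A minimal sketch of the hash-time-locked contract logic the swap relies on (Python with hypothetical names; the real counterpart would be a contract on the other L2, and the commitment would be the actual state-diff Merkle root rather than a plain hash): funds unlock if the preimage of the committed root is revealed before the deadline, otherwise the locker can be refunded.

```python
import hashlib
import time

def state_diff_root(random_number: bytes, diff_data: bytes) -> bytes:
    """Stand-in commitment: Intmax uses a Merkle root over the state-diff with the
    random number beneath it; here we simply hash the two together."""
    return hashlib.sha256(random_number + diff_data).digest()

class HTLC:
    """Minimal hash-time-locked contract model for the cross-L2 swap in steps (2)-(3)."""
    def __init__(self, committed_root: bytes, deadline: float, amount: int):
        self.committed_root = committed_root
        self.deadline = deadline
        self.amount = amount
        self.settled = False

    def claim(self, random_number: bytes, diff_data: bytes) -> int:
        """User A claims the funds by revealing the preimage of the committed root."""
        assert not self.settled and time.time() <= self.deadline, "HTLC expired or settled"
        assert state_diff_root(random_number, diff_data) == self.committed_root, "wrong preimage"
        self.settled = True
        return self.amount

    def refund(self) -> int:
        """User B takes the funds back if the preimage was never revealed in time."""
        assert not self.settled and time.time() > self.deadline, "cannot refund yet"
        self.settled = True
        return self.amount
```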
Pools on each L2 provide the functionality of withdrawing funds to each L2 via the atomic swap described above.
## 5. Intmax Layer3
This is not part of the L2 mass adoption program, but it makes the most exciting use cases on Intmax clear. The Intmax project will follow this direction in the long term.
Intmax Layer3:
https://hackmd.io/Upm9wD-0ToK1nDAd9qcH_w
## References
[The original proposal of zkRollup by Vitalik Buterin](https://ethresear.ch/t/on-chain-scaling-to-potentially-500-tx-sec-through-mass-tx-validation/)
[Secret zkRollup proposal by Leona Hioki](https://ethresear.ch/t/a-zkrollup-with-no-transaction-history-data-to-enable-secret-smart-contract-execution-with-calldata-efficiency)
[Check point proposal by Leona Hioki](https://ethresear.ch/t/a-pre-consensus-mechanism-to-secure-instant-finality-and-long-interval-in-zkrollup/)
[Data Availability Sampling by Dankrad Feist](https://dankradfeist.de/ethereum/2019/12/20/data-availability-checks.html)
[Limited data availability @Themis RFC "This data remains in only the validation period"](https://github.com/brave-intl/themis-rfcc/blob/main/submissions/parity_plasm/Parity_Plasm_themisv2_submission.pdf)