This is the seventh update for my work in the CDAP.

What I have done since last update

I have been working on Springrollup. First, I managed to implement two circuits: one for adding pending transactions, and one for processing pending transactions. Then I realized that the rollup could be simplified a lot: instead of having pending transactions, we could simply require that the operator sends witnesses to all senders in a rollup block, who then confirm their transactions before the operator publishes the block. Only transactions whose senders have confirmed are processed. With this new mechanism, there is no longer any need to maintain pending transactions.

Another challenge I faced was learning that circuits are very strict about the sizes of their inputs. This means that, for instance, the number of transactions in a block must be hard-coded in the circuit. One solution that allows dynamically adjusted block sizes is to create several circuits for different sizes and let the operator pick one circuit each time they want to publish a block. However, this challenge is even more difficult in our case, since not only is the number of transactions variable, but also the number of senders. One solution for this is to have two circuits that are processed one after another. The first circuit processes all balance updates, and the second circuit processes all senders (checks their signature, and checks that their account index is part of the calldata). The first circuit could compute a hash of all senders (along with the new state root), which is fed as an input to the second circuit. A sketch of this chaining is given below.
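To make the two-circuit idea concrete, here is a minimal Python sketch of how a hash commitment could chain the two proofs. Everything here is an illustrative assumption: the function names are hypothetical, SHA-256 stands in for whatever SNARK-friendly hash the real circuits would use, and the Merkle-tree update and signature check are stubbed out.

```python
import hashlib

def apply_balance_updates(old_state_root: bytes, txs: list) -> bytes:
    # Stand-in for the real Merkle-tree update: just hashes the old root
    # together with the transactions. (Illustrative only.)
    data = old_state_root + b"".join(
        tx["sender"].to_bytes(6, "big") + tx["amount"].to_bytes(8, "big")
        for tx in txs
    )
    return hashlib.sha256(data).digest()

def verify_signature(sender: int, sig: str) -> bool:
    # Stand-in for real signature verification.
    return sig == f"sig-{sender}"

def senders_commitment(senders: list, new_state_root: bytes) -> bytes:
    # Hash of all senders along with the new state root: the value the
    # first circuit outputs and the second circuit takes as input.
    return hashlib.sha256(
        b"".join(s.to_bytes(6, "big") for s in senders) + new_state_root
    ).digest()

def circuit1_balance_updates(old_state_root: bytes, txs: list):
    # First circuit: apply all balance updates and commit to the senders.
    senders = [tx["sender"] for tx in txs]
    new_state_root = apply_balance_updates(old_state_root, txs)
    return new_state_root, senders_commitment(senders, new_state_root)

def circuit2_check_senders(commitment: bytes, senders: list,
                           new_state_root: bytes, calldata_indices: set,
                           signatures: dict) -> bool:
    # Second circuit: re-derive the commitment from its own inputs, then
    # check each sender's signature and that their account index is part
    # of the calldata.
    if senders_commitment(senders, new_state_root) != commitment:
        return False
    return all(s in calldata_indices and verify_signature(s, signatures[s])
               for s in senders)

# Chaining the two "proofs" via the shared commitment:
txs = [{"sender": 1, "amount": 100}, {"sender": 2, "amount": 50}]
root, com = circuit1_balance_updates(b"\x00" * 32, txs)
assert circuit2_check_senders(com, [1, 2], root, {1, 2},
                              {1: "sig-1", 2: "sig-2"})
```

Because the second circuit only receives a single hash from the first, the number of senders can vary without changing the interface between the two circuits.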
What I will do next

11/9/2021

(The newest version of this document can always be found on hackmd or GitHub)

We introduce Springrollup: a Layer 2 solution which has the same security assumptions as existing zk-rollups, but uses much less on-chain data. In this rollup, a sender can batch an arbitrary number of transfers to other accounts while only having to post their address as calldata, which is 6 bytes if we want to support up to 2^48 ~ 300 trillion accounts (a byte-packing sketch is given at the end of this section). As a by-product we also achieve increased privacy, since less user data is posted on-chain.

General framework

We start by introducing the general framework that we will use to describe the rollup. The rollup state is divided into two parts:

On-chain available state: state with on-chain data availability. All changes to this state must be provided as calldata by the operator.
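As a rough illustration of the calldata claim above, here is a small Python sketch of packing 6-byte account indices. The encoding details are my own assumption for illustration, not the actual Springrollup calldata format.

```python
MAX_ACCOUNTS = 2 ** 48  # 6 bytes per account index: ~281 trillion accounts

def encode_senders(sender_indices: list) -> bytes:
    # Pack each sender's account index into 6 bytes of calldata. A sender
    # batching any number of transfers still costs only these 6 bytes.
    assert all(0 <= i < MAX_ACCOUNTS for i in sender_indices)
    return b"".join(i.to_bytes(6, "big") for i in sender_indices)

def decode_senders(calldata: bytes) -> list:
    assert len(calldata) % 6 == 0
    return [int.from_bytes(calldata[i:i + 6], "big")
            for i in range(0, len(calldata), 6)]

# A block with 3 senders costs 18 bytes of calldata, regardless of how
# many transfers each sender batches.
calldata = encode_senders([0, 7, 2 ** 48 - 1])
assert decode_senders(calldata) == [0, 7, 2 ** 48 - 1]
assert len(calldata) == 18
```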
10/20/2021

This is the fifth update for my work in the CDAP.

What I have done since last update

Since my last update, I have been working on my zk-rollup concept, which allows greatly reduced calldata usage while still preserving the same security as zk-rollups. The document has now been expanded with more details about deposits, withdrawals, calldata estimates and examples. Anyone familiar with how zk-rollups work should now hopefully be able to grasp the main idea in my proposal. The reason I haven't posted to ethresear.ch yet is that I want to implement an optimization first (see below).

What I will do next

I am currently working on an optimization which would allow even less calldata usage. The current proposal uses 8 bytes of calldata for a batch of up to 65536 transfers from the same sender, but with the extra optimization this will be reduced to 6 bytes for a batch of an unlimited number of transfers from the same sender (a rough cost comparison is sketched below). Once this is done, I will post the proposal on ethresear.ch to get some feedback. I will also reach out to people working on zk to try to collaborate on an implementation.
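To put numbers on the saving: since 65536 = 2^16, I read the 8 bytes as a 6-byte account index plus a 2-byte field bounding the batch size. That layout is my assumption, not stated in the update.

```python
# Assumed current layout: 6-byte account index + 2-byte field that caps
# a batch at 2**16 = 65536 transfers from one sender.
current = 6 + 2   # 8 bytes per sender per block
optimized = 6     # account index only; no per-batch transfer limit

# Calldata for a block with 1000 distinct senders:
print(1000 * current)    # 8000 bytes
print(1000 * optimized)  # 6000 bytes, a 25% reduction per block
```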
10/12/2021

This is the fourth update for my work in the CDAP. Until now, I have mainly been focusing on analyzing the existing uses of the SELFDESTRUCT opcode in Ethereum. In the last couple of weeks, however, I have also started working on defining a new Layer 2 technique, which is similar to zk-rollups but requires less data on-chain.

SELFDESTRUCT analysis

When I wrote my last update, I was still waiting for feedback on whether the backwards incompatibilities I found were critical enough to justify preserving SELFDESTRUCT as is. Shortly after, I got feedback that they were probably not enough to outweigh the advantages of neutering SELFDESTRUCT, and that I should do more analysis on what might break when neutering SELFDESTRUCT.

I also talked to the creator of Pine Finance, and it turned out that the situation was a bit different from what I first thought. At first, I thought that the redeployed contracts were the result of a user using the same vault more than once, but I was told that the front-end creates new vault addresses each time, so this was not the case. Instead, the redeployed contracts were the result of an edge case where a user tries to cancel an order, but the call runs out of gas when trying to transfer the tokens in the vault. As a result, the call that tries to move the tokens in the vault reverts, but the vault still selfdestructs. If SELFDESTRUCT is neutered, the remaining tokens in the vault would be at risk in this edge case. Here is a list of redeployments of the same vault. As can be seen on Etherscan, all redeployments except the last were the result of a failed cancelOrder. It is still unclear to me whether this problem can be avoided in the front-end, or whether it is necessary to deploy a new version of pinecore. If it can be prevented in the front-end, the situation would not be so bad after all.
9/28/2021