Albus Dompeldorius

@albus

Joined on Jun 4, 2021

  • This is the seventh update for my work in the CDAP.

    What I have done since last update

    I have been working on Springrollup. First, I managed to implement two circuits: one for adding pending transactions, and one for processing pending transactions. Then I realized that the rollup could be simplified a lot: instead of having pending transactions, we could simply require that the operator sends witnesses to all senders in a rollup block, who then confirm their transactions before the operator publishes the block. Only transactions whose senders have confirmed are processed. With this new mechanism, there is no longer any need to maintain pending transactions.

    Another challenge I faced was learning that circuits are very strict about the sizes of their inputs. This means that, for instance, the number of transactions in a block must be hard-coded in the circuit. One solution that allows dynamically adjusted block sizes is to create several circuits for different sizes and let the operator pick one circuit each time they want to publish a block. However, this challenge is even more difficult in our case, since not only is the number of transactions variable, but also the number of senders. One solution is to have two circuits that are processed one after the other. The first circuit processes all balance updates, and the second circuit processes all senders (checks their signatures, and checks that their account indices are part of the calldata). The first circuit could compute a hash of all senders (along with the new state root), which is fed as an input to the second circuit.

    What I will do next
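    The two-circuit split above can be sketched in plain Python. This is an illustration of the data flow only, not the actual Springrollup circuits: the function names are invented, and sha256 stands in for whatever circuit-friendly hash the real design would use.

    ```python
    import hashlib

    def _hash_senders(senders):
        # Commit to the ordered list of 6-byte sender indices.
        h = hashlib.sha256()
        for s in senders:
            h.update(s.to_bytes(6, "big"))
        return h.digest()

    def circuit1_apply_balances(balances, transfers):
        """Apply all balance updates; output the new state plus a
        commitment to the senders, to be fed into the second circuit."""
        senders = []
        for sender, recipient, amount in transfers:
            balances[sender] = balances.get(sender, 0) - amount
            balances[recipient] = balances.get(recipient, 0) + amount
            senders.append(sender)
        return balances, _hash_senders(senders)

    def circuit2_check_senders(senders, senders_hash, calldata_indices):
        """Re-derive the commitment from circuit 1, then check that every
        sender's account index appears in the published calldata."""
        return (_hash_senders(senders) == senders_hash
                and all(s in calldata_indices for s in senders))

    balances, commitment = circuit1_apply_balances({1: 10}, [(1, 2, 3)])
    print(balances)                                        # {1: 7, 2: 3}
    print(circuit2_check_senders([1], commitment, {1}))    # True
    ```

    Because the only value passed between the two circuits is the senders hash (plus the new state root), each circuit can be compiled with its own fixed input size.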
  • This is the sixth update for my work in the CDAP.

    What I have done since last update

    Springrollup specification

    Since my last update, I have finally figured out how to do deposits/withdrawals in my zk-rollup (called Springrollup), and I have posted the first description of it on ethresear.ch. Because of the differences between my design and regular zk-rollups, deposits and withdrawals were more complicated than in a regular zk-rollup, but I found a way to do it. It turned out to be much easier to do withdrawals and deposits if we represent each rollup account's balance as a sum of two balances: one balance that tracks the amount deposited on L1 to the account minus the amount withdrawn on L1 from the account, and another balance that tracks the amount received by the account minus the amount sent from the account via L2 transfers.
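    The two-balance representation can be shown with a minimal sketch (illustrative names only, not the actual Springrollup data structures):

    ```python
    class Account:
        """A rollup account whose balance is the sum of two parts."""

        def __init__(self):
            self.l1_balance = 0  # deposited on L1 minus withdrawn on L1
            self.l2_balance = 0  # received via L2 transfers minus sent via L2

        def total(self):
            # The spendable balance is the sum of the two parts.
            return self.l1_balance + self.l2_balance

    acct = Account()
    acct.l1_balance += 100   # L1 deposit
    acct.l2_balance -= 30    # L2 transfer out
    acct.l2_balance += 5     # L2 transfer in
    print(acct.total())      # 75
    ```

    The point of the split is that L1 deposits/withdrawals only ever touch the first component, while L2 transfers only ever touch the second, so the two flows never need to reconcile against each other.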
  • (The newest version of this document can always be found on hackmd or GitHub) We introduce Springrollup: a Layer 2 solution which has the same security assumptions as existing zk-rollups, but uses much less on-chain data. In this rollup, a sender can batch an arbitrary number of transfers to other accounts while only having to post their address as calldata, which is 6 bytes if we want to support up to 2^48 ~ 300 trillion accounts. As a by-product we also achieve increased privacy, since less user data is posted on-chain.

    General framework

    We start by introducing the general framework that we will use to describe the rollup. The rollup state is divided into two parts:

    On-chain available state: state with on-chain data availability. All changes to this state must be provided as calldata by the operator.
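    The calldata figure is easy to verify: a 48-bit account index fits in exactly 6 bytes and supports 2^48 (about 2.8 x 10^14, i.e. roughly 300 trillion) accounts.

    ```python
    # Hypothetical helper, just to check the 6-byte claim above.
    MAX_ACCOUNTS = 2 ** 48

    def index_to_calldata(index: int) -> bytes:
        """Encode an account index as big-endian calldata bytes."""
        assert 0 <= index < MAX_ACCOUNTS
        return index.to_bytes(6, "big")

    print(len(index_to_calldata(MAX_ACCOUNTS - 1)))  # 6
    print(MAX_ACCOUNTS)                              # 281474976710656
    ```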
  • This is the fifth update for my work in the CDAP.

    What I have done since last update

    Since my last update, I have been working on my zk-rollup concept, which allows for greatly reduced calldata usage while still preserving the same security as zk-rollups. The document is now expanded with more details about deposits, withdrawals, calldata estimates, and examples. Anyone familiar with how zk-rollups work should now hopefully be able to grasp the main idea in my proposal. The reason I haven't posted to ethresear.ch yet is that I want to implement an optimization first (see below).

    What I will do next

    I am currently working on an optimization which would allow even less calldata usage. The current proposal uses 8 bytes of calldata for a batch of up to 65536 transfers from the same sender, but with the extra optimization this will be reduced to 6 bytes for a batch of an unlimited number of transfers from the same sender. Once this is done, I will post the proposal on ethresear.ch to get some feedback. I will reach out to zk-people to try to collaborate on an implementation.
  • This is the fourth update for my work in the CDAP. Until now, I have mainly been focusing on analyzing the existing uses of the SELFDESTRUCT opcode in Ethereum. In the last couple of weeks, however, I have in addition started working on defining a new Layer 2 technique, which is similar to zk-rollups but requires less data on-chain.

    SELFDESTRUCT analysis

    When I wrote my last update, I was still waiting for feedback on whether the backward incompatibilities I found were critical enough to try preserving SELFDESTRUCT as is. Shortly after, I got feedback that this was probably not enough to outweigh the advantages of neutering SELFDESTRUCT, and that I should do more analysis on what might break when neutering SELFDESTRUCT.

    I also talked to the creator of Pine Finance, and it turned out that the situation was a bit different than I first thought. At first, I thought that the redeployed contracts were the result of a user using the same vault more than once, but I was told that the front-end creates new vault addresses each time, so this was not the case. Instead, the redeployed contracts were the result of an edge case where a user tries to cancel an order, but the call runs out of gas when trying to transfer the tokens in the vault. As a result, the call that tries to move the tokens in the vault reverts, but the vault still selfdestructs. If SELFDESTRUCT is neutered, the remaining tokens in the vault would be at risk in this edge case. Here is a list of redeployments of the same vault. As can be seen on Etherscan, all redeployments except the last were the result of a failed cancelOrder. It is still unclear to me whether this problem can be avoided in the front-end, or whether it is necessary to deploy a new version of PineCore. If it can be prevented in the front-end, the situation would not be so bad after all.
  • SELFDESTRUCT is an opcode that a contract can use to delete itself, meaning both code and storage are deleted from the state tree, and all ETH in the contract is sent to a specified address. It turns out that SELFDESTRUCT causes several complexities. Among other things, allowing contracts to selfdestruct causes complexities when switching to Verkle trees, and the current proposal for Verkle tries requires neutering the opcode. This means that the opcode is renamed to SENDALL, and the only thing it does is send the contract balance to the specified address.

    Goal of the project

    Neutering SELFDESTRUCT is a backward-incompatible change. The goal of this project is to analyze all existing uses of SELFDESTRUCT to evaluate the potential effects of this change.

    What has been done

    First, I downloaded the contract code of all contracts deployed on mainnet from the public BigQuery Ethereum dataset. This dataset was created using Ethereum ETL. I then created a PostgreSQL database where I inserted this data. I will use this database to store metadata about the contracts as I go along. Currently, I have added the following metadata:

    has_selfdestruct: Does the code have the SELFDESTRUCT opcode?
    selfdestruct_recipients: The potential recipients of the contract balance, extracted by static analysis (see below).
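    A flag like has_selfdestruct can be computed with a simple linear scan of the bytecode. The sketch below shows the idea, not the project's actual tooling: it looks for the SELFDESTRUCT opcode (0xFF) while skipping the immediate data of PUSH1..PUSH32 (0x60..0x7F), so that 0xFF bytes inside push arguments are not miscounted as instructions.

    ```python
    def has_selfdestruct(code: bytes) -> bool:
        """Return True if the bytecode contains a SELFDESTRUCT instruction."""
        i = 0
        while i < len(code):
            op = code[i]
            if op == 0xFF:          # SELFDESTRUCT
                return True
            if 0x60 <= op <= 0x7F:  # PUSH1..PUSH32: skip 1..32 immediate bytes
                i += op - 0x5F
            i += 1
        return False

    print(has_selfdestruct(bytes([0x60, 0xFF, 0x00])))  # False: 0xFF is PUSH data
    print(has_selfdestruct(bytes([0x33, 0xFF])))        # True: CALLER; SELFDESTRUCT
    ```

    Note that this can still report false positives when 0xFF bytes appear in non-code regions such as appended metadata, which is one reason to complement it with proper static analysis.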
  • This is the second update for my CDAP project, which is about analyzing the existing uses of the SELFDESTRUCT opcode in Ethereum.

    Progress since last update

    Since the last update, I have focused on analyzing existing uses of redeployed contracts, i.e. contracts deployed at the same address as a selfdestructed contract. Since redeploying a contract will no longer be possible if SELFDESTRUCT is neutered, this is an interesting use case to analyze. The analysis revealed that disabling or neutering SELFDESTRUCT opens a security risk to uninformed users of a contract used by Pine Finance. In short, users of this contract send tokens to a predetermined address called a vault. Relayers then trigger a function in the PineCore contract to create a new contract at this predetermined address using CREATE2, which executes a trade and then selfdestructs. Under the current behaviour, it is possible for a user to use the same vault several times. If SELFDESTRUCT is neutered, however, this would no longer be possible. In fact, if a user then tried to send tokens to a used vault, anyone could steal the tokens in the vault. The full analysis is hosted here.

    What to do next
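    The vault scheme works because CREATE2 addresses are deterministic: address = keccak256(0xff ++ deployer ++ salt ++ keccak256(init_code))[12:], so the same deployer, salt, and init code always yield the same address. The sketch below shows that structure; note that Python's standard library has no keccak-256, so sha3_256 is used here purely as a stand-in, and real addresses require keccak-256.

    ```python
    import hashlib

    def create2_address(deployer: bytes, salt: bytes, init_code: bytes) -> bytes:
        """Compute a CREATE2-style address (sha3_256 stand-in for keccak-256)."""
        assert len(deployer) == 20 and len(salt) == 32
        init_code_hash = hashlib.sha3_256(init_code).digest()
        preimage = b"\xff" + deployer + salt + init_code_hash
        return hashlib.sha3_256(preimage).digest()[12:]  # last 20 bytes

    addr = create2_address(b"\x11" * 20, b"\x00" * 32, b"\x60\x00")
    print(len(addr))  # 20
    ```

    This determinism is exactly why a vault can be "used again" today (redeploy at the same address after selfdestruct) and why that pattern breaks once SELFDESTRUCT is neutered.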
  • This is the third update for my CDAP project, which is about analyzing the existing uses of the SELFDESTRUCT opcode in Ethereum.

    Progress since last update

    Based on my analysis of the usage of SELFDESTRUCT, which revealed that neutering SELFDESTRUCT could harm certain DeFi users, I am leaning towards trying to preserve the functionality of SELFDESTRUCT when switching the Ethereum state to Verkle tries. To that end, I have modified the current Verkle tree proposal to support SELFDESTRUCT. My version is hosted here. It attempts to be the simplest possible modification that supports SELFDESTRUCT. The gas usage of contracts that are only deployed once should be almost unchanged, except that an additional slot, incarnation_number, in the contract header needs to be accessed. Gas usage of contracts that have reincarnated at least once is a bit higher, since each contract call needs to access at least two branches.

    In addition to implementing support for SELFDESTRUCT, I also simplified the original specification somewhat by splitting the least significant byte of each sub-tree index from the rest as part of the function get_tree_key. This allowed removing multiple instances of // 256 and % 256 in the spec, and also shortened the syntax of the access events from the form (address, sub_key, leaf_key) to the form (address, sub_key).

    What to do next
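    The index-splitting simplification can be illustrated as follows. This is a deliberately stripped-down sketch: the real get_tree_key also hashes the address and index into the stem, so only the splitting step is shown here.

    ```python
    def get_tree_key(tree_index: int) -> tuple[bytes, int]:
        """Split off the least significant byte of a tree index once,
        so callers never need to repeat // 256 and % 256 themselves."""
        stem = (tree_index // 256).to_bytes(31, "big")  # shared by 256 leaves
        suffix = tree_index % 256                       # position within the group
        return stem, suffix

    stem, suffix = get_tree_key(0x1234)
    print(hex(suffix))    # 0x34
    print(hex(stem[-1]))  # 0x12
    ```

    Returning the pair directly is what lets the access events shrink from (address, sub_key, leaf_key) to (address, sub_key): the leaf position is carried inside the split result instead of being recomputed at each use site.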