
High-level Overview

Overview

A plasma-like chain is described that, in the happy case, requires constant gas per state transition regardless of the number of transactions included. The operator is heavily constrained by a snark that limits their ability to update the state. The only attacks they can perform are to censor transactions or to make data unavailable. In the former case, users can exit their coins. In the latter, they may also exit and/or the operator can be replaced. In the happy case there is no need for users to monitor the chain for potential old-state exits.

High-level operation

Each leaf in the merkle tree represents a single non-fungible token. Ownership of the token can be transferred from one person to another by updating the public key in the leaf.

Transactions are submitted to the operator. The operator cannot forge transactions, as they are verified by the snark: a transaction changes the ownership of a leaf by providing a signature for the public key in that leaf.

The operator batches multiple transactions into a single on-chain merkle root update. Using a snark, they succinctly prove the update was done correctly; the smart contract verifies this proof before accepting the new root.

To maintain data availability, we use a hybrid approach. Firstly, we have an on-chain "priority queue"; the operator must answer requests in it within a given time limit. If they do not, we assume the data is unavailable. We then disable the operator and begin rolling back the chain until either everyone has withdrawn or a new operator is selected.

Terminology:

  • User - somebody who owns one or more leaves
  • Operator - who runs the zkSNARK prover and verifies the proof with the smart contract
  • Round / Epoch - a block mined on the side-chain
  • address_bits (leaf index) - uniquely identifies a leaf

Other plasma implementations

To be used as a reference, or to straight up borrow code from:

On-chain components

The 'Root' contract handles the following:

  • Set of 'exited' (nullified) leaves
  • List of rounds and their merkle roots
  • Queue for data availability requests
  • Queue for exit requests
  • Challenges to exit requests

Chain rollback on data unavailability

If the current operator fails to process the priority queue within the time limit, they are slashed and we begin exiting users, processing the most recent states first.

Unlike plasma, we know that each state is legal. So we can process the previous state without any time limits. We can also resurrect the chain from a previously known state with a different operator.

To do this we hold a reverse auction where, at each block, new operators can bid for the position; the highest bid in the most recent block is selected as the operator.

Adding an item to the queue requires a fee, and the fee must be burned so nobody can abuse the queue: if the fee were paid to the operator, the operator could abuse the queue for free. This way nobody has an advantage; anybody adding a request to the list has to pay the same amount.


General API

  • Is leaf nullified?

Is leaf nullified?

Has the leaf been 'exited'? Once exited, its value is zero (null, void, etc.).

function isLeafNullified(
    uint32 address_bits
) returns (bool);

User API

  • Submit exit request
  • Submit priority queue exit

Submit 'fast' exit proof

When the operator is compliant, a user can exit without challenge if the user proves ownership of an ethereum address in the leaf.

event ExitSuccessful(uint32 address_bits);

function userFastExit(
    uint256 root,
    uint32 address_bits,
    uint256[] path
);

To make a 'quick exit', the user submits a transaction which proves ownership of the ethereum address in the leaf.

                 LEAF
             ,-------^------,
             |              |
            LHS            RHS
    ,--------^---------,
    |                  |
  pubkey         address(exiter)

When msg.sender == exiter, the 'quick exit' is possible. This can be verified without checking the leaf signature.
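
The quick-exit check above can be sketched as follows. The hash, leaf layout, and all names here are illustrative assumptions; a real deployment would use the contract's ABI types and a snark-friendly hash.

```python
import hashlib


def H(*parts: bytes) -> bytes:
    # Illustrative hash; the real system would use a snark-friendly
    # hash such as Pedersen or MiMC inside the circuit.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()


def can_fast_exit(leaf: bytes, pubkey: bytes, exiter: bytes,
                  rhs: bytes, msg_sender: bytes) -> bool:
    # Recompute the leaf from its claimed preimage: the LHS commits to
    # the pubkey and the exiter's Ethereum address, RHS is arbitrary.
    # No leaf signature is needed: msg.sender == exiter is the proof.
    lhs = H(pubkey, exiter)
    return leaf == H(lhs, rhs) and msg_sender == exiter
```

Note that the contract only needs the preimage and the caller's address; the EdDSA signature check is skipped entirely on this path.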

The fast exit does not zero out the leaf immediately: the user requests a 'fast exit', but there is a delay window of 1 or 2 epochs before the exit can happen.

The merkle root itself is never updated on-chain by an exit; exited leaves are tracked in the nullifier set instead.

Submit priority queue exit request

Add leaf to the exit queue, for the 'slow path', where the normal plasma challenge window applies.

event ExitRequested( uint256 root, uint32 leaf_address );

function userRequestExit(
    uint256 root,
    uint32 address_bits,
    uint256[] path,
    uint256[] leaf_pubkey_xy,
    uint256 leaf_rhs,
    uint256 sig_S,
    uint256 sig_R
) payable;

Emits ExitRequested when arguments are validated.

The root must be the most recent root.

The address bits are used as the left/right selectors for the merkle path, and also uniquely identify the leaf index.

This confirms that the user owns the leaf (via sig_S and sig_R), that the root is a valid block, and that the leaf exists in that block.

This request is put into a queue. Only one exit request can exist for any one leaf, and it must be for the tip of the tree.

Operator API

  • Publish block
  • Respond to data availability request
  • Respond to exit request

Publish Block

function operatorPublish(
    uint256 previous_root,
    uint256 new_root );

Propose to quit being the operator

The operator wants to stop being the operator: they want their bond back, and they want another operator to gracefully take over.

TODO: specify API for operator replacement / roll-up.

The replacement cannot reduce the deposit for the operator; it must be equal or greater.

zkSNARK components

From a starting merkle root, apply each transaction in a sequence determined by the operator; this results in a new merkle root:

f(R, T) → R′

Each transaction T_0 … T_n is:

  • Operator hints
    • Leaf index
  • Public key (X,Y coordinate)
  • Old RHS
  • Signed data
    • New Leaf Value
      • H(LHS, RHS)
        • LHS = H(new_pubkey_x, new_pubkey_y)
        • RHS = arbitrary data?
    • Nonce - TBD, see Questions below
  • Signature
    • R point (X,Y coordinate)
    • S value (scalar)
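
As a non-normative sketch, the fields above could be grouped like this; the field names are illustrative, and the nonce semantics are still open (see Questions below).

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class Transaction:
    # Operator hints (not signed; they tell the prover which leaf to touch)
    leaf_index: int                  # address_bits
    # Preimage of the current leaf
    pubkey: Tuple[int, int]          # EdDSA public key (X, Y)
    old_rhs: int
    # Signed data
    new_leaf: int                    # H(H(new_pubkey_x, new_pubkey_y), new_rhs)
    nonce: int                       # exact semantics TBD (see Questions)
    # EdDSA signature
    sig_r: Tuple[int, int]           # R point (X, Y)
    sig_s: int                       # S scalar
```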

Notes:

  • Talk about replay protection in future

  • PhABC: shouldn't the nonce be replaced by "epoch to be included in"?

  • HaRold: The purpose is a Nonce, but I've outlined a few specific variants (see Questions section immediately below)

Signature validation

By providing the public key X/Y points and the old 'right-hand side', we prove that we know the preimage of the leaf.

Then by providing the valid signature we authorise its value to be changed to a new one.

assert leaf == Hash(Hash(Tx.pub.x, Tx.pub.y), Tx.old_RHS)
msg = H(nonce, H(leaf, Tx.new_leaf))
assert eddsa_verify(Tx.pub, msg, Sig.R, Sig.S)

Transaction creation

This is performed by the operator to create the merkle paths for each transaction.

Each transaction is applied in sequence to generate the merkle paths for the circuit to validate. These paths are provided as auxiliary input to the circuit.

tree = ...
nonce = ...
for T_i in T:
    require( T_i.validate(nonce) )
    T_i.path = tree.get_path(T_i.leaf_index)
    T_i.root_before = tree.root()
    tree.update( T_i.leaf_index, T_i.path, T_i.new_leaf )
    T_i.root_after = tree.root()
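
A minimal tree supporting the get_path / update calls above might look like this; sha256 stands in for the snark-friendly hash, and verify_path shows how the address bits select left/right at each level. All names are assumptions for illustration.

```python
import hashlib


def H(left: bytes, right: bytes) -> bytes:
    # Stand-in for the circuit's snark-friendly hash.
    return hashlib.sha256(left + right).digest()


class MerkleTree:
    """Fixed-depth binary tree over 2**depth leaves, all initially zero."""

    def __init__(self, depth: int):
        self.depth = depth
        self.leaves = [b"\x00" * 32] * (1 << depth)

    def root(self) -> bytes:
        level = self.leaves
        while len(level) > 1:
            level = [H(level[i], level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    def get_path(self, index: int):
        # Sibling hashes from the leaf level up to (not including) the root.
        path, level = [], self.leaves
        while len(level) > 1:
            path.append(level[index ^ 1])
            index >>= 1
            level = [H(level[i], level[i + 1])
                     for i in range(0, len(level), 2)]
        return path

    def update(self, index: int, new_leaf: bytes) -> bytes:
        self.leaves[index] = new_leaf
        return self.root()


def verify_path(leaf: bytes, index: int, path) -> bytes:
    # The address bits of `index` choose left/right at each level.
    node = leaf
    for sibling in path:
        node = H(node, sibling) if index & 1 == 0 else H(sibling, node)
        index >>= 1
    return node
```

A production prover would cache internal nodes instead of rehashing the whole tree, but the access pattern is the same.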

Questions

  • There are multiple options for what to use as a Nonce
    • Previous Merkle Root
      • Each signature can only be applied to a single block, and future signatures cannot be predicted; the user must stay online to ensure their transaction is processed
    • Round Sequence
      • A user can make signatures for N blocks in the future, which gives it N chances of being included in a block.
    • Per-leaf Nonce
      • Allows one-time key pairs to be derived from a long-term key
      • Signatures are valid for an infinite number of blocks as long as the one-time key pair is still the owner
      • Requires Nonce to be stored and incremented in the leaf
  • If only the leaf value is stored, and no other information, it may be possible for the owner of a leaf to lose access to it.
    • The operator must store the pubkey and RHS variables, and make the data available.
    • The operator must be able to prove this data is available, on-chain, upon request.

Leaf Format

TODO: update the leaf with ethereum address for cheap exit


                                        LEAF
                        +----------------^----------------+
                       LHS                               RHS (arbitrary data)
            +-----------^-----------+
            |                       |
  H(public_key_x, public_key_y)   Ethereum address

The leaf is then injected into a merkle tree.
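
Following the diagram above, leaf construction can be sketched as below; the hash choice and names are assumptions, with sha256 standing in for the snark-friendly hash.

```python
import hashlib


def H(*parts: bytes) -> bytes:
    # Stand-in for the circuit's snark-friendly hash.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()


def make_leaf(pubkey_x: bytes, pubkey_y: bytes,
              eth_address: bytes, rhs: bytes) -> bytes:
    # LHS commits to the EdDSA public key and the owner's Ethereum
    # address (the latter enables the cheap fast exit); RHS is free-form.
    lhs = H(H(pubkey_x, pubkey_y), eth_address)
    return H(lhs, rhs)
```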

A transaction updates a single leaf in the merkle tree. A transaction takes the following form.

1. Public key x and y point
2. The message which is defined as the hash of the old leaf and the new leaf. 

                                      MESSAGE
                        +----------------^----------------+
                     OLD_LEAF                          NEW_LEAF

3. the point R and the integer S. 

In order to update the merkle tree, the prover needs to aggregate together X transactions. For each transaction they:

  1. Take the merkle root as input from the smart contract (if it is the first iteration) or from the merkle root of the previous transaction.
  2. Find the leaf that matches the message in the merkle tree.
    NOTE: If there are two messages that match, both can be updated, as there is no replay protection. This should be solved on the next layer; this is simply the read and write layer, and we do not check what is being written here.
  • PhABC: isn't this solved by the nonce/epochToBeIncludedIn scheme?

  • HaRold: See under 'zkSNARK components', the 'Operator Hint', which tells them which leaf. If the signature signs the 'address bits' then each signature is unique to that specific leaf even if the same public key and RHS are used twice in the tree.

  3. Check that the signer's public key matches the owner of that leaf.
  4. Confirm that the signature is correct.
  5. Confirm that the leaf is in the merkle tree.
  6. Replace it with the new leaf and calculate the new merkle root.
  7. Continue until all transactions have been included in a snark.

Arbitrary data in the leaves

The RHS argument allows arbitrary data to be stored in the leaf; this could be a single value, or a merkle tree.

Examples

Priority queue

  • pop an item off the queue, FIFO
  • how many items from the top of the priority queue need to be processed?
  • increasing fee depending on the number of items in the queue, to prevent DoS?
  • if the head request of the queue has timed out, then the operator has timed out

Example exit using the priority queue

When data becomes unavailable, our first defense is the priority queue. Any user can enter the queue by calling the enter_priority_que function in the smart contract.

void enter_priority_que(uint merkle_tree_address, 
                        uint state, 
                        bytes32[tree_depth] merkle_proof, 
                        bytes32 leaf,
                        bytes32 authorization_signature,
                        bytes32 public_key_of_leaf, 
                        bytes32 rhs_of_leaf)

This function

  1. Checks the merkle_tree_address is not already in the priority_que; if it is, it uses the most recent state.
  2. Checks the merkle_tree_address has not already been exited.
  3. Confirms that the authorization_signature matches the public key of the leaf.
  4. Calculates the leaf as Hash(public_key_of_leaf, rhs_of_leaf).
  5. Checks that the merkle proof of the leaf results in the merkle root of the state.
  6. Sets this priority queue timeout to now + deadline.
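
These steps can be mirrored in a small sketch; the in-memory dict, the boolean stand-ins for the signature and merkle-proof checks, and the deadline value are all assumptions for illustration.

```python
import time

DEADLINE = 24 * 3600  # assumed response window, in seconds


def enter_priority_queue(queue, nullified, leaf_index, state,
                         sig_ok, proof_ok, now=None):
    # `queue` maps leaf index -> pending request; `nullified` is the set
    # of exited leaves; `sig_ok` / `proof_ok` stand in for steps 3-5.
    now = time.time() if now is None else now
    existing = queue.get(leaf_index)
    if existing is not None and existing["state"] >= state:
        return False                    # 1. keep the most recent state
    if leaf_index in nullified:
        return False                    # 2. already exited
    if not (sig_ok and proof_ok):
        return False                    # 3-5. validation failed
    queue[leaf_index] = {"state": state, "timeout": now + DEADLINE}
    return True                         # 6. timeout armed
```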

Now the operator needs to respond to this request before the time reaches timeout. If they don't respond, we assume data is unavailable and we enter the rollback mechanism. See next example.

They do so by calling clear_priority_que:

void clear_priority_que(uint que_entry, 
                        bytes32[tree_depth] merkle_proof)

  1. We take the que_entry element from the priority queue.
  2. We check that the merkle_proof provided with priority_que[que_entry].leaf and priority_que[que_entry].address results in the current merkle root.
  3. We remove the balance of this leaf from the snark and allow the user to withdraw it.
  4. We set a nullifier for that leaf.

If an element in the priority queue has not been processed after its timeout has expired, anyone can call the kill_operator(uint que_entry) function.

It checks that the time limit has elapsed on that request, slashes the current operator and activates the roll back mechanism.
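
A sketch of that check, with the slashing and rollback actions injected as callbacks (all names here are assumed, not part of the spec):

```python
def kill_operator(queue, queue_entry, now, slash, start_rollback):
    # Anyone can call this; it only succeeds once the request's timeout
    # has lapsed, at which point the operator is slashed and the
    # rollback mechanism is activated.
    request = queue[queue_entry]
    if now < request["timeout"]:
        return False           # request not yet expired
    slash()                    # slash the current operator's bond
    start_rollback()           # begin rolling back the chain
    return True
```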

HaRold: This is very similar to what I described with operatorAvailability (being equivalent to clear_priority_queue). However, you need the operator to provide both the public key and the RHS, all information that it knows; otherwise it can still make data unavailable - e.g. without the RHS, if you've forgotten it, then you can't possibly check if it's one of your public keys.

Example rollback mechanism

If the operator fails to process requests in the priority queue, the system is paused for a long period of time, during which users can either bid in an open auction to become the new operator starting at some epoch, or submit exit requests.

The pre-rollback time period can be split into 2 phases:

  • Operator Bidding Phase
  • Exit Request Phase

Operator Bidding Phase

During this phase, a user can request to be the new operator by calling

bid_new_operator( uint state, bytes32[tree_depth] merkle_proof, bytes32 leaf )

They pass state, which is the state they want to start operating from, as well as a deposit.

They must also provide a valid proof of knowledge of the data that resulted in the previous operator being slashed.

PhABC: should be clarified. Also, why do we need a zk-proof? Surely this is something the smart-contract can handle?

For each request we check whether the state is newer than that of the current highest bid, and whether their bid is higher than the current highest bid. If either is true, we replace the current highest bidder with them.

PhABC: These should be bigger or equal. A higher bid on the same state is a valid new bid; the same goes for the same bid on a higher epoch.

At the end of this phase all users should know the epoch at which a rollback will end, and they can use that information to decide whether or not to submit an exit request in the subsequent phase (if a user knows that the rollback will end before the epoch at which it received a leaf, then perhaps the user chooses not to submit an exit request at all).
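
The bid comparison, as literally described above, can be sketched as follows (PhABC's note about allowing ties applies; names are illustrative):

```python
def is_better_bid(new_state, new_deposit, best_state, best_deposit):
    # A bid replaces the current best if it starts from a newer state
    # or offers a higher deposit. (Arguably these should be >=, per the
    # review comment above.)
    return new_state > best_state or new_deposit > best_deposit
```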

Exit Request Phase

During this phase, a user can submit an exit request by calling

roll_back_exit(uint merkle_tree_address, uint state, bytes32[tree_depth] merkle_proof, bytes32 leaf, bytes32 authorization_signature, bytes32 public_key_of_leaf, bytes32 rhs_of_leaf )

  1. The contract validates that these requests are valid, as in "Example exit using the priority queue".
  2. It confirms that the leaf has not been exited.
  3. It confirms that the leaf is not already in the exit requests. If it is, it will use the request with the latest state.

Rolling back

Then we start to roll back the chain. We step back the state to the previous one. We then

  1. Process all exit requests
  2. Check if we have a new operator

We continue this until we either have a new operator or roll the state back to state zero.
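
The loop above can be sketched as follows, under assumed data shapes: a list of states (oldest first), exit requests grouped by the state they were filed against, and the winning operator bid (if any) per state.

```python
def roll_back(states, exit_requests, new_operator_for):
    # Walk states from newest to oldest; at each step process the exits
    # filed against that state, then check for a winning operator bid.
    exited = set()
    for state in reversed(states):
        exited.update(exit_requests.get(state, []))
        operator = new_operator_for.get(state)
        if operator is not None:
            return state, operator, exited   # chain resumes from here
    return 0, None, exited                   # rolled back to state zero
```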