Native Rollup Deposits by Passing L1 Context

Native rollups must at some point consume data from L1. The most straightforward example is deposits, but there might be many other use cases.

Then, the question is: How can the native rollup access L1 data in a verifiable way?

Implementing Deposits by Passing Arbitrary L1 Context

tl;dr: The L1 rollup contract constructs l1Context (a subset of L1 state) in memory and passes it to EXECUTE. L2 contracts can read this data and process it in any way.

We add a new argument l1Context to EXECUTE:

// on L1

interface IExecutePrecompile {
    function execute(
        bytes32 payloadPointer,
        bytes32 preStateRoot,
        bytes32 postStateRoot,
        uint256 gasUsed,
        bytes memory l1Context
    ) external returns (bool success);
}

In the L1 rollup contract, we construct the L1 context as a list of deposit hashes and pass it to EXECUTE.

After verifying the batch, we do a state read from L2 to see how many messages were actually processed in this batch.

// on L1

contract DepositQueue {
    uint256 public nextQueueIndex;
    uint256 public firstPendingQueueIndex;
    
    mapping (uint256 => bytes32) public queue;
    
    function deposit(address from, address to, uint256 value, bytes calldata data) external payable {
        // lock the deposited ETH on L1 (it is released from the pre-minted supply on L2)
        require(msg.value == value, "Value mismatch");
        uint256 queueIndex = nextQueueIndex;
        queue[queueIndex] = _computeDepositHash(queueIndex, from, to, value, data);
        nextQueueIndex += 1;
    }
    
    function _computeDepositHash(
        uint256 queueIndex,
        address from,
        address to,
        uint256 value,
        bytes calldata data
    ) internal pure returns (bytes32) {
        return keccak256(abi.encode(queueIndex, from, to, value, data));
    }
}

contract L1Rollup is DepositQueue {
    bytes32 public l2StateRoot;

    function proposeBlock(
        bytes32 preStateRoot,
        bytes32 postStateRoot,
        uint256 gasUsed,
        uint256 lastProcessedIndex,
        bytes calldata l2BridgeMerkleProof
    ) external {
        // construct L1 context: a list of pending deposit message hashes
        bytes memory l1Context;
        
        // append `uint16(firstPendingQueueIndex)` to l1Context
        // ...

        uint256 end = Math.min(firstPendingQueueIndex + NUM_MSGS_PER_BATCH, nextQueueIndex);
        for (uint256 ii = firstPendingQueueIndex; ii < end; ii++) {
            bytes32 msgHash = queue[ii];
            // append `msgHash` to l1Context
            // ...
        }

        // verify state transition
        bool success = IExecutePrecompile(EXECUTE_ADDRESS).execute(
            blobhash(0), // l2 tx data
            preStateRoot,
            postStateRoot,
            gasUsed,
            l1Context // pass context to EXECUTE
        );

        require(success, "State transition verification failed");

        // extract the last processed queue index from L2 state
        // (simplified: a real implementation proves the L2Bridge storage slot against postStateRoot)
        bytes32 leaf = keccak256(abi.encodePacked(lastProcessedIndex));
        success = MerkleProof.verify(l2BridgeMerkleProof, postStateRoot, leaf);
        require(success, "Merkle proof verification failed");

        // enforce invariants: must relay at least N messages
        uint256 numPendingMessages = nextQueueIndex - firstPendingQueueIndex;
        uint256 numRelayedMessages = lastProcessedIndex + 1 - firstPendingQueueIndex;

        require(
            numRelayedMessages >= Math.min(numPendingMessages, MIN_MESSAGE_RELAY),
            "Not enough messages relayed"
        );

        // update L1 state
        l2StateRoot = postStateRoot;
        firstPendingQueueIndex = lastProcessedIndex + 1;
    }
}
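The relay-progress invariant enforced at the end of proposeBlock is simple arithmetic, so it can be sanity-checked outside the EVM. A minimal Python sketch (variable names mirror the Solidity above; MIN_MESSAGE_RELAY is an illustrative protocol parameter, and lastProcessedIndex is taken to be the inclusive index of the last relayed message, matching `firstPendingQueueIndex = lastProcessedIndex + 1`):

```python
MIN_MESSAGE_RELAY = 4  # illustrative protocol parameter

def relay_invariant_holds(next_queue_index: int,
                          first_pending_queue_index: int,
                          last_processed_index: int) -> bool:
    """Model of the require() at the end of proposeBlock."""
    # `last_processed_index` is the inclusive index of the last relayed message
    num_pending = next_queue_index - first_pending_queue_index
    num_relayed = last_processed_index + 1 - first_pending_queue_index
    # a batch must relay at least MIN_MESSAGE_RELAY messages,
    # or all pending messages if fewer than that are queued
    return num_relayed >= min(num_pending, MIN_MESSAGE_RELAY)
```

This makes the liveness property explicit: a proposer cannot indefinitely censor deposits, because a batch that relays too few pending messages is rejected.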

On L2, we add a new precompile that allows reading from l1Context (essentially a new data location). L2 contracts can read arbitrary data; the format, and how the data is processed, varies by rollup.

// on L2

interface IL1ContextPrecompile {
    function read(uint256 offset, uint256 len) external view returns (bytes memory);
}
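To make the context layout concrete: the offsets used in the contracts imply a 2-byte big-endian firstPendingQueueIndex followed by tightly packed 32-byte message hashes. A hedged Python sketch of this assumed encoding, with a `read` helper mimicking the precompile:

```python
import struct

def encode_l1_context(first_pending_queue_index: int, msg_hashes: list) -> bytes:
    """Pack the context: 2-byte big-endian index, then 32-byte hashes."""
    out = struct.pack(">H", first_pending_queue_index)
    for h in msg_hashes:
        assert len(h) == 32
        out += h
    return out

def read_l1_context(context: bytes, offset: int, length: int) -> bytes:
    """Mimics IL1ContextPrecompile.read(offset, len)."""
    return context[offset:offset + length]

def msg_hash_at(context: bytes, queue_index: int) -> bytes:
    """Look up the hash for `queue_index`, as L2Bridge.relay does."""
    first_pending = struct.unpack(">H", read_l1_context(context, 0, 2))[0]
    slot = queue_index - first_pending
    return read_l1_context(context, 2 + slot * 32, 32)
```

The 2-byte prefix matches the `uint16(firstPendingQueueIndex)` comment in proposeBlock; a production design would likely use a wider type for the index.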

Relayers/sequencers can execute deposits by calling L2Bridge.relay, using any transaction type.

We assume that the ETH total supply is pre-minted to L2Bridge. This way we do not need to add a new transaction type to mint ETH.

// on L2

contract L2Bridge {
    uint256 lastRelayedQueueIndex;

    function relay(
        uint256 queueIndex,
        address from,
        address to,
        uint256 value,
        bytes calldata data
    ) external {
        // read `firstPendingQueueIndex` from L1 context (2-byte big-endian prefix)
        uint256 firstPendingQueueIndex = uint16(bytes2(IL1ContextPrecompile(L1_CONTEXT_ADDRESS).read(0, 2)));

        require(queueIndex == lastRelayedQueueIndex + 1, "Invalid queue index");
        require(queueIndex >= firstPendingQueueIndex, "Invalid queue index");

        // read `msgHash` from L1 context
        uint256 offset = queueIndex - firstPendingQueueIndex;
        bytes32 l1DepositHash = bytes32(IL1ContextPrecompile(L1_CONTEXT_ADDRESS).read(2 + offset * 32, 32));

        // compute deposit hash
        bytes32 l2DepositHash = _computeDepositHash(queueIndex, from, to, value, data);

        // verify deposit
        require(l1DepositHash == l2DepositHash, "Unexpected deposit message");

        // update bridge state
        lastRelayedQueueIndex = queueIndex;

        // execute call (the call result is not acted upon in this sketch)
        (bool success, ) = payable(to).call{value: value}(data);
    }
}
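The checks in L2Bridge.relay can be modeled end-to-end in Python. This is a hedged sketch, not the contract: hashlib.sha3_256 over a simplified packing stands in for keccak256(abi.encode(...)), the context layout is the 2-byte-prefix-plus-hashes format assumed above, and lastRelayedQueueIndex starts at -1 so that the first relayed index is 0:

```python
import hashlib
import struct

def deposit_hash(queue_index: int, from_addr: bytes, to_addr: bytes,
                 value: int, data: bytes) -> bytes:
    # stand-in for keccak256(abi.encode(...)); real keccak256 needs a third-party lib
    packed = (struct.pack(">Q", queue_index) + from_addr + to_addr
              + struct.pack(">Q", value) + data)
    return hashlib.sha3_256(packed).digest()

class L2BridgeModel:
    def __init__(self):
        self.last_relayed_queue_index = -1  # nothing relayed yet

    def relay(self, context: bytes, queue_index: int, from_addr: bytes,
              to_addr: bytes, value: int, data: bytes) -> None:
        first_pending = struct.unpack(">H", context[0:2])[0]
        # deposits must be relayed strictly in order, and only pending ones
        assert queue_index == self.last_relayed_queue_index + 1, "Invalid queue index"
        assert queue_index >= first_pending, "Invalid queue index"
        # recompute the hash from the relayed fields and compare with L1's
        slot = queue_index - first_pending
        l1_hash = context[2 + slot * 32 : 2 + slot * 32 + 32]
        assert l1_hash == deposit_hash(queue_index, from_addr, to_addr,
                                       value, data), "Unexpected deposit message"
        self.last_relayed_queue_index = queue_index
        # (ETH transfer / call execution omitted in this model)
```

Because the hash binds queueIndex, sender, recipient, value, and calldata, a relayer cannot alter any field of a deposit: the recomputed hash would no longer match the one L1 placed in the context.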

Discussion

Pros of this approach:

  • Flexible: We can pass any subset of L1 state, and it is up to L2 EVM logic to interpret and process the data.
  • Verifiable: ZK verification is possible. The hash of the arguments to EXECUTE would be the public input to the proof.

Cons:

  • Overhead: We need to store some data in L1 state.
  • Limited context: We can only pass data available to the L1 EVM: L1 state, block-level context (block number, block hash, timestamp), and call arguments. Historical ledger data (e.g. receipts) is not available.

Alternative Approaches

Option 2. Pass the L1 block hash.

The L1 block hash (or state root) can be part of the payload. Then, any L2 EVM contract inside the native rollup can query parts of the L1 state or ledger.

Cons:

  • L2 batch must provide proofs (header preimages, MPT proofs), which could take up a significant portion of the blob.
  • Historical lookups (e.g. a receipt 100k blocks ago) are prohibitively expensive.

Option 3. Native rollup node fetches data through standard JSON-RPC.

Since native-geth can fully trust geth, it can simply fetch any data using standard RPC calls. As long as we define what's allowed (e.g. only allow querying the recent 100 blocks), this is deterministic.

Cons:

  • There is no root of trust; ZK verification is challenging or impossible.

Option 4. L1 rollup contract constructs batch with deposits.

Process deposits by constructing payload inside L1 EVM and passing it to EXECUTE.

// on L1

function proposeBlock(/* ... */) external {
    bytes memory depositPayload;
    
    // construct `depositPayload` by reading from L1 state
    // ...
    
    // verify deposits, block proposer pays for this
    // (here EXECUTE is assumed to accept an in-memory payload instead of a blob hash)
    bool success = IExecutePrecompile(EXECUTE_ADDRESS).execute(
        depositPayload,
        preStateRoot,
        postStateRoot1,
        gasUsed1
    );
    
    require(success);
    
    // verify L2 transactions
    success = IExecutePrecompile(EXECUTE_ADDRESS).execute(
        blobhash(0), // l2 batch
        postStateRoot1,
        postStateRoot2,
        gasUsed2
    );
    
    require(success);
    
    l2StateRoot = postStateRoot2;
}

Cons:

  • Overhead: Need to store full deposit message data in L1 state.