Introduction

The State circuit iterates over the random read-write access records of the EVM circuit to verify that each piece of data stays consistent between writes.

To verify that data is consistent, it first verifies that all access records are grouped by their unique identifier and sorted by order of access - a sequential counter named rw_counter - then verifies that the records between writes are consistent. It also verifies that the data is in the correct format.

It then serves as a lookup table for the EVM circuit to do consistent random read-write access.

Different models

To build a complete random read-write access model, we begin with the simplest one and extend it step by step with the features we need, for easier understanding.

Read-only array

Let's begin with the simplest model: say there is a VM with access to a read-only array as random access memory [1], which consumes a program as witness for execution.

Say the memory size could be up to 2^64 - 1. Since the index of each memory access is a witness in the program, it is impractical for the VM itself to track such a large chunk of witnesses and use a giant multiplexer to select the value at the index.

But without tracking witnesses, how do we know the random access memory is consistent in the execution trace? In PLONK (or RAP) we can introduce a challenge as randomness to achieve some "global" property, for example that a poly f_a is a shuffle of f_b, which is exactly what we need here. With such a shuffle argument [2], we can verify that the random access memory is consistent.
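As a toy illustration (not the halo2 implementation), a shuffle of two columns can be checked with a grand product over a prime field: draw a random challenge after both columns are committed, and compare the products of all shifted values. The function name `is_shuffle` and the small Mersenne prime are assumptions for this sketch; real circuits use a ~254-bit field.

```python
P = (1 << 61) - 1  # a toy Mersenne prime; real circuits use a ~254-bit field

def is_shuffle(fa, fb, gamma):
    """Accepts iff fa and fb contain the same multiset of values.

    gamma is a random challenge drawn after both columns are committed;
    two different multisets collide only with negligible probability.
    """
    prod_a = 1
    prod_b = 1
    for a in fa:
        prod_a = prod_a * ((a + gamma) % P) % P
    for b in fb:
        prod_b = prod_b * ((b + gamma) % P) % P
    return prod_a == prod_b

# A reordered column passes; a tampered one fails.
assert is_shuffle([3, 1, 2], [1, 2, 3], gamma=123456789)
assert not is_shuffle([3, 1, 2], [1, 2, 4], gamma=123456789)
```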

For simplicity, we ask the prover to put memory index and value in separate polys f_index and f_value, then let the prover read the value at the index directly without any constraint.

Then, with f_index and f_value committed, we ask the prover to provide shuffled versions of them, f'_index and f'_value [3], and check 2 things:

  1. Indices in f'_index are sorted (either ascendingly or descendingly) [4]
  2. Values in f'_value remain the same as long as the index remains the same

Both checks can be implemented with purely "local" constraints, without tracking all witnesses at a single gate, thanks to the shuffle argument.
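The two checks above can be sketched as a pass over adjacent rows of the sorted columns. This is a list-based toy, not circuit constraints; the function name `check_read_only` is an assumption.

```python
def check_read_only(sorted_index, sorted_value):
    """Toy version of the two local checks on f'_index and f'_value."""
    for i in range(1, len(sorted_index)):
        # 1. Indices must be sorted (ascending here).
        if sorted_index[i] < sorted_index[i - 1]:
            return False
        # 2. The value must not change while the index stays the same.
        if sorted_index[i] == sorted_index[i - 1] and \
                sorted_value[i] != sorted_value[i - 1]:
            return False
    return True

assert check_read_only([0, 0, 1, 2], [7, 7, 9, 4])
assert not check_read_only([0, 0, 1], [7, 8, 9])  # same index, different value
```

Note that each iteration only looks at one row and its neighbor, which is exactly what makes the checks "local".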

Read-write array

Most VMs, such as the EVM, are not designed around a read-only array model. Since in our context we want to stay as close to the EVM's original behavior as possible (not design a whole new VM to emulate the EVM), we need read-write access to the stack, memory, storage, etc.

But instead of extending the read-only array model to support all of these at once, let's begin by extending the read-only memory with a write feature.

To support writes, we add another poly f_is_write to let the program specify whether an access is a write. We also ask the prover to provide its shuffle f'_is_write; the checks now become:

  1. Indices in f'_index are sorted (either ascendingly or descendingly)
  2. As long as the index remains the same:
    2.1. Values in f'_value remain the same when f'_is_write is false
    2.2. Values in f'_value could be any value when f'_is_write is true

But there is a problem: the prover can make the VM read a value that is actually written in the future, and the shuffled polys will still pass the constraints, because there is no constraint on how the prover orders accesses to the same index within the shuffle.

This issue pops up because when we say "something is updatable", we are actually saying "something is updatable over time"; the concept of time naturally comes to play an important role.

To resolve this issue, we introduce another poly f_rw_counter [5], and ask the prover to increase this counter every time it does a random access. Then we add another constraint before point 2.1:

  2. As long as the index remains the same:
    2.0. Counters in f'_rw_counter are sorted ascendingly
    2.1. (as before)
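Putting the three checks together, the full read-write verification can be sketched over rows of (index, rw_counter, is_write, value), already sorted by (index, rw_counter). The function name and tuple layout are assumptions for illustration.

```python
def check_read_write(rows):
    """Toy version of the read-write checks with rw_counter.

    rows: list of (index, rw_counter, is_write, value) tuples that the
    prover claims are sorted by (index, rw_counter).
    """
    for prev, cur in zip(rows, rows[1:]):
        p_idx, p_ctr, _, p_val = prev
        c_idx, c_ctr, c_is_write, c_val = cur
        if c_idx < p_idx:
            return False          # 1.   indices are sorted
        if c_idx == p_idx:
            if c_ctr <= p_ctr:
                return False      # 2.0  rw_counter strictly increases per index
            if not c_is_write and c_val != p_val:
                return False      # 2.1  a read must return the last value
            # 2.2  a write may set any value
    return True

rows = [
    (0, 1, True, 5),   # write 5 at index 0
    (0, 3, False, 5),  # read back 5
    (0, 4, True, 8),   # overwrite with 8
    (1, 2, True, 9),   # write at another index
]
assert check_read_write(rows)
# Reading a value "from the future" now fails the counter ordering:
assert not check_read_write([(0, 3, True, 5), (0, 1, False, 5)])
```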

Read-write dictionary

Finally, because the EVM accesses several kinds of data during the execution of a block, we need a dictionary rather than an array for easier usage.

Since we are operating on a large field, which has 254 bits to carry information, a naive approach is to encode as much information into f_index as possible, semantically turning f_index into f_key. For example, if we only had stack and memory, we could reserve the first bit as a domain separator to specify which kind of data we are accessing, set the following 24 bits as the call id, and set the following 40 bits as the index into the stack or memory.

Another approach is to introduce more polys and add more layers of sorting (e.g. add f_tag and f_call_id and sort in the order f_tag, f_call_id, f_index).

Or we could even directly sort the random linear combination of these polys as the key (e.g. f'_key = sorted(f_tag + β·f_call_id + β²·f_index)), but it might become annoying and more expensive to sort field elements instead of values with a restricted range (e.g. 40 bits).
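The two key-building approaches can be sketched as follows. The bit widths (1-bit tag, 24-bit call id, 40-bit index) follow the example above; the helper names `pack_key` and `rlc_key` are assumptions for illustration.

```python
def pack_key(tag, call_id, index):
    """Bit-packing approach: squeeze all keys into one field element,
    well within the 254 bits a field element can carry."""
    assert tag < (1 << 1) and call_id < (1 << 24) and index < (1 << 40)
    return (tag << 64) | (call_id << 40) | index

def rlc_key(tag, call_id, index, beta, p):
    """Random-linear-combination approach: combine the key columns with
    a challenge beta; the result is a (hard-to-range-check) field element."""
    return (tag + beta * call_id + beta * beta * index) % p

# The domain-separator bit keeps different data kinds from colliding:
assert pack_key(1, 7, 100) != pack_key(0, 7, 100)
```

Packing keeps the key in a restricted range (65 bits here), which makes the sorted-order check cheap; the RLC variant needs no bit budgeting but yields full-width field elements, which is exactly why sorting it is more expensive.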

Implementation

Data kinds

Here is the list of the different kinds of data the EVM can access during execution.

| Key 1 (Tag) | Key 2 | Key 3 | Key 4 | Value 1 | Aux 1 | Aux 2 |
| --- | --- | --- | --- | --- | --- | --- |
| TxAccessListAccount | tx_id (24) | account_address (160) | | is_warm | | |
| TxAccessListAccountStorage | tx_id (24) | account_address (160) | storage_key (rlc) | is_warm | | |
| TxRefund | tx_id (24) | | | value | | |
| Account | account_address (160) | field_tag (~) | | value | | |
| AccountStorage | account_address (160) | storage_key (rlc) | | value | tx_id (24) [1] | committed_value |
| CallContext | call_id (24) | field_tag (~) | | value | | |
| Stack | call_id (24) | stack_pointer (10) | | value | | |
| Memory | call_id (24) | memory_address (40) | | byte | | |
Note:
TxLog and AccountDestructed are complicated, so they are ignored for now.
The number following a key is the max amount of bits the key could contain.
~ means continuous integer.
rlc means random linear combination of an EVM word, and it's a field element.
Also:
1. Since the gas cost function of SSTORE takes not only the current and previous value, but also the original value finalized in the previous transaction, we need to track the last value as committed_value when tx_id changes.

Currently the implementation tries to have each key as a column, and verifies they are sorted by checking that the difference from the previous row is positive (within the key's bit range). But we never have such a large range lookup table, so using the running sum trick to verify it by chunks seems inevitable. As for the random linear combination, we can decompose it into 32 bytes and then compare.
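The chunked range check can be sketched as follows: instead of one huge lookup table for, say, 40-bit differences, verify each 8-bit chunk against a small table. The chunk size and the name `in_range` are assumptions for illustration; in a circuit the "membership test" is a lookup and the recomposition is enforced by a running sum.

```python
CHUNK_BITS = 8
CHUNK_TABLE = set(range(1 << CHUNK_BITS))  # the only lookup table we need

def in_range(diff, n_bits=40):
    """Check 0 <= diff < 2**n_bits using only 8-bit chunk lookups."""
    if diff < 0:
        return False
    chunks = []
    for _ in range(n_bits // CHUNK_BITS):
        chunks.append(diff & ((1 << CHUNK_BITS) - 1))
        diff >>= CHUNK_BITS
    # every chunk must be in the small table, and nothing may be left over
    return diff == 0 and all(c in CHUNK_TABLE for c in chunks)

assert in_range((1 << 40) - 1)   # largest 40-bit value passes
assert not in_range(1 << 40)     # a 41-bit value leaves a nonzero remainder
```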

Property of different kinds

Different kinds of data also have different properties we might need to take extra care of:

| Tag | Initial value | Reversible [1] | Write only persistent [2] |
| --- | --- | --- | --- |
| TxAccessListAccount | 0 [3] | Yes | |
| TxAccessListAccountStorage | 0 | Yes | |
| TxRefund | 0 | | Yes |
| Account | any [4] | Yes | |
| AccountStorage | any | Yes | |
| CallContext | any [5] | | |
| Stack | any [6] | | |
| Memory | 0 [7] | | |
Note:
1. When data is reversible, every write access to it will likely be accompanied by a reversion write access in the future (with a larger rw_counter and the value before the write).
2. Some reversible data will not affect the further execution of a tx, like TxRefund, TxLog and AccountDestructed (SELFDESTRUCT takes effect at the end of the tx). So they will only be written when the call is persistent.
3. TxAccessList* and TxRefund are always operated on with a diff, so their initial values are constrained to be 0.
4. Account*'s initial value will be verified against the previous state/storage trie, but how to connect this with the MPT circuit is yet to be determined.
5. CallContext is always operated properly, so there is no specific initial value, and we allow it to be read before being written.
6. Stack is always operated by push and pop as defined in the circuit, so a read-without-write situation cannot occur.
7. Memory can be read without a prior write because memory is always expanded implicitly before execution, so its initial values are constrained to be 0.

Reversible data

To achieve full equivalence with EVM, we need to support reversion.

In the EVM circuit, the current approach to support reversion is to ask the prover to tell us whether the state updates in a call are persistent, and if not, the rw_counter at the end of the reversion, so the state update can be reverted at the same step.

So every time we encounter a state update, we need to read the previous value first and then do the write, in case we need reversion. To simplify this procedure, we can use a rotation of f_value to access the value at the previous row at the same time.
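The pairing of a write with its scheduled reversion can be sketched as below. This is a toy model: the function name `write_with_reversion` and the record layout are assumptions, and a real implementation also adjusts the reversion counter per write.

```python
def write_with_reversion(rws, rw_counter, key, new_value,
                         is_persistent, rw_counter_end_of_reversion, state):
    """Record a state write; if the call is not persistent, also record
    the reversion write that restores the previous value later."""
    prev_value = state.get(key, 0)           # read the previous value first
    rws.append((rw_counter, key, new_value)) # the write itself
    state[key] = new_value
    if not is_persistent:
        # schedule a reversion write with a larger rw_counter and the
        # value observed before this write
        rws.append((rw_counter_end_of_reversion, key, prev_value))
    return rw_counter + 1

rws, state = [], {}
write_with_reversion(rws, 1, "slot", 42, is_persistent=False,
                     rw_counter_end_of_reversion=10, state=state)
assert (1, "slot", 42) in rws   # the write happened
assert (10, "slot", 0) in rws   # the reversion restores the initial 0
```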

Uniqueness of rw_counter

The way we connect the EVM circuit and the State circuit is going to use the shuffle argument [6]. However, this alone doesn't ensure that the rw_counter values in the State circuit are unique, because in the EVM circuit we might have some unconstrained cells (unused cells) that can be loaded with any value, so we need to make sure they are filled with 0 (the blue part in the illustration below) with some constraint.

Another approach to make sure the blue part doesn't show up in the State circuit is to introduce an extra advice column flag_rw, which indicates whether a cell should exist in the State circuit. To give this flag_rw a static cost, we can use the "countdowner trick", which requires 2 extra advice columns with a few constraints.

Previously, the way to make sure rw_counter is unique in the State circuit was to count how many rows in the State circuit have rw_counter != 0, accumulate this value, populate it as a public input, and share it with the EVM circuit. This approach requires 2 extra columns and 1 extra public input.

It is not yet clear which approach has less complexity; we are still exploring. But in regard to aggregation, the fewer public inputs the better (even better if public inputs are not shared between circuits).

References


  1. Cairo also chooses to use a read-only memory model, and Ch8.5 - Read-write memory explains things very clearly; I would recommend having a look. ↩︎

  2. Or "multiset check" as described in Ariel's writeup. I say "shuffle argument" because I'm afraid "permutation argument" might sound like the one used in PLONK, which I think is an abbreviation of "permutation argument with identity map", so I use "shuffle argument" to avoid ambiguity.

    halo2 uses both a "permutation argument with identity map" and a "permutation argument without identity map"; the former is also called a copy constraint, and the latter is used as a sub-argument in both the copy constraint and the lookup. ↩︎

  3. To make sure f'_index and f'_value are shuffled in the same way, we can use the random linear combination trick. For example, we check whether f = f_index + α·f_value is a shuffle of f' = f'_index + α·f'_value, instead of checking them separately, where α is randomness derived from the transcript. ↩︎

  4. Cairo uses an even simpler model: it requires that memory is accessed continuously (without any skips), so the first check becomes "Indices in f'_index are continuous". See Ch9.7 - Nondeterministic continuous read-only memory for the concrete constraints. ↩︎

  5. It was previously named global_counter, but since there might be more counters in the circuit, to be more precise I try to name it rw_counter. (I also considered the name timestamp, but was afraid it would be confused with the timestamp in the block header.) ↩︎

  6. Interestingly, halo2 doesn't provide the shuffle argument as an external API, even though it already uses one in the subset argument. So when we want to implement a shuffle argument, we will need a new API to query an extra challenge from the transcript after we have committed both circuits (see implementation and other potential usages if we have such an API). ↩︎
