zkEVM Proof Chunk via RW permutation check without re-ordering
========

### Change History

| Date       | Memo                                              | Editor         |
| ---------- | ------------------------------------------------- | -------------- |
| 2023-09-22 | second version                                    | @rrtoledo      |
| 2023-09-19 | requirement from Arantxa to expand a few details  | @ArantxaZapico |
| 2023-09-19 | address comments; address EVM step state chunking | @sungmingwu    |
| 2023-09-07 | add option for challenges                         | @sungmingwu    |
| 2023-09-06 | updated docs: use simpler permutation fingerprint | @sungmingwu    |
| 2023-09-05 | new idea proposed                                 | @ed255         |
| 2023-09-04 | first version                                     | @sungmingwu    |

### Complexity

A simple summary of the extra cost of using permutation fingerprints to address set equality:

> [name=Edu Is it fair to call this a "lookup scheme"? I think it can bring confusion. Is this referring to a lookup to an updatable table which is the RAM memory? The solution actually uses a permutation argument, not a lookup.]
> [name=Ming Yeah, it is fairer to say we introduce a permutation-fingerprint workaround to assist existing lookup schemes in achieving cross-chunk lookups.]

| Extra prover cost w/ permutation fingerprint                       | Extra proof size w/ lookup | Verifier marginal work |
| ------------------------------------------------------------------ | -------------------------- | ---------------------- |
| $\Theta(m) \times \mathbb{F}$ + marginal cost in final aggregation |                            |                        |

where $m$ is the total number of memory accesses across all proof chunks. The cost of appending an element to the set is just one $\mathbb{F}$-arithmetic operation. Elements of the set can represent any generalization of a read/write trace on a given table.

> [name=Edu See my previous comment. I think calling read/write operations "lookups" is confusing. Maybe call them "memory access", or "memory read/write", or "table read/update"?]
> [name=Ming Agree, naming it "permutation fingerprints to address set equality" is proper. Besides, the term `memory` is a bit confusing against EVM memory; in our case it is a generalization of a read/write trace, which can be anything.]

### Background

In the zkEVM project, we encode in the circuitry of the proof system all operations allowed in the EVM, as well as their consequences on storage, memory and gas. When creating a proof of validity of an Ethereum block, we pass the trace of the block (computed with Geth, then enriched with additional information) to the prover as witness. The trace witness determines which operation needs to be validated at each step (e.g. we allow POP/PUSH opcodes and step 42 of the trace is PUSH0), and thus which part of the circuitry is activated. A significant part of the witness is encoded in a table. Furthermore, for practical reasons, we define *subtables* as (potentially column-wise permuted) subsets of this table (see the example further below).

> [name=Edu About "witness is encoded in a lookup table". Is this meaningful in this context? Which is this lookup table? Can we call it just "table" instead?]

In the zkEVM project, we are using the Halo2 proof system, including its lookup scheme (see the [Halo2 book](https://zcash.github.io/halo2/design/proving-system/lookup.html)). As such, a lookup table is represented as a matrix of field elements $\mathbb{F}^{n \times c}$, where $n$ is the number of rows and $c$ is the number of columns.
> [name=Edu About "maximum between the number of constraints of the proof and number of rows". What is the definition of constraint here? Is it the sum of each gate times the number of rows where the gate is active? Why not say that "$n$ is the number of rows (where all gates must pass)" or something similar?]
> [name=Raphael To me a constraint (usually) is a row (technically it could be several if you use the next-row selectors). A gate can define constraints on several rows. I'm updating it to "row" to make this clearer. The point I was making was the case of extra padding when considering the (lookup) table size and the constraint system itself.]

More concretely, in the zkEVM project we store in lookup tables ranges, counters, tags and bytes encoded as field elements, among other things. A non-exhaustive look at the RwTable is shown below:

> [name=Edu This table doesn't represent the approach we're following in the zkEVM for the RwTable regarding the counter. In our design the counter is global and increases for every operation (there's no unique counter per address). The result we get is the same and allows us to track a single counter per step. Otherwise I'm not sure how the circuit can verify that the counter is always increasing per access to the same address.]
> [name=Raphael My bad. I made a mistake when updating a previous version, sorry.]

| address  | counter | value | operation |
| -------- | ------- | ----- | --------- |
| abcdefgh | 0       | 0x0   | write     |
| deadbeef | 1       | 0x5   | write     |
| abcdefgh | 2       | 0x0   | read      |
| ...      | ...     | ...   | ...       |

The first row of the table is: $row_1$ = (abcdefgh, 0, 0x0, write).

> [name=Edu This table is referred to as the RwTable and is chronologically sorted. In the next section it says that the Rw table is a permutation of the trace. I think it would be clearer if we had a name for each one, and used the correct one every time a table appears.]

#### Explaining this table

The table represents a public state $S$ and its evolution throughout a block. More precisely, it is written as an append-only log of a block's memory accesses done on a snapshot of the state. The state itself can be seen as a one-column table whose indices are the memory addresses.

We start with a public state $S_0$ and apply the first operation, $row_1$ = (abcdefgh, 0, 0x0, write): with it, we perform a first (counter=0) R/W operation on address "abcdefgh"; more precisely, we overwrite the value there with 0x0. We thus modify the state, which becomes $S_1$.

> [name=Edu About "public state". Please describe what `S` is. For example, `S` is a memory snapshot that can be seen as an updatable table with a single column.]
> [name=Raphael Why single column?]

Likewise, for the second operation we access address "deadbeef" for the first time and write the value 0x5 there, changing the state to $S_2$. We then apply the third operation on $S_2$: $row_3$ = (abcdefgh, 2, 0x0, read). We access the address "abcdefgh" again (counter=2) and read the value 0x0. The state is unchanged, as reads do not modify it. It is important to note that we furthermore check that read values are equal to the value previously written at the same address.

As such, for the RW table, we do not actually work with a chronological order, but with a permutation of the trace sorted by address and counter, as follows:

<!--
| address  | counter | value | operation |
| -------- | ------- | ----- | --------- |
| abcdefgh | 0       | 0x0   | write     |
| abcdefgh | 2       | 0x0   | read      |
| deadbeef | 1       | 0x5   | write     |
| ...      | ...     | ...   | ...       |
-->

![](https://hackmd.io/_uploads/SJendzikp.png)

The above diagram uses `ts` to refer to the `counter`.
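To make the address-sorted view concrete, the following is a minimal out-of-circuit sketch, in plain Rust over `u64` values rather than field elements, of the two properties this permuted table is meant to enforce: lexicographic ordering on (address, counter), and every non-first read at an address returning the previously written value. The function and trace are illustrative only, not the zkevm-circuits State circuit constraints.

```rust
// Out-of-circuit sketch of the checks enabled by the address-sorted view.
// Each row is (address, counter, value, is_write), with plain u64 values
// standing in for field elements.
fn check_sorted_trace(rows: &[(u64, u64, u64, bool)]) -> bool {
    rows.windows(2).all(|w| {
        let (prev, cur) = (w[0], w[1]);
        // entries must be lexicographically sorted on (address, counter)
        let sorted = (prev.0, prev.1) < (cur.0, cur.1);
        // a non-first read at an address must repeat the previous value
        let consistent = cur.3 || prev.0 != cur.0 || cur.2 == prev.2;
        sorted && consistent
    })
}

fn main() {
    // the address-sorted permutation of the example trace above
    let sorted_trace = [
        (0x00abcdef, 0, 0x0, true),  // write 0x0 at counter 0
        (0x00abcdef, 2, 0x0, false), // read back 0x0 at counter 2: consistent
        (0x00deadbe, 1, 0x5, true),  // write 0x5 at counter 1
    ];
    assert!(check_sorted_trace(&sorted_trace));
}
```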
### Problem statement

To be able to verify proofs onchain, we are restricted by the fields that are supported in the EVM. Currently, only BN254 is available. We thus decided to work on a non-recursive fork of Halo2 based on the KZG polynomial commitment (which is highly similar to UltraPlonkish = TurboPlonk + Halo2's lookup argument). As such, while the theoretical maximum number of constraints that we can handle with BN254 is $2^{28}$, we can concretely use up to $2^{27}$ constraints because we use polynomial identities of degree 3 in Halo2.

> [name=Edu About "polynomial identities of degree 3". Well, actually 🤓 we're over degree 3. I think we're at 5 or 8.]
> [name=Raphael I imagine! That's why I am talking about the Halo2 proof system and not about the zkEVM gates.]

This number of constraints, unfortunately, is not large enough to handle all blocks. In particular, we cannot encode especially gas-intensive blocks in our lookup tables. This document attempts to answer this problem with the use of advanced zk techniques such as proof aggregation, folding or recursion, thanks to the use of smaller, *chunked*, proofs; more specifically, it focuses on how to link these proofs together, as well as their corresponding tables.

### General overview

The overall idea behind our solution is to use both orderings we mentioned before; let's call them $T_{chr}$ and $T_{add}$ for the chronological and address-sorted tables respectively.

The "local" correctness of $T_{chr}$ is checked by Halo2 constraints that verify that written memory values are correct according to read memory values. For example, if an instruction adds two values from memory `a`, `b` and stores the result in `r`, we will have a gate that verifies `r = a + b`, where these 3 values appear in $T_{chr}$ as two reads and one write.

The "local" correctness of $T_{add}$ is checked by Halo2 constraints that verify that all the entries are lexicographically sorted on (address, counter), and that every non-first read at an address has the same value as the previous entry.

Then we prove that each table is a permutation of the other, which guarantees that each instruction is verified, and that the memory used by each instruction is consistent.

Our goal is to chunk $T_{chr}$ and $T_{add}$ while maintaining the same checks. When chunking, we will add some redundancy by duplicating the last row of a chunk as the first row of the following one; this is important for linking as well as for enforcing some inter-chunk properties.

#### Idea Elaboration

##### Permutation argument

> [name=ArantxaZapico Please describe in detail why the new table looks like that.]

To put it simply, we want to encode or "fingerprint" each chunk and be able to combine these fingerprints together to check whether two permuted tables (collections of chunks) are equal. We consider here a simpler variant of Nova's lookup argument (see [Lookup Argument in Nova tricks: permutation checking without re-ordering](https://hackmd.io/C6XlfTRFQ_KoRy1Eo-4Pfw)) that relies on permutation-invariant fingerprinting.

<!-- Motivated from the idea in [Lookup Argument in Nova tricks: permutation checking without re-ordering](https://hackmd.io/C6XlfTRFQ_KoRy1Eo-4Pfw), which originally aims at an efficient lookup argument in the Nova folding scheme to solve memory rw consistency/range checks, and which might be the state-of-the-art lookup argument. The idea is to prove memory rw consistency via permutation-invariant fingerprints on the R and W sets. -->

> [name=ArantxaZapico Carefully read the article!!!]

<!-- It's nice that a similar, but simpler, version of the memory r/w consistency check can be applied to zkEVM proof chunks. Given a *row i* of the chronological memory execution trace, i.e. a list of cells, we can use 2 challenges $\alpha,\gamma$ to encode all the cells into a field fingerprint, as follows: -->
Given random challenges $\alpha$ and $\gamma$, we define the row fingerprinting function as:

$$fp_{row}(row_i) = \alpha - \sum_j (\gamma^j \cdot cell_i(j))$$

We define the chunk fingerprinting function as:

\begin{align}
fp_{chunk}(chunk_i) &= \prod_{row_j \in chunk_i} fp_{row}(row_j)\\
&= \prod_{row_j \in chunk_i} \left ( \alpha - \sum_k (\gamma^k \cdot cell_j(k)) \right )
\end{align}

We define table fingerprinting similarly, i.e. $fp_{table}(table) = \prod_i fp_{chunk}(chunk_i)$. Note that if there are duplicated rows across chunks for linking, as suggested before, we need to remove such redundancy before comparing the fingerprints of permuted tables.

Given that $fp_{row}$ is index-independent and that $fp_{chunk}$ is index-independent and commutative, we can easily see that:

- $\forall (T_1, T_2) \in (\mathbb{F}^{n\cdot c})^2, \text{ if } \exists \text{ permutation } \sigma \text{ s.t. } T_2 = \sigma(T_1), \text{ then } fp_{table}(T_1) = fp_{table}(T_2)$
- $\forall (T_1, T_2) \in (\mathbb{F}^{n\cdot c})^2, \text{ if } fp_{table}(T_1) = fp_{table}(T_2), \text{ then } \exists \text{ permutation } \sigma \text{ s.t. } T_2 = \sigma(T_1) \text{ with overwhelming probability}$ (we assume that the challenges $\alpha$ and $\gamma$ are sampled after the tables were chosen, via Fiat–Shamir)

For instance, for the following table we have:

| address  | counter | value | operation |
| -------- | ------- | ----- | --------- |
| abcdefgh | 0       | 0x0   | write     |
| deadbeef | 1       | 0x5   | write     |
| abcdefgh | 2       | 0x0   | read      |
| ...      | ...     | ...   | ...       |

- $fp_{row}(row_1) = \alpha - (abcdefgh + \gamma \cdot 0 + \gamma^2 \cdot 0 + \gamma^3 \cdot write) = \alpha - (abcdefgh + \gamma^3 \cdot write)$
- $fp_{chunk}(row_1..row_2) = \left ( \alpha - (abcdefgh + \gamma^3 \cdot write) \right ) \cdot \left ( \alpha - (deadbeef + \gamma \cdot 1 + \gamma^2 \cdot 0x5 + \gamma^3 \cdot write) \right )$
- $fp_{chunk}(row_3) = \alpha - (abcdefgh + \gamma \cdot 2 + \gamma^3 \cdot read)$
- $fp_{table} = \left ( \alpha - (abcdefgh + \gamma^3 \cdot write) \right ) \cdot \left ( \alpha - (deadbeef + \gamma \cdot 1 + \gamma^2 \cdot 0x5 + \gamma^3 \cdot write) \right ) \cdot \left ( \alpha - (abcdefgh + \gamma \cdot 2 + \gamma^3 \cdot read) \right )$

<!--
$fp_2 = \alpha - (abcdefgh + \gamma \cdot 0 + \gamma^2 \cdot 0 + \gamma^3 \cdot write)$

$fingerprint\ of\ row\ i = \alpha - (cell_i[0] + \gamma \times cell_i[1] + \gamma^2 \times cell_i[2] + \dots)$

So, within a chunk, we can multiply all the row fingerprints into a single fingerprint:

$fingerprint\ of\ a\ chunk = \prod_{i=0}^{m-1} (fingerprint\ of\ row\ i) = \prod_{i=0}^{m-1} \left ( \alpha - (cell_i[0] + \gamma \times cell_i[1] + \gamma^2 \times cell_i[2] + \dots) \right )$
-->

> [name=ArantxaZapico I am not sure about this encoding. Which kind of computations are you then using this fingerprint for? PERMUTATIONS - permutation argument described in the Lookup Argument in Nova...]

<!-- For another memory execution trace sorted by address, we can follow the same rules to encode its cells into another fingerprint field value. We defer the permutation validity check to the end, such that if the 2 fingerprint field equalities hold, then the 2 traces below
- the chronologically sorted memory execution trace
- the address-sorted memory execution trace
are permutations of each other. -->
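As a sanity check of the permutation invariance, here is a minimal self-contained sketch of $fp_{row}$ and $fp_{chunk}$, using a toy 61-bit Mersenne prime in place of the BN254 scalar field and hard-coded stand-ins for the Fiat–Shamir challenges $\alpha, \gamma$; it is illustrative only, not the in-circuit implementation.

```rust
// Toy model of the permutation-invariant fingerprint. Field elements are
// integers mod a small Mersenne prime so the example is self-contained; in the
// real protocol they are BN254 scalars and alpha/gamma come from a transcript.
const P: u128 = (1 << 61) - 1;

/// fp_row(row_i) = alpha - sum_j gamma^j * cell_i(j)  (mod P)
fn fp_row(row: &[u128], alpha: u128, gamma: u128) -> u128 {
    let mut acc = 0u128;
    let mut pow = 1u128; // gamma^j
    for &cell in row {
        acc = (acc + pow * cell % P) % P;
        pow = pow * gamma % P;
    }
    (alpha + P - acc) % P
}

/// fp_chunk = product of row fingerprints; commutative, hence order-independent.
fn fp_chunk(rows: &[&[u128]], alpha: u128, gamma: u128) -> u128 {
    rows.iter().fold(1u128, |acc, row| acc * fp_row(row, alpha, gamma) % P)
}

fn main() {
    let (alpha, gamma) = (7_919u128, 104_729u128); // stand-ins for FS challenges
    // rows encoded as (address, counter, value, operation) field elements
    let r1: &[u128] = &[0xabcdef, 0, 0x0, 1];
    let r2: &[u128] = &[0xdeadbe, 1, 0x5, 1];
    let r3: &[u128] = &[0xabcdef, 2, 0x0, 0];
    let t_chrono = [r1, r2, r3]; // chronological order
    let t_add = [r1, r3, r2];    // address-sorted permutation of the same rows
    assert_eq!(fp_chunk(&t_chrono, alpha, gamma), fp_chunk(&t_add, alpha, gamma));
}
```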
##### How to make and manage chunks

Now that we have a method to prove that distinct tables, or aggregated chunks, are permutations of each other, comes the time to discuss how we make these chunks. To do so, let's first describe the current zkEVM architecture.

For practical reasons and modularity, we split our circuit into different sub-circuits that have independent logic (and that may also require some externalized verification work). These subcircuits are then assembled into a supercircuit in a flexible manner: we can generate from the supercircuit a single proof, or divide it logically according to (subsets of) the subcircuits and generate several proofs.

![](https://hackmd.io/_uploads/SkUnTIVxp.png)

We would make use of our flexible supercircuit approach to handle chunks and generate several *linked* proofs. More precisely, we would adopt an approach *à la rollup* where the aggregated fingerprint would represent the rollup state and each chunk a batch of transactions. Each chunk would have duplication (the last $m$ rows of a chunk's trace being copied as the first $m$ rows of the following one) to enforce not only continuity and sequencing, but also to check properties on consecutive constraints, e.g. in the RW table that the counter is correctly incremented. The fingerprints and duplicated rows would be public inputs, so as to pass them between proofs.

N.B. We can see how the use of recursion or aggregation (especially when dealing with public inputs, as in [aPlonK](https://eprint.iacr.org/2022/1352)) could optimize the protocol (and potentially hide the linking public inputs).

#### Implementation

We will not introduce this protocol as another lookup argument, nor as a replacement of Halo2's lookup or permutation arguments, but will directly implement the fingerprint check with constraints in our plonkish circuit. More precisely, we will have two aggregated fingerprints (one per table) that will be updated with the rows of:

- $T_{chrono}$: the chronologically sorted memory execution trace table
- $T_{add}$: the address-sorted memory execution trace table

One nice feature to observe here is that we can constrain that the last row of chunk $j$ is equal to the first row of chunk $j+1$. With that trick we are able to maintain, across chunks, the continuity of
- the permutation fingerprints
- the address-sorted memory execution trace

This can be done as additional constraints in the supercircuit with no further ado. When splitting it into several chunks, we make use of public inputs to pass the last rows of a chunk (sketched right after this section).

> [name=ArantxaZapico I need more on this]
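To illustrate the linking, here is a hedged sketch of per-chunk public inputs and the inter-chunk checks; the struct and field names (`ChunkPublicInput`, `fp_chrono_acc`, ...) are hypothetical, chosen for exposition, and do not reflect the actual zkevm-circuits public-input layout.

```rust
// Hypothetical per-chunk public inputs and the inter-chunk checks performed by
// the aggregation layer (shown out of circuit). Names are illustrative only.
struct ChunkPublicInput {
    first_row: Vec<u64>,  // duplicated copy of the previous chunk's last row
    last_row: Vec<u64>,   // row that the next chunk must duplicate
    fp_chrono_acc: u64,   // running fingerprint of T_chrono up to this chunk
    fp_add_acc: u64,      // running fingerprint of T_add up to this chunk
}

fn check_chunks(chunks: &[ChunkPublicInput]) -> bool {
    // continuity: the duplicated boundary rows must match between neighbours
    let linked = chunks.windows(2).all(|w| w[0].last_row == w[1].first_row);
    // at the very last chunk, the two accumulated fingerprints must agree
    let last = chunks.last().expect("at least one chunk");
    linked && last.fp_chrono_acc == last.fp_add_acc
}

fn main() {
    let chunks = vec![
        ChunkPublicInput { first_row: vec![0], last_row: vec![7], fp_chrono_acc: 3, fp_add_acc: 9 },
        ChunkPublicInput { first_row: vec![7], last_row: vec![9], fp_chrono_acc: 42, fp_add_acc: 42 },
    ];
    assert!(check_chunks(&chunks));
}
```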
### Chunk on EVM execution steps

Our goal is to be able to chunk at any EVM execution step. The validity of an EVM execution step is constrained by a program counter + opcode lookup into the bytecode table, which is easily tackled since no such lookup crosses a chunk boundary. However, there are some step state transition fields constrained via `constraint(current step, next step)`. We need to address this. Take this snippet from [zkevm constraint_builder.rs](https://github.com/privacy-scaling-explorations/zkevm-circuits/blob/main/zkevm-circuits/src/evm_circuit/util/constraint_builder.rs#L57-L67):

```rust
pub(crate) rw_counter: Transition<Expression<F>>,
pub(crate) call_id: Transition<Expression<F>>,
pub(crate) is_root: Transition<Expression<F>>,
pub(crate) is_create: Transition<Expression<F>>,
pub(crate) code_hash: Transition<Word<Expression<F>>>,
pub(crate) program_counter: Transition<Expression<F>>,
pub(crate) stack_pointer: Transition<Expression<F>>,
pub(crate) gas_left: Transition<Expression<F>>,
pub(crate) memory_word_size: Transition<Expression<F>>,
pub(crate) reversible_write_counter: Transition<Expression<F>>,
pub(crate) log_id: Transition<Expression<F>>,
```

These are the step state fields that need to be constrained across chunks. We address this by introducing 2 dummy steps, `BeginChunk`/`EndChunk`, which write/read the step state fields into the RwTable (under a new tag + field_tag) via RwTable lookups! This lets us leverage the RwTable's existing read/write consistency constraints.

Note that there is one special field, `rwc`, which cannot be written into the RwTable, because the RwTable relies on `rwc` itself to prove consistency; doing so would create a dependency loop. Therefore, `rwc` needs to be passed as a public input.

Regarding the transition logic `from -> to`:

- Arbitrary step state -> `EndChunk`
- `EndChunk` -> `EndChunk`
- `BeginChunk` -> Arbitrary step state
- `BeginChunk` -> `BeginChunk`

And also:

- The first step of a chunk should be `BeginChunk`
- The last step of a chunk should be `EndChunk`

### Random oracle challenge in permutation fingerprints

One special note is how we deal with consistent $\alpha, \gamma$ across the permutation fingerprints. The idea is the same as in [Lookup Argument in Nova tricks: permutation checking without re-ordering](https://hackmd.io/C6XlfTRFQ_KoRy1Eo-4Pfw). In one sentence: we take $\alpha, \gamma$ from the prover as witness, and defer their verification to the end. There are two options.

> [name=ArantxaZapico What are global challenges?]

#### Option 1: rolling intermediate challenges

The circuit internally performs a rolling update of intermediate challenges $\alpha_i, \gamma_i$ via a random oracle circuit. By Fiat–Shamir, we can implement the random oracle with the serialized cells as input. Row by row, we RLC all the cells and fold in the previous accumulator (see the sketch after this section):

$$RLC_i = cell_i[0] + lookup\_challenge \times cell_i[1] + \dots + lookup\_challenge^{n-1} \times cell_i[n-1] + lookup\_challenge^n \times RLC_{i-1}$$

- At the last row of chunk $j$, we verify $\alpha_j = RO(input, \alpha_{j-1})$ and $\gamma_j = RO(input, \gamma_{j-1})$, where $\alpha_{j-1}, \gamma_{j-1}$ come from the previous chunk $j-1$.
- At the last chunk $n$, we constrain $\alpha_n = \alpha$ and $\gamma_n = \gamma$, where $\alpha, \gamma$ are the challenges used in the permutation fingerprints.

Pros:
- less code change

Cons:
- massive overhead to unroll the random oracle circuit, because of the long input in one chunk: `cells_per_row * rows`

#### Option 2 (by @ed255): global challenge

The global challenge is the hash of the concatenation of the commitments to the memory columns of all proof chunks, i.e. global challenge = hash([Commit(proof_chunk_i_mem_col_j)] for all i, j, concatenated):

$$challenge_{global} = Hash(\dots\ ||\ Commit({proof\ chunk_i}_{mem\ col_j})\ ||\ \dots)$$

Pros:
- There are no intermediate challenges in the plonkish circuit. The Poseidon input length reduces to $\#chunks \times \#mem\_cols\_per\_chunk$.

Cons:
- The memory commitment needs to happen twice: once before witness assignment, to calculate the global challenge in advance (this requires modifying bus_mapping), and once as part of the Halo2 proof generation process.
- We need to change root-circuit and snark-verifier to implement the Poseidon circuit logic.

> Update 07 Sep 2023: option 2 is preferable due to performance concerns. For the RO we can choose the Poseidon hash as random oracle. In option 2, the Poseidon hash circuit is implemented in the snark-verifier plonk circuit.
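To make Option 1's rolling accumulator concrete, here is a minimal sketch of the RLC update over the same toy prime field as before; `c` stands in for `lookup_challenge`, and in the real design the final accumulator would feed an in-circuit Poseidon instance rather than be printed.

```rust
// Toy sketch of Option 1's running RLC over the serialized cells (mod a small
// prime). In the real design this accumulator would be constrained in-circuit
// and its final value fed to the random-oracle (Poseidon) gadget.
const P: u128 = (1 << 61) - 1;

/// RLC_i = cell_i[0] + c*cell_i[1] + ... + c^(n-1)*cell_i[n-1] + c^n * RLC_{i-1}
fn rlc_step(prev: u128, row: &[u128], c: u128) -> u128 {
    let mut acc = 0u128;
    let mut pow = 1u128;
    for &cell in row {
        acc = (acc + pow * cell % P) % P;
        pow = pow * c % P;
    }
    (acc + pow * prev % P) % P // pow = c^n at this point
}

fn main() {
    let c = 12_345u128; // stand-in for lookup_challenge
    let rows: &[&[u128]] = &[&[1, 2, 3], &[4, 5, 6]];
    let rlc = rows.iter().fold(0u128, |acc, row| rlc_step(acc, row, c));
    println!("running RLC over the chunk's cells: {rlc}");
}
```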
### Change scope summary

Here we focus on the different subtables that we have, and specify how to perform the chunking.

- For tables whose lookup checks are naturally bounded, such as the MPT, Bytecode and Exp circuits, we would just keep them in a single proof chunk as usual.
  > For the definition of bounded size, see [2023-06-22 block proof chunking & dynamism](https://hackmd.io/1h_McCIcTpmILN0xZlbnSw)
  > Some lookups, e.g. Keccak, are not bounded, which implies that there can be an arbitrary number of lookups in one chunk. But since there is a max gas limit per block, it can safely be assumed that a single proof chunk can contain all the Keccak invocations of that chunk.
- For the memory consistency table (also called the RW table), we would use the method described previously, combining row fingerprinting and in-circuit constraint checks. We can for instance see in the following diagram that the EVM circuit would perform lookups on a chronologically sorted *chunked* trace, while the State circuit would furthermore handle the address-sorted chunk as well as the fingerprints and associated challenges.
  > TODO: update with option 2 challenge logic

  ![](https://hackmd.io/_uploads/H1ZcyjGxp.png)
- Finally, the Root circuit would be extended with extra constraints to ensure additional properties between the chunks and, at the last one, checks between the tables' fingerprints and, potentially, the challenges. We can see in the following figure how the proof chunks are chained with one another, together with the final check on challenge and fingerprint equalities.

  ![](https://hackmd.io/_uploads/rJg_xjzxT.png)

### (Low Priority, future thought) Recursive Proof on Root Circuit

The Root circuit can only handle around ~4k non-native EC multiplications, which is equivalent to squeezing in only ~2 super circuit proofs, and so would not be enough for our application. As suggested by @Han, we can use recursion to verify all chunks by encoding the root-circuit KZG verifier in the circuit, as shown below.

> [name=Edu Here are some thoughts: 1. In the presented design, each root circuit verifies a root circuit proof and 2 super circuit proofs. If we only include 1 root circuit proof and 1 super circuit proof, do we get similar performance overall? 2. Consider that if we have 2 super circuit proofs per root circuit, we need to handle the case where only 1 is enabled (which means that the last chunk can be either in slot 1 or 2).]
> [name=Ming The two super circuit proofs here are just to illustrate the idea; the approach can work with an arbitrary number of super circuit proofs under the root circuit SRS bound. Performance-wise, more super circuit proofs in one recursion step should perform better overall, because we then verify fewer root circuit proofs (fewer non-native pairing operations). Details need benchmarking.]
> [name=Ming With 1 super circuit proof per chunk, we need some special logic in the last chunk, because the last proof can be in any slot. Another idea is to pad with some trivial super circuit proofs, and only allow enabling is_last_chunk on the last slot.]
> [name=Edu About "~4k non-native field arithmetic operations". Can you give more details on this metric? Do you mean that the root circuit can only verify ~4k non-native field arithmetic operations?]
> [name=Ming It should be ~4k non-native EC multiplications.]
![](https://hackmd.io/_uploads/SJfXBsGgT.png)

In the $i$-th round, the root circuit accepts the previous ($i-1$)-th root proof as auxiliary input, along with fresh super circuit proof(s), to generate the iteration-$i$ proof and send it to the next round. The Root circuit will constrain the previous proof's public input to be equal to the first fresh super-circuit proof's public input, the other constraints remaining the same. The last proof should get an extra public input to constrain that it contains the last super-circuit proof.

Recursive proving demonstrates a nice pattern for exploring a plonkish folding scheme later on. There are 2 options for where to cut the boundary for folding:

- Option 1: folding on super circuit instances
- Option 2: folding on root circuit instances, where each root circuit instance includes a super circuit proof

Option 1 is preferable, since we then no longer need to compute the KZG quotient polynomial, i.e. no more FFTs. To explore a folding scheme, we can keep the Halo2 frontend, while exploring an alternative folding-scheme backend such as [ProtoStar](https://eprint.iacr.org/2023/620.pdf).

### Task/Milestone break down (TBD)

- [ ] Implement the new memory trace consistency mechanism, staying within a single proof chunk
  - expose field values as new public inputs for future linking
  - lookup into the chronologically sorted memory trace
  - constraints for the fingerprint updates
  - get the challenge from the public input
- [ ] Execution step state chunking via dummy steps
  - introduce the `BeginChunk` and `EndChunk` dummy steps
  - `BeginChunk` gets `rwc` from the public input, and looks up (reads) the step state from the RwTable
  - `EndChunk` exposes `rwc` to the public input, and looks up (writes) the step state to the RwTable
  - constrain the state transitions
- [ ] root-circuit and snark-verifier
  - implement the global challenge in the witness generator (bus-mapping)
  - implement the global challenge in snark-verifier (should touch root-circuit + snark-verifier)
  - modify the root-circuit to constrain global challenge and fingerprint equality
  - ensure the last chunk has the fingerprint equality check enabled
- [ ] split into multiple chunk proofs
  - proofs linked to each other via public inputs
  - ensure the last chunk has the extra equality check enabled
- [ ] root circuit recursive proof
  - protocol to verify a root circuit proof

---

# Notes by Edu

> already refreshed the docs based on the new proposal

The Jolt paper mentions 3 approaches for memory consistency: B1, Merkle trees; B2, re-ordering the execution trace; B3, memory-checking via permutation-checking, without re-ordering. This document explores option B3.

What if we can implement B2 in chunks, by implementing a chunked permutation check with gates, following `Figure 5: Permutation-invariant fingerprinting`?

$$\prod_{i=1}^{N}(r-a_i) = \prod_{i=1}^{N}(r-b_i)$$

Each side of the equation creates a fingerprint of the vectors $a$ and $b$. If the equality holds, each vector is a permutation of the other with some probability. We can get the challenge $r$ following the approach you describe. Then $a$ is the chronologically sorted rw operations split among chunks, and $b$ is the by-address sorted rw operations split among chunks.

Imagine we have $c$ chunks of size $k$ each. Each chunk $j$ calculates the following with custom gates, and propagates it to the next chunk to accumulate:

$$\prod_{i=jk}^{(j+1)k}(r-a_i) \qquad\qquad \prod_{i=jk}^{(j+1)k}(r-b_i)$$

At the last chunk we verify the equality of both accumulations.
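Below is a minimal sketch of this chunked accumulation, again over a toy prime field with a hard-coded stand-in for the challenge $r$; the chunk boundaries and values are illustrative only.

```rust
// Toy sketch of Figure-5-style chunked fingerprint accumulation: each chunk
// multiplies (r - a_i) and (r - b_i) into running accumulators that are
// propagated to the next chunk; the last chunk checks their equality.
const P: u128 = (1 << 61) - 1;

fn accumulate(acc: u128, chunk: &[u128], r: u128) -> u128 {
    chunk.iter().fold(acc, |acc, &x| acc * ((r + P - x) % P) % P)
}

fn main() {
    let r = 99_991u128; // stand-in for the Fiat-Shamir challenge
    // a: chronologically sorted row encodings; b: a permutation of a;
    // both split into c = 2 chunks of size k = 2
    let a_chunks: &[&[u128]] = &[&[3, 1], &[4, 1]];
    let b_chunks: &[&[u128]] = &[&[1, 1], &[3, 4]];

    let (mut acc_a, mut acc_b) = (1u128, 1u128);
    for (a, b) in a_chunks.iter().zip(b_chunks) {
        acc_a = accumulate(acc_a, a, r);
        acc_b = accumulate(acc_b, b, r);
    }
    assert_eq!(acc_a, acc_b); // final check at the last chunk
}
```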
Advantages:
- Keep the same State Circuit constraints
- Avoid the initial commitment to the memory initialized at 0
- Avoid the final post-processing to match the size of RS with WS

Disadvantages:
- Need for sorted-verification constraints (in the State Circuit)

Questions:
- What's the probability of failure in the `Permutation-invariant fingerprinting`?

Note: the State Circuit can be chunked easily; we just need to constrain that the last row of chunk $j$ is equal to the first row of chunk $j+1$.
