Raphael
In the zkEVM project we are aiming at verifying a block. To do so, we basically get the block's trace and commit to it in different tables. We then check that each operation is valid, and check the consistency of the inputs/outputs against the tables we look up into.
Q: Can you expand on this? How do you check that operations are valid and, more importantly, what exactly is in the table?
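A plain-Python sketch of the "commit to the trace in tables, then check each operation" idea. The column names, opcodes and constraints here are illustrative assumptions, not the zkEVM's actual layout: each trace row must satisfy the constraint of its opcode, and the opcode column must appear in a fixed table (the lookup).

```python
# Toy model of trace validity + lookup consistency.
# Opcodes and columns are illustrative, not the zkEVM's.

TRACE = [
    # (opcode, in_a, in_b, out)
    ("ADD", 3, 4, 7),
    ("MUL", 2, 5, 10),
    ("ADD", 1, 1, 2),
]
OPCODE_TABLE = {"ADD", "MUL"}  # fixed table the opcode column is looked up in

CONSTRAINTS = {
    "ADD": lambda a, b, out: (a + b) == out,
    "MUL": lambda a, b, out: (a * b) == out,
}

def trace_valid(trace):
    for op, a, b, out in trace:
        if op not in OPCODE_TABLE:          # lookup: opcode is in the table
            return False
        if not CONSTRAINTS[op](a, b, out):  # per-operation validity check
            return False
    return True

assert trace_valid(TRACE)
assert not trace_valid([("ADD", 1, 1, 3)])  # 1 + 1 != 3
```

In a real proof system both checks become polynomial identities and lookup arguments over committed columns; this only models the semantics.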
One of the major hurdles we face is that we cannot easily change the size of our circuit because of the size of the SRS. Some of the blocks we have may be represented by circuits too large for our current SRS. (Some may also be much smaller, in which case we will be inefficient, but that is a different conversation.)
We recently started a project called "proof chunking", whose goal is to "split the circuit/instance/proof" so that we can prove (efficiently, if possible) the whole block.
Q: How do you split the different circuits? I am trying to think about it in terms of, for instance, R1CS: how would you manage to work with an SRS that is smaller than the size of the witness?
We have considered aggregation and folding. For the latter, which is the subject of this discussion, we are a bit stuck. Folding can be summed up as aggregating instances and verifying that the batched instance is correct up to a (cleverly chosen) error term.
Q: I guess folding is how you manage to split circuits? Note: folding does not always involve error terms; that is a particularity of some constructions (Nova and Protostar, for instance).
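To make the "batched instance correct up to an error term" concrete, here is a minimal Nova-style fold of two relaxed R1CS instances over a toy prime field. This is only an illustration of the algebra: a real scheme commits to the witness and error vectors and derives the challenge `r` from a transcript; here everything is in the clear.

```python
# Relaxed R1CS: Az o Bz = u*Cz + E (componentwise, o = Hadamard product).
# Folding two satisfying instances with challenge r produces a third,
# with the quadratic cross term absorbed into the new error vector E.

P = 101  # toy prime field modulus

def mat_vec(M, z):
    return [sum(m * v for m, v in zip(row, z)) % P for row in M]

def is_satisfied(A, B, C, z, u, E):
    Az, Bz, Cz = mat_vec(A, z), mat_vec(B, z), mat_vec(C, z)
    return all((az * bz) % P == (u * cz + e) % P
               for az, bz, cz, e in zip(Az, Bz, Cz, E))

def fold(A, B, C, inst1, inst2, r):
    (z1, u1, E1), (z2, u2, E2) = inst1, inst2
    Az1, Bz1, Cz1 = mat_vec(A, z1), mat_vec(B, z1), mat_vec(C, z1)
    Az2, Bz2, Cz2 = mat_vec(A, z2), mat_vec(B, z2), mat_vec(C, z2)
    # Cross term T from expanding (Az1 + r*Az2) o (Bz1 + r*Bz2).
    T = [(a1 * b2 + a2 * b1 - u1 * c2 - u2 * c1) % P
         for a1, b1, c1, a2, b2, c2
         in zip(Az1, Bz1, Cz1, Az2, Bz2, Cz2)]
    z = [(a + r * b) % P for a, b in zip(z1, z2)]
    u = (u1 + r * u2) % P
    E = [(e1 + r * t + r * r * e2) % P for e1, t, e2 in zip(E1, T, E2)]
    return z, u, E

# Single constraint x * x = y: A=[[1,0]], B=[[1,0]], C=[[0,1]].
A, B, C = [[1, 0]], [[1, 0]], [[0, 1]]
inst1 = ([3, 9], 1, [0])    # 3*3 = 9, plain R1CS (u=1, E=0)
inst2 = ([5, 25], 1, [0])   # 5*5 = 25
assert is_satisfied(A, B, C, *inst1) and is_satisfied(A, B, C, *inst2)

z, u, E = fold(A, B, C, inst1, inst2, r=7)
assert is_satisfied(A, B, C, z, u, E)  # folded instance still satisfies
```

Note that the folded witness no longer satisfies the strict R1CS relation (u != 1, E != 0); only at the end of the chain is the final relaxed instance proven with a full SNARK.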
The problem we have is that we want to prove RAM memory consistency within a folding scheme: performing checks between lookups of different instances.
Q: Can you expand on this? The table is the memory, I guess, and you want to prove that you are taking elements from there. Does the memory state change? Do you rewrite it? Is this lookup something fixed? That is, do you know from the beginning that at step A you will access chunk N of the memory, and so on? Where in the folding does this happen?
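For context on what "RAM memory consistency" means here, a common technique in zkVMs (an assumption about the general approach, not the zkEVM's exact design) is to sort the access log by (address, timestamp) and check that every read returns the value of the preceding access to the same address. In a circuit the sortedness and the per-row check are enforced with permutation/lookup arguments; in plain Python:

```python
# Toy RAM-consistency check: sort accesses by (address, timestamp),
# then verify each read matches the last value at that address.

def ram_consistent(accesses):
    """accesses: list of (timestamp, addr, op, value), op in {'R', 'W'}."""
    log = sorted(accesses, key=lambda a: (a[1], a[0]))  # by (addr, time)
    last = {}  # addr -> last value seen
    for t, addr, op, val in log:
        if op == 'R' and last.get(addr, 0) != val:
            return False  # read disagrees with last write (memory init 0)
        last[addr] = val
    return True

trace = [
    (0, 0x10, 'W', 7),
    (1, 0x20, 'W', 3),
    (2, 0x10, 'R', 7),   # ok: reads the value written at t=0
    (3, 0x20, 'R', 3),
]
assert ram_consistent(trace)
assert not ram_consistent(trace + [(4, 0x10, 'R', 99)])  # stale read
```

The difficulty raised above is that when the trace is split across folded instances, the sorted log spans instances, so the consistency check itself must cross instance boundaries.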
Ming
The zkEVM circuits are in halo2, and currently we lay out all the witnesses vertically into a single region. We leverage halo2's support for dynamic lookups to connect different columns.
Q: Does "are in halo2" refer to the arithmetization? What is a region? What is in the columns?
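A dynamic lookup, in the halo2 sense, constrains every selected row of some input expressions to appear among the rows of a table that is itself built from witness (advice) columns, so the table can change per proof. A plain-Python model of that multiset-inclusion check (names are illustrative):

```python
# Model of a dynamic lookup: selected input rows must appear in a
# table built from witness columns (so the table is per-proof).

def dynamic_lookup_holds(input_rows, table_rows, selector):
    table = set(map(tuple, table_rows))
    return all(tuple(row) in table
               for row, sel in zip(input_rows, selector) if sel)

table_rows = [(0, 7), (1, 9), (2, 11)]   # rows of witness columns
input_rows = [(1, 9), (5, 5), (0, 7)]
selector   = [1, 0, 1]                    # second row not constrained
assert dynamic_lookup_holds(input_rows, table_rows, selector)
assert not dynamic_lookup_holds(input_rows, table_rows, [1, 1, 1])
```

This is only the semantics; halo2 enforces it with a permutation-style argument over committed columns.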
The problem with the current design is that we are doomed to have a hard cap on the number of witness rows due to the size of the SRS. Intuitively, if we can split a huge number of rows into multiple chunks (each chunk being just a collection of column rows), then the SRS size problem can somehow be relaxed.
Q: What are column rows?
When it comes to chunking, a natural design choice is to aggregate(snark proof 1, snark proof 2, snark proof 3, …), with public inputs connecting them to each other => the challenge is to design a lookup scheme across the different snark proofs, and to determine whether it potentially leads to a soundness vulnerability.
Q: What is the table and the result of the lookup here?
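One possible design for a lookup that spans proof chunks (an assumption about the design space, not the zkEVM's chosen scheme) is a logUp-style argument: "every f_i is in table t" reduces to the rational identity sum_i 1/(X + f_i) = sum_j m_j/(X + t_j) at a random point X = x. Each chunk can then expose its partial sum as a public input, and the aggregator adds them up and compares against the table side:

```python
# logUp-style cross-chunk lookup sketch over a toy prime field.
# Each chunk contributes a partial sum; the aggregator checks the total.

P = 2**61 - 1  # toy prime

def inv(a):
    return pow(a, P - 2, P)  # Fermat inverse, valid for prime P

def chunk_partial_sum(values, x):
    """Per-chunk contribution: sum of 1/(x + f_i) over looked-up values."""
    return sum(inv((x + v) % P) for v in values) % P

def table_side(table, multiplicities, x):
    """Table contribution: sum of m_j/(x + t_j)."""
    return sum(m * inv((x + t) % P)
               for t, m in zip(table, multiplicities)) % P

table = [1, 2, 3, 4]
chunks = [[1, 1, 3], [2, 4, 4, 3]]  # looked-up values, split across proofs
mult = [2, 1, 2, 2]                 # how often each table entry is used
x = 123456789                       # stand-in for a random challenge

partials = [chunk_partial_sum(c, x) for c in chunks]  # per-chunk public input
assert sum(partials) % P == table_side(table, mult, x)
```

The soundness question raised above then becomes: the challenge x must be derived after all chunk commitments are fixed, otherwise a prover could adapt a chunk to a known challenge.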
By folding, we recognized a recursive structure in Plonkish => we believe this needs a big refactor due to the switch to a different folding-based proving system => we need a folding-friendly dynamic lookup.
Q: Where are you switching?
For the zkEVM project, the lookup is dynamic + on the fly + no zk.