### Proof chunking & aggregation
#### approach 1
- chunk by block / multiple blocks, then aggregate => adopted by Scroll
> open question: there is still a hard limit on the amount of gas we can prove.
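The shape of this approach can be sketched as a greedy packing of blocks into chunks under a fixed gas capacity; the capacity constant and the packing rule below are illustrative, not Scroll's actual parameters, and the `ValueError` branch shows where the hard gas limit bites:

```python
# Toy sketch (not Scroll's actual code): greedily pack blocks into chunks
# under a fixed per-chunk gas capacity, the quantity the circuit size bounds.
CHUNK_GAS_CAPACITY = 1_000_000  # hypothetical circuit capacity

def chunk_blocks(blocks):
    """blocks: list of (block_number, gas_used). Returns a list of chunks."""
    chunks, current, used = [], [], 0
    for number, gas in blocks:
        if gas > CHUNK_GAS_CAPACITY:
            # The hard limit noted above: a single block exceeding the
            # circuit capacity cannot be proven at all.
            raise ValueError(f"block {number} exceeds chunk capacity")
        if used + gas > CHUNK_GAS_CAPACITY:
            chunks.append(current)
            current, used = [], 0
        current.append(number)
        used += gas
    if current:
        chunks.append(current)
    return chunks

blocks = [(1, 400_000), (2, 500_000), (3, 300_000), (4, 900_000)]
print(chunk_blocks(blocks))  # [[1, 2], [3], [4]]
```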
#### approach 2
[docs](https://hackmd.io/1h_McCIcTpmILN0xZlbnSw?view) by Edu
- chunk by sub-circuit logically, and look up across chunks via a multiplexing design
- connect the different proofs via public I/O
- EVM circuit to state circuit: permutation design
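The public-I/O glue between chunk proofs can be sketched as a continuity check: each chunk proof exposes an input and an output state commitment, and the aggregator checks the chain is contiguous. The field names below are illustrative, not from the linked docs:

```python
# Toy sketch of connecting chunk proofs via public I/O: chunk i's output
# state commitment must equal chunk i+1's input state commitment.
from dataclasses import dataclass

@dataclass
class ChunkProof:
    state_in: bytes   # commitment to the state before the chunk
    state_out: bytes  # commitment to the state after the chunk

def check_continuity(proofs):
    """Public-input check: adjacent chunks must agree on the shared state."""
    return all(a.state_out == b.state_in for a, b in zip(proofs, proofs[1:]))

p = [ChunkProof(b"s0", b"s1"), ChunkProof(b"s1", b"s2"), ChunkProof(b"s2", b"s3")]
print(check_continuity(p))  # True
```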
:::spoiler {state="close"} Discussion note
It might also be easier to work on / implement, but that has to be taken with a pinch of salt. The obvious disadvantage, however, is that since the setup is fixed, we need to overestimate the size or number of chunks needed to fully prove a block, which leads to inefficiencies from dummy chunks. There is still the question of whether these dummy chunks could be precomputed (Edu mentioned some padding instruction, for instance).
:::
:::spoiler {state="close"} Dynamic chunk && efficiency notes
One other goal / potential outcome of proof chunking is proof dynamism: being able to choose a circuit size and/or de/activate parts of the circuit to speed up proof computation, lower memory usage, and so on. We agreed (at least no one refuted me!) that de/activating parts seems complicated: for aggregation we would have to support different circuits, and while we might gain on the chunk side, aggregation would become more complex. As for folding, we didn't discuss this, but I would expect the same problem to appear if it were possible there. Choosing the circuit size seems much more manageable: we could for instance work on a smaller domain (and extend the evaluation only at the end) for aggregation, or, for both, simply work on finer chunks. Because chunks can be processed in parallel, we can easily reduce peak memory, scale via distribution, and so on.
:::
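The fixed-setup padding mentioned in the discussion notes can be sketched as filling a fixed number of chunk slots with no-op chunks; the slot count and names are illustrative assumptions:

```python
# Toy sketch of dummy-chunk padding: the setup fixes the number of chunk
# slots, so real chunks are padded with no-op chunks whose proofs could in
# principle be precomputed.
MAX_CHUNKS = 8  # fixed by the (overestimated) setup

def pad_chunks(real_chunks, dummy):
    if len(real_chunks) > MAX_CHUNKS:
        raise ValueError("block needs more chunks than the setup supports")
    return real_chunks + [dummy] * (MAX_CHUNKS - len(real_chunks))

padded = pad_chunks(["c0", "c1", "c2"], dummy="noop")
print(padded)  # ['c0', 'c1', 'c2', 'noop', 'noop', 'noop', 'noop', 'noop']
```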
#### approach 3

- columns are cheaper in snark-verifier aggregation.
- sub-circuits are split by rows first, then across multiple columns.
- everything is handled in a single proof, not much different from the current methodology.
- however, constraints also need to be duplicated, since each copy works on a different column.
### Dynamic Plonkish folding schemes
#### approach 1: halo2 + hypernova
- halo2 + hypernova: write circuit constraints in halo2 and transpile them to PIL/CCS automatically, reusing the current zkEVM project.
- lookups (grand-product arguments) need to be transpiled to CCS.
- recursive structure granularity: at the block / transaction / step level
- bus-mapping plays the role of raw execution trace -> CCS witness generator
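To make the transpilation target concrete, here is a minimal satisfiability check for the CCS relation: an instance is a set of matrices `M_j`, index multisets `S_i`, and constants `c_i`, and `z` satisfies it iff `sum_i c_i * hadamard_{j in S_i} (M_j @ z) == 0`. This sketch works over the integers for simplicity, ignoring the finite field:

```python
# Minimal CCS satisfiability check; R1CS is the special case with matrices
# (A, B, C), multisets S = [[0, 1], [2]] and constants c = [1, -1], i.e.
# (A z) o (B z) - (C z) = 0.
import numpy as np

def ccs_satisfied(matrices, multisets, constants, z):
    total = np.zeros(matrices[0].shape[0])
    for c, S in zip(constants, multisets):
        term = np.ones(matrices[0].shape[0])
        for j in S:
            term = term * (matrices[j] @ z)  # Hadamard product over S
        total = total + c * term
    return np.array_equal(total, np.zeros_like(total))

# R1CS as a CCS instance: one constraint x * y = out, with z = [1, x, y, out].
A = np.array([[0, 1, 0, 0]])  # selects x
B = np.array([[0, 0, 1, 0]])  # selects y
C = np.array([[0, 0, 0, 1]])  # selects out
print(ccs_satisfied([A, B, C], [[0, 1], [2]], [1, -1], np.array([1, 3, 5, 15])))  # True
```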
### SNARK-VMs folding schemes
> ethereum client implementation
> --> cross-compiled to another instruction set
> --> verified by any folding-based SNARK VM (with a fixed circuit)
- able to handle arbitrary-length computation thanks to the folding scheme.
- claimed by Ming: the solution he is most confident will be free of soundness / under-constraint bugs, whether now or in the future.
- current status: the Rust emulator witness generator works, successfully handling an Ethereum transaction. Trying to integrate with [powdr-asm](https://github.com/powdr-labs/powdr/)
- [design docs](https://github.com/hero78119/zkGeth) by Ming
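The emulator -> step-witness pipeline above can be sketched as follows: a folding-based SNARK VM proves one fixed step circuit per instruction, so the emulator's job is to emit a per-step trace. The instruction set and trace layout here are invented for illustration, not powdr's:

```python
# Toy register-machine emulator that records one witness row per executed
# instruction; each row corresponds to one folding step of the same circuit.
def run(program, regs):
    trace = []
    pc = 0
    while pc < len(program):
        op, dst, src = program[pc]
        before = dict(regs)
        if op == "add":
            regs[dst] = regs[dst] + regs[src]
        elif op == "mov":
            regs[dst] = regs[src]
        else:
            raise ValueError(f"unknown op {op}")
        trace.append({"pc": pc, "op": op, "regs_in": before, "regs_out": dict(regs)})
        pc += 1
    return regs, trace

program = [("mov", "r1", "r0"), ("add", "r1", "r0"), ("add", "r1", "r1")]
regs, trace = run(program, {"r0": 3, "r1": 0})
print(regs["r1"], len(trace))  # 12 3
```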
### Other topics
- explore STARK/AIR ?