From the perspective of explicit vs. implicit state transitions, a ZK-Rollup is the former and an Optimistic-Rollup is the latter, which requires much more data to be posted on chain. [14:02]
An Optimistic-Rollup can still make use of ZKP's privacy features, and can even make the ZKP verification itself optimistic to save gas. [16:30]
Neither BN254 nor BLS12-381 is a Halo-friendly elliptic curve, so the verification would cost much more gas (maybe tens of millions). [11:58]
To do recursive proof composition, we need a circuit that can verify another ZK-SNARK proof, which is typically very hard if we have access to only one curve. [12:46]
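One way to see the single-curve difficulty: a SNARK verifier does arithmetic over the curve's base field, while circuit constraints live over its scalar field. A minimal Python check using the well-known BN254 (alt_bn128) parameters shows the two moduli differ, so verifying a proof inside a circuit over the same curve forces expensive non-native field emulation:

```python
# BN254 (alt_bn128) parameters, as used by Ethereum's pairing precompiles.
# Base field modulus p: curve point coordinates live in F_p.
p = 21888242871839275222246405745257275088696311157297823662689037894645226208583
# Scalar field modulus r: circuit wires and constraints live in F_r.
r = 21888242871839275222246405745257275088548364400416034343698204186575808495617

# The verifier's pairing arithmetic is over F_p, but a circuit on BN254
# natively expresses arithmetic only mod r. Since p != r, every F_p
# operation must be emulated with many F_r constraints.
print(p == r)                          # False: the fields differ
print(p.bit_length(), r.bit_length())  # 254 254
```

Halo-style curve cycles avoid this mismatch by pairing two curves so that each one's scalar field is the other's base field.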
It's hard to make DeFi private directly because it might break some of its mechanisms, so instead we might interact with DeFi through a privacy-preserving pool. [26:31]
In the launch planned for early next summer, users will be able to send a value note to some predefined DeFi action on the ZK-Rollup to interact with Ethereum DeFi smart contracts (a DeFi Bridge type transaction). For example, it might be possible to deposit a thousand Dai into Compound, or swap some tokens on Uniswap. [31:02]
It targets client-side proof generation (end-to-end privacy first), so it needs to be optimized to produce really tight circuits.
In the longer term, it wants to add custom semantics that abstract away private state management, allowing state to be accessed by individual users.
It compiles the DSL into an intermediate representation, ACIR (playing a role like LLVM's IR, but for ZKP), which can then work with multiple backends such as PLONK, Groth16, etc.
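The frontend/IR/backend split can be sketched as follows. This is a toy illustration only: the gate format and names here are invented, not the real ACIR, and the "backends" merely count constraints to show why a shared IR lets each proving system lower gates its own way:

```python
# Hypothetical sketch of the DSL -> IR -> backend pipeline (gate format
# invented for illustration; not the real ACIR).

def frontend_compile():
    # Toy IR for proving knowledge of x with x*x + 3 == 12:
    return [
        {"op": "mul", "lhs": "x", "rhs": "x", "out": "t"},
        {"op": "add_const", "lhs": "t", "const": 3, "out": "y"},
        {"op": "assert_eq", "lhs": "y", "const": 12},
    ]

def backend_constraint_count(ir, backend):
    # A backend with richer gates (e.g. PLONK-style custom gates) can
    # fuse a constant addition into the preceding gate; a plain R1CS
    # backend spends one constraint per IR gate.
    if backend == "plonkish":
        fused = sum(1 for g in ir if g["op"] == "add_const")
        return len(ir) - fused
    return len(ir)

ir = frontend_compile()
print(backend_constraint_count(ir, "r1cs"))      # 3
print(backend_constraint_count(ir, "plonkish"))  # 2
```

The design mirrors LLVM: many frontends, one IR, many backends, so circuit-level optimizations can be written once against the IR.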
Current DSL compilers produce the intermediate representation R1CS, which is very efficient for non-universal SNARKs.
However, R1CS is not so friendly to universal SNARKs and leads to inefficient circuits that fail to leverage the strength of PLONK's custom gates (which evaluate certain operations extremely efficiently, e.g. the Poseidon hash, Pedersen hash, SHA-256, XOR, etc.).
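A concrete instance of the XOR case: in R1CS, XOR of two bits must be arithmetized as a + b - 2ab, costing roughly one rank-1 constraint per bit pair (so a 32-bit XOR needs ~32 constraints plus bit decomposition), while a PLONK custom gate or lookup can handle wider chunks per gate. A quick check of the arithmetization identity:

```python
# R1CS arithmetization of XOR on single bits: for a, b in {0, 1},
# a XOR b == a + b - 2*a*b, which fits one rank-1 constraint
# (the single multiplication a*b).

def xor_r1cs(a: int, b: int) -> int:
    return a + b - 2 * a * b

for a in (0, 1):
    for b in (0, 1):
        assert xor_r1cs(a, b) == (a ^ b)
print("identity holds for all bit pairs")
```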
Even when using WASM to generate proofs on the client side, there are still many problems: lack of the CPU's native efficient mathematical opcodes, no multi-threading, and memory limits. [42:54]
Currently the most time-consuming parts of proof generation are prime field arithmetic and prime field multiplication. To get multi-worker parallelism instead of multi-threading, they split the proof into different parts for separate web workers to compute, which gives roughly a 4x speedup. [44:26]
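The work-splitting pattern can be sketched as below. This is only an illustrative stand-in: a Python thread pool plays the role of the browser's web workers, and a toy batch of modular multiplications stands in for the real prover workload; the chunking logic is the point:

```python
# Illustrative sketch of splitting prover work across workers
# (Python threads as a stand-in for browser web workers).
from concurrent.futures import ThreadPoolExecutor

P = 2**61 - 1  # toy prime modulus, not a real SNARK field

def mulmod_batch(pairs):
    # Each worker handles one chunk of the field multiplications.
    return [a * b % P for a, b in pairs]

def split(work, n):
    # Partition work into n roughly equal chunks, one per worker.
    k, r = divmod(len(work), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        chunks.append(work[start:end])
        start = end
    return chunks

work = [(i, i + 1) for i in range(1, 101)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = list(pool.map(mulmod_batch, split(work, 4)))
results = [x for part in parts for x in part]
assert results == mulmod_batch(work)  # same answer as the sequential run
print(len(results))  # 100
```

In a real browser setting each chunk would be posted to a separate web worker, which is process-level parallelism rather than shared-memory threading, matching the multi-worker approach described above.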