# Native Rollups Call #0 Notes

Video: https://www.youtube.com/watch?v=MNtzDe9Ck0c

Participants: Matthew (mteam, Spire Labs), Justin Drake, Joshua Cheong (Mantle), Ming (PSD), Ali (PSD), Artem (cyberfund), Alex

## Definitions

> - it's an L2 if it uses Ethereum settlement
> - it's a rollup if it uses Ethereum DA
> - it's based if it uses Ethereum sequencing
> - it's native if it uses Ethereum execution

What is a native rollup?

JD: the subroutine that executes transactions uses a precompile. It takes 3 inputs - a pre-state root, a post-state root, and a trace - and asserts that executing the trace starting from the pre-state root ends at the post-state root; the assertion returns true or false. The trace needs to be available to validators so they can re-execute and verify that the state transition is correct. If we require the trace to be available, it has to be a rollup. The simplest native rollup routes all of its execution through the precompile; it is sufficient for user transactions to go through the precompile.

Native execution precompile - the precompile that is a requirement in Justin's definition: part of the subroutine and the default path for user transactions (the EXECUTE opcode). The precompile exposes the execution engine.

## zkVM diversity

It is likely valuable to formally verify at least one zkVM, and this may be more important than diversity.

> JD: Realisation: zkVM diversity to hedge soundness bugs is not nearly as powerful as client diversity to hedge consensus bugs. It's unlikely two clients share the same consensus bug, but it is likely two zkVMs have (different) critical soundness bugs. In other words, for zkVMs it suffices for an attacker to find independent soundness bugs, whereas for clients an attacker has to find correlated consensus bugs. This may be an argument for going all-in on a simple zkVM and leaning on formal verification instead of diversity.

JD: With ZK it is easier to have vouch-style multiplexing, which counterbalances the fact that a single zkVM bug may compromise that whole zkVM. If the probability of any one zkEL being compromised is p and you run n zkELs, then the probability that all of them are compromised is p^n, so if p is sufficiently low and n is sufficiently high, we're good.

Artem: how can there be 1 precompile and many proof systems (zkVMs)?

JD: the key thing for native rollups is that the execution engine precompile doesn't take in the proof explicitly on chain; instead the proof is implicit and gossiped off-chain to validators.

## Gas

Global gas target for all native rollups, not per native rollup.

JD: Native execution is a limited resource. Provers need to be able to generate these proofs in a certain amount of time, e.g. 1 slot, and this needs to hold under a credible minority-prover assumption. The ideal is to be in a position where the proving can be done by entities with laptops (not data centers), so we need some sort of limit on the cumulative gas used across all of the different native rollups.

JD: Maybe there's a model where part of the fees go to ETH and part of the fees go to the L2 token - a base fee for rollup execution that is the same for every single rollup, and some native rollups can charge a premium on top of that.
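As a rough illustration of that fee model, here is a minimal sketch assuming an EIP-1559-style update over a hypothetical global gas target shared by all native rollups; the names and numbers below are assumptions for illustration, not proposals.

```python
# Hypothetical sketch of the fee split JD describes: one shared base fee (in ETH)
# identical for every native rollup, plus an optional per-rollup premium paid in
# the rollup's own token. All constants are illustrative assumptions.

GAS_TARGET = 15_000_000          # assumed global gas target across all native rollups
BASE_FEE_MAX_CHANGE_DENOM = 8    # same update-rule shape as EIP-1559

def next_base_fee(parent_base_fee: int, cumulative_gas_used: int) -> int:
    """Update the shared base fee from the cumulative gas used by all native rollups."""
    delta = cumulative_gas_used - GAS_TARGET
    change = parent_base_fee * abs(delta) // GAS_TARGET // BASE_FEE_MAX_CHANGE_DENOM
    if delta > 0:
        return parent_base_fee + change
    return max(parent_base_fee - change, 0)

def split_fees(gas_used: int, base_fee: int, premium_per_gas: int) -> tuple[int, int]:
    """Return (eth_portion, rollup_token_portion) for one native rollup's execution."""
    eth_portion = gas_used * base_fee              # same rate for every rollup, denominated in ETH
    rollup_token_portion = gas_used * premium_per_gas  # rollup-specific premium, e.g. in the L2 token
    return eth_portion, rollup_token_portion
```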
## Native rollup as stepping stone

Native rollups with parallel re-execution (not SNARKs) may serve as a stepping stone.

> JD: Realisation: Having validators enforce the precompile using re-execution may be a pragmatic stepping stone towards the SNARK-based endgame. This would be a similar strategy to proto-danksharding: have validators do the naive thing (the equivalent of downloading blobs instead of doing DAS) and keep the gas limit small (the equivalent of the small 6-blob limit).
>
> Introducing the precompile with a maximum of 30M gas per L1 block may be totally reasonable from a resource utilisation perspective:
>
> - state: because execution is stateless there is no impact on state size
> - bandwidth: because the execution trace would need to be in calldata or blobs there is no impact on bandwidth
> - execution: because execution is parallel (similar to a JS web worker) it can be done on a separate core; and because execution is stateless there is no disk I/O bottleneck

M: Instead of doing SNARKs at first, we can get some of this with re-execution. The impact on validators in terms of hardware and bandwidth is minimal - just an additional thread of execution. If we move to SNARKs in the future (the endgame), this wouldn't require an L1 hard fork if we already have the precompile.

Justin says it may be possible to get native rollups into Fusaka in early 2026.

Ali: There are 2 notions of native rollups. The first is that you have a precompile and a verifier, which means you have to enshrine a specific verifier, which ends up selecting a proof system. The second notion is that the L1 coordinates around this precompile as an interface, and a native rollup provides its own verifier to the L1; in that paradigm we can have a diversity of native rollups. Which notion are we talking about?

JD: There are 3 notions. The first is a very opinionated proof system that is enshrined and that every validator needs to run; directionally this is very difficult to pull off in the short and medium term, not least for questions of security, e.g. some catastrophic vulnerability in the chosen proof system. The second path is that the proofs don't go on chain; they are gossiped to validators, and every validator chooses which execution engine they run. The third is where the rollups choose what verifier they have, which is the status quo with ZK rollups today. The problem is they're not native, e.g. they need governance to update verifiers. What we're trying to do is take this status quo where each rollup decides its verifier, and instead have a precompile that is meant to be an exact copy of the L1 EVM; the choice is at the validator level and not the rollup level.

## Opcode

> inputs:
> - prestate root
> - trace - enough information for anyone to generate a SNARK proof (full path of execution here)
> - bytecode input - EVM bytecode
>
> output:
> - poststate root or exception; if an exception is raised: invalid transactions are no-ops, reverts
>
> To avoid having to resupply the same EVM bytecode every time, can just reference a contract:
> - always refer to information that was made available in a previous slot
> - bytecode in blobs or bytecode in state

JD: Some minor revisions. We need to expose the total amount of gas used if we're going to have the EIP-1559 mechanism, so in addition to the post-state root we expose the total amount of gas used in that state transition. Another change is that instead of putting the post-state root as an output, we could put it as an input and think of the opcode as an assertion that returns true or false. Do we want our proof system to only prove correct state transitions, or do we also expect it to prove that a state transition is invalid? Whether we want these potential changes depends on which flavour we want (referring to the post-state root as input or output).
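A minimal sketch of the two interface flavours just described (post-state root as an output vs as an input to an assertion), using hypothetical Python type and function names; this mirrors the notes above and is not a spec.

```python
# Illustrative sketch (not a spec) of the two flavours of the EXECUTE interface.
# StateRoot, Trace, execute and execute_assert are assumed names for illustration.
from typing import NamedTuple, Optional

StateRoot = bytes  # 32-byte state root
Trace = bytes      # enough information for anyone to re-execute / generate a SNARK proof

class ExecutionResult(NamedTuple):
    post_state_root: Optional[StateRoot]  # None if an exception is raised
    gas_used: int                         # exposed so an EIP-1559-style mechanism can meter it

# Flavour 1: post-state root as an output.
def execute(pre_state_root: StateRoot, trace: Trace, bytecode: bytes) -> ExecutionResult:
    ...  # re-execute the trace statelessly (or check a gossiped proof) and return the result

# Flavour 2: post-state root as an input; the opcode is an assertion returning true/false.
def execute_assert(pre_state_root: StateRoot, post_state_root: StateRoot,
                   trace: Trace, bytecode: bytes) -> bool:
    ...  # True iff executing the trace from pre_state_root yields post_state_root
```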
Artem: not sure about an additional EIP-1559 mechanism, as it seems pretty cheap to verify (with SNARKs).

JD: the bottleneck will shift away from the verifiers - it will be so cheap to verify that it won't be a bottleneck for validators - but there's a new bottleneck, which is the prover, so there needs to be some sort of gas limit. If we SNARKify the current L1 EVM instance under the same paradigm, then we can unify the EIP-1559 mechanism, so instead of having separate gas limits for the L1 and native rollups, we could have a unified cumulative gas limit.

Josh: prover infrastructure is very different from validator infrastructure. Will there be a separate set of infrastructure incentives (for provers) to keep this in a finite set?

JD: incentivisation is important and non-trivial. With attester-proposer separation, we have the beacon proposer (weak) and the execution proposer (sophisticated, like today's builders). We want the proving load to fall on sophisticated entities. One proposal is for execution proposers to be bonded, e.g. 1 ETH, and to require them to produce proofs - sufficiently many proofs to cover a sufficiently large subset of the validators/attesters who can attest to that block. If we set the bond to be significantly larger than the cost of proving, then these rational entities are always incentivized to get the proofs built so they can at the very least include the block on chain. My guess is that for any given 12- or 4-second slot, the proving cost will be very small - on the order of dollars in total energy consumed, etc. - so a 1 ETH bond should be more than enough. The slightly unsatisfactory part of this answer is that there are only incentives to generate proofs for a majority of attesters, so if 1% of attesters run some exotic client, the proposer isn't incentivized to pay a few cents more to generate proofs for that 1%. One way to partially address that: we want validators to run a plurality of clients, so in some sense every client is a majority client.

## Composability

> even if you have very slow proofs, you would still have composability between the native rollups but not with the L1
>
> how fast are the proofs?

based sequencing + native execution = ultrasound rollup (definition pending)

"next slot proven" gives more time for proof generation

M: If proof generation is on the order of minutes... the higher the proving latency, the less time there is to build a block and to get order flow. There is a centralization risk with a beefy prover run by one builder. Discussion on this deferred to a future call.

## Plans for future calls

> - rough speccing for an opcode
> - understanding gas and resource usage
> - proving and provers (+ incentivization)
> - migration (technical and incentives)

M: I think the aggregate gas for all the native rollups doesn't accurately represent the proving cost for all the rollups. We could have many small native rollups that are faster to prove because the proving can be parallelized across them, but a huge native rollup has to be proven sequentially.
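A back-of-the-envelope sketch of that point, with made-up prover throughput and gas figures purely for illustration:

```python
# Illustrative model of M's observation above: with independent provers, many small
# rollups can be proven in parallel, while one huge rollup is proven sequentially.
# The throughput and gas numbers are assumptions, not measurements.

def proving_latency_parallel(rollup_gas: list[int], prover_throughput: float) -> float:
    """Many small rollups proven by independent provers: latency is set by the largest one."""
    return max(rollup_gas) / prover_throughput

def proving_latency_sequential(rollup_gas: list[int], prover_throughput: float) -> float:
    """A single huge rollup (or one prover handling everything): latency is the sum."""
    return sum(rollup_gas) / prover_throughput

# Example: ten 3M-gas rollups vs one 30M-gas rollup, at an assumed 1M gas/s per prover.
small_rollups = [3_000_000] * 10
print(proving_latency_parallel(small_rollups, 1_000_000))        # 3.0 seconds
print(proving_latency_sequential([30_000_000], 1_000_000))       # 30.0 seconds
```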
## Closing

Ali: Where does state live exactly? Is it in the trace?

JD: It's in the trace - it's stateless execution. The trace needs to include Merkle or Verkle paths, or at the very least the state leaves that are being accessed. We can't do state-diff compression because we need everyone to be able to generate the proofs, and if you don't have the full witness you can't generate the proof.

The biggest bottleneck to getting the precompile, technically speaking, is agreeing on a format for statelessness: how do we communicate the Merkle paths, serialize them, and put them onchain? Standardization of a format for statelessness is a dependency for having native rollups in Fusaka.
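For illustration only, here is one hypothetical shape such a statelessness format could take; the field names, container layout, and encoding choices below are assumptions, and settling them is exactly the open standardization question noted above.

```python
# Hypothetical sketch of what a serialized stateless trace might carry: the touched
# state leaves plus the Merkle/Verkle paths anchoring them to the pre-state root.
# The names and encoding here are assumptions, not a proposed standard.
from dataclasses import dataclass

@dataclass
class StateAccess:
    address: bytes           # account being touched
    storage_key: bytes       # storage slot (empty for pure account accesses)
    value: bytes             # the state leaf value read or written
    proof: list[bytes]       # Merkle/Verkle path from the leaf up to the pre-state root

@dataclass
class StatelessTrace:
    pre_state_root: bytes        # root that the witness proofs are anchored to
    transactions: list[bytes]    # encoded transactions to (re-)execute
    accesses: list[StateAccess]  # every state leaf touched, with its inclusion proof

    def serialize(self) -> bytes:
        """Placeholder: the concrete encoding (calldata vs blobs, SSZ vs RLP, ...) is
        exactly what needs to be standardized before the precompile can ship."""
        raise NotImplementedError
```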