# zkEVM on L1
## Abstract
This document goes over the high-level changes that would be needed in a client to support zkEVM proofs.
## Problem
There are quite a few problems being addressed; we mention them briefly.
**State**
The block validation process in the consensus layer (CL) requires an execution layer (EL) node. This is because the beacon block contains an execution payload. This in turn means that to validate a block as an attester, you must also run an EL.
Running an EL currently implies needing all of the state for the EL, which can be significant.
**Processing time**
As the block gas limit increases, the average and worst-case time needed to validate the execution payload will also increase.
**Bandwidth**
As the block gas limit increases, the average block size will also increase since there will be more transactions in each block.
**EL client diversity**
Since running an EL currently requires a lot of resources, most entities will run only one.
## Proposed zkEVM solution
**State**
zkEVM proofs can be seen as a form of statelessness, hence no state will be required to validate a block.
**Processing time**
The time needed to validate a block will be roughly constant no matter the size of the block, since proof verification time does not grow with the amount of execution being proven.
**Bandwidth**
zkEVM proofs are also proofs of execution, hence the transactions are no longer needed. We still need to prove their availability using data availability (DA) sampling.
**EL client diversity**
The cost to "add" an additional EL will be the same as the cost to subscribe to an extra subnet.
# Attester Architecture
## Dependencies
The main dependency for zkVM proofs is the need for some form of pipelining, as in ePBS.
This is needed because creating a proof will take roughly 6-8 seconds. Without ePBS or delayed execution, proofs would need to be created within 1-2 seconds, before the attestation deadline, which is not feasible.
There are currently two mitigations for this while we do not have ePBS:
- Allow late attesting on testnets
- Specify that the beacon node is always in optimistic syncing mode (sketched below)
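For the second mitigation, here is a minimal sketch of what "always optimistic" could look like, assuming a simple store of unproven block roots (all names here are illustrative, not from any client or spec):
```python
from dataclasses import dataclass, field

@dataclass
class OptimisticStore:
    # Block roots whose execution payload has not yet been proven.
    optimistic_roots: set = field(default_factory=set)

    def on_block(self, block_root: bytes) -> None:
        # Import the block optimistically: proofs arrive roughly
        # 6-8 seconds after the block, so the payload cannot be
        # treated as verified yet.
        self.optimistic_roots.add(block_root)

    def on_proofs_verified(self, block_root: bytes) -> None:
        # Once enough proofs verify, the block leaves optimistic mode.
        self.optimistic_roots.discard(block_root)
```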
## Engine API
There are three Engine API methods to take note of:
- *engine_getPayload*: Called when the CL wants the EL to make a new execution payload
- *engine_newPayload*: Called when the CL wants the EL to verify a new execution payload
- *engine_forkchoiceUpdated*: Called when the CL wants to update the EL on what the canonical finalized and safe heads are (and to trigger payload preparation)
**getPayload**
As a stateless attester, you will not be able to call engine_getPayload. This is because you have no EL state for creating execution payloads.
**forkchoiceUpdated**
Stateless attesters no longer need to call this method because they are not connected to an EL that needs updating.
**newPayload**
Instead of calling an EL, the attester will listen for proofs on its subscribed subnets. If the received proofs verify, this is enough to convince the beacon node that the execution payload is valid.
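As a rough sketch of this flow, assuming a hypothetical `verify_zk_proof` helper and proof objects that carry a `subnet_id` (neither is an existing API), the handler that takes the place of engine_newPayload could look like:
```python
def verify_zk_proof(proof, payload_root: bytes) -> bool:
    # Placeholder: a real node would invoke the zkVM verifier here.
    raise NotImplementedError

def on_execution_proof(proof, payload_root: bytes, proof_pool: dict) -> None:
    # Replaces engine_newPayload: verify the gossiped proof locally
    # instead of asking an EL to re-execute the payload. proof_pool
    # maps a payload root to the set of subnets that delivered a
    # verified proof for it.
    if verify_zk_proof(proof, payload_root):
        proof_pool.setdefault(payload_root, set()).add(proof.subnet_id)
```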
## Receiving proofs
Proofs will be received on subnets.
Without loss of generality, each subnet will carry the proofs for a specific EL. We recommend subscribing to multiple subnets in order to achieve client and proof diversity.
> N will be chosen in the future. For now we assume N = 8. This covers the known clients Reth, Nimbus-eth, Besu, Geth, Erigon, Nethermind, Eth-rex
The node will be able to set a parameter to decide how many proofs it requires before deeming a block's execution payload valid.
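Continuing the sketch above, this parameter reduces the validity check to a set-size comparison (the values of N and K below are placeholders, not chosen constants):
```python
PROOF_SUBNET_COUNT = 8  # N, per the note above
REQUIRED_PROOFS = 2     # K, chosen by the node operator

def is_payload_valid(payload_root: bytes, proof_pool: dict) -> bool:
    # Valid once proofs from K distinct subnets (i.e. K distinct
    # EL implementations) have verified for this payload.
    return len(proof_pool.get(payload_root, set())) >= REQUIRED_PROOFS
```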
## Generating and Broadcasting proofs
Stateless attesters do not generate proofs, but they will re-broadcast valid proofs they have received for blocks in their fork.
Builders or stateful proposers (for local block building) will build a block and subsequently generate N proofs for it, since stateless attesters will not deem the block valid until K out of N proofs have been received for that block.
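A hedged sketch of the builder-side flow, assuming a hypothetical per-zkVM `prove_execution` API and a gossip `publish_to_subnet` helper:
```python
def publish_to_subnet(subnet_id: int, proof) -> None:
    # Placeholder for gossip publication on the given proof subnet.
    raise NotImplementedError

def prove_and_broadcast(payload, zkvms: list) -> None:
    # One proof per zkVM/EL pair. In practice the proofs would be
    # generated in parallel, since each takes roughly 6-8 seconds.
    for subnet_id, zkvm in enumerate(zkvms):
        proof = zkvm.prove_execution(payload)  # assumed prover API
        publish_to_subnet(subnet_id, proof)
```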
**MevBoost**
> Changes are not needed for the initial implementation.
## Syncing
The syncing component for the EL now reduces to downloading the proofs required for each block.
> Technically, we only need the proof for the latest block
The difference is that this happens on the CL and not the EL. The number of proofs needed would be proportional to the number of blocks since the finalized checkpoint.
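Reusing the helpers from the earlier sketches, syncing could then reduce to the following loop (again, `download_proofs` and the chain interface are assumptions, not an existing API):
```python
def download_proofs(block_root: bytes) -> list:
    # Placeholder: fetch the proofs for this block from peers.
    raise NotImplementedError

def sync_from_finalized(chain, finalized_root: bytes, head_root: bytes,
                        proof_pool: dict) -> None:
    # Walk every block since the finalized checkpoint and verify its
    # proofs, instead of re-executing each payload on an EL.
    for block in chain.blocks_between(finalized_root, head_root):
        for proof in download_proofs(block.root):
            on_execution_proof(proof, block.payload_root, proof_pool)
        assert is_payload_valid(block.payload_root, proof_pool)
```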
# Synergies with PeerDAS
We observe that many clients may already have similar codepaths because they implement PeerDAS. Consider the following diagrams:
## High level overview
```
┌─────────────┐      ┌──────────────────┐      ┌──────────────┐
│   Network   │─────▶│Dependency Checker│─────▶│ Block Import │
│  (Gossip)   │      │   (Collect &     │      │   Pipeline   │
│             │      │    Validate)     │      │              │
└─────────────┘      └──────────────────┘      └──────────────┘
       │                      │                       │
  Components           Check Complete           Import Block
    Arrive              Dependencies           to Fork Choice
```
With PeerDAS, the columns for a block are collected and validated, and only then can we determine the validity of the block. If the columns are missing, then the block is considered invalid.
One can think of PeerDAS as introducing the concept of a "block dependency" -- a component that the block depends on in order for it to be considered valid.
We want to extend this notion to execution proofs. The main difference between columns and execution proofs is that proofs are created by the block builder.
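One possible shape for such a generalized dependency tracker, as a sketch only (field names and the threshold parameter are illustrative):
```python
from dataclasses import dataclass, field

@dataclass
class BlockDependencies:
    # PeerDAS: have the sampled data columns been collected and validated?
    columns_complete: bool = False
    # zkVM fork: subnets from which a verified execution proof arrived.
    verified_proofs: set = field(default_factory=set)

    def ready_for_import(self, required_proofs: int) -> bool:
        # The block enters the import pipeline only once every
        # dependency is satisfied.
        return self.columns_complete and len(self.verified_proofs) >= required_proofs
```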
### Visualization of component dependencies
```
┌───────────────────────────────────────────────────────────────────────────────┐
│                           Block Import Dependencies                           │
├─────────────────┬─────────────────┬─────────────────┬─────────────────────────┤
│    Pre-Deneb    │      Deneb      │     Fusaka      │        zkVM Fork        │
├─────────────────┼─────────────────┼─────────────────┼─────────────────────────┤
│ • Block only    │ • Block         │ • Block         │ • Block                 │
│                 │ • Blob sidecars │ • Data columns  │ • Data columns          │
│                 │                 │   (PeerDAS)     │ • Execution proofs      │
└─────────────────┴─────────────────┴─────────────────┴─────────────────────────┘
```
Above is a diagram showing the progression of "block dependencies". We mention this to note that if a component is already orchestrating the validity of a block with respect to columns/blobs, then that component can be generalized to also include proofs.