A rough sketch of a hypothetical `execute` flow is outlined below:
**1. Decompression**
Load and decompress the following from `trace` (a possible layout is sketched after this list):
- Witness data
- EVM bytecode
- Pointers to custom precompiles, patched opcodes, and custom transaction type handlers
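For concreteness, a decompressed trace might be represented roughly as follows. The field names and types are illustrative assumptions, not a proposed encoding:

```rust
// Hypothetical layout of a decompressed trace; all names here are assumptions.
type Address = [u8; 20];
type Hash = [u8; 32];

struct WitnessEntry {
    key: Hash,      // e.g. a hashed account or storage-slot key
    value: Vec<u8>, // the corresponding pre-state value
}

struct Trace {
    pre_state_root: Hash,
    post_state_root: Hash,
    gas_used: u64,
    witness: Vec<WitnessEntry>,       // state accessed during the block
    bytecode: Vec<u8>,                // EVM bytecode to execute
    custom_precompiles: Vec<Address>, // L1 contracts holding ELF object code
    patched_opcodes: Vec<Address>,
    custom_tx_handlers: Vec<Address>,
}
```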
**2. Initialization**
- Fetch code for custom precompiles, patched opcodes, and custom transaction type handlers from L1 contract state, and register them in the EVM context
- Initialize an in-memory database with the `pre_state_root` in the trace, and populate it with witness data
**3. Execution**
- Execute the decompressed EVM bytecode
- Assert that metered gas usage matches `gas_used` supplied in the trace (metering covers custom precompile execution as well)
- Assert that the final state root of the in-memory database matches `post_state_root` supplied in the trace
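Putting the three steps together, the body of `execute` might look roughly like the sketch below. `EvmContext`, `InMemoryDb`, `decompress_trace`, `fetch_elf_from_state`, and `ExecError` are hypothetical stand-ins for client-side interfaces, and `Trace` is the layout sketched earlier:

```rust
// Sketch of the full flow under the assumptions above; not a real client API.
fn execute(evm: &mut EvmContext, raw_trace: &[u8]) -> Result<(), ExecError> {
    // 1. Decompression: decode witness data, bytecode, and custom-code pointers.
    let trace: Trace = decompress_trace(raw_trace)?;

    // 2. Initialization: register custom code fetched from L1 contract state,
    //    then seed an in-memory database rooted at the trace's pre-state root.
    for addr in &trace.custom_precompiles {
        let elf = fetch_elf_from_state(addr); // statically linked ELF stored in an L1 account
        evm.register_precompile(*addr, elf);
    }
    // ...likewise for patched opcodes and custom transaction type handlers
    let mut db = InMemoryDb::new(trace.pre_state_root);
    for entry in trace.witness {
        db.insert(entry.key, entry.value);
    }

    // 3. Execution: run the bytecode, then check metered gas and the final root.
    let gas_used = evm.run(&mut db, &trace.bytecode)?; // metering covers custom precompiles
    if gas_used != trace.gas_used {
        return Err(ExecError::GasMismatch);
    }
    if db.state_root() != trace.post_state_root {
        return Err(ExecError::StateRootMismatch);
    }
    Ok(())
}
```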
## Additional notes
We assume that a RISC-V execution precompile is added to the EVM, and that custom precompiles, patched opcodes, and custom transaction type handlers are defined in statically linked ELF files deployed within contract accounts. Instead of fine-grained metering of RISC-V execution, we could charge a flat cost per cycle and revert if the cycle count exceeds a maximum supplied in `trace`.
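Under that per-cycle model, metering could reduce to something like the following sketch; `riscv_run` and `GAS_PER_CYCLE` are assumptions rather than an existing interface:

```rust
// Per-cycle charging sketch. `riscv_run` is a hypothetical interpreter entry
// point that returns `None` if execution would exceed `max_cycles`; the flat
// per-cycle gas cost is likewise an assumption.
const GAS_PER_CYCLE: u64 = 1;

fn meter_riscv_call(elf: &[u8], input: &[u8], max_cycles: u64) -> Option<(Vec<u8>, u64)> {
    // `riscv_run` executes the ELF and reports how many cycles it consumed.
    let (output, cycles) = riscv_run(elf, input, max_cycles)?;
    Some((output, cycles * GAS_PER_CYCLE)) // caller reverts on `None`, otherwise charges this gas
}
```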
Initializing an EVM with custom precompiles and different opcode behavior requires reading state, which could come with a performance penalty. We could partially address this by caching custom precompile object code in memory for up to `N` blocks, refreshing an entry's lifetime whenever it is accessed again. The gas cost of `execute` would then vary depending on whether the code is warm or cold.
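One way to picture such a cache is a map from precompile address to object code, tagged with the block at which it was last accessed; the names and eviction policy below are illustrative:

```rust
use std::collections::HashMap;

// Sketch of a block-scoped code cache. Entries live for up to
// `retention_blocks` (the `N` above) after their last access.
struct CodeCache {
    retention_blocks: u64,
    entries: HashMap<[u8; 20], (Vec<u8>, u64)>, // address -> (ELF code, last-access block)
}

impl CodeCache {
    /// Warm read: refresh the entry's last-access block and return the code.
    /// A `None` result is a cold read, and the caller falls back to state.
    fn get(&mut self, addr: [u8; 20], current_block: u64) -> Option<&Vec<u8>> {
        if let Some((code, last_access)) = self.entries.get_mut(&addr) {
            *last_access = current_block;
            return Some(code);
        }
        None
    }

    /// Drop entries that have not been touched for more than `retention_blocks`.
    fn evict_stale(&mut self, current_block: u64) {
        let n = self.retention_blocks;
        self.entries
            .retain(|_, (_, last)| current_block.saturating_sub(*last) <= n);
    }
}
```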
A similar cache idea could also be used to reduce the overhead of witness-related calldata: if we keep the calldata of the last `N` blocks in memory, new blocks could reference portions of that cached calldata to supply (part of) their witness data.
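For example, witness chunks in calldata could be encoded either as inline bytes or as references into previously posted calldata; this encoding is purely hypothetical:

```rust
// Hypothetical encoding for witness chunks in calldata: either inline bytes,
// or a reference into calldata that was posted within the last `N` blocks.
enum WitnessChunk {
    Inline(Vec<u8>),
    CachedRef {
        block_number: u64, // block whose calldata is being referenced
        offset: u32,       // byte offset into that block's cached calldata
        len: u32,          // number of bytes to reuse
    },
}
```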