# Scroll RETH Development
Part3 contains "application questions" for anyone who would like to take this job. Please send your thoughts on these to zhuo@scroll.io
## Part1: Scroll Geth Features
First, we explain how scroll-geth differs from the original L1 geth. Hedged, illustrative code sketches for several of these features follow the feature list below.
Repo: https://github.com/scroll-tech/go-ethereum
1. L1-type tx fetching and execution. As an L2 client, it needs to fetch L1-type txs from the L1 geth RPC (emitted by the bridge contract) and execute them on L2. Code dir: rollup/rollup_sync_service
2. [L1 fee](https://docs.scroll.io/en/developers/transaction-fees-on-scroll/#l1-fee) mechanism. Code dir: rollup/fees
3. Binary trie. As a ZK L2, Scroll uses a binary trie as its storage trie. Due to this change, we still do not support some geth features such as snapshot, snap sync, gcmode=full, and offline pruning. Code: trie/zk_trie*.go. [The underlying trie implementation](https://github.com/scroll-tech/zktrie) is provided for both Rust and Go.
4. EVM changes. Different behavior of some opcodes (blockhash, selfdestruct, prevrandao) and precompiles (ecpairing, modexp). [This](https://www.rollup.codes/scroll) is a great reference for these changes. We already have a [forked revm branch](https://github.com/scroll-tech/revm/pull/1), fully compatible with scroll-geth.
5. RPC changes. For example, we added the "l1 data fee" to tx receipts, and we added an RPC to fetch information about L1-type txs.
6. "Trace". Trace is a json data format, it can be seen as a superset of "stateless block", needed by provers. [Trace example](https://github.com/scroll-tech/stateless-block-verifier/blob/master/testdata/mainnet_blocks/5224657.json). While in future we may replace this customized data format with more standard rpcs or [more official data definition](https://github.com/ethereum/go-ethereum/pull/29719). Codes dir: rollup/tracing
7. Circuit capacity checking module. This module is part of the block building logic, to make sure that built blocks can never overflow the circuit limit. In the future we will remove this module. Code dir: rollup/circuitcapacitychecker
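To make feature (1) concrete, below is a minimal Rust sketch of how an L1 message transaction could be modeled in an L2 client. The field names are assumptions loosely modeled on scroll-geth's `L1MessageTx` type, not an existing Reth type.

```rust
// Hypothetical sketch of an L1 message transaction type for a Scroll L2 client.
// Field names mirror scroll-geth's L1MessageTx and are assumptions, not a Reth API.

/// A transaction originating from the L1 bridge contract; it carries no
/// signature because its authenticity is established on L1.
#[derive(Debug, Clone)]
pub struct L1MessageTx {
    /// Position of the message in the L1 message queue.
    pub queue_index: u64,
    /// Gas limit granted to the message on L2.
    pub gas_limit: u64,
    /// L2 address the message calls (None for contract creation).
    pub to: Option<[u8; 20]>,
    /// Value transferred with the message, in wei (simplified; a real
    /// implementation would use a 256-bit integer type).
    pub value: u128,
    /// Calldata of the message.
    pub data: Vec<u8>,
    /// L1 sender, taken from the bridge event rather than recovered from a signature.
    pub sender: [u8; 20],
}
```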
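For feature (2), the linked docs describe an L1 data fee charged on top of the L2 execution fee to cover posting transaction data to L1. The sketch below shows a simplified, pre-Curie-style version of that calculation; `PRECISION`, the rounding, and the way `overhead`, `scalar`, and `l1_base_fee` are combined are illustrative assumptions (the real values come from the L1 gas oracle contract and the docs above).

```rust
// Simplified, illustrative sketch of a pre-Curie-style L1 data fee.
// The real parameters (overhead, scalar, l1_base_fee) live in an L1 gas
// oracle contract on L2; constants and exact rounding here are assumptions.

const PRECISION: u128 = 1_000_000_000; // assumed fixed-point precision

/// Gas attributed to posting the raw RLP-encoded tx bytes to L1:
/// 4 gas per zero byte, 16 gas per non-zero byte.
fn tx_data_gas(rlp_bytes: &[u8]) -> u128 {
    rlp_bytes
        .iter()
        .map(|&b| if b == 0 { 4u128 } else { 16u128 })
        .sum()
}

/// l1_data_fee ≈ (tx_data_gas + overhead) * l1_base_fee * scalar / PRECISION
fn l1_data_fee(rlp_bytes: &[u8], overhead: u128, l1_base_fee: u128, scalar: u128) -> u128 {
    (tx_data_gas(rlp_bytes) + overhead) * l1_base_fee * scalar / PRECISION
}
```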
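For feature (3), the sketch below is a purely conceptual model of a binary (two-child) Merkle trie: every internal node has exactly two children selected by one bit of the key. It is not the zktrie crate's API, and it ignores zktrie specifics such as the Poseidon hash and key encoding.

```rust
// Conceptual sketch of a binary (two-child) Merkle trie, as used by a zkTrie.
// This is not the zktrie crate's API; it only illustrates the node shapes.

type Hash = [u8; 32];

enum Node {
    /// Empty subtree.
    Empty,
    /// Internal node with exactly two children (selected by one key bit).
    Branch { left: Hash, right: Hash },
    /// Leaf storing a key/value pair at the end of its key path.
    Leaf { key: Hash, value: Vec<u8> },
}

/// Walk the trie one bit of the key at a time: bit 0 goes left, bit 1 goes right.
/// `load` resolves a node hash to a node (e.g. from the trie database).
fn lookup(mut node: Node, key: Hash, load: impl Fn(Hash) -> Node) -> Option<Vec<u8>> {
    for depth in 0..256 {
        match node {
            Node::Empty => return None,
            Node::Leaf { key: k, value } => return (k == key).then_some(value),
            Node::Branch { left, right } => {
                let bit = (key[depth / 8] >> (7 - depth % 8)) & 1;
                node = load(if bit == 0 { left } else { right });
            }
        }
    }
    None
}
```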
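For feature (4), the sketch below only illustrates the kind of spec-gated dispatch these changes require, using prevrandao and selfdestruct as examples. It is not revm's handler API; the authoritative semantics are on the rollup.codes page and in the forked revm branch.

```rust
// Illustrative (non-authoritative) sketch of spec-gated opcode behavior.
// This is not revm's handler API; consult rollup.codes and the forked revm
// branch for the exact Scroll semantics.

enum Spec {
    Ethereum,
    Scroll,
}

/// PREVRANDAO: on L1 it exposes the beacon chain randomness; on Scroll the
/// opcode is documented to return a constant instead (assumed zero here).
fn prevrandao(spec: &Spec, l1_randomness: [u8; 32]) -> [u8; 32] {
    match spec {
        Spec::Ethereum => l1_randomness,
        Spec::Scroll => [0u8; 32],
    }
}

/// SELFDESTRUCT: not supported on Scroll, so an interpreter would raise an
/// error instead of performing the usual account deletion / balance transfer.
fn selfdestruct(spec: &Spec) -> Result<(), &'static str> {
    match spec {
        Spec::Ethereum => Ok(()),
        Spec::Scroll => Err("SELFDESTRUCT is not supported on Scroll"),
    }
}
```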
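For feature (5), most of the work is extending existing response types. The sketch below shows one hypothetical way to attach an extra L1 data fee field to a receipt via serde flattening; the field name `l1Fee` is an assumption modeled on scroll-geth's receipts, not a confirmed Reth type.

```rust
// Sketch of extending an RPC receipt with an L1 data fee field.
// Field names are assumptions modeled on scroll-geth's receipt, not Reth types.
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct ScrollReceiptExtras {
    /// L1 data fee charged for posting this transaction's data to L1
    /// (a hex-encoded quantity in the actual JSON-RPC response).
    l1_fee: String,
}

/// A Scroll receipt is a standard receipt plus the extra fields, flattened
/// into one JSON object.
#[derive(Serialize)]
struct ScrollReceipt<T: Serialize> {
    #[serde(flatten)]
    standard: T,
    #[serde(flatten)]
    extras: ScrollReceiptExtras,
}
```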
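Feature (7) can be pictured as an extra admission check in the block-building loop: a transaction is only included if the prover's circuit still has row capacity for it. Below is a hypothetical sketch of that loop; `row_usage` and `MAX_CIRCUIT_ROWS` are stand-ins for the real checker and its limit.

```rust
// Hypothetical sketch of circuit-capacity-aware block building.
// `row_usage` stands in for the real circuit capacity checker, which estimates
// how many circuit rows a transaction's execution would consume.

struct Tx; // placeholder transaction type

const MAX_CIRCUIT_ROWS: u64 = 1_000_000; // placeholder limit

fn row_usage(_tx: &Tx) -> u64 {
    // In the real module this comes from executing the tx against the
    // prover's cost model; a constant keeps the sketch self-contained.
    10_000
}

/// Greedily pack transactions from the pool while the estimated circuit row
/// consumption stays under the limit, so a sealed block can always be proven.
fn build_block(pool: Vec<Tx>) -> Vec<Tx> {
    let mut included = Vec::new();
    let mut rows_used = 0u64;
    for tx in pool {
        let rows = row_usage(&tx);
        if rows_used + rows > MAX_CIRCUIT_ROWS {
            break; // sealing here guarantees the block fits in the circuit
        }
        rows_used += rows;
        included.push(tx);
    }
    included
}
```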
## Part2: Scroll-reth development milestones
1. Milestone1: basic follower node. We plan to first build a follower node instead of a sequencer node, since that is easier. The goal is for the follower node to be able to sync Scroll mainnet, so features (1)(2)(3)(4) above are needed. We do not focus on performance in this milestone, but it should still be reasonable: at least 100 TPS, or about 5 days to sync Scroll mainnet (which is the performance of geth's "full-sync" mode).
2. Milestone2: high-performance follower node. Reth supports much faster sync without computing the state root of each block, so in this milestone we hope Scroll-Reth can sync Scroll mainnet within 1 day by utilizing Reth's native designs.
3. Milestone3: full-featured client. Support some Scroll-native RPCs and be able to act as a sequencer. (Very likely we don't need to port the "trace" and "circuit capacity checking" code to Reth.) We may also need to use Reth's "exex" to make the code more maintainable.
## Part3: Questions to be answered if you are going to work on this project
1. For the binary trie, if you are going to implement it in Reth, which folders/files would you need to add/change? Can you give a more detailed development plan or roadmap? (For reference, if this were a question for Geth, we could answer like this: implement the trie itself as another type alongside the old Trie and SecureTrie, implement a trie database to enable in-memory pruning, implement the trie's range proof method to support snapshot and snap sync, implement prefetching, implement correct "Commitment" semantics so that full nodes and archive nodes behave as expected and remain performant, etc.)
2. How can the exex feature of Reth help make the Scroll L2 client more maintainable? Which features do you think can work well with exex?
## Part4: Other resources for this project
We already have a [POC](https://github.com/scroll-tech/stateless-block-verifier) that combines Scroll-REVM and our zktrie. It has been tested and can verify every Scroll mainnet block.