Summary
This document aims to establish a standardized definition of "commodity hardware" or, at the least, to recommend hardware specifications that we believe are suitable for validating on Ethereum mainnet.
A clear hardware specification is crucial for:
Without a shared understanding of target hardware specifications:
To estimate which CPUs people are commonly buying, we did a brief search of popular CPU choices and listed their characteristics. These are not necessarily the most recent models.
High-Performance Consumer Market
Mid-Range Consumer Market
Existing Staking Community
Overall, we recommend a setup with:
- CPU: 8 cores / 16 threads
- Memory: 64GB
- Storage: 4TB
CPU rationale
We substantiate the choice of 8 cores with the Steam hardware survey. The majority of CPUs there have either six or eight cores. One can view the Steam dataset as being biased towards the low/median end of the gaming market.
The CPU rating scores were decided by looking at high-end CPUs with sixteen or fewer cores on CPU benchmarks, comparing them against the consumer-trends analysis above, and settling on a rough average that we thought was reasonable.
We will not consider AVX-512 when conducting benchmarks; however, we will consider AVX2 (Intel/AMD) and NEON (ARM), since these are widespread.
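As a quick sanity check, the snippet below shows one way to detect these extensions at runtime; the `/proc/cpuinfo` parsing is a Linux-only heuristic of our own, and NEON (ASIMD) is simply part of the 64-bit ARM baseline.

```python
# Rough, Linux-only check for the SIMD extensions our benchmarks
# consider. AVX2 appears as a flag in /proc/cpuinfo on x86-64, while
# NEON (ASIMD) is mandatory in the ARMv8-A baseline, so any aarch64
# machine has it.
import platform

def has_benchmark_simd() -> bool:
    machine = platform.machine()
    if machine in ("aarch64", "arm64"):
        return True  # NEON is part of the ARMv8-A baseline ISA
    if machine == "x86_64":
        with open("/proc/cpuinfo") as f:
            return "avx2" in f.read().split()
    return False

if __name__ == "__main__":
    print("AVX2/NEON available:", has_benchmark_simd())
```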
Storage rationale
The 4TB of storage accounts for current history and state growth.
Memory rationale
The 64GB of memory was chosen for two reasons:
We recommend the ASUS NUC 14 Pro from the NUC series:
*This seems to be the closest NUC model to the desired 8 cores.
We recommend the Minisforum UM790 Pro with modifications:
Below, we list average prices for the main components needed if you were to build your own server. The total cost is approximately $1000.
For 8 cores and 16 threads, the average price of a CPU is $300-400.
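As a quick sanity check on the ~$1000 total, here is the arithmetic with illustrative figures; only the CPU range comes from the list above, and the other prices are placeholder estimates we chose for the example.

```python
# Illustrative build cost. Only the CPU range comes from the list
# above; the other figures are placeholder estimates, not quotes.
components = {
    "CPU (8c/16t)": 350,          # midpoint of the $300-400 range
    "Motherboard": 150,           # placeholder estimate
    "64GB RAM": 150,              # placeholder estimate
    "4TB SSD": 250,               # placeholder estimate
    "PSU, case and cooling": 100, # placeholder estimate
}
print(f"Total: ${sum(components.values())}")  # ~$1000
```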
For more resources on building your own setup, see eth-docker's hardware documentation.
Currently there is no meaningful role separation between an attester and a proposer, so the hardware requirements for an attester are the same as those for a proposer.
If there were a meaningful separation, an attester could run on weaker hardware, since it would no longer need to propose.
An aggregator aggregates BLS signatures. With the introduction of post-quantum signatures, the job of the aggregator may become more computationally intensive.
There is also currently no meaningful separation between an aggregator and a proposer.
We believe that our recommended hardware requirements would satisfy the needs of an aggregator if the roles were separated, so no meaningful changes would be required either way.
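For concreteness, the sketch below shows what BLS signature aggregation looks like using the py_ecc library; the keys and message are made up for illustration.

```python
# Minimal BLS aggregation sketch using py_ecc (pip install py_ecc).
# Keys and the message are illustrative, not real validator data.
from py_ecc.bls import G2ProofOfPossession as bls

message = b"example attestation signing root"
secret_keys = [bls.KeyGen(bytes([i]) * 32) for i in range(1, 4)]
public_keys = [bls.SkToPk(sk) for sk in secret_keys]

# Each validator signs the same message; the aggregator combines all
# signatures into a single 96-byte aggregate signature.
signatures = [bls.Sign(sk, message) for sk in secret_keys]
aggregate = bls.Aggregate(signatures)

# One pairing-based check covers every signer at once, which is what
# keeps verification cheap for everyone downstream.
assert bls.FastAggregateVerify(public_keys, message, aggregate)
```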
Proposers are not assumed to be powerful enough to compete with centralized block builders, nor is this a goal.
Key points about proposer requirements:
While out of scope for this document, we note some responsibilities:
Once we have full statelessness, we envision that validators themselves can be stateless.
This:
The stateless verification procedure fits within our recommended hardware requirements; verification is cheap. We also note that our recommended hardware requirements work for both Verkle tries and binary trees with STARK proofs, though the latter requires more benchmarks using traditional hashes.
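To give a feel for why verification is cheap, here is a toy binary Merkle branch check using SHA-256 as a stand-in hash (the production hash for binary state trees is precisely what still needs benchmarking):

```python
# Toy binary Merkle branch verification: O(log n) hashes per proof.
# SHA-256 is a stand-in; the actual hash choice is still being
# benchmarked, as noted above.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_branch(leaf, branch, index, root):
    node = leaf
    for sibling in branch:
        node = sha256(sibling + node) if index & 1 else sha256(node + sibling)
        index >>= 1
    return node == root

# Hand-built 4-leaf tree; check the proof for the leaf at index 2.
leaves = [sha256(bytes([i])) for i in range(4)]
l01, l23 = sha256(leaves[0] + leaves[1]), sha256(leaves[2] + leaves[3])
root = sha256(l01 + l23)
assert verify_branch(leaves[2], [leaves[3], l01], 2, root)
```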
Raising the gas limit increases the rate of history growth, which affects the storage requirements. The analysis from Paradigm suggests that, without any changes, we have 2 to 3 years before we exceed 2TB. This does not include the storage requirements of the consensus layer (CL), however; Pari from DevOps notes that with the CL we have less than six months before we reach the 2TB limit.
Given that the recommended storage space is 4TB and we plan to implement EIP-4444 within at most two years, this storage requirement should not pose any issues, even if we double the gas limit.
If a user plans to stay at 2TB, this may be sufficient, provided that pre-merge files are pruned within six months (freeing up ~500GB) and EIP-4444 is implemented within a year.
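These runway figures are easy to sanity-check with simple division; in the sketch below, the current disk usage and growth rate are our own illustrative assumptions, not measurements.

```python
# Back-of-the-envelope storage runway. The 1700GB starting point and
# 60GB/month growth rate are illustrative assumptions, not measurements.
def months_until(limit_gb, used_gb, growth_gb_per_month):
    return (limit_gb - used_gb) / growth_gb_per_month

print(months_until(2000, 1700, 60))        # ~5 months to the 2TB limit
print(months_until(2000, 1700 - 500, 60))  # ~13 months after pruning ~500GB
print(months_until(4000, 1700, 60))        # ~38 months with a 4TB disk
```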
As noted in the Paradigm post, the addition of blobs has reduced the history growth caused by rollup users, since they have switched from calldata to blob data.
It is currently unclear whether rollups switch between calldata and blob data, which means it is also unclear whether raising the blob limit will affect history growth any further.
This section is the most hand-wavy, since it has many path dependencies on Orbit, 3SF and MaxEB.
We know that for SSF the validator set needs to be reduced, so at the very least, the bump in hardware specifications should be sufficient for any aggregation that happens in SSF.
With contributions from Parithosh Jayanthi, Kevaundray Wedderburn, Josh Rudolf, Dankrad Feist, Justin Traglia, Ignacio Hagopian and George Kadianakis. We would also like to thank the external reviewers for their feedback: Nixorokish, Yorick Downe, Rémy Roy, Ben Adams, Vitalik Buterin, Lightclient, Andrew Ashikhmin, Marek Moraczyński, Potuz, Joe Clapis, Haurog, Francis(Base), Jimmy(Lighthouse) and Nico Flaig. Feedback does not imply endorsement of this document.