Authors: Norbert, Nilu, Rachit
Special thanks to Barnabé Monnot (RIG/EF) for his review and suggestions.
We propose a simple mechanism that enables decentralization, permissionless entry, liveness, and cost-efficiency. It is an in-protocol mechanism that integrates staking for eligibility and slashing as a security measure to disincentivize malicious behavior. It also employs a reputation score to measure prover uptime and failures. Provers are selected through a VRF from the pool of provers with the highest reputation scores. The design has a backup mechanism for emergencies such as prover failure and network congestion: proof racing in a more confined environment, which promotes competition and liveness. Other features like proof batching and distributed proving can be added on top of this simple design.
Anyone meeting the eligibility criteria can join as a prover and leave at any time. On exit, the stake can be withdrawn after a week.
Provers are accepted to the network based on two criteria:
Since Aztec will be launched as a fully decentralized network, the mechanism should not unintentionally lead to centralization or monopolization. Some selection mechanisms might inadvertently favor well-funded or resource-rich participants and concentrate the network in their hands.
To detect centralization, the Herfindahl-Hirschman Index (HHI) is used to assess how resources are distributed among entities in a network. It quantifies the extent to which a few entities dominate the network. Higher HHI values indicate greater centralization, while lower values suggest a more distributed network.
Resource Distribution Inequality (RDI)
The HHI method is applied to the concentration of resources among provers in the network. It covers the CPU and RAM of provers’ hardware.
RDI = HHI of CPU and RAM Concentration
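To make the metric concrete, here is a minimal sketch of the HHI computation over prover resource shares; the prover hardware figures and the choice to average the CPU and RAM indices into a single RDI value are illustrative assumptions, not part of the proposal.

```python
# Minimal sketch: Herfindahl-Hirschman Index (HHI) over prover resource shares.
# Combining the CPU and RAM indices into one RDI value via a simple average
# is an assumption for illustration only.

def hhi(amounts):
    """HHI = sum of squared shares; ranges from ~0 (dispersed) to 1 (monopoly)."""
    total = sum(amounts)
    return sum((a / total) ** 2 for a in amounts)

# Hypothetical prover hardware: (CPU cores, RAM in GB)
provers = {
    "prover_a": (64, 256),
    "prover_b": (16, 64),
    "prover_c": (8, 32),
    "prover_d": (8, 32),
}

cpu_hhi = hhi([cpu for cpu, _ in provers.values()])
ram_hhi = hhi([ram for _, ram in provers.values()])
rdi = (cpu_hhi + ram_hhi) / 2  # assumed aggregation of the two concentrations

print(f"CPU HHI: {cpu_hhi:.3f}, RAM HHI: {ram_hhi:.3f}, RDI: {rdi:.3f}")
```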
Geographic Diversity (GD)
Geographic distribution shows the spread of the prover network across different physical locations. This metric can be useful for detecting potential Sybil attacks.
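The proposal does not fix a formula for GD; one possible proxy, sketched below, reuses the HHI over provers' reported regions and reports its complement, so that higher values mean a more geographically diverse network.

```python
# Sketch only: the proposal does not define a GD formula.
# Here GD is approximated as 1 - HHI over the number of provers per region,
# so higher GD means provers are spread across more locations.
from collections import Counter

def geographic_diversity(prover_regions):
    counts = Counter(prover_regions).values()
    total = sum(counts)
    hhi = sum((c / total) ** 2 for c in counts)
    return 1 - hhi

print(geographic_diversity(["EU", "EU", "US", "ASIA", "US"]))  # ~0.64
```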
These decentralization metrics can be used for risk assessment and monitoring of the network state, and new strategies can be created accordingly. For example, when the network has a higher concentration of provers with expensive hardware, the prover selection mechanism may prioritize provers with cheaper resources to promote equity. This mechanism can be deactivated when the resource concentration of the network is in equilibrium. Additionally, monitoring and adjusting the allocation of provers to maintain a high GD can be part of the network management strategy.
The framework for prover selection is based on random selection among the top 25% of provers by reputation score.
Randomness
For simplicity, we apply the same mechanism as for sequencer randomness in Fernet (borrowed from the Espresso team): generating a SNARK of a hash over the prover's private key and a public input. The public input is the current block number and a random value from RANDAO.
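A minimal sketch of the selection logic follows, with a plain SHA-256 hash standing in for the SNARK over the prover's private key; the key material, the pool contents, and the "lowest output wins" rule are illustrative assumptions.

```python
# Sketch: random prover selection from the top 25% by reputation.
# A SHA-256 hash stands in for the SNARK of hash(private key, public input);
# key material and pool contents are illustrative.
import hashlib

def vrf_like_output(private_key: bytes, block_number: int, randao_value: bytes) -> int:
    preimage = private_key + block_number.to_bytes(8, "big") + randao_value
    return int.from_bytes(hashlib.sha256(preimage).digest(), "big")

def select_prover(provers, block_number, randao_value):
    # provers: list of (prover_id, reputation_score, private_key)
    ranked = sorted(provers, key=lambda p: p[1], reverse=True)
    cutoff = max(1, len(ranked) // 4)  # top 25% of the pool
    eligible = ranked[:cutoff]
    # Lowest VRF-style output wins; any deterministic rule over the outputs works.
    return min(eligible, key=lambda p: vrf_like_output(p[2], block_number, randao_value))
```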
Reputation Score
The reputation score is dependent on two factors: prover uptime and proving failures.
Every prover is assigned a base score of 100. The base score is the highest score a prover can have.
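One possible bookkeeping of the score is sketched below; the per-failure deduction and the uptime recovery rate are placeholder assumptions, not values proposed here.

```python
# Sketch: reputation bookkeeping. The base score of 100 is the cap;
# the per-event deduction and the uptime recovery rate are assumptions.
BASE_SCORE = 100
FAILURE_PENALTY = 20   # assumed deduction for a missed proof window
UPTIME_RECOVERY = 1    # assumed recovery per epoch of full uptime

class ProverReputation:
    def __init__(self):
        self.score = BASE_SCORE

    def record_failure(self):
        self.score = max(0, self.score - FAILURE_PENALTY)

    def record_uptime_epoch(self):
        self.score = min(BASE_SCORE, self.score + UPTIME_RECOVERY)
```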
Once the sequencer has revealed the block contents, a prover is selected to prove the block within a specific proof time window. Once the proof is generated, the prover submits it to L1 for the block to be finalised. The proof needs to reference the new state root uploaded by the sequencer (similar to how it works in Fernet). Once the proof is submitted to L1, it is verified against the initial block commitment and the block becomes final. If the prover fails to submit a valid proof on time, the system goes into “emergency mode”.
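A sketch of the deadline check that flips the system into emergency mode; the window length (measured here in L1 blocks) and the state names are assumptions, and on-chain verification details are omitted.

```python
# Sketch: per-block proving deadline. The proof time window length and the
# state names are illustrative; on-chain verification details are omitted.
PROOF_TIME_WINDOW = 100  # assumed window, in L1 blocks

def check_block_proof(assigned_at_l1_block, current_l1_block, proof_submitted_and_valid):
    if proof_submitted_and_valid:
        return "FINAL"              # proof verified against the block commitment
    if current_l1_block - assigned_at_l1_block > PROOF_TIME_WINDOW:
        return "EMERGENCY_MODE"     # prover missed the window; slash and race
    return "AWAITING_PROOF"
```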
Emergency Mode
In emergency mode, the proof racing mechanism is activated. The proving task for the failed block is opened for competition among N provers, randomly selected from the top 25% of provers in the network. This competition can help the system minimize reorgs. The prover who submits the proof fastest receives the block reward, while the other provers receive uncle rewards for their efforts in generating proofs. Rewards for this mechanism can be derived from the slashed stake of the failed prover.
While proof racing leads to some computational waste, it can be considered network redundancy that maximizes liveness and minimizes the time and cost lost due to prover failure.
If there are not enough provers in the network and there are too many blocks waiting to be proved, the network again goes into "emergency mode" and runs a proof race among all provers. This time no uncle rewards are distributed. Once the network is stabilized, the network returns to "normal mode".
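A sketch of how the race payout could be funded from the slashed stake; the 50/50 split between the winner's block reward and the uncle-reward pool is an assumed ratio, while the "no uncle rewards under congestion" case follows the description above.

```python
# Sketch: distributing proof-race rewards out of the failed prover's slashed stake.
# The 50/50 split between the winner and the uncle-reward pool is an assumption.
def distribute_race_rewards(slashed_stake, winner, other_racers, normal_mode=True):
    rewards = {}
    if normal_mode and other_racers:
        rewards[winner] = slashed_stake * 0.5
        uncle_share = slashed_stake * 0.5 / len(other_racers)
        for p in other_racers:
            rewards[p] = uncle_share
    else:
        # Congestion-triggered race: no uncle rewards are paid out.
        rewards[winner] = slashed_stake
    return rewards
```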
The primary objective is to incentivize provers to behave honestly and to reward them in proportion to their computational costs. The reward for honest provers is calculated as follows:
Reward = prover computation cost (based on proof complexity) + L1 calldata cost + prover profit
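In code form, with the profit expressed as a margin over costs; the cost inputs and the 10% margin are placeholders rather than proposed values.

```python
# Sketch of the reward formula. Cost inputs and the profit margin are placeholders;
# in practice they would be derived from proof complexity and L1 gas prices.
def prover_reward(proof_computation_cost, l1_calldata_cost, profit_margin=0.10):
    cost = proof_computation_cost + l1_calldata_cost
    return cost + cost * profit_margin

print(prover_reward(proof_computation_cost=0.05, l1_calldata_cost=0.02))  # ~0.077
```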
Provers are slashed for two reasons:
We propose the following transaction states representing the different phases of a transaction lifecycle:
Above, we defined our simple decentralized prover model. Now we can add more features on top of it that could be useful in various settings.
In the case of proof batching, a certain number of proofs generated for individual blocks can be aggregated into a group. Only the aggregated proof is then sent to L1 for verification. This can reduce transaction fees but results in a longer time to finality. Batch proving can be implemented as a mechanism to manage peak periods of incoming transactions. During times of high transaction influx, the proving of individual blocks can run "almost" in parallel and the grouping will cause only minimal delay in finalization at L1.
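A sketch of the aggregation step; the batch size is an assumed parameter and `aggregate` is a stand-in for recursive proof aggregation.

```python
# Sketch: batching block proofs before the L1 submission. `aggregate` is a stand-in
# for recursive proof aggregation; the batch size is an assumed parameter.
BATCH_SIZE = 8  # assumed number of block proofs per aggregated L1 submission

def aggregate(proofs):
    """Placeholder for recursive aggregation of individual block proofs."""
    return {"aggregated_over": [p["block"] for p in proofs]}

def submit_batches(pending_proofs, send_to_l1):
    while len(pending_proofs) >= BATCH_SIZE:
        batch, pending_proofs = pending_proofs[:BATCH_SIZE], pending_proofs[BATCH_SIZE:]
        send_to_l1(aggregate(batch))  # one L1 verification per batch instead of per block
    return pending_proofs             # leftover proofs wait for the next batch
```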
In the context of decentralized proof construction, rather than creating a single proof for a large, monolithic block, we divide it into smaller computational components. Different provers are responsible for generating proofs for these individual pieces. This division of computation also reduces hardware requirements, especially compared to handling an entire block. The same principles of prover selection and emergency mode remain consistent in this model.
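A sketch of splitting a block into smaller proving tasks and assigning each one through the same selection routine used for full blocks; chunking by transaction index is an illustrative stand-in for splitting along the actual proof tree.

```python
# Sketch: dividing a block into smaller proving tasks. Chunking by transaction
# index is an illustrative stand-in for splitting along the actual proof tree.
def split_block_into_tasks(transactions, chunk_size=32):
    return [transactions[i:i + chunk_size] for i in range(0, len(transactions), chunk_size)]

def assign_tasks(tasks, select_prover_fn, block_number, randao_value):
    # select_prover_fn reuses the same top-25% reputation selection as for full blocks,
    # with the task index mixed into the public input.
    return {i: select_prover_fn(block_number + i, randao_value) for i, _ in enumerate(tasks)}
```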
Similar to the concept of liquid staking, where participants pool their funds, hardware resources can likewise be shared and pooled. This approach lowers entry barriers and increases accessibility for joining the prover network through resource pooling services. The pool leverages these resources to operate a prover and produce proofs. Rewards earned by the prover are distributed among those who contribute computational resources to the pool.
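A sketch of a pro-rata reward split among pool contributors; splitting by contributed compute share and the pool fee are assumptions for illustration.

```python
# Sketch: splitting a prover reward among pool contributors pro-rata to their
# contributed compute. The pool fee is an assumed parameter.
def distribute_pool_reward(reward, contributions, pool_fee=0.05):
    # contributions: {contributor_id: compute_units_contributed}
    total = sum(contributions.values())
    payable = reward * (1 - pool_fee)
    return {c: payable * units / total for c, units in contributions.items()}

print(distribute_pool_reward(1.0, {"alice": 70, "bob": 30}))  # ~0.665 and ~0.285
```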