Zkrollups offer a promising solution to Ethereum's scalability problem by aggregating multiple transactions off-chain and posting only a cryptographic proof of them on-chain. The efficiency and reliability of zkrollup networks depend heavily on the robustness of the underlying mechanisms. Zkrollup system mechanisms are still at an early stage of development and aim to be compatible with Ethereum's features, hence the need for innovation and research in this area. As an extension of Ethereum, it is essential that zkrollups embody core values such as decentralization, transparency, security and fairness.
While the current research landscape in this ecosystem has predominantly focused on optimizing the sequencer actor, there is a notable lack of emphasis on provers. This gap in the literature motivates our research to address challenges related to selecting, managing and incentivizing provers in zkrollups. Inadequate prover selection and incentives may result in network congestion, security vulnerabilities and diminished user trust, making it imperative to tackle this problem. By evaluating the existing research and methods, this work aims to answer the following questions:
Our proposed solution begins with a detailed examination of existing research on prover mechanisms. Following this, we aim to establish and quantify evaluation criteria, which then inform the development of a mathematical model that simulates the network of provers. By adopting this comprehensive approach, we aim to contribute valuable insights that advance the evolution of zkrollups, fostering a more robust and secure decentralized ecosystem.
We conducted a comprehensive literature review on existing decentralized prover mechanisms, aiming to illuminate current trends, challenges and potential solutions in the field. Our systematic screening of articles and exploration of zkrollup research portals resulted in the creation of an archive accessible through "awesome-prover-mechanisms".
Despite the current centralized approach, zkrollups are moving towards decentralization in the foreseeable future. Major players like Aztec, Starknet, Taiko and Scroll are keen on incorporating decentralization and permissionlessness in their prover mechanism designs.
The article titled "Decentralized Proofing, Proof Markets, and ZK Infrastructure" by Trace from Figment Capital provides a comprehensive overview of prover selection methods, incentives, and proof techniques. It delves into third-party proof networks and marketplaces, suggesting that these platforms will enable applications to outsource proof processes, ultimately reducing costs.
"Ideas on a proving network" a research post by the Aztec team, examines current trends in projects such as Taiko, Starknet, Scroll, Mina and nil. It aims to identify an optimal prover selection method integrated with Aztec's sequencer mechanism, Fernet. The article thoroughly explores crucial aspects of constructing a decentralized proving network, specifically addressing challenges related to centralization, liveness, competitiveness, hardware requirements and economic incentives.
Aztec outlines its vision for a first-party proving marketplace in the post, considering various methods to optimize the process. These methods include randomly assigning portions of the proof tree and establishing a marketplace where individuals acquire the right to perform the work. Options under consideration include random assignment, giving sequencers a choice in selecting provers, adopting a "first proof, first served" approach, and creating a bidding marketplace focused specifically on different portions of the proof tree. Starknet, on the other hand, explores diverse approaches to decentralizing proving, including turn-based models and auctions.
The text also highlights the complexities of determining fair compensation for proofs, taking into account factors such as Ethereum gas costs, dynamic proof generation expenses, and other relevant considerations. It extends further to explore the potential implementation of an out-of-protocol mechanism, offering real-time price estimates for different proving options.
Essentially, current research focuses on finding an optimal prover network mechanism that ensures liveness while fostering an economically competitive market for cost-effective proving. This pursuit demands robust designs for prover eligibility, selection and incentives, all aimed at striking a delicate balance between competitiveness and decentralization.
In our pursuit of identifying the optimal design for a prover network, we have meticulously gathered insights from research sources. By critically assessing various prover mechanisms, we have developed an understanding of the key characteristics contributing to an effective design. These core criteria—decentralization, cost, liveness, permissionlessness, scalability and honest behavior—serve as our guiding principles for the evaluation of prover mechanisms.
This comprehensive effort has culminated in the development of an evaluative framework that covers crucial aspects such as prover selection, incentives, workload management, and network security considerations. Recognizing the importance of translating our criteria and metrics into a mathematical format, we are inclined to frame them as optimization problems.
Our optimization approach aims to find a set of design parameters that optimize a weighted combination of key objectives, all while adhering to specified constraints. To address this complex problem, we employ multi-objective optimization algorithms. These algorithms facilitate the identification of a set of design choices that best meet the desired objectives.
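Formally, this can be sketched as a weighted multi-objective program, where $\theta$ denotes the mechanism's design parameters, $f_k$ the objective functions described below, $w_k$ their weights and $g_j$ the constraints (the notation is ours, chosen for exposition):

$$
\min_{\theta \in \Theta} \; \sum_{k} w_k \, f_k(\theta) \quad \text{subject to} \quad g_j(\theta) \le 0, \quad j = 1, \dots, m
$$

Objectives to be maximized, such as reputation or diversity, enter the sum with a negated sign.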
The prover mechanism should aim to minimize operational costs, making it economically feasible for provers to participate while keeping user transaction costs low.
Minimize Computational Cost: the cost associated with the proof computation performed by provers.
Subject to:
The mechanism should ensure that there is always a prover ready to generate proofs, guaranteeing the continual operation of the zkrollup. The primary goal is to minimize downtime and delays in the network.
1. Minimize Cost of Downtime and Delays: the cost associated with any periods during which provers are unavailable or fail to generate proofs. The cost of downtime can be quantified in terms of the opportunity cost of revenue forgone over the duration of the failure.
Subject to:
proofTimeWindow ensures that even if a griefing attack occurs and provers with the best hardware attempt to monopolize the rewards, they still have to spend a significant amount of time proving each batch. This can make it more difficult for a small group of provers to dominate the process.

2. Maximize Prover Reputation Score (PRS)
Subject to:
The mechanism should avoid unintentionally leading to centralization or monopolization. Some prover selection mechanisms might inadvertently favor well-funded or resource-rich participants, leading to centralization. The mechanism should aim to reduce such biases.
1. Maximize Geographic Diversity (GD)
Subject to:
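The design does not pin down a specific GD metric, so as one plausible formalization (an assumption on our part), Shannon entropy over prover locations can serve:

$$
GD = -\sum_{r=1}^{R} p_r \ln p_r
$$

where $p_r$ is the fraction of provers located in region $r$. GD is maximized when provers are spread evenly across all $R$ regions.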
2. Minimize Resource Distribution Inequality (RDI)
The Herfindahl-Hirschman Index (HHI) is used to measure the concentration of resources among provers in the network. It quantifies the extent to which a few well-funded provers dominate the network. Higher HHI values indicate greater centralization, while lower values suggest a more distributed network.
Subject to:
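For reference, the HHI takes its standard form over resource shares, where $s_i$ is prover $i$'s share of total network resources:

$$
\mathrm{HHI} = \sum_{i=1}^{N} s_i^2, \qquad s_i = \frac{r_i}{\sum_{j=1}^{N} r_j}
$$

Values near $1/N$ indicate an evenly distributed network; values approaching $1$ indicate concentration in a single prover.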
The mechanism should provide appropriate incentives to encourage participation and honest behavior among provers, aligning their interests with the overall success of the system.
Maximize Honest Behavior
Subject to:
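One way to make this precise (our own sketch rather than a formal result) is an incentive-compatibility constraint: the expected payoff from honest proving must dominate any deviation once slashing risk is priced in,

$$
\mathbb{E}[R_{\text{honest}}] \;\ge\; \mathbb{E}[R_{\text{deviate}}] - \Pr[\text{detected}] \cdot S
$$

where $S$ is the prover's stake slashed upon detection.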
In our research, we built an agent-based simulation model of the prover network in Python using the cadCAD modeling framework. Our approach involves creating a simulation that captures how provers interact, handle transactions and respond to different network environments, where the agents can have different motivations and the environments different characteristics. Some examples include:
This approach allows us to conduct a thorough analysis of our system, understand how it operates under diverse circumstances, and make informed decisions about optimizing it. It also provides a means to identify vulnerabilities, assess the impact of different factors, and fine-tune our system for robustness and efficiency.
Our model simulates randomly arriving transactions and processes them according to the number of available provers. It calculates user value based on the rate of randomly sized incoming data, and user cost based on the backlog of unprocessed transactions.
The link to our GitHub repo: Prover-Mechanism-Simulation
Both quantities are computed with simple per-timestep conditional rules, sketched below.
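As a rough illustration of that logic (the constants, arrival rates and variable names below are our own assumptions, not the calibrated parameters from the repository), a single timestep might look like:

```python
import random

# Illustrative parameters (assumptions, not the calibrated model values)
PROVER_CAPACITY = 5        # transactions one prover can process per step
VALUE_PER_BYTE = 0.01      # user value accrued per byte of processed data
COST_PER_PENDING_TX = 0.1  # user cost per unprocessed transaction per step

def step(pending_txs, num_provers):
    """Advance the simulation by one timestep; return (value, cost, backlog)."""
    # Random incoming transactions, each with a random data size in bytes.
    arrivals = [random.randint(100, 1000) for _ in range(random.randint(0, 10))]
    pending_txs = pending_txs + arrivals

    capacity = num_provers * PROVER_CAPACITY
    if len(pending_txs) <= capacity:
        processed, backlog = pending_txs, []
    else:
        processed, backlog = pending_txs[:capacity], pending_txs[capacity:]

    # User value grows with the amount of data proved this step,
    # while user cost grows with the backlog left unprocessed.
    user_value = VALUE_PER_BYTE * sum(processed)
    user_cost = COST_PER_PENDING_TX * len(backlog)
    return user_value, user_cost, backlog

backlog = []
for t in range(100):
    value, cost, backlog = step(backlog, num_provers=3)
```

In the actual model, this logic is expressed as cadCAD policy and state-update functions rather than a bare loop.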
Taking into account our work on optimization problems and the existing literature on prover strategies, we have successfully designed a decentralized prover network.
We propose a simple mechanism that enables decentralization, permissionless entry, liveness and cost-efficiency. It is an in-protocol mechanism that integrates staking for eligibility and slashing as a security measure to disincentivize malicious behavior. It also employs a reputation score to track prover uptime and failures. Provers are selected through a verifiable random function (VRF) from the pool of provers with the highest reputation scores. The design includes a backup mechanism for emergencies such as prover failure and network congestion: proof racing in a more confined environment, which promotes competition and liveness.
Provers are accepted to the network based on two criteria:
The decentralization metrics of Resource Distribution Inequality and Geographic Diversity can be used for risk assessment and monitoring of the network state, and new strategies can be created accordingly. For example, when the network has a high concentration of provers with expensive hardware, the prover selection mechanism may prioritize provers with cheaper resources to promote equity. This mechanism can be deactivated once the network's resource concentration returns to equilibrium.
The framework for prover selection is based on random selection among the top 25% of provers by reputation score. The reputation score is calculated from prover uptime and proof reliability. Every prover is assigned a base score of 100, which is also the highest score a prover can have.
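As a minimal sketch of this selection (the penalty constants and the hash-based stand-in for a real VRF are illustrative assumptions):

```python
import hashlib

BASE_SCORE = 100           # starting score, also the maximum
UPTIME_PENALTY = 5         # assumed deduction for a missed liveness check
FAILED_PROOF_PENALTY = 20  # assumed deduction for a failed or late proof

class Prover:
    def __init__(self, prover_id: str):
        self.id = prover_id
        self.score = BASE_SCORE  # scores only decrease from the base

    def record_downtime(self):
        self.score = max(0, self.score - UPTIME_PENALTY)

    def record_failed_proof(self):
        self.score = max(0, self.score - FAILED_PROOF_PENALTY)

def select_prover(provers, randomness: bytes) -> Prover:
    """Pick one prover uniformly from the top 25% by reputation score.

    In the real design the randomness would come from a VRF; here a
    SHA-256 digest of a shared seed stands in for it.
    """
    ranked = sorted(provers, key=lambda p: p.score, reverse=True)
    pool = ranked[: max(1, len(ranked) // 4)]  # top 25%, at least one
    index = int.from_bytes(hashlib.sha256(randomness).digest(), "big") % len(pool)
    return pool[index]
```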
Once the sequencer has revealed the block contents, a prover is selected to prove the block within a specific proof time window. When the proof is generated, the prover submits it to L1 for the block to be finalized. If the prover fails to submit a valid proof in time, the system enters "emergency mode".
In emergency mode, the proof racing mechanism is activated. The proof task for the failed block is opened for competition among N provers, randomly selected from the top 25% of provers in the network. This competition helps the system minimize reorgs. The prover who submits the proof fastest receives the block reward, while the other provers receive uncle rewards for their efforts in generating proofs. Rewards for this mechanism can be funded from the slashed stake of the failed prover.
If there are not enough provers in the network and too many blocks are waiting to be proved, the network again enters "emergency mode" and runs a proof race among all provers. In this case no uncle rewards are distributed. Once stabilized, the network returns to "normal mode".
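Combining the two emergency paths, a hypothetical dispatcher (the congestion threshold, race size N and reward split below are illustrative assumptions) could look like:

```python
import random

MAX_PENDING_BLOCKS = 10  # assumed congestion threshold
NUM_RACERS = 8           # assumed race size N for the confined race

def start_proof_race(provers, num_pending_blocks, slashed_stake):
    """Choose racers and the per-loser uncle reward after a missed proof.

    `provers` is a list of {"id": ..., "score": ...} records.
    """
    if num_pending_blocks > MAX_PENDING_BLOCKS:
        # Severe congestion: race among all provers, no uncle rewards.
        return provers, 0.0
    # Confined race among N provers drawn from the top 25% by reputation.
    ranked = sorted(provers, key=lambda p: p["score"], reverse=True)
    pool = ranked[: max(1, len(ranked) // 4)]
    racers = random.sample(pool, min(NUM_RACERS, len(pool)))
    # Fund rewards from the failed prover's slashed stake: half tops up
    # the winner's block reward, the rest is split as uncle rewards.
    uncle_reward = (slashed_stake / 2) / max(1, len(racers) - 1)
    return racers, uncle_reward
```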
Reward for honest provers is calculated as follows:
Reward = Prover computation cost based on complexity of the proof + L1 call data cost + Prover profit
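In symbols (the notation is ours):

$$
R = C_{\text{comp}}(\kappa) + C_{\text{calldata}} + P
$$

where $C_{\text{comp}}(\kappa)$ is the prover's computation cost as a function of proof complexity $\kappa$, $C_{\text{calldata}}$ is the L1 call data cost, and $P$ is the prover's profit margin.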
Provers are slashed for two reasons:
Other features like proof batching, distributed proving and liquid proving pools can be added on top of this simple design.
During our research, we noticed that Aztec was requesting proposals for decentralized prover coordination. We took this opportunity to submit our design as a proposal: [Proposal] Decentralized Prover Network (Staking, Reputations and Proof Races). This submission represents our commitment to contributing a robust and innovative solution to the evolving landscape of decentralized prover networks.
The simulation is a work in progress and there's a lot to be done. The decentralized prover network design can be further refined. This design raises some questions that we aim to address in the next phase by enhancing the simulation:
To delve deeper into our findings, we plan to apply sensitivity analysis to the parameters identified through the simulation. This step is crucial for understanding how variations in these parameters might impact prover behavior, network performance and security. It's a way to fine-tune our model and gain more comprehensive insights into the robustness of our proposed design.
My four-month journey as a protocol fellow was a period of curiosity, inspiration, creativity, teamwork and personal growth. It has been one of the most exhilarating projects I have ever envisioned and successfully carried out. The first two weeks were a significant challenge as I tried to understand the core concepts of the Ethereum ecosystem. However, with time, dedication and patience, the pieces started to fall into place and my initial fears faded.
Dedicating all my time and focus to the project, I was able to actively participate in weekly meetings and development updates, finding them highly beneficial and motivating. Reflecting on the experience, I am genuinely impressed by the progress I've achieved and the knowledge I've accumulated. This journey also helped me to improve my writing, note-taking and social skills.
I am deeply grateful for the mentorship of Barnabe Monnot, especially considering that I started the fellowship program as a permissionless participant. My heartfelt gratitude to Josh Davis and Mario Havel for organizing the EPF and fostering an environment that welcomes, nurtures and inspires new protocol fellows. Being selected as a protocol fellow is an immense honor and I extend my sincere appreciation for this incredible life-changing opportunity.
I would also like to express my gratitude to my teammates Norbert and Rachit for our fruitful discussions. Their insights and collaboration significantly contributed to the success of our collective efforts.
This experience has been truly transformative, allowing me to contribute to the Ethereum ecosystem, work on my dream project, acquire new skills, broaden my perspective and collaborate with exceptionally inspiring individuals. I am more committed than ever to contributing to the Ethereum ecosystem. Being part of a community where individual efforts harmonize with collective contributions resonates deeply with me, reinforcing my dedication to this impactful endeavor.