Week 9 Update
In week 9, we had a very productive meeting with our mentor Barnabe. We went through important aspects of our project, especially with regard to quantifying prover criteria and defining expectations for the simulation. Here are the key points of our discussion:
Quantifying Criteria
We recognized the need to translate our criteria and metrics into a mathematical format, likely in the form of optimization and allocation problems. This approach would enable us to integrate them into our simulation model.
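As a very rough illustration of what such a formulation might look like (every symbol and constraint below is a placeholder of our own, not a settled model), batch-to-prover allocation could be posed as a cost-minimization problem:

$$
\min_{x} \sum_{i}\sum_{j} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i} x_{ij} = 1 \;\; \forall j, \qquad
\sum_{j} x_{ij} \le k_i \;\; \forall i, \qquad
x_{ij} \in \{0, 1\},
$$

where $c_{ij}$ is prover $i$'s cost to prove batch $j$, $k_i$ is prover $i$'s capacity, and $x_{ij}$ indicates whether batch $j$ is assigned to prover $i$. Decentralization or redundancy requirements could then be layered on as additional constraints or penalty terms.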
- One of the central concerns raised was the need to quantify the costs associated with prover failures. If the rollup releases batches infrequently, reliance on a single prover becomes a potential bottleneck: a failed prover delays the system, and we need a way to measure the economic loss that delay incurs.
- Our conversation also touched on decentralization. We need to define and quantify parameters such as the maximum number of provers in the system, and consider metrics like the Gini coefficient to measure economic decentralization (a toy calculation of this, alongside the delay cost above, is sketched after this list).
- To minimize costs, it's essential to incentivize the most efficient provers in the system, i.e. those who produce proofs at the lowest cost. We discussed mechanisms that induce healthy competition among provers to increase efficiency; block rewards are one such mechanism, as they encourage provers to compete for tokens. However, a balance needs to be struck to avoid overpaying or underpaying provers.
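To make the delay and decentralization points above concrete, here is a minimal Python sketch with entirely made-up numbers. The linear loss-per-second assumption, the function names, and the sample reward distributions are our own illustrative choices, not part of any protocol specification.

```python
import numpy as np

def expected_delay_cost(failure_prob, retry_delay_s, loss_per_second):
    """Toy model: the single assigned prover fails with probability
    `failure_prob`, the batch is then delayed by `retry_delay_s` seconds
    while another prover takes over, and each second of delay costs
    `loss_per_second` (all values are illustrative assumptions)."""
    return failure_prob * retry_delay_s * loss_per_second

def gini(values):
    """Gini coefficient of a non-negative distribution:
    0 = perfectly equal, 1 = maximally concentrated."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    if n == 0 or v.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)  # ranks of the sorted values
    return (2 * np.sum(index * v) / (n * v.sum())) - (n + 1) / n

if __name__ == "__main__":
    # Hypothetical numbers purely for illustration.
    print(expected_delay_cost(failure_prob=0.05, retry_delay_s=600, loss_per_second=0.2))
    # Rewards concentrated in one prover -> high Gini (rewards are centralized).
    print(gini([90, 4, 3, 2, 1]))
    # Rewards spread evenly -> Gini of 0 (fully decentralized rewards).
    print(gini([20, 20, 20, 20, 20]))
```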
Simulation Expectations
We reflected on what to expect from our model and simulation, including identifying vulnerabilities within the system. We also considered how the various metrics might be interrelated: for instance, how redundancy, cost, and efficiency are connected, directly or indirectly, and whether there are patterns in these relationships.
In the context of our simulation, it's essential to outline the key components and aspects that we'll be working with. Here are some important notes related to the simulation:
- Main Agents - Provers: The primary agents in our simulation are the provers; their behavior and interactions are what we aim to model and analyze.
- Environment: The environment is the overarching context in which the provers operate; it sets the stage for their actions and reactions. We need to define which aspects of the environment may influence or interact with the provers.
- Different Environments: To gain a comprehensive understanding of how provers behave and perform under various conditions, we should simulate different environments, each inducing different behaviors, motivations, and characteristics. Some examples include:
- Competitive Environment: Where provers compete with each other for rewards or recognition.
- Attack Environment: Simulating malicious behavior or attacks.
- Random Failures: Incorporating random failures or errors in the system.
- Randomly Picked Provers: Exploring scenarios where provers are selected randomly.
- Parameterization: To model these environments and scenarios effectively, we should define a set of parameters that characterizes each environment and can be adjusted to simulate different conditions. It helps to visualize the setup as a two-dimensional plot:
- X-Axis: Represents the set of parameters that define the environment (e.g., competition level, attack intensity, failure rates, etc.).
- Y-Axis: Represents the outcomes or behaviors of the provers within that environment (e.g., prover performance, rewards earned, system stability, etc.).
By systematically varying the parameters along the X-axis, we can explore a wide range of scenarios and observe how prover behavior changes in response to different environmental conditions.
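As a rough sketch of what such a sweep might look like in code, the toy simulation below varies one environment parameter (a per-batch prover failure rate) along the X-axis and records one outcome (the share of batches proven on the first attempt) as the Y-axis metric. The uniform random prover selection, the independent failure model, and all the numbers are placeholder assumptions we would refine as the model matures.

```python
import random

def simulate(num_provers, failure_rate, num_batches, seed=0):
    """Toy environment: each batch is assigned to a randomly picked prover,
    who fails independently with probability `failure_rate`. Returns the
    share of batches proven on the first attempt and the per-prover proof
    counts (useful later for decentralization metrics)."""
    rng = random.Random(seed)
    proofs_per_prover = [0] * num_provers
    on_time = 0
    for _ in range(num_batches):
        prover = rng.randrange(num_provers)   # random prover selection
        if rng.random() > failure_rate:       # the prover succeeds
            proofs_per_prover[prover] += 1
            on_time += 1
    return on_time / num_batches, proofs_per_prover

if __name__ == "__main__":
    # X-axis: an environment parameter (here, the failure rate).
    # Y-axis: an outcome metric (here, the on-time share of batches).
    for failure_rate in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]:
        on_time_share, _ = simulate(num_provers=10,
                                    failure_rate=failure_rate,
                                    num_batches=10_000,
                                    seed=42)
        print(f"failure_rate={failure_rate:.1f} -> on-time share={on_time_share:.3f}")
```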
This approach allows us to conduct a thorough analysis of our system, understand how it operates under diverse circumstances, and make informed decisions about optimizing it. It also provides a means to identify vulnerabilities, assess the impact of different factors, and fine-tune our system for robustness and efficiency.