Last week was all about cadCAD and learning to design mathematical equations that model the flow of inputs and outputs through the system. So far, the model simulates random incoming transactions and processes them according to the number of provers available. It also calculates user value from the randomly drawn data size, and user cost from the transactions left unprocessed.
You can find the GitHub repo for the model and simulation here: https://github.com/niluferokay/Prover-Mechanism-Simulation/blob/main/Prover Mechanism Simulation.ipynb
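If you'd like a feel for what the notebook does without opening it, here is a minimal, dependency-free sketch of the update loop described above. The parameter names and values are illustrative assumptions for this sketch, not the notebook's actual ones:

```python
import random

# Illustrative parameters, chosen for this sketch (not the notebook's values)
PARAMS = {
    "max_tx_per_step": 10,    # up to this many random transactions arrive per step
    "num_provers": 3,         # provers available to process transactions
    "value_per_unit": 2.0,    # user value per unit of transaction data
    "wait_cost_per_tx": 0.5,  # user cost per unprocessed transaction per step
}

def step(state, params):
    # 1. Random incoming transactions arrive
    arrivals = random.randint(0, params["max_tx_per_step"])
    # 2. Incoming transactions carry user value proportional to their size
    state["user_value"] += params["value_per_unit"] * arrivals
    # 3. Provers process the queue, limited by how many provers are available
    queue = state["queue"] + arrivals
    processed = min(queue, params["num_provers"])
    state["queue"] = queue - processed
    # 4. Transactions still waiting impose a cost on users
    state["user_cost"] += params["wait_cost_per_tx"] * state["queue"]
    return state

state = {"queue": 0, "user_value": 0.0, "user_cost": 0.0}
for _ in range(100):
    state = step(state, PARAMS)
print(state)
```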
Next steps are to integrate an equation for prover efficiency into the model, followed by adding constraints and criteria. Besides the Monte Carlo simulation method, I plan to explore parameter sweeps and A/B testing for further experimentation.
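For a rough idea of how that experimentation could be wired up, here is a sketch of a parameter sweep with Monte Carlo averaging, reusing the `step` function and `PARAMS` from the sketch above (an A/B test is essentially the two-point version of the same sweep):

```python
import statistics

def run_once(params, steps=100):
    # A single Monte Carlo run of the sketch model above
    state = {"queue": 0, "user_value": 0.0, "user_cost": 0.0}
    for _ in range(steps):
        state = step(state, params)
    return state["user_cost"]

# Parameter sweep: vary the number of provers, averaging over Monte Carlo runs
for num_provers in (1, 2, 4, 8):
    params = dict(PARAMS, num_provers=num_provers)
    mean_cost = statistics.mean(run_once(params) for _ in range(200))
    print(f"provers={num_provers}: mean user cost={mean_cost:.1f}")
```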
Exciting news from the Aztec team: they've issued a request for proposals on decentralized prover coordination. This aligns perfectly with our research, and we're considering using our model and simulation to address some of their questions and submit a proposal for the decentralized prover mechanism. It's truly exciting to see how our research, which we've been working on for nearly three months, has the potential to benefit and inspire other researchers. It's a beautiful journey! ⛵🏝️🌞
Last week, I defined my prover strategy objectives for the optimal prover mechanism, aiming to simplify them as much as possible. After careful consideration, I chose 💲cost, ⚡️liveness, 🌐decentralization, and 😇honest behavior as the key criteria, in line with the goals and values of zkRollups and Ethereum.
The meeting with Barnabe was incredibly insightful. He guided us to take a more holistic, systems-oriented approach: studying the model and integrating mathematical equations to gain a deeper understanding of the relationship between the zk system's inputs and outputs.
I've been learning cadCAD, and this week I'm starting to build the system model in Python using the cadCAD modeling framework. Hopefully I'll be sharing the repo soon! 🐍💻
Agent-based modeling inputs, based on notes from our mentor Barnabe:
- Rollup Transaction Throughput: the rate at which transaction data arrives at the rollup, i.e., how much data shows up in each time interval Δt.
- Data to Process (D): the amount of transaction data, measured in some unit (e.g., gas), that arrives and needs to be processed within Δt.
- User Value (V(D)): transactions entering the system carry some user value. The amount of value, V(D), is a function of the data size D; in the simplest case it is linear, V(D) = v · D, with v the value per unit of data.
- User Cost for Waiting (Tu(Δt)): users incur a cost if they have to wait for their transactions to be processed. The cost of waiting for a time interval Δt is Tu(Δt) = u · Δt, with u the waiting cost per unit of time.
- Number of Provers (n): how many provers are available to prove batches in a given interval.
- Prover Efficiency (e_i): each prover i has an efficiency e_i that scales its proving cost, speed, and capacity.
- Cost to Prove a Batch (Ci(D)): the cost for prover i to prove a batch of size D is determined by a simple scaling factor: Ci(D) = D / e_i.
- Proving Delay (Ti(D)): the time it takes prover i to prove a batch of size D is also scaled by its efficiency: Ti(D) = D / e_i.
- Proving Capacity: given the time interval Δt, prover i can prove up to e_i · Δt units of data.
- Prover Failure (f_i): each prover i may fail to deliver a proof, with some probability f_i.
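To make these definitions concrete, here is a minimal sketch of how they might translate into Python. The linear forms for V(D) and Tu(Δt), and the symbols v, u, e_i, and f_i, are my reading of the notes above, so treat them as assumptions rather than the final model:

```python
import random
from dataclasses import dataclass

@dataclass
class Prover:
    efficiency: float    # e_i: scales the prover's cost, speed, and capacity
    failure_prob: float  # f_i: chance of failing to deliver a proof (assumed)

def user_value(data, v=1.0):
    # V(D) = v * D -- assumed linear in the data size D
    return v * data

def waiting_cost(dt, u=1.0):
    # Tu(Δt) = u * Δt -- assumed linear in the waiting time
    return u * dt

def proving_cost(prover, data):
    # Ci(D) = D / e_i: more efficient provers prove more cheaply
    return data / prover.efficiency

def proving_delay(prover, data):
    # Ti(D) = D / e_i: more efficient provers prove faster
    return data / prover.efficiency

def proving_capacity(prover, dt):
    # Within Δt, prover i can prove up to e_i * Δt units of data
    return prover.efficiency * dt

def attempt_proof(prover):
    # Prover failure: the proof lands with probability 1 - f_i
    return random.random() >= prover.failure_prob

p = Prover(efficiency=2.0, failure_prob=0.05)
print(proving_cost(p, 100), proving_delay(p, 100), proving_capacity(p, 12))
```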