
Week 14 Update

Last week was all about cadCAD and designing the mathematical equations that model the flow of inputs and outputs through the system. So far, the model simulates random incoming transactions and processes them according to the number of provers available. It also calculates user value as a function of the randomly drawn data size, and user cost as a function of the unprocessed transactions.
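
For illustration, here is a minimal plain-Python sketch of what one simulation step might look like under those rules; the parameter names and default values (gamma, alpha, beta, max_arrival) are placeholders, not the values used in the actual notebook.

```python
import random

def step(backlog, n_provers, gamma=1.0, alpha=0.5, beta=0.1, max_arrival=100):
    """One hypothetical timestep: random arrivals, prover-limited processing,
    user value from arriving data, user cost from the unprocessed backlog."""
    arrivals = random.randint(0, max_arrival)      # random incoming transaction data
    capacity = n_provers * gamma                   # what the available provers can handle
    processed = min(backlog + arrivals, capacity)  # cannot process more than capacity
    new_backlog = backlog + arrivals - processed   # unprocessed transactions carry over
    user_value = alpha * arrivals                  # V(D) = alpha * D for the arriving data
    user_cost = beta * new_backlog                 # waiting cost driven by the backlog
    return new_backlog, processed, user_value, user_cost
```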

You can find the GitHub repo for the model and simulation here: https://github.com/niluferokay/Prover-Mechanism-Simulation/blob/main/Prover Mechanism Simulation.ipynb

Next steps are to integrate an equation for prover efficiency into the model, followed by adding constraints and criteria. Besides the Monte Carlo simulation method, I plan to explore parameter sweeps and A/B testing for further experimentation.
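
cadCAD exposes Monte Carlo runs and parameter sweeps directly through its simulation configuration; below is a sketch of how that could look for this model (the parameter names alpha, beta, and n_provers are placeholders rather than the notebook's actual parameters).

```python
from cadCAD.configuration.utils import config_sim

# "N" = Monte Carlo runs, "T" = timesteps, "M" = model parameters.
# Lists with more than one value in "M" define a parameter sweep:
# cadCAD zips same-length lists into one configuration per position.
sim_config = config_sim({
    "N": 50,                  # 50 Monte Carlo runs per configuration
    "T": range(200),          # 200 timesteps per run
    "M": {
        "alpha": [0.5],               # value per unit of data (fixed)
        "beta": [0.05, 0.1, 0.2],     # waiting-cost factor, swept
        "n_provers": [5, 10, 20],     # prover count, swept alongside beta
    },
})
```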

Exciting news from the Aztec team: they've issued a request for proposals regarding decentralized prover coordination. This aligns perfectly with our research, and we're considering the possibility of using our model and simulation to address some of their questions and submit a proposal for the decentralized prover mechanism. It's truly exciting to see how our research, which we've been working on for nearly three months, has the potential to benefit and inspire other researchers. It's a beautiful journey! ⛵🏝️🌞

Week 13 Update

Last week, I defined my prover strategy objectives for the optimal prover mechanism, aiming to simplify them as much as possible. After careful consideration, I chose 💲cost, ⚡️liveness, 🌐decentralization, and 😇honest behavior as the key criteria, in line with the goals and values of zkRollups and Ethereum.

The meeting with Barnabe was incredibly insightful. He guided us to take a more holistic, systems-oriented approach to studying the model and to integrate mathematical equations to gain a deeper understanding of the relationship between the zk system's inputs and outputs.

I've been learning cadCAD, and this week I'm starting to build the system model in Python using the cadCAD modeling framework. Hopefully I'll be sharing the repo soon! 🐍💻
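
As a first step, the model might be expressed as a single cadCAD partial state update block. The sketch below only shows the policy / state-update structure; the state variable and parameter names (backlog, theta, n_provers, gamma) are placeholders, not the final model.

```python
import random

initial_state = {"backlog": 0}   # unprocessed transaction data

def p_arrivals(params, substep, state_history, previous_state):
    # Policy: draw the random incoming transaction data for this timestep.
    return {"arrivals": random.randint(0, params["theta"])}

def s_backlog(params, substep, state_history, previous_state, policy_input):
    # State update: add arrivals, subtract what the provers can process.
    capacity = params["n_provers"] * params["gamma"]
    backlog = previous_state["backlog"] + policy_input["arrivals"]
    return "backlog", max(backlog - capacity, 0)

partial_state_update_blocks = [
    {
        "policies": {"arrivals": p_arrivals},
        "variables": {"backlog": s_backlog},
    },
]
```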

Inputs

Agent-based modeling inputs based on notes from our mentor Barnabe.

Rollup Transaction Throughput (θ): This represents the rate at which transactions are processed within a rollup system. It's measured over a specific time interval Δt. In other words, θ is the number of transactions that the rollup can process within Δt.

Data to Process (D): This is the amount of transaction data, measured in some unit (e.g., gas), that arrives and needs to be processed within Δt. The relationship between θ and D is given by D = θΔt. So, D is the measure of the workload that the system needs to handle within a specific time interval Δt, based on its processing rate θ.

  • For example, if θ is 100 transactions per second and Δt is 60 seconds (1 minute), then D would be 100 transactions/second * 60 seconds = 6,000 transactions to process in that 1-minute time frame.
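
The same arithmetic in Python:

```python
theta = 100          # rollup throughput, transactions per second
delta_t = 60         # observation window, seconds
D = theta * delta_t
print(D)             # 6000 transactions to process in the 1-minute window
```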

User Value (V(D)): When transactions enter the system, they carry some user value. The amount of value, V(D), is a function of the data size, D. It's defined as V(D) = αD, where α represents a constant factor.

User Cost for Waiting (Tu(Δt)): Users might incur a cost if they have to wait for their transactions to be processed. The cost of waiting for a time interval Δt is given by Tu(Δt) = βΔt, where β represents a constant factor. This cost reflects the delay users experience before their transactions are processed within the rollup system.
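
A tiny sketch of the two user-side quantities above, with α and β as placeholder constants:

```python
ALPHA = 0.5   # placeholder value per unit of data
BETA = 0.1    # placeholder waiting cost per unit of time

def user_value(D, alpha=ALPHA):
    """V(D) = alpha * D: value carried into the system by a batch of data size D."""
    return alpha * D

def waiting_cost(delta_t, beta=BETA):
    """Tu(delta_t) = beta * delta_t: cost users incur for waiting delta_t."""
    return beta * delta_t
```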

Number of Provers (N): This represents the total count of provers available in the system.

Prover Efficiency (γi): The efficiency of each prover, indexed by i, which scales the cost and proving delay. More efficient provers have lower costs and shorter proving delays.

Cost to Prove a Batch (Ci(D)): The cost to prove a batch of size D by prover i is determined by a simple scaling factor: Ci(D) = D/γi. This means that the cost to prove a batch increases linearly with its size, but the scaling factor depends on the efficiency of the specific prover: more efficient provers (higher γi) have lower costs.

Proving Delay (Ti(D)): The time it takes prover i to prove a batch of size D is also scaled by its efficiency: Ti(D) = D/γi. This means that the proving delay is directly proportional to the size of the batch, and more efficient provers (higher γi) can prove larger batches in the same amount of time.

Proving Capacity: Given the time interval Δt, a prover i can prove γiΔt units of data. This means that if a prover is active for Δt units of time, they can process data equal to γi times Δt.

Prover Failure (p): There is a probability p that any given prover may randomly fail to prove a batch. This probability represents the likelihood of a prover failing to complete its task successfully.
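
Putting the prover-side definitions together in one sketch (the γi, p, and batch-size values passed in are illustrative, and the D/γi reading of cost and delay follows the definitions above):

```python
import random

def prover_cost(D, gamma_i):
    """C_i(D) = D / gamma_i: more efficient provers prove a batch more cheaply."""
    return D / gamma_i

def proving_delay(D, gamma_i):
    """T_i(D) = D / gamma_i: time for prover i to prove a batch of size D."""
    return D / gamma_i

def proving_capacity(gamma_i, delta_t):
    """Data prover i can prove when active for delta_t: gamma_i * delta_t."""
    return gamma_i * delta_t

def attempt_proof(D, gamma_i, p, rng=random):
    """With probability p the prover randomly fails; otherwise return (cost, delay)."""
    if rng.random() < p:
        return None   # proof attempt failed
    return prover_cost(D, gamma_i), proving_delay(D, gamma_i)
```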