
My prover design with assumptions and selected criteria for the agent-based modelling:
(Reference: Please see all the quantified criteria and related metrics in this document our team created earlier.)

Assumptions:

  • Permissionless entry: no special permissions or approvals required by any central party for an eligible prover to join. Criteria for prover eligibility:
    • Stake: staking is applied to increase security and to create an economic interest in honest behavior. A minimum threshold lowers the entry barrier and a maximum limit controls the dominance of large players (taking ETH as an example, these could be 8/16 ETH and 32 ETH, or similar)
    • Availability of minimum computing power
  • Transparency and fairness:
    • All information about the mechanism’s rules, operation, and state, as well as its decision-making processes, is available to all participants.
    • All participants with the same staked amount have the same selection probability: (staked amount / total amount staked) * randomness coefficient (see the sketch after this list).
  • Sybil attack resistance: Each prover has a unique identifier and their geographic location can be determined
  • Cost: I took this as a constant, assuming that the minimum computation and bandwidth cost to generate a valid proof on time is the same across the prover network
  • Efficiency: considering that all honest provers aim to maximize their profit, I assume that they will increase their efficiency through hardware acceleration or other means at the individual prover level
  • Scalability: assume that the prover mechanism is capable of scaling as the number of transactions increases
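
A minimal sketch (in Python) of how the eligibility criteria and the selection probability above could be represented in the agent-based model; the stake bounds, `MIN_COMPUTE`, and the `Prover` fields are illustrative assumptions, not part of the specification.

```python
import random
from dataclasses import dataclass

# Illustrative bounds based on the ETH example above (assumed values).
MIN_STAKE = 16.0
MAX_STAKE = 32.0
MIN_COMPUTE = 1.0  # assumed minimum computing-power requirement (arbitrary units)

@dataclass
class Prover:
    prover_id: str
    stake: float
    compute: float

def is_eligible(p: Prover) -> bool:
    """Permissionless entry: any prover meeting the stake and compute criteria may join."""
    return MIN_STAKE <= p.stake <= MAX_STAKE and p.compute >= MIN_COMPUTE

def selection_probabilities(provers, randomness_coefficient=1.0):
    """Selection weight per prover: staked amount / total amount staked * randomness coefficient."""
    eligible = [p for p in provers if is_eligible(p)]
    total_stake = sum(p.stake for p in eligible)
    return {p.prover_id: p.stake / total_stake * randomness_coefficient for p in eligible}

def select_prover(provers, randomness_coefficient=1.0):
    """Random selection, so that provers with equal stake have equal probability."""
    weights = selection_probabilities(provers, randomness_coefficient)
    ids, probs = zip(*weights.items())
    return random.choices(ids, weights=probs, k=1)[0]

# Example with three eligible provers
pool = [Prover("A", 32, 2.0), Prover("B", 16, 1.5), Prover("C", 16, 1.0)]
print(selection_probabilities(pool))  # {'A': 0.5, 'B': 0.25, 'C': 0.25}
print(select_prover(pool))
```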

Metrics for modelling:

Decentralization:
I included computing power and a stake threshold/limit in the base assumptions, so here I focus on geographical distribution as the metric for decentralization.

  • Geographic Diversity Index

    • The Geographic Diversity Index (GDI) is a measure that can quantify the degree of decentralization or geographic diversity in a network.

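    One possible formulation, consistent with the description below (this is an assumption, not the original definition), is:

    $$
    \text{GDI} = \frac{\text{number of distinct regions or countries with at least one prover}}{\text{total number of provers}}
    $$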

    A GDI of 1 represents maximum geographic diversity, indicating that each prover is located in a distinct geographic region or country (a sketch of how this could be computed follows the constraints below).

    Subject to

    • A minimum number of distinct geographic regions or countries where provers must be located.
    • A minimum number of provers available in each geographic region or country.
    • A minimum distance or separation between provers in the same geographic region to ensure that they are not concentrated in a small area within the region.
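
A minimal sketch of how the GDI (in the assumed form above) and the first two constraints could be checked in the model; the region labels and thresholds are illustrative assumptions, and the minimum-distance constraint would additionally require prover coordinates.

```python
from collections import Counter

def geographic_diversity_index(prover_regions):
    """GDI in the assumed form above: distinct regions with provers / total provers (1.0 = max)."""
    if not prover_regions:
        return 0.0
    return len(set(prover_regions)) / len(prover_regions)

def satisfies_geo_constraints(prover_regions, min_regions=3, min_per_region=1):
    """Check the minimum-region-count and minimum-provers-per-region constraints.
    The minimum-distance constraint would additionally require prover coordinates."""
    counts = Counter(prover_regions)
    return len(counts) >= min_regions and all(c >= min_per_region for c in counts.values())

# Example: 5 provers spread over 3 regions (region labels are assumed)
regions = ["EU", "EU", "NA", "NA", "APAC"]
print(geographic_diversity_index(regions))   # 0.6
print(satisfies_geo_constraints(regions))    # True
```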

Liveness:
In my view, the Prover Reputation Score (PRS) already covers prover availability and downtime/delay, so I only included this metric here. PRS also has a competition-generating effect, so I did not include other competition metrics in my model. Of the metrics measuring security, the Proof Validity Ratio is measured and included in the reputation score as Prover Reliability. The penalty system under Incentives and the Decentralization metrics also increase the security of the network, so I am not adding additional Security metrics to my current design for the agent-based modelling.

  • Prover Reputation Score (PRS) ensures that there is always a prover ready to generate proofs, based on factors such as uptime and reliability. Speed is included in this metric as the ability to generate valid proofs within the given time window. (Would a base score be assigned to newly joining provers with no history of uptime and reliability?)

    • Prover Uptime (PU) represents the amount of time a prover is available and actively participating in the network. (Should it be measured in units of time or in epochs?)
    • Prover Reliability (PR) factor accounts for the reliability of each prover in terms of their ability to generate proofs accurately and without errors.

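    One possible formulation, combining the two factors above with weights (the weighted form and the weights are assumptions, not the original definition), is:

    $$
    \text{PRS} = w_1 \cdot \text{PU} + w_2 \cdot \text{PR}, \qquad w_1 + w_2 = 1
    $$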

    Constraint:

    • A minimum reputation score (uptime and reliability) to ensure network security (see the sketch below).
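
A minimal sketch of how the reputation score (in the assumed weighted form above) and the minimum-score constraint could be applied; the weights, the base score for new provers, and the threshold are illustrative assumptions.

```python
def reputation_score(uptime_ratio, reliability_ratio, w_uptime=0.5, w_reliability=0.5):
    """PRS as the assumed weighted combination of Prover Uptime (PU) and
    Prover Reliability (PR), both expressed as ratios in [0, 1]."""
    return w_uptime * uptime_ratio + w_reliability * reliability_ratio

NEW_PROVER_BASE_SCORE = 0.5  # assumed base score for provers with no uptime/reliability history
MIN_REPUTATION = 0.6         # assumed minimum reputation constraint

def meets_reputation_constraint(prs):
    """Minimum reputation score required to stay eligible for selection."""
    return prs >= MIN_REPUTATION

print(reputation_score(0.99, 0.95))                        # weighted PRS for 99% uptime, 95% reliability
print(meets_reputation_constraint(NEW_PROVER_BASE_SCORE))  # False: assumed base score is below the threshold
```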

Censorship resistance:
Assuming that the provers can only censor full blocks and not specific transactions within a block, I am only focusing on the Prover Non-censorship Index here as this could be the most representative metric.

  • Prover Non-censorship Index (PNI):
    The above Non-censorship Index (NCI) could also be tailored to measure the censorship behavior of individual provers; to do so, we include the penalty below in the formula:

    • Prover Censorship Penalty (PCP)*: could be included in the objective function to penalize censoring provers. This should be subtracted from the NCI when a prover censors a transaction.

    Objective function:

    • Maximize PNI = (Σ X) / T - Σ C * (1 - X)
      • ‘Σ X’ represents the sum of binary variables ‘X’.
      • ‘X’ is a binary decision variable (1 if not censored, 0 if censored)’ for each transaction.
      • ‘T’ represents the total number of transactions.
      • ‘C’ is the censorship penalty for the transactions.

    This index could be used in two ways:

    • it could be included in the calculation of the provers’ reputation score
    • it could be used to dynamically adjust the rewards for provers, i.e. non-censoring provers get full rewards, while censoring provers are rewarded according to their score (1 = 100% of rewards; 0.9 => 90% of rewards); see the sketch below
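
A minimal sketch of the PNI calculation and the reward-scaling use described above; the clamping of the score to [0, 1] and the example penalty values are illustrative assumptions.

```python
def prover_non_censorship_index(included_flags, penalties):
    """PNI = (sum of X) / T - sum of C * (1 - X), as defined above.
    included_flags: binary X per transaction (1 = not censored, 0 = censored).
    penalties: censorship penalty C per transaction."""
    t = len(included_flags)
    if t == 0:
        return 1.0
    inclusion_ratio = sum(included_flags) / t
    censorship_penalty = sum(c * (1 - x) for x, c in zip(included_flags, penalties))
    return inclusion_ratio - censorship_penalty

def scaled_reward(base_reward, pni):
    """Dynamic reward adjustment: a PNI of 1 gives 100% of the reward, 0.9 gives 90%, etc.
    Clamping to [0, 1] is an assumption to keep rewards non-negative."""
    return base_reward * max(0.0, min(1.0, pni))

# Example: 10 transactions, one censored, with an assumed penalty of 0.05 per transaction
flags = [1] * 9 + [0]
pens = [0.05] * 10
pni = prover_non_censorship_index(flags, pens)
print(pni, scaled_reward(100.0, pni))  # roughly 0.85 and 85.0
```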

Incentives for provers:

The goal is to maximize the overall success of the system while ensuring that provers are appropriately incentivized to participate honestly. The objective function should strike a balance between rewarding provers for their contributions and deterring malicious actions, ultimately aligning their interests/utility with the long-term success and security of the zk network.

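One possible formulation, reconstructed from the variable definitions below (an assumption, not the original formula), is:

$$
\text{Reward} = V \cdot \text{Base Reward} + C \cdot \text{Complexity Bonus}
$$

$$
\text{Penalty} = I \cdot \text{Base Penalty} \cdot \text{Penalty Severity}
$$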

Where:

  • V is the number of valid proofs submitted by the prover.
  • Base Reward is the fixed reward assigned to each valid proof.
  • C is the complexity of the computations involved.
  • Complexity Bonus is a coefficient that rewards more complex computations.
  • I represents the number of incorrect or invalid proofs.
  • Base Penalty is the fixed penalty assigned to each invalid proof.
  • Penalty Severity is a coefficient that determines the severity of the penalty based on the degree of misbehavior (e.g., 0 for no misbehavior, 1 for minor errors, 2 for major errors, 3 for deliberate fraud).
    • IDEA: this could also be a simple multiplier on the base penalty when the bad behavior happens multiple times, e.g. base penalty x2 for the second time, x3 for the third time, x4 for the fourth time, etc.

Objective function:

  • Maximize Σ (Reward - Penalty) for each prover

The system should be designed to maximize Reward - Penalty for each prover. This means that honest provers who submit valid proofs and handle complex computations are rewarded, while dishonest provers who submit incorrect or invalid proofs face penalties proportionate to the severity of their misbehavior.

Subject to:

  • A maximum reward that can be earned by honest provers. This prevents overly generous rewards that might strain the system (see the sketch below).
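
A minimal sketch of the per-prover reward and penalty (in the assumed form above), including the repeat-offense multiplier idea and the maximum-reward constraint; all parameter values are illustrative assumptions.

```python
def prover_reward(valid_proofs, complexity, base_reward=1.0, complexity_bonus=0.1, max_reward=100.0):
    """Reward = V * Base Reward + C * Complexity Bonus, capped by the maximum-reward constraint.
    All parameter values are illustrative assumptions."""
    return min(valid_proofs * base_reward + complexity * complexity_bonus, max_reward)

def prover_penalty(invalid_proofs, severity, repeat_offenses=1, base_penalty=2.0):
    """Penalty = I * Base Penalty * Penalty Severity (0..3), optionally scaled by the
    repeat-offense multiplier idea above (x2 for the second offense, x3 for the third, ...)."""
    return invalid_proofs * base_penalty * severity * repeat_offenses

def net_reward(valid_proofs, complexity, invalid_proofs, severity, repeat_offenses=1):
    """Per-prover objective: Reward - Penalty."""
    return (prover_reward(valid_proofs, complexity)
            - prover_penalty(invalid_proofs, severity, repeat_offenses))

# Example: 10 valid proofs of total complexity 5, plus 1 invalid proof with minor-error severity (1)
print(net_reward(10, 5, 1, 1))  # 10*1.0 + 5*0.1 - 1*2.0*1*1 = 8.5
```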

Proposed prover selection model based on the above:

Random prover selection from a pool of the 32/64/128 provers with the highest reputation scores. Secondary mechanism in case of prover failure: open competition with proof racing (the fastest prover to submit a valid proof is rewarded). A sketch of this flow is included after the notes below.

Notes:

  • the pool of most reputable provers could be dynamic, i.e. if a prover is already working on a proof, they are excluded from the pool until their proof is submitted and their capacity is available again; this way multiple provers get the chance to compete.
  • pool size to be set based on the block time and the proof window (how many blocks are proposed within the proof window)
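
A minimal sketch of the proposed selection flow, combining the dynamic top-reputation pool with the proof-racing fallback; the pool size, the prover fields, and the failure handling are illustrative assumptions.

```python
import random

def select_from_top_pool(provers, pool_size=64):
    """Pick a prover at random from the pool of highest-reputation provers that are not busy."""
    available = [p for p in provers if not p["busy"]]
    pool = sorted(available, key=lambda p: p["reputation"], reverse=True)[:pool_size]
    return random.choice(pool) if pool else None

def assign_proof(provers, pool_size=64):
    """Primary path: random selection from the dynamic top-reputation pool.
    Fallback on prover failure: open proof race, fastest valid proof wins
    (simulated here with an assumed proof_time field)."""
    chosen = select_from_top_pool(provers, pool_size)
    if chosen is not None and not chosen["fails"]:
        return chosen["id"], "pool-selection"
    racers = [p for p in provers if not p["busy"] and not p["fails"]]
    winner = min(racers, key=lambda p: p["proof_time"]) if racers else None
    return (winner["id"] if winner else None), "proof-race"

# Example with three provers; all fields are assumed for the simulation
provers = [
    {"id": "A", "reputation": 0.9, "busy": False, "fails": False, "proof_time": 12.0},
    {"id": "B", "reputation": 0.8, "busy": True,  "fails": False, "proof_time": 10.0},
    {"id": "C", "reputation": 0.7, "busy": False, "fails": True,  "proof_time": 9.0},
]
print(assign_proof(provers, pool_size=2))
```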