Rewards System (GIA)

@rewardssystem

Public team

Joined on Mar 21, 2022

  • Info for JCook; input from Octopus refers to the prototype model here: https://github.com/dRewardsSystem/Rewards/tree/main/prototype-model - What data is needed to run the simulation end-to-end? Eventually we will need data on reviewer metrics. So far, I have been building using Octopus's notes as a proto-spec. This means reviewers entering the system have the following attributes: trust_level: 5, social_level: 4
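The proto-spec attributes above can be sketched as a minimal Python class. This is an illustrative sketch only: the field names follow Octopus's notes, and the default values mirror the example above; any ranges or further fields are assumptions, not part of the spec.

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    """A reviewer entering the system, per the proto-spec notes.

    trust_level and social_level come from Octopus's notes; the
    integer type and defaults here are illustrative assumptions.
    """
    trust_level: int = 5
    social_level: int = 4

r = Reviewer()
print(r.trust_level, r.social_level)  # → 5 4
```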
  • Questions: What qualifies a grant for review? What are the components of the Reviewer Success score? What does it mean for a reviewed grant to move to the approval stage? Does the grant reviewer team have input into grant appeals? What does tagging mean in grant review? Does each grant pass through every reviewer? Is there any component that implies Sybil activity, and if so, how is it handled?
  • Background Credible neutrality is necessary for a community to grow. Rawls has shown that participants are more likely to accept the outcomes of an election if they think the process is legitimate and fair. As we build the first digital democracies, how might we improve the legitimacy, credible neutrality, and sustainability of these new, internet-based, democratic systems? This is a top-level, society-scale question that GitcoinDAO aims to tackle. Within that, grants are provided to projects that have positive externalities for the public ("public goods"). Traditionally, these grants have been managed as a monolithic group of applications, reviewed by humans. The upcoming Grants 2.0 will be modular, allowing anyone to host a round. This means that round owners will need to take responsibility for curating the scope and eligibility requirements for their round. Some round owners will self-curate; others will prefer to delegate this responsibility to the community. If successful, this system will create a "validator set" of grant reviewers which will lend any grant round credible neutrality in its curation process. Communities that self-curate will eventually run into a scaling problem. We intend to solve this problem in a way that generalizes to all round owners, regardless of whether they self-curate or community-curate. Communities that delegate curation to their community will be able to focus on sourcing the right criteria rather than execution details. The reviewer incentivization layer, if designed correctly, allows both types of communities to scale in a credibly neutral way. This is the purpose of the Rewards team: to optimize a generalizable incentive layer that maximizes the likelihood that a grant outcome is trustworthy, regardless of the size and scope of the round.
At the moment, grants are reviewed by a small set of highly trusted individuals who have built knowledge and mutual trust through experience and discussion. However, with decentralization and permissionlessness as core values, the grant reviewing process needs to be expanded to more human reviewers. This will require systems that ensure those human reviewers act honestly and skillfully. In the absence of such systems, we are vulnerable to thieves, saboteurs, and well-intentioned incompetence. To defend against this, a well-designed incentivization scheme is needed to attract, train, and retain trustworthy reviewers. The optimal incentive model cheaply ensures reviews are completed honestly. The GIA Rewards team aims to devise as close to an optimal system as possible through ongoing research and development. This is one of several components running in parallel toward the overall aim of generating trustworthy grant outcomes. Correctly incentivizing reviewers is a route to increasing the trustworthiness of the humans reviewing grants - one critical part of the overall grant review process that also includes Sybil defense.
  • The Framework We can view the grant review mechanism for Gitcoin Grants as a mathematical function. The input is a grant, and the output is either "Yes" (the grant is valid and should receive its donations) or "No" (the grant is invalid and will not receive donations). For each grant, we will ultimately make a permanent "Yes" or "No" decision. Now the key question is: What is the appropriate process for computing this function? Components Our main idea is to view the reviewers as components in the system, with their individual signals processed and aggregated to reach the final decision.
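The "grant in, Yes/No out" function above can be sketched in a few lines. An unweighted majority vote over reviewer signals is used here purely as an illustrative placeholder; the team's actual aggregation rule (weights, thresholds, trust levels) is still being designed and is not specified in the notes.

```python
def review_grant(signals):
    """Aggregate individual reviewer signals into one permanent decision.

    Each signal is "Yes" or "No". A simple unweighted majority is an
    illustrative placeholder, not the team's actual aggregation rule;
    ties fall back to "No" here as an assumption.
    """
    yes_votes = sum(1 for s in signals if s == "Yes")
    return "Yes" if yes_votes > len(signals) / 2 else "No"

print(review_grant(["Yes", "Yes", "No"]))  # → Yes
```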
  • Entities These are classes to create in Python: Reviewer, Pool, Grant, Discussion, Pipeline, Scoreboard, Round
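The entity list above only names the classes, so the skeleton below is a hedged sketch: the class names come from the post, but every field and type is an assumption made to render the sketch runnable.

```python
from dataclasses import dataclass, field
from typing import Optional

# Class names are from the post; all fields are illustrative assumptions.

@dataclass
class Reviewer:
    name: str
    trust_level: int = 0

@dataclass
class Grant:
    title: str
    decision: Optional[str] = None  # "Yes"/"No" once reviewed

@dataclass
class Pool:
    reviewers: list = field(default_factory=list)

@dataclass
class Discussion:
    messages: list = field(default_factory=list)

@dataclass
class Pipeline:
    grants: list = field(default_factory=list)

@dataclass
class Scoreboard:
    scores: dict = field(default_factory=dict)

@dataclass
class Round:
    pool: Pool = field(default_factory=Pool)
    pipeline: Pipeline = field(default_factory=Pipeline)
```

A round then ties the entities together: reviewers join its pool, and grants flow through its pipeline.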
  • The Framework We can view the grant review mechanism for Gitcoin Grants as a mathematical function. The input is a grant, and the output is either "Yes" (the grant is valid and should receive its donations) or "No" (the grant is invalid and will not receive donations). For each grant, we will ultimately make a permanent "Yes" or "No" decision. Now the key question is: What is the appropriate process for computing this function? Components Two Main Questions: How do we know a grant should be approved or rejected?
  • The model below is one of the three models the GIA dReward squad is developing. In this model, compensation is based on three layers of reviewers, viz. L1, L2, and L3, with L3 reviewers at the top of the echelon. Per layer, compensation ranges from POAPs and social rewards to monetary rewards. All interested reviewers are placed in a pool, tagged L1. The output of their review is vetted by an L2 reviewer, whose outcome is in turn vetted by an L3 reviewer. The L3 reviewer has the final decision on such grants, and can in most cases overturn the decision of the L2 reviewers. The hierarchical structure is such that the workload decreases from L1 to L3, and trust can be built from L3 to L1 based on the performance of the lower-layer reviewers. Pros and cons of the proposed model Pros: • Trust can be built and strengthened with this model, with good performance earning high trust. • Reputation can easily be tracked. • Compensation is easy to model. Cons:
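The L1 → L2 → L3 flow described above can be sketched as a pipeline of verdicts, where each higher layer vets the one below and L3 has the final say. Representing reviewers as callables returning "Yes"/"No" is an illustrative assumption; the squad's actual model may attach compensation, trust updates, and dispute handling at each layer.

```python
def hierarchical_review(grant, l1, l2, l3):
    """Three-layer review per the proposed model: an L1 reviewer's
    verdict is vetted by an L2 reviewer, and an L3 reviewer makes the
    final decision (and may overturn L2).

    Reviewers are modeled here as callables returning "Yes"/"No";
    this representation is an assumption for illustration only.
    """
    verdict = l1(grant)            # L1 reviews the grant
    verdict = l2(grant, verdict)   # L2 vets the L1 output
    return l3(grant, verdict)      # L3 has the final decision

decision = hierarchical_review(
    {"title": "Public goods tooling"},
    lambda g: "Yes",      # L1 approves
    lambda g, v: v,       # L2 concurs
    lambda g, v: v,       # L3 upholds
)
print(decision)  # → Yes
```

Because L3 receives the vetted verdict rather than the raw grant alone, its workload stays small, matching the workload gradient the model describes.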