# Grant Reviewer Rewards System

## The Framework

We can view the Grant Review mechanism for GitCoin Grants as a mathematical *function*. The *input* is a grant, and the *output* is either "Yes" (the grant is valid and should receive its donations) or "No" (the grant is invalid and will not receive donations). For each grant, we will ultimately make a permanent "Yes" or "No" decision.

Now the key question is: **What is the appropriate process for computing this function?**

## Components

## Two Main Questions

1. How do we know whether a grant should be approved or rejected?
2. How do we know that an individual's vote is trustworthy and reliable?

## Design of the System

The goal in designing this system is to minimize both the *error rate* (the proportion of grants that are misclassified) and the *cost* (the expense to the GitCoin system of making the decision).

### Key Ideas

* Use various mechanisms to ensure that reviewers are capable of doing the work and are giving honest effort.
* Give financial and non-financial rewards for good performance.
* Create an efficient mechanism so that all contributions are rewarded. Identify different roles/POAPs for taking actions that help the process.

#### Predefined Assessments

* Introductory Quiz: to be eligible to review, new users should at least complete a basic short assessment in which they correctly classify a small number of grants ("correct" being defined by prior community consensus).
* Poison Pills: grants that should clearly be rejected based on previous community consensus. Reviewers will be given a certain number of Poison Pills, and their credibility will be adjusted downward if they approve any of them.
* Gold Stars: grants that should clearly be approved based on community consensus. Reviewers will be randomly given a certain number of Gold Stars, and their credibility will be adjusted downward if they reject any of them.

## Ongoing Vibe Checks

* Agreement in Pools.
Low-level reviewers may be placed in pools of 2-5 (the pool size is a parameter to tune) where their individual votes are aggregated. If a voter is in the minority on a particular grant, they could either have their credibility adjusted or be asked to submit a form detailing the reason for their vote.
* Agreement Between Levels.
If an established reviewer disagrees with a newer reviewer, we will assume the more established reviewer is correct. This should be counterbalanced by some check on established reviewers, so that seniority does not become self-reinforcing.

### Variables to Consider

* Do reviewers have trustworthiness/accuracy scores? How do we measure or update them?
* Time spent on a review.
* Learning through the review process: how does a beginner gain more insight into and connection with the community, so that both the individual and the system gain knowledge?
* The "green swan" event of a massive disconnect discovered after the fact (similar to the Token Engineering Commons' "Praisemageddon" in June 2021). How will data be analyzed?

### Valuable Roles

These could be given POAPs:

* Reviewer (different levels)
* Discusser: contributes questions or ideas that help the community decide and gain insight
* Analyst: looks at data and/or takes a more in-depth view of trends in the review process

### One Possible Design: Hierarchy

![](https://i.imgur.com/PIsWPg6.jpg)

Reviewers have different levels. A Level 1 reviewer has their decision reviewed by a Level 2 reviewer. The Level 2 reviewer's decision will be used for the approve/reject outcome. The Level 1 reviewer's credibility will be adjusted according to whether or not they agree with the higher-level reviewer. We could iterate this with Level 3, or even higher levels, in a classic "up-the-chain" manner.
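The checks above (Poison Pills, Gold Stars, minority votes within a pool, and disagreement with a higher-level reviewer) can all feed a single credibility score. A minimal sketch of one way such an update could work; the penalty values, the reward value, and the clamping rule are invented for illustration and are not part of the design:

```python
# Hypothetical credibility update: each check produces an "agreed with
# consensus?" signal, and credibility is nudged up or down accordingly.
# All numbers below are illustrative placeholders, not tuned values.

PENALTIES = {
    "poison_pill": 0.30,    # approved a known-bad grant: large hit
    "gold_star": 0.20,      # rejected a known-good grant
    "pool_minority": 0.05,  # disagreed with the pool majority: small hit
    "level_disagree": 0.10, # overruled by a higher-level reviewer
}
REWARD = 0.02               # small boost for agreeing with consensus

def update_credibility(credibility: float, check: str, agreed: bool) -> float:
    """Return the reviewer's new credibility, clamped to [0, 1]."""
    if agreed:
        credibility += REWARD
    else:
        credibility -= PENALTIES[check]
    return max(0.0, min(1.0, credibility))

# Example: a reviewer at 0.8 approves a Poison Pill (a large penalty),
# then agrees with their pool on the next grant (a small reward).
c = update_credibility(0.8, "poison_pill", agreed=False)
c = update_credibility(c, "pool_minority", agreed=True)
```

A multiplicative update, or one weighted by how long the reviewer has been active, would be an equally reasonable choice; the point is only that every check routes into one score.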
For an individual grant, the review process would look like this: **picture**

For all the grants, it would look like this: **picture**

### Risk and Cost of Hierarchy

The risk that an incorrect decision is ultimately made is relatively low; we weight the votes so that higher-level reviewers have the greater influence on the final decision.

### Another Possible Design: Pools

Reviewers of the same level are placed in a pool, and their majority vote is counted. For an individual grant being reviewed, it would look like this: **picture**

For all of the grants, it would look like this (note the placement of the arrows, so that we don't have overlapping pools): **picture**

### Hybridizing the Two Designs

### Wait -- isn't this a neural network?
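The resemblance is more than cosmetic. If each vote is a +1 (approve) or -1 (reject) signal and credibility scores act as weights, then a pool is a weighted-sum-and-threshold unit, a hierarchy of levels is a stack of such units, and credibility updates play the role of training. A toy sketch of this framing; the reviewers, weights, and threshold rule are all invented for illustration:

```python
# Toy framing of pooled review as a weighted-sum-and-threshold unit:
# votes are +1 (approve) / -1 (reject), credibility scores are weights,
# and the grant is approved when the weighted sum is positive.
# All reviewers and numbers below are invented for illustration.

def pool_decision(votes_and_weights):
    """Aggregate (vote, credibility) pairs like a single threshold unit."""
    total = sum(vote * weight for vote, weight in votes_and_weights)
    return "Yes" if total > 0 else "No"

# A pool of three reviewers: two low-credibility approvals can be
# outweighed by one high-credibility rejection.
pool_a = [(+1, 0.3), (+1, 0.3), (-1, 0.9)]
pool_b = [(+1, 0.8), (+1, 0.5)]
print(pool_decision(pool_a))  # "No"

# Stacking levels: a Level 2 stage re-aggregates the Level 1 pool
# outputs, analogous to a second layer in a small network.
level1_outputs = [pool_decision(p) for p in (pool_a, pool_b)]
level2_votes = [(+1 if out == "Yes" else -1, 0.7) for out in level1_outputs]
print(pool_decision(level2_votes))  # a tied weighted sum falls to "No"
```

The analogy suggests a way to analyze the designs: error rate and cost become properties of the "network architecture" (pool sizes, number of levels, weight-update rules), which is exactly the parameter space the sections above describe.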