# Grant Reviewer Rewards System :octopus:
## The Framework
We can view the Grant Review mechanism for GitCoin Grants as a mathematical *function*.
The *input* is a grant, and the output is either "Yes" (the grant is valid and should receive its donations) or "No" (the grant is invalid and will not receive donations). For each grant, we will ultimately make a permanent "Yes" or "No" decision.
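As a sketch of this framing (the `Grant` type and names below are placeholders for illustration, not an existing GitCoin API):

```python
from enum import Enum

class Decision(Enum):
    YES = "approve"   # the grant is valid and receives its donations
    NO = "reject"     # the grant is invalid and receives nothing

# Conceptually, the review mechanism is a function from grants to decisions.
# `Grant` is a placeholder type; the rest of this document is about how to
# compute this function using human reviewers.
def review(grant: "Grant") -> Decision:
    ...
```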
Now the key question is **What is the appropriate process for computing this function?**
## Components
Our main idea is to view the reviewers as components in the system, with their individual signals processed and aggregated to reach the final decision.
## Two Main Questions
1. How do we know a grant should be approved or rejected?
2. How do we know that an individual's vote is trustworthy and reliable?
## Design of the System
The goal in designing this system is to make sure that we minimize both *error rate* (grants which are misclassified) and *cost* (the expense to the GitCoin system for making the decision).
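One hedged way to make this goal concrete is to score any candidate design by a weighted combination of its error rate and its cost; the weights below are hypothetical tuning parameters, not values from this proposal.

```python
def design_score(error_rate: float, cost: float,
                 error_weight: float = 1.0, cost_weight: float = 1.0) -> float:
    """Lower is better: a weighted sum of the misclassification rate
    and the cost to GitCoin of reaching the decisions."""
    return error_weight * error_rate + cost_weight * cost
```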
### Key Ideas
* Use various mechanisms to ensure that reviewers are capable of doing the work and giving honest effort.
* Give financial and non-financial rewards for good performance.
* Create an efficient mechanism so that all contributions are rewarded. Identify different roles/POAPs for taking actions that help the process.
#### Predefined Assessments
* **Introductory Quiz:** to be eligible to review, new users should at least complete a basic short assessment where they correctly classify a small number of grants ("correct" being defined by prior community consensus on already-decided grants).
* **Poison Pills:** grants that should clearly be rejected based on previous community consensus. Reviewers will be given a certain number of Poison Pills, and their credibility will be adjusted if they approve any of them.
* **Gold Stars:** grants that should clearly be approved based on community consensus. Reviewers will be randomly given a certain number of Gold Stars, and their credibility will be adjusted if they reject any of them. (A sketch of this credibility adjustment appears below.)
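As a rough illustration of how Poison Pills and Gold Stars might feed into a reviewer's credibility score: a minimal sketch, assuming a single numeric credibility value per reviewer and hypothetical adjustment sizes (none of these numbers come from the proposal; real values would be tuned from data).

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    name: str
    credibility: float = 1.0  # hypothetical starting score

# Hypothetical adjustment sizes, for illustration only.
POISON_PILL_PENALTY = 0.2   # approved a grant the community already rejected
GOLD_STAR_PENALTY = 0.1     # rejected a grant the community already approved
AGREEMENT_BONUS = 0.05      # small reward for matching the known decision

def score_known_answer(reviewer: Reviewer, vote: str, known_decision: str) -> None:
    """Adjust credibility when a reviewer votes on a pre-labelled grant."""
    if known_decision == "reject" and vote == "approve":
        reviewer.credibility -= POISON_PILL_PENALTY   # failed a Poison Pill
    elif known_decision == "approve" and vote == "reject":
        reviewer.credibility -= GOLD_STAR_PENALTY     # missed a Gold Star
    else:
        reviewer.credibility += AGREEMENT_BONUS
    # Keep credibility within an assumed range.
    reviewer.credibility = max(0.0, min(reviewer.credibility, 2.0))
```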
## Ongoing Vibe Checks
* **Agreement in Pools.** Low-level reviewers may be placed in pools of 2-5 (the pool size is a parameter to tune) where their individual votes are aggregated. If a voter is in the minority on a particular grant, they could either have their credibility adjusted or be asked to submit a form detailing the reason for their vote. (A sketch of this pool check appears after this list.)
* **Agreement Between Levels.** If an established reviewer disagrees with a newer reviewer, we will assume the more established reviewer is correct. This should be counterbalanced by checks that apply to established reviewers as well (for example, Poison Pills and Gold Stars at every level), so that seniority alone does not shield a reviewer from accountability.
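A minimal sketch of the pool check, assuming simple majority voting; the pool contents, the tie-handling note, and the idea of returning the minority voters for follow-up are illustrative assumptions.

```python
from collections import Counter
from typing import Dict, List, Tuple

def pool_decision(votes: Dict[str, str]) -> Tuple[str, List[str]]:
    """Aggregate a pool's votes by simple majority.

    `votes` maps reviewer name -> "approve" or "reject".
    Returns the pool decision and the list of minority voters, who could
    then have their credibility adjusted or be asked to explain their vote.
    Ties (possible with even pool sizes) would need an explicit rule,
    e.g. escalation to a higher-level reviewer.
    """
    tally = Counter(votes.values())
    decision, _ = tally.most_common(1)[0]
    minority = [name for name, vote in votes.items() if vote != decision]
    return decision, minority

# Example: a pool of three reviewers on one grant.
decision, minority = pool_decision(
    {"alice": "approve", "bob": "approve", "carol": "reject"}
)
# decision == "approve"; minority == ["carol"]
```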
### Variables to Consider
* Do reviewers have a trustworthiness/accuracy score? How do we measure and update it?
* Time spent on review
* Learning through the review process: how does a beginner gain more insight and connection to the community so that both the individual and the system gain knowledge?
* The "green swan" event of a massive disconnect after the fact (similar to the Token Engineering Commons" "Praisemageddon" in June 2021). How will data be analyzed?
### Valuable Roles
Each of these roles could be recognized with a POAP:
* Reviewer (different levels)
* Discusser: contributes questions or ideas that help the community decide and gain insight
* Analyst: looks at data and/or takes a more in-depth view of trends in the review process
### One Possible Design: Hierarchy
Reviewers have different levels. A Level 1 reviewer has their decision reviewed by a Level 2 reviewer. The Level 2 reviewer's decision will be used for the approve/reject call. The Level 1 reviewer's credibility will be adjusted based on whether or not they agree with the higher-level reviewer.

We could iterate this with Level 3 reviewers, or even higher levels, in a classic "up-the-chain" manner.
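A minimal sketch of the up-the-chain flow, under the stated assumptions: the highest-level vote is final, and each lower-level reviewer is scored against it. The reviewer structure and the adjustment sizes are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LeveledReviewer:
    name: str
    level: int
    credibility: float = 1.0  # hypothetical starting score

def hierarchy_decision(votes: List[Tuple[LeveledReviewer, str]]) -> str:
    """Each vote is (reviewer, "approve"/"reject").

    The highest-level reviewer's vote is used as the final decision;
    everyone below has their credibility adjusted by agreement with it.
    """
    ordered = sorted(votes, key=lambda pair: pair[0].level, reverse=True)
    final_decision = ordered[0][1]
    for reviewer, vote in ordered[1:]:
        if vote == final_decision:
            reviewer.credibility += 0.05   # agreed with the higher level
        else:
            reviewer.credibility -= 0.10   # disagreed; hypothetical penalty
    return final_decision
```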
### Risk and Cost of Hierarchy
The risk that an incorrect decision is ultimately made is relatively low, since the higher-level reviewers make the final decision and we trust them to get it right.
### Another Possible Design: Pools
Reviewers of the same level are placed in a pool, and their majority vote is counted.
For an individual grant being reviewed, it would look like this:

For all of the grants, it would look like this (note how the arrows are placed so that pools do not overlap):
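A minimal sketch of assigning reviewers to non-overlapping pools across a batch of grants; chunking a shuffled reviewer list is an illustrative strategy, not a method prescribed by the proposal.

```python
import random
from typing import Dict, List

def assign_pools(
    reviewers: List[str], grants: List[str], pool_size: int = 3
) -> Dict[str, List[str]]:
    """Give each grant a disjoint pool of reviewers (no reviewer in two pools).

    Requires len(reviewers) >= pool_size * len(grants); with fewer reviewers,
    pools would have to be rotated across review rounds instead.
    """
    needed = pool_size * len(grants)
    if len(reviewers) < needed:
        raise ValueError("not enough reviewers for non-overlapping pools")
    shuffled = random.sample(reviewers, k=needed)
    return {
        grant: shuffled[i * pool_size : (i + 1) * pool_size]
        for i, grant in enumerate(grants)
    }
```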

### Hybridizing the Two Designs
The two distinct designs just presented can be combined and modified.
For instance, suppose we have the following parameters:
1. a Level 1 vote counts as 1 vote, a Level 2 vote counts as 2.5 votes
2. a pool consists of two Level 1 reviewers and a Level 2 reviewer.
Under these parameters, the "Pool" model is functionally equivalent to the "Hierarchy" model: since 2.5 > 1 + 1, the Level 2 reviewer's vote always determines the majority, so the Level 2 reviewer is effectively giving the final decision.
The precise parameters of the design are open to exploration, and it's important to gather data by trying different arrangements, both in simulation and in practice.
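A minimal sketch of the weighted-pool variant with the example parameters above; because 2.5 > 1 + 1, a pool of two Level 1 reviewers and one Level 2 reviewer always follows the Level 2 vote, reproducing the hierarchy outcome.

```python
from typing import List, Tuple

# Vote weights per level, taken from the example parameters above.
LEVEL_WEIGHTS = {1: 1.0, 2: 2.5}

def weighted_pool_decision(votes: List[Tuple[int, str]]) -> str:
    """Each vote is (level, "approve"/"reject"); the heavier side wins."""
    approve = sum(LEVEL_WEIGHTS[lvl] for lvl, vote in votes if vote == "approve")
    reject = sum(LEVEL_WEIGHTS[lvl] for lvl, vote in votes if vote == "reject")
    return "approve" if approve > reject else "reject"

# The Level 2 vote (2.5) outweighs both Level 1 votes combined (1 + 1 = 2),
# so the pool decision always matches the Level 2 reviewer's decision.
assert weighted_pool_decision(
    [(1, "approve"), (1, "approve"), (2, "reject")]
) == "reject"
```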
### Wait -- isn't this a neural network?
The way that information flows through this system is reminiscent of many other systems, including neural networks.
For a neural network to improve, there has to be a way for the neurons to update their weights. In a learning system composed of humans, this is equivalent to human beings growing their own understanding.
The system will benefit from having data flow on individual reviewers' feedback, as well as from giving opportunities for reflection. Discussion can seem inefficient for reaching a final decision, but taking time to highlight edge cases is invaluable for aligning perspectives and developing cultural norms.