# GitCoin Trust Flow and Rewards Simulation Draft :octopus:
## Entities
These are classes to create in Python.
* Reviewer
* Pool
* Grant
* Discussion
* Pipeline
* Scoreboard
* Round
### Reviewer
A reviewer has the following attributes, all of which range from 0.0 to 1.0.
* trust level: how trustworthy they actually are
* social level: how likely they are to engage in non-work activity (e.g. chats and discussion) and their ability to communicate their thoughts
* ability level: how capable they are of doing the work
* recognition level: how strongly they are recognized by the community (starts at 0 and goes up with increased community engagement)
* engagement: goes up based on interaction with the community
A reviewer performs the following actions:
* decide(grant): takes a grant as input and returns a decision: the grant is valid or not
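A minimal sketch of the Reviewer class, assuming a simple decision rule (not specified above): the reviewer reads the grant correctly with probability equal to their ability, and otherwise guesses. The attribute names and the `grant_legitimacy` parameter are illustrative simplifications.

```python
import random
from dataclasses import dataclass

@dataclass
class Reviewer:
    """Sketch of a reviewer; all attributes range from 0.0 to 1.0."""
    trust: float        # how trustworthy they actually are
    social: float       # likelihood of non-work engagement / communication skill
    ability: float      # how capable they are of doing the work
    recognition: float = 0.0  # rises with community engagement
    engagement: float = 0.0   # rises with community interaction

    def decide(self, grant_legitimacy: int) -> int:
        # Assumed rule: with probability `ability` the reviewer assesses the
        # grant correctly; otherwise they guess at random.
        if random.random() < self.ability:
            return grant_legitimacy
        return random.choice([0, 1])
```

With `ability=1.0` the reviewer always returns the true legitimacy, which makes the rule easy to sanity-check.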
### Pool
A pool is a collection of reviewers. A list of reviewers should work.
A pool takes the decisions of its individual reviewers and aggregates them into a single decision.
### Grant
A grant is a proposal about which a reviewer makes a decision as to whether it is legitimate or not.
Grants have three attributes:
* value: the amount of value they add to the GTC ecosystem.
* clarity: how clearly they communicate this value.
* legitimacy: 0 or 1, whether the grant is created with clear intent or not.
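The three attributes above map directly onto a small dataclass; the 0.0–1.0 ranges for value and clarity are assumed by analogy with the reviewer attributes.

```python
from dataclasses import dataclass

@dataclass
class Grant:
    value: float     # value added to the GTC ecosystem (0.0-1.0 assumed)
    clarity: float   # how clearly that value is communicated
    legitimacy: int  # 1 if created with clear intent, else 0
```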
### Discussion
A discussion occurs when a reviewer is unable to determine if a grant is legitimate.
Discussions can affect the reviewer's ability and engagement: a productive discussion will increase them, while a contentious or misleading discussion will lower them.
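A hedged sketch of that update: nudge ability and engagement up or down by a fixed step (the step size is an assumption), clamped to the 0.0–1.0 range used throughout.

```python
def apply_discussion(ability, engagement, productive, step=0.05):
    """Return updated (ability, engagement) after a discussion.

    `productive=True` raises both attributes by `step`; a contentious or
    misleading discussion lowers them. Values stay clamped to [0.0, 1.0].
    The step size of 0.05 is an illustrative assumption.
    """
    delta = step if productive else -step
    clamp = lambda x: max(0.0, min(1.0, x))
    return clamp(ability + delta), clamp(engagement + delta)
```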
### Pipeline
A pipeline is a connected set of reviewers and pools which produce a decision.
When we build a pipeline, we are specifying an information-flow structure: which reviewers make a decision for an individual grant, and how those individual decisions ultimately lead to the system's decision.
In addition to producing a final decision, a pipeline should also update each reviewer's statistics in the Scoreboard.
### Scoreboard
A scoreboard keeps track of each individual reviewer's statistics for the round, allowing the system to update between rounds.
For each reviewer it would be good to know
* how many grants they reviewed
* how often they disagreed with their pool
* how often they disagreed with higher level reviewers
* how often they reached the correct decision
* how often they engaged in discussion
The scoreboard could be implemented as a pandas DataFrame where each row is a reviewer and the columns track their performance for each round.
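A minimal sketch of that DataFrame, with one column per statistic listed above; the reviewer names and column names are illustrative.

```python
import pandas as pd

# One row per reviewer; one column per per-round statistic.
columns = ["n_reviewed", "pool_disagreements",
           "higher_disagreements", "n_correct", "n_discussions"]
scoreboard = pd.DataFrame(0, index=["alice", "bob"], columns=columns)

# Example update after alice reviews a grant and reaches the correct decision:
scoreboard.loc["alice", ["n_reviewed", "n_correct"]] += 1
```

Tracking multiple rounds could be done with a MultiIndex on the columns, or one DataFrame per round.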
### Round
A round consists of a series of grant reviews.
In each round, the grants are assigned to a pipeline and final decisions are made.
### Simulation
We decide in advance how many rounds will run, what pipeline structure we want to use, and what metrics we will track.
#### Process of Each Round
1. Assign each grant to a pipeline
2. The pipeline decides
3. We track individual and system performance on the scoreboard
#### Between Rounds
* New reviewers will enter and old reviewers may leave (if they are unhappy with rewards or have engaged in negative discussion).
* New good grants will enter if their creators see that grants are being treated fairly.
* New malicious grants will enter if their creators see that sybil projects are succeeding.
* Old grants will return if they are being treated fairly.
#### Metrics to Track Overall
* Accuracy: the fraction of grants correctly assessed by the system
* False Positive Rate: the fraction of bad grants that get through
* False Negative Rate: the fraction of good grants that are rejected
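These three metrics can be computed directly from the system's decisions and the grants' true legitimacy, encoding 1 = approve/legitimate and 0 = reject/illegitimate (a sketch; the function name is illustrative).

```python
def metrics(decisions, truths):
    """Return (accuracy, false_positive_rate, false_negative_rate)."""
    pairs = list(zip(decisions, truths))
    accuracy = sum(d == t for d, t in pairs) / len(pairs)
    bad = [d for d, t in pairs if t == 0]   # decisions on illegitimate grants
    good = [d for d, t in pairs if t == 1]  # decisions on legitimate grants
    # FPR: fraction of bad grants that get through (approved).
    fpr = sum(bad) / len(bad) if bad else 0.0
    # FNR: fraction of good grants that are rejected.
    fnr = sum(1 - d for d in good) / len(good) if good else 0.0
    return accuracy, fpr, fnr
```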