This refers to the prototype model here:
https://github.com/dRewardsSystem/Rewards/tree/main/prototype-model
Eventually we need data on reviewer metrics. So far, I have been building the model using Octopus's notes as a proto-spec. This means reviewers entering the system have the following attributes:
trust_level: 5
social_level: 4
ability_level: 3
recognition_level: 4
engagement: 5
While grants have the following attributes:
value: 3
clarity: 6
legitimacy: 5
First, we need to agree on whether these lists of attributes are sufficient and sensible and whether they can be derived from real Gitcoin data. Then, we need to define precisely how to extract these metrics from the real data we have available to us.
This is a task for me to consider tomorrow. The primary missing piece is a method on the Reviewer class that takes in the reviewer metrics and returns a trust level (1, 2, or 3). For example, an instance of Reviewer might run such a method on instantiation to populate the attribute self.trust_level.
e.g. (pseudocode, just off the top of my head)
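Something along these lines, assuming a Python prototype. The class and method names beyond the attributes listed above (`_compute_trust_level`, the 1-5 metric scale, the 0.5/0.8 thresholds from the TODOs below) are my assumptions, not the actual prototype code:

```python
class Reviewer:
    """Reviewer whose trust_level is derived from its metrics on instantiation."""

    def __init__(self, social_level, ability_level, recognition_level, engagement):
        # Assumption: raw metrics are integers on a 1-5 scale, as in the proto-spec.
        self.social_level = social_level
        self.ability_level = ability_level
        self.recognition_level = recognition_level
        self.engagement = engagement
        # Populate trust_level immediately, as described above.
        self.trust_level = self._compute_trust_level()

    def _compute_trust_level(self):
        metrics = [self.social_level, self.ability_level,
                   self.recognition_level, self.engagement]
        # Normalize to 0-1 (dividing by an assumed maximum of 5),
        # then take the arithmetic mean of the normalized metrics.
        mean = sum(metrics) / (5 * len(metrics))
        if mean <= 0.5:
            return 1
        if mean <= 0.8:
            return 2
        return 3
```

Dividing the integer sum once by `5 * len(metrics)` avoids accumulating floating-point error near the 0.8 boundary.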
A method on the Pool class is also required to divide the total population of reviewers into groups according to their trust levels.
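A minimal sketch of that grouping, assuming each reviewer object already carries a `trust_level` attribute; `Pool` and `group_by_trust` are placeholder names, not the real API:

```python
from collections import defaultdict

class Pool:
    """Holds the total population of reviewers."""

    def __init__(self, reviewers):
        self.reviewers = list(reviewers)

    def group_by_trust(self):
        # Partition reviewers into lists keyed by trust level (1, 2, 3).
        groups = defaultdict(list)
        for reviewer in self.reviewers:
            groups[reviewer.trust_level].append(reviewer)
        return dict(groups)
```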
I guess it is also necessary to start thinking about how to implement the incentive models, but I need to go back to Octopus's notes for this.
Not sure - certainly not before next week as there are still some of the more complex aspects of the incentivization model to sketch out and I need input from others. Like I said on the call, I am just building this prototype using Octopus's notes as a rough spec. It needs formalizing and refining collaboratively.
I have added some brief documentation to the project README including the following key TODOs:
Update the make_decision() function. At the moment it is a completely arbitrary placeholder; it should be some meaningful combination of grant and reviewer attributes.
The magnitude of payments, satisfaction increases and decreases, the number of reviewers in discussions, etc. all need to be tuned. They are arbitrarily chosen at the moment.
At the moment every reviewer sees every grant. We need to add some logic so that reviewers are divided across the grants.
The model runs on dummy data; we need to determine how to wrangle real Gitcoin grants data into a suitable format for running simulations.
Determine a sensible definition of trust levels 1, 2, and 3. In the prototype I take the arithmetic mean of [social_level, ability_level, recognition_level, engagement], each of which is normalized to 0-1. The mean is then used to determine trust level as follows:
L1: mean <= 0.5
L2: 0.5 < mean <= 0.8
L3: mean > 0.8
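As a worked check of this definition, applying it to the sample reviewer metrics above (social 4, ability 3, recognition 4, engagement 5, each normalized by an assumed maximum of 5):

```python
# Sample reviewer metrics from the proto-spec above.
metrics = {"social_level": 4, "ability_level": 3,
           "recognition_level": 4, "engagement": 5}

# Normalize each metric to 0-1 and take the arithmetic mean.
# Dividing the integer sum once keeps the result exact: 16 / 20 = 0.8.
mean = sum(metrics.values()) / (5 * len(metrics))

if mean <= 0.5:
    level = 1
elif mean <= 0.8:
    level = 2
else:
    level = 3

print(mean, level)  # 0.8 sits exactly on the L2/L3 boundary -> level 2
```

This also shows why the boundary conditions need to be pinned down explicitly: a mean of exactly 0.8 lands on level 2 only because the L2 bound is inclusive.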