Info for JCook, input from Octopus

Refers to the prototype model here:

https://github.com/dRewardsSystem/Rewards/tree/main/prototype-model

- What data is needed to run the simulation end-to-end?

Eventually we need data on reviewer metrics. So far, I have been building using Octopus's notes as a proto-spec. This means reviewers entering the system have the following attributes:

    trust_level: 5
    social_level: 4
    ability_level: 3
    recognition_level: 4
    engagement: 5

While grants have the following attributes:

    value: 3
    clarity: 6
    legitimacy: 5

First, we need to agree whether these lists of attributes are sufficient and sensible, and whether they can be derived from real data from Gitcoin. Then, we need to define precisely how to extract these metrics from the real data we have available to us.
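
For concreteness, these attributes could be carried by the prototype's Reviewer and Grant classes roughly as sketched below. The dataclass layout is just an illustration of the proto-spec above, not necessarily how the prototype is actually structured.

    from dataclasses import dataclass

    @dataclass
    class Reviewer:
        trust_level: int
        social_level: int
        ability_level: int
        recognition_level: int
        engagement: int

    @dataclass
    class Grant:
        value: int
        clarity: int
        legitimacy: int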

- The classes currently defined in the prototype are Reviewer, Pool and Grant. How is the hierarchical structure of reviewers in the architecture going to be represented in the simulation, and what data is needed for that?

This is a task for me to consider tomorrow. The primary missing piece is a method on the Reviewer class that takes in the reviewer metrics and returns a trust level (1, 2 or 3).
For example, an instance of Reviewer might have a method that executes on class instantiation and populates the attribute self.trust_level.

e.g. (pseudocode, just off the top of my head)

def get_trust_level(self, trust_level, social_level, ability_level,
                    recognition_level, engagement):

    # normalization helper: rescale the attributes to the range 0-1
    def normalize(attributes):
        lo, hi = min(attributes), max(attributes)
        if hi == lo:
            # degenerate case: all attributes equal, avoid division by zero
            return [0.5] * len(attributes)
        return [(a - lo) / (hi - lo) for a in attributes]

    # arithmetic mean of the normalized reviewer attributes
    scores = normalize([trust_level,
                        social_level,
                        ability_level,
                        recognition_level,
                        engagement])
    mean_score = sum(scores) / len(scores)

    # use the mean normalized score to determine
    # trust level by thresholding
    if mean_score <= 0.3:
        self.trust_level = 1
    elif mean_score <= 0.6:
        self.trust_level = 2
    else:
        self.trust_level = 3
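
And for the "executes on class instantiation" part, the wiring could look roughly like this (again just a sketch; the prototype's actual Reviewer constructor may take different arguments or store the metrics differently):

    class Reviewer:
        def __init__(self, trust_level, social_level, ability_level,
                     recognition_level, engagement):
            # derive and store self.trust_level (1, 2 or 3) as soon as
            # the reviewer is created
            self.get_trust_level(trust_level, social_level, ability_level,
                                 recognition_level, engagement)

        # get_trust_level as defined above would sit here as a method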


A class method associated with Pool is then required to divide the total population of reviewers into groups according to their trust levels.
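
Something along these lines, assuming Pool keeps its reviewers in a list attribute (self.reviewers is a hypothetical name here):

    def group_by_trust_level(self):
        # bucket reviewers by the trust level computed at instantiation
        groups = {1: [], 2: [], 3: []}
        for reviewer in self.reviewers:
            groups[reviewer.trust_level].append(reviewer)
        self.trust_groups = groups
        return groups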

I guess it is also necessary to start thinking about how to implement the incentive models, but I need to go back to Octopus's notes for this.

- Realistically, when can we start experimenting with the simulation?

Not sure - certainly not before next week as there are still some of the more complex aspects of the incentivization model to sketch out and I need input from others. Like I said on the call, I am just building this prototype using Octopus's notes as a rough spec. It needs formalizing and refining collaboratively.

Specific Questions/TODOs

I have added some brief documentation to the project README including the following key TODOs:

  1. Update the make_decision() function. At the moment it is a completely arbitrary placeholder. It should be some meaningful combination of grant and reviewer attributes (see the sketch after this list).

  2. The magnitude of payments, satisfaction increases and decreases, number of reviewers in discussions etc. all need to be tuned. They are arbitrarily chosen at the moment.

  3. At the moment every reviewer sees every grant. We need to add some logic so that reviewers are divided across the grants.

  4. The model runs on dummy data; we need to determine how to wrangle real Gitcoin grants data into a suitable format for running simulations.

  5. Determine a sensible definition of trust levels 1, 2 and 3. In the prototype I take an arithmetic mean of [social_level, ability_level, recognition_level, engagement], each of which is normalized to 0-1. The mean is then used to determine the trust level as follows:

    L1 = mean <= 0.5
    L2 = 0.5 < mean <= 0.8
    L3 = mean > 0.8
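
On TODO 1, one possible (entirely illustrative) shape for make_decision(), combining grant and reviewer attributes into a single thresholded score. The choice of attributes, the assumed 0-6 scale and the 0.5 cut-off are placeholders to be tuned, not part of any agreed spec:

    def make_decision(self, grant):
        # average the grant attributes and a couple of reviewer attributes,
        # then threshold the combined score
        grant_score = (grant.value + grant.clarity + grant.legitimacy) / 3
        reviewer_score = (self.ability_level + self.engagement) / 2
        combined = ((grant_score + reviewer_score) / 2) / 6  # assumes a 0-6 attribute scale
        return combined > 0.5  # True = approve; actual return type TBD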