Gitcoin RPG notes

tags: gitcoin Notes

Last updated: June 2021

Goals

  • To perform a dry-run of the technical anti-sybil workstream during a round
  • To make it fun and educational to manage the anti-sybil work

Roles

  • Contribution data generator: Danilo
    • Sub-roles: dishonest contribution generator & honest contribution generator
  • Machine Learning operator: Jesse Tao
  • Human Evaluator: Jiajia
    • Responsible for manually flagging a selection of the labels predicted in the latest run
  • MC: Zargham
    • Sub-role: Final report generator

Rules of the game

  • Win conditions
    • For dishonest generators: funnelling money away through the sybil tax
    • For honest generators:

Procedures

Bootstrapping cycle

Flagging Cycles (iterative rounds)

  1. Data generators create additional rows and provide metadata (labels) to the MC
  2. Machine Learning Operator trains the subset-supervised ML algorithm and uses it to extrapolate over the entire existing dataset (see the sketch after this list)
    • (possible to change heuristics over time)
    • The result of the extrapolation is tabular data with user IDs and the model results (e.g. label and confidence)
  3. Machine Learning Operator selects a subset of the extrapolation results and hands it over to the Human Evaluator
    • (e.g. the subset could be 20 'random' users)
  4. Human Evaluator manually flags each selected user by looking into the available data sources (contributions graph, account links, etc.)
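
A minimal sketch of steps 2 and 3, assuming the contribution data lives in a pandas DataFrame where only the rows labelled by the data generators have a `label` value; the column names, the logistic regression model, and the 20-user sample are illustrative assumptions, not the actual workstream tooling:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assumed schema: one row per user, numeric features, and a 'label' column
# that is only filled in for the rows the data generators provided metadata for.
df = pd.read_csv("contributions.csv")
feature_cols = ["num_contributions", "total_amount", "num_linked_accounts"]

labelled = df[df["label"].notna()]

# Train on the labelled subset, then extrapolate over the entire dataset.
model = LogisticRegression()
model.fit(labelled[feature_cols], labelled["label"].astype(int))

df["predicted_label"] = model.predict(df[feature_cols])
df["confidence"] = model.predict_proba(df[feature_cols]).max(axis=1)

# Hand a random subset of the extrapolation results to the Human Evaluator.
evaluation_batch = df.sample(n=20, random_state=42)
evaluation_batch[["user_id", "predicted_label", "confidence"]].to_csv(
    "evaluation_batch.csv", index=False
)
```

The output matches the tabular result described in step 2: one row per user with the predicted label and a confidence score.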

Judgement & Sanction (end round)

  1. Metadata provided by the data generators is checked against the final extrapolation results (see the sketch below)
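
A rough sketch of that check, assuming both the generator metadata and the final extrapolation are CSV files keyed by `user_id`, with 1 marking a sybil account; the file names, column names, and metrics are assumptions, not the agreed report format:

```python
import pandas as pd

# Generator-provided ground truth vs. the model's final extrapolation.
truth = pd.read_csv("generator_metadata.csv")       # columns: user_id, label
predicted = pd.read_csv("final_extrapolation.csv")  # columns: user_id, predicted_label

merged = truth.merge(predicted, on="user_id", how="inner")
merged["correct"] = merged["label"] == merged["predicted_label"]

accuracy = merged["correct"].mean()
missed_sybils = merged[(merged["label"] == 1) & (merged["predicted_label"] == 0)]

print(f"Agreement with generator metadata: {accuracy:.1%}")
print(f"Sybil accounts the model missed: {len(missed_sybils)}")
```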

Notes

  • How is the evaluator going to handle the resulting data?
    • Right now, it is done through Google Sheets
  • Improvements
    • Add GitHub fields to the test sheet
    • The dry run would perform better with a subset of the full contributions graph
    • Find a better way to generate the created_on events
    • Make the generated user names less obvious (a rough sketch of both follows this list)
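
A possible direction for the last two improvements; the handle pattern and the timestamp window are illustrative assumptions only:

```python
import random
from datetime import datetime, timedelta

def random_username() -> str:
    # Less obvious than 'sybil_001': a pronounceable-ish random handle.
    consonants, vowels = "bcdfghjklmnprstvwz", "aeiou"
    syllables = "".join(random.choice(consonants) + random.choice(vowels) for _ in range(3))
    return syllables + str(random.randint(1, 999))

def random_created_on(start: datetime, end: datetime) -> datetime:
    # Spread created_on events over a realistic window instead of a single burst.
    span = (end - start).total_seconds()
    return start + timedelta(seconds=random.uniform(0, span))

row = {
    "handle": random_username(),
    "created_on": random_created_on(datetime(2021, 1, 1), datetime(2021, 6, 1)),
}
print(row)
```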
