# Flagging routine

LC Link: https://lucid.app/lucidchart/9dc7a462-563a-44c8-9b67-6c764f19a0d9/edit?page=7IHBjY1kn_fG#

## Parameters

- BSci is the owner of all cycles unless otherwise stated
- Temporary state for bootstrapping & demonstrating the flagging process

## Proceedings

### During round

Expected cycle lifetime: 3 days per cycle

1. Labels are initialized through TBD criteria
   - Initially this can be bootstrapped with provided heuristics
   - Further iterations can make use of manually flagged users
   - Labels must contain both classes: sybil and non-sybil users
2. The ML algorithm is trained and statistically validated against the known labels (Prepare Model)
3. The trained model is validated by SME inspection (Model Evaluation)
4. Flags are generated with confidence scores (Flag Snapshot & Sybil KPIs)
5. KPIs are generated
6. Users with high uncertainty are sampled for manual evaluation (Prediction Evaluation); see the sampling sketch at the end of this document
   - Depending on the SMEs' conclusions, this could generate further heuristics or manual flags.

### After round

Expected duration: 3 days after the end of the round

1. A list of users to be sanctioned and to be evaluated is generated

## Suggestions

- a

## Expected I/O for human-driven processes

Highest priority for interfaces outside BSci: Prediction Evaluation & Sanction Thresholds

- Generate Heuristics
  - Output: clear logical rules for what constitutes an obvious sybil user and an obvious non-sybil user, given the available features.
- Model Meta Parameters
  - Output: how sensitive or specific the algorithm should be on the iterative cycle.
- Model Evaluation
  - Output: a go / no-go decision before moving into the prediction evaluation, and a description of any newly gained contextual knowledge.
- Prediction Evaluation
  - Output: a list of manual sybil / non-sybil flags for a sampled list of users, as well as a description of any newly gained contextual knowledge.
- Sanction Thresholds
  - Output: an 'aggressiveness' value for how sanctions should be applied with regard to the algorithm's flagging confidence, plus manual user overrides (see the sketch below).
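The following is a minimal sketch of step 6 of the "During round" proceedings, where users with high uncertainty are sampled for SME review. The `sample_uncertain_users` function, the `(user_id, confidence)` flag format, and the 0.5 decision boundary are illustrative assumptions, not part of the routine defined above.

```python
from typing import List, Tuple


def sample_uncertain_users(
    flags: List[Tuple[str, float]],  # (user_id, sybil confidence in [0, 1])
    sample_size: int = 50,
) -> List[str]:
    """Return the users whose flag confidence is closest to the decision boundary."""
    # Uncertainty is assumed to be highest when the confidence sits near 0.5,
    # i.e. the model cannot clearly separate sybil from non-sybil.
    by_uncertainty = sorted(flags, key=lambda f: abs(f[1] - 0.5))
    return [user_id for user_id, _ in by_uncertainty[:sample_size]]
```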
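This second sketch shows one way the 'aggressiveness' value from Sanction Thresholds could be combined with flagging confidence and manual user overrides. The threshold formula (`1 - aggressiveness`) and the override convention are assumptions made for illustration only.

```python
from typing import Dict, List, Tuple


def users_to_sanction(
    flags: List[Tuple[str, float]],      # (user_id, sybil confidence in [0, 1])
    aggressiveness: float,               # 0 = sanction only certainties, 1 = sanction every flagged user
    manual_overrides: Dict[str, bool],   # user_id -> True (sybil) / False (non-sybil)
) -> List[str]:
    """List users to sanction: manual overrides win, then the confidence threshold applies."""
    # A higher aggressiveness lowers the confidence required to sanction a user.
    threshold = 1.0 - aggressiveness
    sanctioned = []
    for user_id, confidence in flags:
        if user_id in manual_overrides:
            # SME decision always takes precedence over the model's confidence.
            if manual_overrides[user_id]:
                sanctioned.append(user_id)
            continue
        if confidence >= threshold:
            sanctioned.append(user_id)
    return sanctioned
```

For example, with `aggressiveness = 0.8` only users flagged with confidence of at least 0.2 (or manually marked as sybil) would be sanctioned.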