Data set being analyzed: https://docs.google.com/spreadsheets/d/1qvmwwlUHnQWYc2JQRoE1Qo_IKx6a4CI9ehWIMvkiqFc/edit#gid=2142373003
GitHub Repository for analysis: https://github.com/CommonsBuild/praiseanalysis
Categories List
https://docs.google.com/spreadsheets/d/1e6o8XFEh8tRXzUiQcVQZCfx53aYCphPEJA3oxWqcBz4/edit#gid=0
Jeff's Original Analysis & Proposal
https://docs.google.com/spreadsheets/d/1i6UaBb7n36HTZ6Ww2T6VrhjzW_7gIBTxHGM5wom27NE/edit#gid=1010584684
https://docs.google.com/document/d/1lq6JyyTNrAmiQ5jB0jBYeyFvKih8ZvKYDG2zBXlHYyY/edit#heading=h.lbv4awggshuq
(That's right, analyze!)
The Praise system was an evolving process with clear points of change that
impact the data set.
Round #0 = Historic data. This first round took praise from the previous
months that was relevant to the TEC and scored it.
Rounds #1 - #5 = Centralized Tiered Praise. Livia and Griff were the only
quantifiers and received the mode praise amount
To be finished
Jeff/Tam/Livia/Shawn/Jess - research update brief to determine next steps
The Gini coefficient as it stands, without intervention, is 0.79
The UN recommends 0.4 or under
Context https://www.cia.gov/the-world-factbook/field/gini-index-coefficient-distribution-of-family-income/country-comparison
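A minimal sketch of how this Gini figure could be recomputed from the Impact Hours data; the column name "impact_hours" and the toy numbers are placeholders, not the real sheet export.

```python
# Minimal Gini coefficient check for the Impact Hours distribution. The column
# name "impact_hours" and the toy numbers are placeholders for the real sheet.
import numpy as np
import pandas as pd

def gini(values) -> float:
    """Gini coefficient of a non-negative 1-D array (0 = perfect equality)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)  # ranked-sum form of the Gini formula
    return (2 * (ranks * x).sum()) / (n * x.sum()) - (n + 1) / n

df = pd.DataFrame({"impact_hours": [500, 120, 80, 40, 10, 5, 2]})  # placeholder
print(f"Gini: {gini(df['impact_hours']):.2f}")
```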
-Process is underway, how much material do we need to make a decision? Bring to discussion/vote?
-Integrate community into the process - how?
-No "agreed objectives" - balancing different needs - some want more equitable distribution, some don't want to rock the boat, and some desire to bring in more TEs. Focus on why more equitable distribution is important and communicate this.
-How can we have a win-win? CEO/COO analogy - the CEO built things, then brought in a COO - does the CEO take 97% of equity? Splitting with the COO gives the COO ownership / responsibility to grow the pie
-What policy intervention does community want to apply? Desire to not change past data - intervention to change future data okay?
-15% -> 30%?? possible solution for paid contributors + potentially having a governance vesting token
-What about gifting for the nominations? A certain # of Impact Hours that you must give away to someone after seeing the final distribution - if you think someone's allocation is too low, you can gift to them
-Vote delegation? Great idea, maybe doesn't solve the problem
-What would a UBI adjustment mean for the individual? We can look at the value per Impact Hour, derived from the total raise and the "builders %"
-Zero sum as it stands, UBI equalizes the pie pieces
-Bringing to light many important discussions
NEXT STEPS:
-Shawn to work with data team to do the bucketing
-Once finished, Soft Gov to lead the governance process - suggestion: collab/hack sessions to form proposals/debates similar to existing processes
June 6th
YGG - I think there are three main categories of investigation.
Based on the results of the above, I recommend a UBI intervention of 25 Impact Hours. This results in a Gini coefficient of 0.37, which falls under the UN-recommended 0.4 threshold for an equitably distributed economy. We see that the top 50% of the population has 72% of the Impact Hours, the top 20% has 50%, the top 5% has 25%, and the top 1% has 10%.
A total of 6044.95 impact hours have been deducted from paid workers.
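A hedged sketch of how the numbers above (Gini after a flat 25 IH UBI, plus the top-1/5/20/50% shares) could be reproduced; the data array and the flat per-hatcher addition are assumptions standing in for the real pipeline.

```python
# Sketch: apply a flat 25 IH "Universal Basic Impact Hours" and recompute the
# Gini coefficient and top-percentile shares quoted above.
# The `raw` array is a placeholder for the real Impact Hours column.
import numpy as np

def gini(values):
    # Same ranked-sum formula as in the earlier Gini sketch.
    x = np.sort(np.asarray(values, dtype=float))
    n, ranks = x.size, np.arange(1, x.size + 1)
    return (2 * (ranks * x).sum()) / (n * x.sum()) - (n + 1) / n

def top_share(values, top_fraction):
    """Fraction of all Impact Hours held by the top `top_fraction` of holders."""
    x = np.sort(np.asarray(values, dtype=float))[::-1]
    k = max(1, int(round(top_fraction * x.size)))
    return x[:k].sum() / x.sum()

raw = np.array([500, 120, 80, 40, 10, 5, 2], dtype=float)  # placeholder data
adjusted = raw + 25.0  # flat UBI of 25 Impact Hours per hatcher

print("Gini after UBI:", round(gini(adjusted), 2))
for frac in (0.01, 0.05, 0.20, 0.50):
    print(f"top {int(frac * 100)}% share: {top_share(adjusted, frac):.0%}")
```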
Impact hour distribution without discounts:
2-3 hr focused work session with Shawn/Octopus - Doc2Vec (turns text documents into spatial embeddings)
See visualization of clustering
May or may not produce something for policy adjustment, but will give further insight
e.g. if retweeting is 50% of IHs, we may want to adjust
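A rough sketch of the Doc2Vec plus clustering idea above, using gensim and scikit-learn; the example praise texts and the cluster count are placeholders, and the actual work session may settle on a different pipeline.

```python
# Sketch of the Doc2Vec idea: embed each 'reason for praise' text and cluster
# the embeddings. Assumes gensim and scikit-learn; the example texts and the
# number of clusters are placeholders.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.cluster import KMeans

praise_texts = [
    "for retweeting the hatch announcement",
    "for developing the cadCAD model",
    "for organizing the community call",
]  # replace with the real 'reason for praise' column

docs = [TaggedDocument(words=text.lower().split(), tags=[i])
        for i, text in enumerate(praise_texts)]
model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=40)

vectors = [model.dv[i] for i in range(len(praise_texts))]
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(list(zip(labels, praise_texts)))
```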
How about nominations?
* Jeff, Angela, Zargham, Trent, Sebnem, Griff, Billy, Simon, Jonathon, Shermin, Anish
* Community Nominations
Shawn
Jeff
Johan
Sol
Zeptimus
Juan
Jessica
Angela
Griff
Nuggan
Eduardo
Ddan
Octopus
Mitch
Metaverde
GOALS:
-Examine praise inflation via buckets
-Examine multiplicity: how many times an action was praised (by type)
PROCESS/APPROACH:
-List of keywords, search for the most frequently occurring keywords
-Column encoding: is each column one type of praise, or can a praise instance exist in multiple buckets?
Weighted coding for one praise in multiple buckets, e.g. 33% x 3 (see the sketch after this list)
-Need specific questions: i.e. What is the average Praise for Tweets?
-Develop "buckets" / categories with keywords/word streams?
BRAINSTORM LIST:
Categories List
https://docs.google.com/spreadsheets/d/1e6o8XFEh8tRXzUiQcVQZCfx53aYCphPEJA3oxWqcBz4/edit#gid=0
TE & TEC
Technical Infrastructure - Developing software, models & technical tools - TEC tech / TE primitives, Token spice, cadCAD models, building, open source, documentation, cadCAD community calls
External tools - not essential for TEC but TEC-related - i.e. praise bots
Tech support
TE Education - TE Academy, cadCAD Edu
Education & onboarding - x 2 TEC & TE discipline - TEC labs, hatch params debates
TE Peer learning, lab sessions, Lisa's book/work to distribute info, intro sessions/cohorts
TE Community build - platforms, channels set up
Research work - governance, TE, Commons, DAOs - TE research groups
Narrative & Strategy - x 2 TEC & TE discipline, content creation/blogs/Comms/graphics/marketing
Participation, care work - x 2 TEC & TE discipline, Attending meetings - joining, sync, meeting, call
Cultural Build contributions - Ostrom's principles, conflict management, social fabric
Leadership - in different projects
Working Groups
Interactions between members - working sessions
Foundational work of the discipline - i.e. Simon DLR bonding curve work, cryptoeconomic flower
TEC
Filter by working group - keywords/phrases of work, e.g. the following (a filtering sketch follows this keyword list):
Comms - article, retweeting, blog, organized, presentation, graphic(s), design, website, marketing, SEO
Soft Gov - soft gov, survey, vote, voting
Gravity - gravity, conflict, non-
Params / Parameters -
Legal - legal
Commons Swarm - tech, dev, dapp, app,
Hatch outreach / onboarding - onboarding, hatch outreach
Omega -
Stewards - stewards
Labs - labs
Transparency - transparency, YouTube, recording(s)
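A sketch of applying the working-group keyword lists above to the 'reason for praise' column; the column name "reason" is an assumption about the export, and incomplete keyword lists (Params, Omega) are left out.

```python
# Sketch of filtering the praise sheet by working-group keywords, following the
# lists above (incomplete lists such as Params and Omega are omitted). The
# column name "reason" is an assumption about the export.
import pandas as pd

WG_KEYWORDS = {
    "Comms": ["article", "retweet", "blog", "presentation", "graphic",
              "design", "website", "marketing", "seo"],
    "Soft Gov": ["soft gov", "survey", "vote", "voting"],
    "Gravity": ["gravity", "conflict"],
    "Legal": ["legal"],
    "Commons Swarm": ["tech", "dev", "dapp", "app"],
    "Hatch Outreach": ["onboarding", "hatch outreach"],
    "Stewards": ["stewards"],
    "Labs": ["labs"],
    "Transparency": ["transparency", "youtube", "recording"],
}

def tag_working_groups(reason: str) -> list:
    text = str(reason).lower()
    return [wg for wg, kws in WG_KEYWORDS.items() if any(k in text for k in kws)]

df = pd.DataFrame({"reason": ["for recording the labs call for YouTube"]})  # placeholder
df["working_groups"] = df["reason"].apply(tag_working_groups)
print(df)
```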
Can we apply the Gini Index to the distribution? What does this look like?
What does UBI look like applied to the data? Can we parameterize this?
"Universal Basic Impact Hours?" How does adding a fixed amount of IH to all
hatchers change the mean and the mode of the distribution before and after this
intervention?
Visualize the distributions before and after an intervention such as UBI
Can we parameterize this by the amount of UBI that we apply?
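A sketch of the parameterization question: sweep the UBI amount, track the Gini coefficient, and plot the distribution before and after a 25 IH intervention. The impact-hours array is a placeholder and the 25 IH comparison point is taken from the note above.

```python
# Sketch: parameterize the UBI amount, track the Gini coefficient, and plot the
# distribution before and after a 25 IH intervention. Placeholder data only.
import numpy as np
import matplotlib.pyplot as plt

def gini(values):
    # Same ranked-sum formula as in the earlier Gini sketch.
    x = np.sort(np.asarray(values, dtype=float))
    n, ranks = x.size, np.arange(1, x.size + 1)
    return (2 * (ranks * x).sum()) / (n * x.sum()) - (n + 1) / n

impact_hours = np.array([500, 120, 80, 40, 10, 5, 2], dtype=float)  # placeholder

ubi_amounts = np.arange(0, 101, 5)
ginis = [gini(impact_hours + u) for u in ubi_amounts]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(ubi_amounts, ginis)
ax1.set_xlabel("UBI (Impact Hours)")
ax1.set_ylabel("Gini coefficient")
ax2.hist(impact_hours, bins=20, alpha=0.5, label="before")
ax2.hist(impact_hours + 25, bins=20, alpha=0.5, label="after 25 IH UBI")
ax2.legend()
plt.tight_layout()
plt.show()
```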
How do we vote on these interventions? Using what we have so far to improve:
Tokenlog it is. Run a DAO through GitHub issues, continuously self-modifying
the token distributions using data science and community sensemaking.
What does applying interventions, filters, or transformations do to the distribution? What other kinds of interventions might be interesting?
Plot Contributions to TE Commons vs. contributions to Token Engineering
(discipline) in the past or in parallel - Archetype detection - (Manually)
Identify agents that are known to have been producing token engineering public
goods before the recording of Impact Hours started. Should we apply a TEC OG
NFT as a multiplicative factor?
What does it look like when we modulate the paid contributor discount rate from 0.15 to 1?
Can we compare the total money paid to stewards against the total number of IH
that have been deducted from them? Can we compare this with TEC price outcomes?
Weigh the balance of discounts applied to Impact Hours received for those who
are compensated from CSTK/TEC.
Perhaps consider an alternative future build in which no one gets discounts on
their praise, and instead everyone gets a UBI.
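A sketch covering both questions above: modulating the paid-contributor discount rate from 0.15 to 1 (interpreted here as the fraction of raw IH retained, which is an assumption) and comparing against the no-discounts-plus-UBI alternative. Column names and numbers are placeholders.

```python
# Sketch: sweep the paid-contributor discount rate from 0.15 to 1 (interpreted
# here as the fraction of raw IH retained -- an assumption) and compare against
# the "no discounts, everyone gets a UBI" alternative. Placeholder data only.
import numpy as np
import pandas as pd

def gini(values):
    # Same ranked-sum formula as in the earlier Gini sketch.
    x = np.sort(np.asarray(values, dtype=float))
    n, ranks = x.size, np.arange(1, x.size + 1)
    return (2 * (ranks * x).sum()) / (n * x.sum()) - (n + 1) / n

df = pd.DataFrame({
    "raw_ih": [500.0, 120.0, 80.0, 40.0, 10.0, 5.0],       # placeholder Impact Hours
    "is_paid": [True, True, False, False, False, False],   # placeholder flags
})

for rate in np.arange(0.15, 1.01, 0.05):
    ih = np.where(df["is_paid"], df["raw_ih"] * rate, df["raw_ih"])
    print(f"discount rate {rate:.2f}: Gini {gini(ih):.2f}")

print("no discounts + 25 IH UBI:", round(gini(df["raw_ih"] + 25), 2))
```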
Can we distinguish tweets from research? Can we distinguish coding from comms? What are the different
praise buckets? What does this look like, and how is each bucket weighted? Does this reflect the
values of the community?
How to categorize the 'reason for praise' column:
What are the weightings?
Gravity (Juan):
Jeff:
No action is proposed for the sake of analysis alone, but more information will give us a sense of how to align the system with our goals
Griff:
Impact Hour quantification happens every two weeks. We had a very interesting discussion about whether we should diverge from our current process given these insights and discussions. We decided not to diverge, as we haven't concluded results from this analysis yet (even though we can see some flaws now). It was hard.
Tam:
Adjusting a single praise session wouldn't adjust for the many months prior, and we are excited for this to be a community decision.
Conjecture: The pool of funds allocated to builders is zero sum. i.e. any IH imbalance is taking voice from those with less IH to give to those with more IH.
Taking from Peter to pay Paul becomes a systemic issue when the majority of participants are Peter.
Examples where using the raw IH data is dangerous:
Source:
https://docs.google.com/spreadsheets/d/1i6UaBb7n36HTZ6Ww2T6VrhjzW_7gIBTxHGM5wom27NE/edit?usp=sharing
YGG's desired data science approach
-Praise data as a network (a minimal networkx sketch follows this list)
-Every person is a node, and there are edges
-Clustering: each WG is a cluster
-How the praise has transformed over time, a “drift” in the way praise was dished - put on time axis/multi-dimensionality
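A minimal networkx sketch of the praise-as-network idea: people as nodes, praise as weighted edges, with community detection as a rough stand-in for working-group clusters. The edge list is a placeholder, and the time/drift dimension is not modeled here.

```python
# Sketch of "praise data as a network": people are nodes, each praise adds a
# weighted edge from giver to receiver; community detection stands in for
# working-group clusters. The edge list below is a placeholder.
import networkx as nx

praise_edges = [("griff", "shawn"), ("livia", "jeff"), ("griff", "jeff"),
                ("shawn", "octopus"), ("jeff", "shawn")]  # (giver, receiver)

G = nx.DiGraph()
for giver, receiver in praise_edges:
    if G.has_edge(giver, receiver):
        G[giver][receiver]["weight"] += 1
    else:
        G.add_edge(giver, receiver, weight=1)

# Community detection on the undirected projection as a rough proxy for clusters.
communities = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())
print([sorted(c) for c in communities])
```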
Andrew
Tends towards simplest possible solution so any changes/alterations don’t become a roadblock
Z
-Sanity check
-Ignore who was giving/receiving - look at group by aggregate - irrespective of who received it
-How much was dished for what type of action - software dev/giving talks/sectors - what are the fractions of those buckets
-Is this what we were going for?
-What is it weighting towards at v
-If it is determined that some types of work are underrepresented / active and visible vs. deeper work unseen
-Intersubjective measurement - extent to which those reflections are doing what we intended
-Transparent policies to modify to improve purpose towards goal
-We have to make time to question the process/tool
-Generated raw data - no perfectly objective data / “sensemaking” process of human input works from the process side - should be kept separately
-The algorithmic processing is working differently - from the goals first
-Algo should not be used as oracle, we should question if it is doing what we want
-Algos are in service of the sensemaking process - if it doesn’t feel right, we can do better - feedback loop - what did we intend, it doesn’t feel right, should or shouldn’t we do something - does the algo fail to express what we intended
-We can just document - make sure we question - algos in service of social process
-We govern algos rather than they gov us
Griff
-We adjusted from the inside to try to adjust the distribution; this may have polluted the data
-We will start keeping that in a separate tab
Categories
Lightweight, fast, and high impact. Eyes on the prize, looking at allocating the builders pool of governance weight in the TEC. Keep this in mind as an implementable process that we can dig into soon.
Categorize the data
Number of praises - looking at drift, related to praise in calls (double praise)
Interesting to look at segmentations - As of now with raw data it does seem really skewed
Always paid
Sometimes paid
Never paid
Distribution percentages - the distribution of how many people have what percentage. What are the distributions, and what is the range between them? What if we remove the extremes and look at the mode?
Look for outliers first. Work in multiple stages: look at the data before removing them, then look at it after removing them. It's all an experiment, it's going to be messy, and we are going to learn at every step.
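A small sketch of the outlier step just described: summarize the distribution before and after trimming extremes. The 1.5 x IQR rule is one possible choice, not a decided policy, and the numbers are placeholders.

```python
# Sketch of the outlier step: summarize Impact Hours before and after trimming
# extremes with a 1.5 x IQR rule (one possible choice, not a decided policy).
import numpy as np

def summarize(label, x):
    print(f"{label}: n={x.size}, mean={x.mean():.1f}, median={np.median(x):.1f}")

impact_hours = np.array([500, 400, 120, 80, 40, 30, 10, 5, 2], dtype=float)  # placeholder

q1, q3 = np.percentile(impact_hours, [25, 75])
iqr = q3 - q1
keep = (impact_hours >= q1 - 1.5 * iqr) & (impact_hours <= q3 + 1.5 * iqr)

summarize("all contributors", impact_hours)
summarize("extremes removed", impact_hours[keep])
```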
High-level categories: TE work
Best-case scenario: a data scientist for a week who can lead would be ideal
Looking at the praise data, it seems that many small praises accumulate significantly more tokens than larger tasks that receive less frequent praise, even when those larger tasks are individually quantified much higher. This raises the question: do the results land within what we would call a fair or accurate distribution of tokens for the work put into the efforts of the TEC?
Source:
TEC Forum Discussion
https://forum.tecommons.org/t/pre-hatch-impact-hours-distribution-analysis/376
Document Library
https://docs.google.com/document/d/1QiVfjtFDW1ahehdVXFV4Dauo5k_QM77FOUHS9CWmu7k/edit
Repo
https://github.com/CommonsBuild/praiseanalysis
Analysis by Octopus
https://colab.research.google.com/drive/1Lz2lrIkZbPLmgms5TrgUx8iO9sWDe3hN?usp=sharing
TEC Praise Data Sheet
https://docs.google.com/spreadsheets/d/1qvmwwlUHnQWYc2JQRoE1Qo_IKx6a4CI9ehWIMvkiqFc/edit#gid=1510055853
Processed Data
https://docs.google.com/spreadsheets/d/1K1CeAG-1E1UUk4P7lsM9MR_Fwvap8ch0WrZlPORESgQ/edit#gid=1975905774
Initial Analysis by Jeff
https://docs.google.com/spreadsheets/d/1i6UaBb7n36HTZ6Ww2T6VrhjzW_7gIBTxHGM5wom27NE/edit?usp=sharing