---
tags: paladins
---

# Cycle 5 Compensation Program Recommendations

## Guidance and Constraints from survey

> references
> - [raw figma](https://www.figma.com/file/EGqpBXDUwqJpBl8hXzLCze/DH---Burn-Rate-Survey-Whiteboard?node-id=0%3A1)
> - [tokenized model](https://www.figma.com/file/CW6sgBgvpLw800dFbAM4bX/Untitled?node-id=0%3A1)
> - [summary](https://hackmd.io/_xf_ZtWpSvOrm98agWg8Zw?edit)

- **Parity** across contributor groups (e.g. tracks) in...
    - how they are evaluated
    - how they are compensated
    - process
- **Local Decisions:** Decisions and evaluations should be made as locally as possible
- **More Evaluation Modalities** of results & outcomes
- **Nonprescriptive:** A small group should not *determine* compensation for everybody else
- **Dispute Resolution** may need to be delegated
- **More Merit based:** Avoid popularity contests and evaluation by visibility
    - and *especially* don't use this as a primary determinant of compensation

## Concentric Problem Areas

1) Resource Constraints (budget, runway, etc)
2) Contributor Engagement Tracks
    a) how they engage, eg commitment vs. retroactive
    b) framework or rubric for differential compensation
3) Objectives and Priorities
4) Performance Evaluation and Accountability

## Misc ideas

- extend cycles to 3 months
- increase the number of value levels
- create a new "cross-circle" circle
- projection and reflection
    - projection
        - A) DAO priorities/objectives/outcomes
            - each contributor allocates "importance points" to projects
        - B) Contributor attention/commitment
            - each contributor allocates "commitment points" to projects
    - reflection
        - A) Success/impact/value of DAO priorities
            - each contributor allocates "success points" to projects
        - B) Value of contributor attention to each
            - points allocation

### **Projection**

**DAO Phase (A)**

* The DAO uses a collective signalling mechanism to weight high-level *workstreams*. Prototype case: We use the 4 circles as starting workstreams.
* Each contributor is given a number of "importance points"
    * number correlates w/DAO shares
* Importance points are allocated to workstreams
* Relative rankings among workstreams determine approximately how much attention/energy should go into each workstream

**Contributor Phase (B)**

* Each contributor sets their individual commitment to workstreams (now rated by importance in the previous Phase)
* Each contributor is given a number of "commitment points"
* Commitment points are allocated across workstreams, signalling the contributor's intended areas of focus for the upcoming period
* Importance point ratings from the DAO Phase above inform this Contributor Phase allocation. Contributor Performance will be weighted by the workstream's rated Importance.
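To make the DAO Phase (A) mechanics concrete, here is a minimal sketch of how share-weighted importance points might be aggregated into relative workstream weights. The contributor names, numbers, and the exact proportionality rule are illustrative assumptions, not part of the proposal.

```python
# Illustrative sketch: aggregating share-weighted "importance points"
# into relative workstream weights. All names/numbers are hypothetical.

contributors = {
    "alice": {"shares": 100, "allocation": {"dev": 60, "design": 20, "ops": 10, "community": 10}},
    "bob":   {"shares": 50,  "allocation": {"dev": 30, "design": 40, "ops": 20, "community": 10}},
}

def workstream_importance(contributors):
    totals = {}
    for c in contributors.values():
        # Scale each allocation (given as % of that contributor's points)
        # by the contributor's share count, per "number correlates w/DAO shares".
        for ws, pts in c["allocation"].items():
            totals[ws] = totals.get(ws, 0) + pts * c["shares"]
    grand_total = sum(totals.values())
    # Normalize to relative weights that sum to 1
    return {ws: t / grand_total for ws, t in totals.items()}

print(workstream_importance(contributors))
# -> {'dev': 0.5, 'design': 0.2666..., 'ops': 0.1333..., 'community': 0.1}
```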
### **Reflection**

**DAO Phase ( C)**

* Warcamp coordinape circle, with 4 recipients (the workstreams)
* Each contributor is given a number of "progress points"
* Progress points are allocated across workstreams according to individual assessments of progress made within each workstream

**Contributor Phase (D)**

* Individual coordinape circles for each workstream: "performance points"
* Everybody who participated in a workstream is included in that workstream's coordinape
    * Likely those who allocated "commitment points"
* "Workstream commitment point allocators" allocate to each other
* Take the results & weight them by the "progress point" results
* Dedup/sum everybody's points across circles (see the sketch below)

DAO factors: Importance & Progress
Contributor factors: Commitment & Performance
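A minimal sketch of the weighting and dedup/sum steps above: each workstream circle's performance points are scaled by that workstream's share of progress points, then summed per contributor across circles. The data and names are hypothetical.

```python
# Hypothetical circle outputs: performance points per contributor, per workstream
performance = {
    "dev":    {"alice": 70, "bob": 30},
    "design": {"bob": 100},
}

# Hypothetical DAO-phase result: total progress points allocated to each workstream
progress = {"dev": 300, "design": 100}

def weighted_performance(performance, progress):
    progress_total = sum(progress.values())
    scores = {}
    for ws, circle in performance.items():
        weight = progress[ws] / progress_total  # workstream's share of progress
        circle_total = sum(circle.values())
        for person, pts in circle.items():
            # Normalize within the circle, then weight by workstream progress;
            # summing here dedups contributors who appear in multiple circles.
            scores[person] = scores.get(person, 0) + (pts / circle_total) * weight
    return scores

print(weighted_performance(performance, progress))
# -> {'alice': 0.525, 'bob': 0.475}
```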
### 4) Performance Evaluation and Accountability

- Self assessment augmented with standardized paths for feedback

* Performance points
    * local
    * higher context (pinned to circle-determined priorities)
    * Each circle generates a list of contributors and their performance point ratings
    * DAO normalizes contributor performance points with circle progress points
        * DAO-wide "normalized contributor performance point ratings"
* Buddy system
    * bidirectional conversation w/contributor & guides you to settling on an appropriate value level
    * Collusion?
* Consultant Triangle System
    * Gives each person a conduit through which to receive feedback
* Peer review
* Committee
    * Has responsibility of doing assessment/feedback for each contributor
* Self advocacy
    * On the individual to put up a value level after consultation w/above
    * Comment period (raise flags)
    * The first time they make a public assessment for themselves is after they receive facilitated feedback through the various channels (above)

### Compensation

* Move to a 10 level value system
* Change it to Market Value of value created (based on constellation of skills)
* Rough guidelines/starting points - developer between levels x & y, ranger between levels x' & y'

#### Skill Domains

- Smart contract development
- Other programming/development
- Product management & strategy
- Project management
- Written communication
- Oral communication
- Graphic Design / UI Design
- UX Design
- Community building
- Tokenomics & revenue / value accrual
- Partnerships & Business Development
- Accounting
- Mechanism and system design
- DAO ecosystem & DAO design knowledge
- Leadership

#### Assessment dimensions

- Strategic / critical thinking
- Dispute resolution
- Relationship development
- Easy to collaborate with
- General quality and accuracy of work
- Web3 Political/Cultural Awareness
- Self-management and self-organization

### Suggested Starting Points for Circles

### Cycles

* Watch for emergent MVP rituals & ceremonies
* At the beginning/end of each 3 month cycle we take time (1 week) to focus on Reflection & Projection
* Projection is ongoing - reassigning weights & allocations throughout the period
* Reflection is ongoing - an opportunity for contributors to reflect
    * Compare the projection of "importance points" vs the preliminary assessment of "progress points"
* Suggestion/potential emergent pattern:
    * 6 week midpoint milestone to reassess
    * 3 two-week "sprints"

## Misc notes

- tradeoff between high fidelity evaluation and "evaluating people"
- "salary" enables people to relax into a longer term contribution and professional development, but may lose the immediate short term accountability
    - professional development is one of the main goals of a feedback system like this
    - commitment track enables mid- to long run prioritization and development
- Frame the projection as a signal
    - possible reflection can change that assessment
- the "Board" (aka bulletin, bounty) is a valuable community technology

## levers

- mix (base compensation <> bonus compensation)
- ratio of priority-weight modifier (progress points) on individual assessment vs direct performance (performance points)

---

TODO:

- [x] [Visual version](https://www.figma.com/file/Y6KE5EnTV0g91hzL6y0HXC/Comp-Program-Cycle-5-Ideas?node-id=0%3A1) of projection/reflection - **Spencer**
- [ ] Visual version of peer review/committee flow (feedback flow)
- [ ] Articulate more specifically more sources of input into performance review
- [ ] Value level assessment guidance - how and where that connects w/retroactive
- [ ] Some sort of mapping of skill domains to value levels
- [ ] Initial "discussion" forum post summarizing the proposal (see below) - **Spencer**

# Forum post draft

## Objectives of these proposed changes

Our approach is informed by the [results](https://www.figma.com/file/EGqpBXDUwqJpBl8hXzLCze/DH---Burn-Rate-Survey-Whiteboard?node-id=0%3A1) of the [survey](https://hackmd.io/_xf_ZtWpSvOrm98agWg8Zw?edit).

- Enable decisions to be made at the level where context is highest
- Create process and valuation parity across all contributors
- Enable more / better evaluation of workstreams, priorities, and outcomes
- Minimize popularity contests and the impact of visibility on evaluation
- Facilitate more effective allocation of resources, both for the DAO (workstream priorities) and contributors (time/attention)
- Diversify our evaluation tools from one to multiple modalities
- Avoid concentration of power without accountability
- Create more modularity in assessing priorities, commitments, progress, and performance/contribution

### Out of scope

Variables left to be specified in subsequent proposals:

- Monthly burn rate
- Pay rate for different market value/skill ratings
- Actual compensation amounts per individual
- Relative ratings of skill domains

## Overview

As we see it, there are four big categories of challenges the DAO currently faces. None of these are new, but in recent months several have become significantly more acute, e.g. with our DAO expanding and especially with recent unfavorable market conditions.

A) Resource Constraints (budget, runway, etc)
B) DAO Objectives and Priorities
C) Contributor Compensation and Engagement Options
D) Performance Evaluation and Accountability

This proposal focuses on mechanisms and process for *how* to allocate resources and to whom, addressing categories B, C, and D. It leaves the question of *how many* resources -- category A -- as a separate decision.

### High level changes

- Extend cycles to 3 months
- Keep the tracks, but use the same evaluation primitives and methods for both
- Introduce a projection and reflection process for bottom-up priority-setting and performance evaluation
- Change value levels to Market Value Levels
- Add granularity (10 levels) and more concrete definitions of each level, based on skill domains

## Concept 1: Projection and Reflection

The first part of our proposal addresses the problem of how to set DAO objectives and priorities (category B) and forms part of a process for facilitating performance evaluation and creating accountability (category D).

The approach is split into four phases, with two looking forward as "projection" and two looking backward as "reflection". Both projection and reflection include a DAO phase and a contributor phase.

The goal here is to facilitate bottom-up determinations of the following:

- Prioritization of DAO objectives and workstreams ("importance")
- Allocation of contributor commitment to workstreams ("commitment")
- Evaluation of workstream success ("progress")
- Evaluation of value created by contributors in relation to each workstream ("performance")

Each of these measures is valuable directly and also as *primitives that can be composed* into additional mechanisms and processes.
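As a toy illustration of such composition, the sketch below combines the four primitives with the two "levers" noted earlier (the base/bonus mix and the priority-weight ratio). The formula and parameter values are hypothetical assumptions for illustration only, not part of this proposal.

```python
# Hypothetical composition of the primitives into a pay calculation.
# base_mix and priority_ratio correspond to the "levers" noted above.

def contributor_pay(budget, base_mix, priority_ratio,
                    commitment, raw_performance, weighted_performance):
    """All inputs are illustrative:
    - base_mix: fraction of budget paid as base vs bonus compensation
    - priority_ratio: weight of progress-adjusted vs raw performance
    - commitment: contributor's share of committed attention (0-1)
    - raw_performance: direct performance score (0-1)
    - weighted_performance: performance weighted by workstream progress (0-1)
    """
    base = budget * base_mix * commitment
    blended = (priority_ratio * weighted_performance
               + (1 - priority_ratio) * raw_performance)
    bonus = budget * (1 - base_mix) * blended
    return base + bonus

# e.g. 70/30 base/bonus mix, bonus weighted 60% toward progress-adjusted scores
print(contributor_pay(10_000, 0.7, 0.6,
                      commitment=0.5, raw_performance=0.4,
                      weighted_performance=0.5))  # -> 4880.0
```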
:::info
For illustrations of the concept, see the following resources:
- [**Spreadsheet**](https://docs.google.com/spreadsheets/d/1_42y1NT1XyaeVLavvFWUoGtSVbzGAV6LW9fvOObi1Do/edit?usp=sharing) with example data and calculations
- [**Figma visualization 1**](https://www.figma.com/file/Y6KE5EnTV0g91hzL6y0HXC/Comp-Program-Cycle-5-Ideas?node-id=0%3A1)
- [**Figma visualization 2**](https://www.figma.com/file/CW6sgBgvpLw800dFbAM4bX/Untitled?node-id=0%3A1)
:::

### Projection

The projection phases occur at the beginning of the cycle. They can also be updated by individual contributors at any subsequent point.

#### DAO Phase (A)

The DAO uses a collective signalling mechanism to weight high-level *workstreams*. **Prototype case**: We use the 4 circles as starting workstreams.

* Each contributor is given a number of "Importance Points" in proportion to their Warcamp DAO shares.
* Contributors allocate Importance Points to workstreams as a prediction or projection of how valuable that workstream will be during the cycle, i.e. as reflected in Phase ( C).
* Relative Importance scores among workstreams serve as a signal for where the DAO should allocate funding and/or where contributors should allocate their attention/energy.
* Data collection could be as simple as using a google form.

![](https://i.imgur.com/iWn6u3c.png)

#### Contributor Phase (B)

Each contributor sets their individual commitment to workstreams (now rated by importance in the previous Phase).

* Each contributor is given a number of "Commitment Points"
    * Commitment trackers use their Commitment % as points
    * Retroactive trackers can allocate up to 100 points
* Contributors allocate their Commitment Points across workstreams, signalling their intended areas of focus for the upcoming period (see the sketch below)

![](https://i.imgur.com/UQtwU09.png)
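A minimal sketch of the Commitment Point budgets described above, assuming only the two track rules stated (Commitment % for commitment trackers, up to 100 points for retroactive trackers); the contributor data and validation logic are hypothetical.

```python
# Hypothetical validation of Commitment Point allocations against track rules.

def commitment_budget(contributor):
    # Commitment trackers use their Commitment % as their point budget;
    # retroactive trackers may allocate up to 100 points.
    if contributor["track"] == "commitment":
        return contributor["commitment_pct"]
    return 100

def validate_allocation(contributor):
    allocated = sum(contributor["allocation"].values())
    budget = commitment_budget(contributor)
    assert allocated <= budget, f"{contributor['name']} over-allocated: {allocated}/{budget}"
    return contributor["allocation"]

alice = {"name": "alice", "track": "commitment", "commitment_pct": 80,
         "allocation": {"dev": 60, "ops": 20}}
bob = {"name": "bob", "track": "retroactive",
       "allocation": {"design": 70, "community": 30}}

for c in (alice, bob):
    print(validate_allocation(c))
```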
### **Reflection**

The reflection phases occur at the end of each month and at the end of the cycle. Within a cycle, the monthly checkpoints serve as preliminary measures and may also determine base compensation for retroactive trackers as well as bonus compensation for all contributors.

#### DAO Phase ( C)

Contributors reflect on the value created by each workstream.

* Each contributor is given a certain number of "Progress Points"
* Contributors allocate Progress Points across workstreams according to individual assessments of progress made within and value created by each workstream. In other words, this closes the loop on Phase (A) by evaluating the "actual" importance of each workstream.
* This could be done with a Warcamp coordinape circle, with 4 recipients (the workstreams)

![](https://i.imgur.com/8527ddW.png)

#### Contributor Phase (D)

Each workstream outputs a list of contributors who added value to the workstream and a Performance score for their contributions to the workstream. The method for deriving these scores is left to each workstream. One example is to use a workstream-specific coordinape circle. Another would be to allow individual contributors to self-evaluate.

The resulting Performance scores are then weighted by the Progress scores from Phase ( C) to normalize individual contributor Performance scores across all workstreams.

DAO factors: Importance & Progress
Contributor factors: Commitment & Performance

## Concept 2: Contributor Tracks and Skill Domains

This approach maintains both Commitment and Retroactive tracks, but modifies them in several ways:

### 2.1 Market Value Levels (MVLs)

Value Levels are replaced with Market Value Levels.

- There are 10 Market Value Levels
- Each MVL now corresponds to a particular relative "market value" rather than "predicted value created". This means that contributors with skill domains that are valued/priced more highly by the market will tend to be at higher MVLs.

### 2.2 Skill Domains

Each MVL has a more concrete definition based on skill domains. The specific skill domains and ratings are to be established separately from this proposal (see [some discussion here](https://forum.daohaus.club/t/discussion-skill-domains-and-capabilities/11140/)), but the idea is to create common grounding for more articulate peer feedback and evaluation.

Each skill domain will likely have a 1-10 scale. Contributors with multiple skill domains may determine their MVL as a balance of the ranges from each skill domain (see the sketch after 2.3).

#### Illustrative skill domain example:

- Smart Contract Programming - may fall within MVLs 5-10
- Web Programming - MVLs 4-9
- Graphic & Other Design - MVLs 4-9
- Technical Documentation - MVLs 4-8
- Copywriting - MVLs 1-7
- Administrative Function - MVLs 1-7
- Operational Organization Design & Implementation - MVLs 4-10
- Project Management - MVLs 3-10
- Meeting Facilitation - MVLs 1-6

### 2.3 MVLs for all

All contributors (on both tracks) have an MVL evaluation. This puts retroactive compensation rates on the same scale as the commitment track.
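As a toy example of "balancing ranges" from 2.2, one could average the range midpoints of a contributor's skill domains, weighted by their 1-10 skill ratings. This particular formula is an assumption for illustration; the actual method is left to the separate skill domain proposal.

```python
# Hypothetical: derive a contributor's MVL from the illustrative ranges in 2.2,
# weighting each domain's range midpoint by the contributor's 1-10 skill rating.

MVL_RANGES = {
    "Smart Contract Programming": (5, 10),
    "Technical Documentation": (4, 8),
    "Meeting Facilitation": (1, 6),
}

def suggested_mvl(skill_ratings):
    weighted, total_weight = 0.0, 0.0
    for domain, rating in skill_ratings.items():
        lo, hi = MVL_RANGES[domain]
        midpoint = (lo + hi) / 2
        weighted += midpoint * rating
        total_weight += rating
    return round(weighted / total_weight)

# e.g. a strong smart contract dev who also documents and facilitates a bit
print(suggested_mvl({"Smart Contract Programming": 9,
                     "Technical Documentation": 5,
                     "Meeting Facilitation": 2}))  # -> 7
```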
## Concept 3: Contributor Performance Evaluation

Multiple modalities of input feed into the process for determining contributor MVLs:

1. Intersubjective evaluation scores, i.e. from Reflection
2. Qualitative feedback, i.e. from fellow contributors
3. Self-advocacy
4. Peer facilitation

### 3.1 Intersubjective Evaluation Scores

These are the normalized Performance scores from the Reflection contributor phase (1.2.D).

### 3.2 Qualitative Peer Feedback

This proposal does not specify a mechanism here, but it does establish qualitative feedback as a first-class input into performance evaluation. Suggested mechanisms should likely be developed in conjunction with the skill domains from 2.2.

### 3.3 Self-advocacy

- Contributors should advocate for themselves at a particular MVL
- Show your work
- Peers should give feedback (including as part of 3.2)

### 3.4 Peer facilitation

We present two options here: a) peer-3-peer facilitation, or b) an evaluation committee.

#### Option A -- Feedback Facilitator Triangles

Each contributor is placed in a Triangle with two other contributors. To start, this can be done randomly. Whenever the number of contributors is not divisible by three, contributors can form foursomes to ensure nobody is left out (see the grouping sketch at the end of this section).

At the end of each month, each contributor facilitates a conversation with their facilitee. They work through all inputs -- including those described in 3.1-3.3 -- and work out preliminary skill levels and overall performance.

- for retro trackers, this includes coming up with an appropriate payment amount to request

The same happens at the end of each cycle, but this time the outcome is that the facilitee comes up with a revised set of skill levels for their skill domains and an appropriate value level. Those value levels are placed into an omnibus proposal for ratification after a 3 day comment and dispute period.

#### Option B -- Evaluation Committee

Under this option, a committee of 5 will be elected by the DAO, with 1 representative from each circle and 1 from Warcamp overall.

With the start of each cycle, the committee will assume the responsibility for ensuring that each contributor's performance is reviewed and that a value level is suggested for them.

The committee will *not* have the authority to set value levels for any contributor. Rather, they will have the *responsibility* for making sure those reviews and suggestions happen. That responsibility will certainly come with some influence, so:

- The committee will rotate every 2 cycles, on a staggered basis
- Each committee member will be required to stake DAO shares, which can be slashed by the DAO should they not carry out their responsibility appropriately.
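For Option A, here is a minimal sketch of the random Triangle grouping, including the foursome fallback when the headcount is not divisible by three. Any assignment detail beyond what is stated above (e.g. how leftovers are distributed) is an assumption.

```python
import random

def make_triangles(contributors, seed=None):
    """Randomly partition contributors into Triangles (groups of 3).
    If the count isn't divisible by 3, the leftover 1 or 2 people are
    folded into existing groups, forming foursomes so nobody is left out.
    """
    pool = list(contributors)
    assert len(pool) >= 3, "need at least 3 contributors to form a Triangle"
    random.Random(seed).shuffle(pool)
    cutoff = len(pool) - len(pool) % 3
    groups = [pool[i:i + 3] for i in range(0, cutoff, 3)]
    for i, leftover in enumerate(pool[cutoff:]):
        groups[i].append(leftover)  # foursome fallback
    return groups

print(make_triangles(["a", "b", "c", "d", "e", "f", "g"], seed=1))
# -> one foursome and one triangle (membership depends on the shuffle)
```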