---
###### tags: `UX Research`
---

# Notes on Summoning a DH R&T Circle

#### Use Case: Can Rangers Do Something?

1. How does an **individual Ranger contribute maximum value**?
    - How does an individual contributor think about which topics are significant, important, or valuable for the organization?
    - How do we choose which article topics to focus on?
    - How do we determine the [appropriate audience, form and tone](https://hackmd.io/@daohaus/Rangers-Audience-Tone-Format) in order to decide such things?
2. What is the high-level **marketing strategy**?
    - Determines the responsibilities of Rangers in general
    - Would allow an individual to locate themselves within those responsibilities and claim accountability/ownership over certain tasks
    - But before the marketing strategy can be decided, we need to decide...
3. What is the **DH product vision and strategy**?
    - How do the circles fit together?
    - What direction is the DAO facing? What is the long-term vision of the organization?
        - Org management
        - HAUS utility
        - *No-code platform* or *dev toolkits* or *B2B DAO tooling*, etc.
    - This is dependent upon clarifying...
4. What is the **DH organizational structure**? What is the relationship between DH, Warcamp, and Uberhaus?
    - What are the dependencies of the nested circles: Paladins, Alchemists, Rangers, Magesmiths?
    - Do we need to summon new circles/DAOs to handle these missing elements?
    - How does an individual navigate the DH ecosystem?

![](https://i.imgur.com/ypeGz7i.jpg)

---

#### Problems

- The circles currently suffer confusion over who should be untangling the complexity of this problem space, resulting in inefficient DAO-wide meetings, controversial initiatives, and low morale.
- The DH, WC, and UH models lack a reflexive cycle to test new implementations.
- The metrics for evaluation are not aligned across circles: compensation, contributor evaluation, prioritizing initiatives toward a greater goal, etc.
- Decisions are currently consensus-based, leading to repeated bottlenecks and unsavory cultural tensions.
- The DAO is not using its own tools to reach amicable conflict resolutions.
- The onboarding UX is confusing for new/potential contributors.
- There is no clear list of problems across circles or the whole DAO.
- There is no clear methodology for solving problems (conflict resolution, soft gov policy, UX method).
- Lack of legibility into the interrelations of the problem space: lack of collective knowledge.
- Bureaucratic strategies increase overhead without providing actionable recommendations.
- Surveys introduce the bias of the writer of the questions, compounded with the bias of the person interpreting the answers.

#### Proposed Method

- a rational testing framework combined with E2T's coaching experience and agile design thinking methodology
    - creating high-fidelity problem statements
    - articulating hypotheses and *how might we* statements
    - capturing anticipated results (while avoiding moving to immediate solutions)
    - defining appropriate methods for testing
- initiate an apprenticeship format (akin to Travii's buddy system)
- introduce best practices, SOPs, and tools for capturing relevant information
- conduct syncs as **workshops instead of roundtables** to avoid breakdowns
- create templates for identifying expanded apertures of the problem space
- cultivate P2P accountability and project ownership with built-in support
- standardize research into clear reportage
    - bullet-pointed TL;DRs
    - optional deep dives for the curious
    - supplemental diagrams for visual thinkers
    - footnoted/indexed references
- rely upon DAO tools for decision making
    - forum posts to propose research scope (Discourse)
    - internal polls for topic selection (Snapshot)
    - reports archived in a GitHub repo to ensure an SSOT and easy access
    - policy proposals ratified on-chain (Signal proposal)
    - results published to the ecosystem via the Rangers evergreen/syndicated pub flow

#### Loose Workshop Structure

1. Prompt
2. Response
3. Generation: diverge and gather ideas
4. Discussion: categorize and group the ideas into themes
5. Triage: decide the priority of the ideas for testing
6. Action: design the testing method

#### Guiding Questions

- How do we make our decisions based on actual evidence, rather than opinions and assumptions?
- How do we train others to use this system?
    - Training
    - Documentation
    - Templates
- How might we provide meaningful feedback? How do we respond to each other without judgement?
- How do we help facilitators level up? How do we collectively get better at facilitating the process?

#### Initial Workshop Topics

- The structure of the R&T group 😛
    - circle, DAO, or working group?
- All the DH problems: towards a comprehensive list and map of their relations
- DH community/user personas
- Unpack [WarGames as a testing methodology for Uberhaus](https://hackmd.io/@daohaus/WG-RandTmethodology)