---
###### tags: `WarGames`
---
# WarGames Research & Testing Methodology
## In three parts:
:::spoiler **WarGames General Method**
### Purpose
- To create an analytical framework that informs what we build, how we build it, and why. This extends beyond product-level decisions into the cultural and operational exchanges of the DAO.
### HMW
- How might this testing structure allow us to identify, elucidate, and increase the fidelity of the problems within the DAO?
- How might the testing method increase efficiency and conserve the human resources of the DAOists who want to tend to these problems?
- How might the R&T method effectively generate evidence and empirical data to help the DAO make collective decisions and policy changes?
- How might our R&T method be crafted into an experience that rewards participants (positively incentivizes them) by strengthening community cohesion while also generating the data we are interested in?
### Hypothesis
- We might divide into groups with specialized oversight that aligns behind a standardized procedure.
- Research & Testing (Testers)
- Game Master & Lorecrafters (LARPers)
- We will identify opportunities within each scenario to install data filters for extracting meaningful metrics. We will attempt to balance the quantitative with the qualitative.
### Method
- Glean a topic from the community.
- Identify where the problem is located within the DAO: what circle, what scale, and what amount of detail is required to understand the problem space.
- Identify an individual who has first-hand knowledge of the topic. Extract this knowledge through 1:1 interviews to capture the problem. (This may be repeated, depending on complexity.) A template will be used.
- Format the problem into a testing flow, including:
- Problem Space
- HMW Inquiries
- Hypothesis
- Method for Data Capture
- Anticipated Results
- Deliverables / Outputs
- Pass off the R&T outline to the LARPers to lorecraft based on the necessary constraints of the testing flow.
- LARPers conduct the games.
- Testers observe and extract data from the games.
- Testers analyze the data and present the learnings/recommendations into a report for the appropriate recipients to consider.
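The testing-flow format above can be captured as a simple record that Testers fill in and hand off to the LARPers. The following is a hypothetical Python sketch; the class and field names are assumptions chosen to mirror the outline, not an official DAOhaus schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestingFlow:
    """One R&T testing flow, mirroring the outline above.
    Field names are illustrative, not a fixed standard."""
    problem_space: str
    hmw_inquiries: list[str]
    hypothesis: str
    data_capture_method: str
    anticipated_results: str
    deliverables: list[str] = field(default_factory=list)

# Example: a flow gleaned from a community interview
flow = TestingFlow(
    problem_space="Confusion about how DAOhaus, Warcamp, and Uberhaus relate",
    hmw_inquiries=["How might we simulate Uberhaus as currently conceived?"],
    hypothesis="Interviews across the DAO yield personas and testable flows",
    data_capture_method="surveys, recordings, direct observation",
    anticipated_results="A ranked list of frictions",
    deliverables=["problem statement", "raw data", "report"],
)
print(flow.problem_space)
```

A filled-in record like this is what gets passed to the LARPers for lorecrafting, and later anchors the Testers' report back to the original constraints.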
### Outputs
- A clear understanding of a point of friction within the DAO, distilled into a succinct problem statement.
- A clear methodology for how to test this problem.
- A game crafted to represent the testing methodology, played with the DH community.
- Raw data extracted from the game.
- A report/analysis formatted for the appropriate audience.
:::
:::spoiler **UH General Overview**
### Problem Space
- DAOhaus is currently experiencing confusion about how it is distinguished from Warcamp and Uberhaus, including how each of these entities functions and how they participate with each other.
- Uberhaus includes numerous nested issues related to DAOhaus product strategy, community strategy, HAUS token utility, etc.
- Uberhaus presumes multiple strata of scale that require unique, individual testing strategies in order to make rational, evidence-based design decisions. This includes a variety of user experiences on the front end (UH DAOs and members) and on the back end (Warcamp serving as steward, Uberpaladins and Operators, etc.).
- There are numerous strategies for how to design UH at every scale, each with their own attack vectors and community alignment considerations.
### HMW
- How might we craft a simulation of Uberhaus as it is currently conceived?
- How might we distinguish the simulation of UH from the actual DAO so that changes can occur in a controlled testing environment without causing friction with UH governance? i.e., leaving UH in a static form while the simulation tests potential changes.
- How might we define discrete and individuated user flows for each scale/aperture of consideration for this complex system?
- How might these tests highlight deeper epistemic tensions within DAOhaus more generally?
### Hypothesis
- We might interview a variety of individuals across the DAO to create personas and flows to test each proposition individually.
- Through diligent mapping and an insistence on referencing a single source of truth (SSOT) for the testing method, we might be able to create a meaningful simulation of the UH complexity and avoid disseminating confusion, disorientation, low morale, and volatility to the community. If we grapple with the pain points internally, we might spare the community the complexity.
### Methodology
- Interview knowledge holders
- Extract the problem space they identify.
- Format the problem into a testing methodology. This might include personas, user stories, and red-route flows for testing the particular problem space.
- Devise appropriate data filters for the user flow. Set up the tools needed to capture the data, e.g., forms/surveys, recordings, notes, direct observations, analytics, etc.
- Hand off the testing constraints to the LARPers to lorecraft a narrative game on top of them. The MVG (minimally viable game) must be tightly correlated to these testing points.
- Organize the games into an episodic flow to track the transition from one game phase to the next.
- When the game is complete, extract the data.
- Analyze and organize the data.
- Present the data as a report to the members of Warcamp.
### Outputs
- A laundry list of problems across the UH architecture, tagged and organized into categories to convey their relative weight and relations.
- A repository of tests, raw data, and results from testing these problems.
- A series of games that test the problems. (Games can be replayed or iteration can continue)
- Evidence for making justified decisions for how to improve the Uberhaus architecture and UX.
:::
:::spoiler **UH-specific Quests**
*This method might be considered a user interview script/template.*
### Defining the Methodology of a Quest
1. What is your Discord handle?
2. Who is the user/persona you propose to test?
3. What is the feature you propose to test? What is the problem you have identified? Define in as much detail as possible.
4. What circle does it fall into (according to existing DH circle structure)?
5. What is interesting about this problem? What are some of the dynamics we might test?
6. Can you suggest individuals to provide additional information on this problem?
7. What do you hypothesize will happen? What are your anticipated solutions, if any? What are the success conditions? What are the failure conditions? What are the attack vectors that you can identify?
8. What is the lore that you associate to this scenario? (optional)
9. How is this lore tied to the features, users, or high-level scenario? (optional)
10. Are there any additional considerations that we should be aware of? (optional)
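The intake script above splits into required questions (1–7) and optional lore questions (8–10), which suggests a simple completeness check before a quest proposal moves to the Testers. This is a hypothetical Python sketch; the field names and the required/optional split mirror the numbered questions but are assumptions, not an official DAOhaus form:

```python
# Required fields correspond to questions 1-7; optional to 8-10.
REQUIRED = [
    "discord_handle", "persona", "feature_and_problem",
    "circle", "dynamics_of_interest", "knowledge_holders", "hypothesis",
]
OPTIONAL = ["lore", "lore_ties", "additional_considerations"]

def validate_quest(submission: dict) -> list[str]:
    """Return the required questions left unanswered, in order."""
    return [k for k in REQUIRED if not submission.get(k)]

# A partially completed submission (illustrative values)
submission = {
    "discord_handle": "example#0001",
    "persona": "UberHaus delegate",
    "circle": "DAO Relations",
}
print(validate_quest(submission))
# -> ['feature_and_problem', 'dynamics_of_interest',
#     'knowledge_holders', 'hypothesis']
```

A check like this keeps incomplete proposals from reaching the lorecrafting hand-off while still allowing the lore questions to stay optional.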
:::
---
:::spoiler **Roadmap Sketch**
- Complete MVP of UH that can remain static for the duration of a testing cycle
- Explicate the research methodology, set up filters for capturing data, and set up an IA repo for storing data. Ensure everyone understands the UH structure, the R&T structure, and what we are testing.
- Define initial testing flows and lorecraft the first quests from this structure
- Run the game cycle. Collect data.
- Complete the game. Analyze data.
- Compile the report. Debate variables. Decide how the learnings will be applied to the UH structure. Vote to ratify the changes.
- Retrospective of testing method and game mechanics. Prepare for next cycle.
:::
:::spoiler **Three Teams**
#### UH Design Team
- Bau, Dekan, Spencer, ceresbzns, and others (representatives of each circle)
- Define the initial parameters of the MVP.
- Unfold the ongoing conversation in UH delegate meetings.
- Contemplate pain points and attack vectors of the architecture and community.
#### Research & Testing Team
- TW, Jeremy, 010101, adrienne, helle(avi), and others
- Takes the proposed MVP structure and renders a testing flow to evaluate various strategies, using the method outlined above.
#### LARPing Team
- BorrowLucid, UI369, anyone else with D&D/RPG experience!
- Focuses exclusively on lorecrafting and game elements.