# 2025-03-28 Notes
## Holly intro
UX researcher, background in anthropology
Surveys / quantitative research: good for testing hypotheses at scale
Qualitative research: good for forming hypotheses; otherwise you're just testing your assumptions
"Other" can help catch missing options, pay attention to whether Other is getting a lot of support
Main two forms-- interviews + observation. Interviews: ask questions.
Observation / Ethnography - observe people in their context. Goal is "thick description" -- the difference between a *wink* and a *twitch* (same motion, very different meaning due to context)
Really important to get a *representative sample*. Doesn't have to be *statistically significant*; the nature of the data doesn't require that. You want to hit *saturation*: do you hear the same themes coming up again and again? [Nielsen Norman](https://www.nngroup.com/articles/) is a great resource for this, a website that helps non-UX researchers learn about UX research methods. They've found 5 interviews is usually enough per significant segment to hit saturation.
Josh: Re "hitting saturation" - How to avoid "motivated stop" cognitive bias, where you decide on the fly?
Holly: start with a clear plan, break out the groups in advance, seek out 5 per group
Have a set of research questions you know in advance. Not the script; these are the questions you want to answer. Make sure you have the right people you are speaking to -- that they'll have the data to answer those questions. Same way that you would approach an experiment: you have hypotheses you are looking to validate or refute, and a set of people you need to speak to to test them. Or you're trying to generate the hypotheses.
What are you going to talk to the people about? Shoot for unbiased and non-leading questions.
Ability to take a question "up a level higher". How can I make a question "even more open-ended" or "even more exploratory". Often not until the 4th or 5th follow-up that you start to really explore and understand the situation in more detail.
Interview: *really not a script*. It's a *guide*. Humans are messy. Sometimes you ask a first question and people start talking about number 8. Don't stop them. Let them go. Just follow the conversation. My ideal is just to start with one prompt, "tell me about X", and then just go from there. Not always helpful if we need data.
Good tips:
* "TED walks" -- tell me about, explain to me, describe, walk me through
* Make sure people describe what they mean when they use an adjective (eg., annoying, frustrating, useful, etc)
* "What does useful mean to you? What are some things that are not useful? How does it stack on a scale of 1-5? Interesting, what would a 5 look like?"
* Remember, their understanding of frustration might be very different from yours, you don't know if what they find frustrating is the same thing that you find frustrating.
## Coding
Really important to record + transcribe interviews. If you're really into the conversation you're going to have difficulty taking notes, so you need a secondary note-taker. But we as humans are biased and our notes are biased.
I have often thought that a person's comments were astute, analyzed them afterwards, and realized that the data wasn't so meaningful; I just liked the person. Goal: objectivity?
Thematic analysis. Can do a bunch of different things, but thematic analysis--
* pull the transcripts into a figjam or miro board or some such thing.
* go through the transcripts, code them. What are the themes, research questions that we had?
* New user experience
* Journey into Rust
* Global perspective
* Pain points
* Go through the transcript and pull out the data that relates to those themes. Maybe you have 10 transcripts; once you've done it for all 10, you start to pull subthemes together.
* OK pain points, looks like we have 3 major subthemes.
* That's how you go from "data" to "insight".
* A data point is "this is a thing that is happening"; an insight adds
  * why it is happening
  * and an opportunity to improve it
* Nielsen Norman has a good article + video on how to do thematic analysis (possibly https://www.nngroup.com/videos/coding-thematic-analysis/ ?)
* more fun to do together than to do individually
* individually you can get lost in the data
* "oh no have I learned anything"
* https://www.nngroup.com/articles/thematic-analysis/
* Doing it as a group can help to reduce bias and ensure you aren't seeing the patterns you expect to see
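The mechanical part of the coding step above can be sketched in code. A toy Rust tally (the participant IDs, themes, and counts here are invented for illustration) that counts how many coded excerpts land under each theme:

```rust
use std::collections::HashMap;

/// Count how many coded excerpts fall under each theme.
/// Input: (participant id, theme) pairs pulled out of the transcripts.
fn tally_themes(excerpts: &[(u32, &'static str)]) -> HashMap<&'static str, usize> {
    let mut counts = HashMap::new();
    for (_participant, theme) in excerpts {
        *counts.entry(*theme).or_insert(0) += 1;
    }
    counts
}

fn main() {
    // Hypothetical coded data; the theme names mirror the list above.
    let excerpts = [
        (1, "Pain points"),
        (1, "New user experience"),
        (2, "Pain points"),
        (3, "Pain points"),
        (3, "Journey into Rust"),
    ];
    for (theme, n) in tally_themes(&excerpts) {
        println!("{theme}: {n} excerpt(s)");
    }
}
```

Raw counts like this only surface which themes are heavy; the subthemes and the "why" still come from reading the excerpts themselves.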
You pull the insights together then you can do an ideation workshop:
* This is what we're seeing, biggest opportunities
* Now let's talk about solutions
* Highlight low hanging fruit that can be targeted now
You may need to deep dive into unexpected areas as follow-up
Maybe build a survey to get more quantitative data, so that you can judge what is the "biggest pain point" or "most significant action".
Qualitative data can't tell you how much the pain you heard resonates with the broader community. Maybe you identify that within your group there is a segment with a very different experience, you may want to explore that segment more.
This is *generative* research; that kind of follow-up could transition into *tactical* research.
Q: You mentioned cutting up transcripts and sorting those notes but there is a question of "where do you cut". Is there a way to approach that which would help minimize debate? Should we be trying to minimize that argument about *what is relevant and distinct from something else*?
Take a look at [this article](https://www.nngroup.com/articles/thematic-analysis/). You need to keep tracking your research questions so you can know whether each thing ties back to those or is possibly useful for other questions.
Significance of data:
* Frequency: do multiple people express the concern or feeling?
* Intensity: are they blocked? Did they abandon or give up? Are they using strong emotive words (scary, hurt, etc.)?
* NN has shown that if one person in your qualitative study has a very negative experience, you can be confident that it is happening to more than one person. "Unique" experiences may still be around 1 in 1000.
Having themes and research questions will really help. Keep in mind there is no precise way of doing this, there will always be grey areas. Always better to start broad and get more narrow. That'll help you identify which are occurring a lot and which are not.
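The frequency criterion (do multiple people express it, or does one person mention it many times?) can be made concrete. A hypothetical Rust helper that counts distinct participants per theme rather than raw mentions:

```rust
use std::collections::{HashMap, HashSet};

/// For each theme, count *distinct participants* who raised it,
/// so one talkative participant can't inflate a theme's weight.
fn participants_per_theme(
    excerpts: &[(u32, &'static str)],
) -> HashMap<&'static str, usize> {
    let mut by_theme: HashMap<&'static str, HashSet<u32>> = HashMap::new();
    for (participant, theme) in excerpts {
        by_theme.entry(*theme).or_default().insert(*participant);
    }
    by_theme.into_iter().map(|(theme, who)| (theme, who.len())).collect()
}

fn main() {
    // Invented data: participant 1 mentions caching three times,
    // but only two distinct people raise it.
    let excerpts = [(1, "caching"), (1, "caching"), (1, "caching"), (2, "caching")];
    let breadth = participants_per_theme(&excerpts);
    println!("caching raised by {} participant(s)", breadth["caching"]);
}
```

Intensity (blocked, abandoned, strong emotive words) doesn't reduce to counting like this; that judgment stays with the researcher.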
Be aware of people's expressions, not just words. Also tone of voice. Do they pause a lot? Look frustrated? Sometimes I'll ask people -- e.g., why they paused -- or say, "I notice that you look like you have something you want to tell me." Words being used vs words not being used?
Q: What if we seem to be developing this other new question? What's the best way to address that to avoid falling into confirmation bias?
If you are asking about someone's "overall experience with Rust" but you are hearing something you did not expect, that's an important thing to add into your codes. For example, we were speaking to mobile developers; we had hypotheses going in, but every single conversation we had was talking about caching. So when we coded our data we grouped everything related to caching, and it turned out there were distinct issues they were experiencing.
Sentiment analysis: break down the positive vs negative things, opportunities, and actions. You'll have 4 massive buckets and then have to subtheme within those.
Q: What specific techniques can we use to minimize bias, given that we can't be in someone else's head?
For me it's very much about curiosity, asking things like "what is your typical experience" and contrasting that with "the last time you did a thing". It sounds like today you shared a level of frustration; how does that compare with how you felt last year? Keeping a curious mindset. Fight the urge, which a lot of us have, to validate them ("I know it's frustrating"); you want to be empathetic and encourage engagement but not lead them on. It's going to be hard to avoid problem-solving for them. You'll probably know the answers. But this is not the time for that; you want to understand. "That sounds really difficult" or "really challenging", can you tell me more? What is the impact of that challenge for you? If I gave you a magic wand, what would your ideal solution look like? It's very frustrating to do X, Y, Z; what are other things you use to do that job, and how does it compare?
Book reference: Never Split the Difference by Chris Voss. Has good techniques for open-ended question asking. One of the things he does is repeat back a word that somebody says. "I tried to use Rust but it was really hard to understand where to start?" You might just say, "Where to start?" and that will prompt them to elaborate.
Can be useful in other contexts too.
Q: Do you just want to keep them talking?
Holly: Yes, but you do want to keep them on subject. People often like to go on tangents. I tell people up front that we'll leave 5 minutes for them to give other feedback at the end and that they can reach out later. Tangents-- you have to play with them, but you also want to steer them, "I have some other areas I'd like to explore, can we move to those and come back to that later?" or "can I connect you to an expert who would know better about that?"
Q: Good interview length?
Holly: it depends on how many questions you have and how exploratory you want to go. If you have 2 or 3 larger questions, you can probably get through that in 30 minutes; I normally say 45 minutes to give time to be more exploratory. Anything over an hour is very laborious for participants. Think carefully: what is the information we really need to know in order to make decisions? What decisions are we trying to make with this data? What do we have to know to move forward? Want to be focused enough that you can go deep on an area and not stay entirely high level; fewer questions and deeper exploration is a better use of your time.
Q: How important to be consistent in terms of time?
Holly: Radical changes in depth can make theming difficult. However, if there is a branch of questions (e.g., around global experience) where you want to dig in more, maybe that's its own study, and you just want to approach it separately. You might have 2 or 3 different studies that you thematically analyze independently from one another.
You might want some core 'overall' questions that everybody asks. And then for the branching questions that are more specific, you separate them out.
Nadri (adding a note in the text): we may want to decide on a simple data retention policy like "we'll delete everything end of May"
## Holly's basic intro points
* No wrong answers, if I ask you follow-up questions, I'm just trying to understand, not judging you.
* Can I have your permission to record this? It won't be associated with your name, will be anonymous, and you can ask me to delete it and I will.
* I'm going to leave time at the end for any other topics, or you can contact me later
---
JT: Hi Nadri. OK to record?
N: Yes.
JT: What brought you to Rust originally? How did you first start getting involved with Rust?
N: Short version is that through a friend I ran into Rust mostly through the blogposts being written in 2015 onward by Niko, Aaron, boats, about the design of the language. Probably 4 years before I wrote my first line of code, I just nerded out on the PL side of it. I would describe myself as a PL nerd, not that I know a bunch, but I enjoy the design, seeing it happen live was incredible. Something inspiring in the way this was happening in public. Might be the main word I had about Rust, I felt inspired by its design but also the community aspect. Programming language not just a technical object also a social object. Posts about consensus, how do we do governance. That is probably what sold me on Rust, I like this, I want to follow what's happening there.
Project snowballed and I became a contributor, now it's a big part of my life. I feel like in Rust the feeling of responsibility of how much I can do in my project is limited by how much I want to take on. I feel quite empowered in my current position to do more of that, I want to contribute, vision doc is part of that, shifting away from PRs.
JT: You mentioned a project that started snowballing, can you tell me more about that project, and other types of projects you built in Rust?
N: CLI tool. Chose Rust for the fun of it. Second one, the one that snowballed, was a compiler thing. I like compilers. I ended up wanting a feature that wasn't stabilized in Rust, looked at the feature, tried to help, started making a PR. Most of my experience in Rust is working on the compiler. Once I got hooked into that, it became the main thing I've done, and now I'm employed to write a rustc driver, so I'm still hooking into the compiler. My professional experience is 100% Rust, mostly that plus hobby projects.
JT: How did Rust fit those projects? Where did Rust help, and how has Rust gotten in the way?
N: For compiler work, it wasn't really a question, but zooming out... Rust is reliable, robust; I would use it whenever I need a piece of code that actually reliably works. I might start with Python, but if I want it to last I'll rewrite it in Rust. If I want to collaborate with other people, Rust shines really well there. I haven't had a lot of opportunity, mostly hobby projects as I told you, but my limited experience is that whenever I want something that's reliable, I'll go for it. It's got my back.
JT: Thanks for your abbreviated participation!!! 3 seconds for add'l thoughts?
---
Holly: some things I might have picked up from what Nadri said. 4 years, why? What happened?
You love compilers? Why? Tell me everything about them?
Holly: instead of what's been good and what's been bad, just how's your experience been?
* dig into the meaning of reliable -- what does reliable mean?
Josh: We should add some user notes to the script.