# RISC | Usability Testing Checklist
> Question: what should the plan be re: user testing?
:::info
*Example three-week schedule below; condense timeframe to two weeks if needed*
:::
## Three weeks before
- [ ] Figure out what we’re going to be testing (site, prototype, etc.)
- [ ] Create list of tasks to test out – the particulars of the user flow
- [ ] Confirm what kind(s) of users we want to test with — CCSO, RISC Analysts
- [ ] Designate a point of contact who will liaise with CCSO. With the CCSO liaison, determine a couple of dates to book mornings for testing; put out a call for CCSO participants by having the liaison send out a Google Form for scheduling slots (more details to come)
- [ ] Book additional afternoon time on those dates for the debriefing
## Two weeks before
- [ ] Get feedback on list of tasks from the project team (and stakeholders if possible)
- [ ] If necessary, arrange incentives for participants (e.g., order gift certificates)
- [ ] Start screening participants and scheduling them into time slots (via Google Form sign-up responses; a tallying sketch follows this list)
- [ ] Send “save the date” email inviting team members (and any interested stakeholders) to attend
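If the sign-up form’s responses are exported to a CSV, a short script can tally how many people chose each slot and flag slots that still need participants. A minimal sketch in Python; the filename (`signups.csv`), the column name (`Preferred time slot`), and the per-slot target are all assumptions about how the form gets set up:

```python
"""Tally sign-ups per time slot from a Google Form CSV export (names below are assumptions)."""
import csv
from collections import Counter

CSV_PATH = "signups.csv"              # assumed filename of the exported responses
SLOT_COLUMN = "Preferred time slot"   # assumed name of the scheduling question/column
TARGET_PER_SLOT = 2                   # assumed number of participants wanted per slot

# Count how many respondents chose each time slot.
with open(CSV_PATH, newline="", encoding="utf-8") as f:
    counts = Counter(row[SLOT_COLUMN].strip() for row in csv.DictReader(f))

# Print each slot with its sign-up count and whether it still needs participants.
for slot, n in sorted(counts.items()):
    status = "OK" if n >= TARGET_PER_SLOT else f"need {TARGET_PER_SLOT - n} more"
    print(f"{slot}: {n} signed up ({status})")
```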
## One week before
- [ ] Send email to the participants with meeting software directions, a link to the test room, name and contact number of someone to call on the test day if they’re late or having trouble connecting, and the non-disclosure agreement if we’re using one
- [ ] Line up a backup participant (a RISC analyst?) in case of a no-show
- [ ] If this is the first round of testing, install and test the screen recording and screen sharing software
## One or two days before
- [ ] Liaison sends an email to participants to reconfirm and ask if they have any questions
- [ ] Email reminder to observers
- [ ] Finish writing the scenarios
- [ ] Do a pilot test of the scenarios
- [ ] Get any user names/passwords and sample data needed for the test (e.g., account and network log-ins, or test accounts)
- [ ] Make copies of any test handouts for participants
1. *Recording consent form*
2. *Sets of the test scenarios in individual docs*
3. *Extra copies of the nondisclosure agreement (if using one)*
- [ ] Make copies of test handouts for observer(s)
1. *Instructions for Usability Test Observers (e.g., a link to the rolling issues log for noting issues during each observed session)*
2. *List of test scenarios*
3. *Copy of the test script*
- [ ] If used, make sure incentives for participants are ready
## Test day (before the first test)
- [ ] Make sure whatever we're testing is accessible via the Internet and is working (a quick reachability-check sketch appears at the end of this section)
- [ ] Test the screen recorder: do a short recording (including audio) and play it back
- [ ] Test screen sharing (video and audio) with the observers' screens
- [ ] Turn off or disable anything on the test computer that might interrupt the test (e.g., email or instant messaging, calendar event reminders, scheduled virus scans)
- [ ] Create bookmarks for any pages that need to open during the test
- [ ] Make sure we have the Slack/direct-messaging contacts we might need:
```
Observer(s): _________________
Liaison: _______________
Developer: _________________ (for problems with prototype)
IT contact: _________________ (for network or server problems)
```
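Before the first session, an automated reachability check can surface obvious problems (server down, expired certificate, wrong URL) faster than clicking through manually. A minimal sketch in Python using only the standard library; the URLs listed are placeholders for whatever site or prototype pages the scenarios actually use:

```python
"""Pre-test smoke check: confirm each test URL responds before the first session."""
import urllib.error
import urllib.request

# Placeholder URLs; replace with the actual site/prototype pages used in the scenarios.
TEST_URLS = [
    "https://example.com/",        # e.g., landing page
    "https://example.com/login",   # e.g., login page used in scenario 1
]

def check(url: str, timeout: int = 10) -> str:
    """Return a short status string for one URL."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except urllib.error.HTTPError as e:
        return f"HTTP error {e.code}"
    except (urllib.error.URLError, OSError) as e:
        return f"unreachable ({e})"

if __name__ == "__main__":
    for url in TEST_URLS:
        print(f"{url}: {check(url)}")
```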
## Before each test
- [ ] Start screen sharing session, if necessary
- [ ] Reload sample data, if necessary
- [ ] Clear the browser history
- [ ] Open a “neutral” page (e.g., Google) in a browser tab
### While the participant signs the consent form
- [ ] Start the screen recorder
## At the end of each test
- [ ] Stop the screen recorder
- [ ] Save the recording!
- [ ] End the screen sharing session, if necessary
- [ ] Take time after each session to jot down a few notes about things we observed; conduct a post-interview debrief with observers
## 1–2 days after the last test
- [ ] Schedule a ~45-minute synthesis meeting to review any/all of the following:
  * Session highlights, especially:
    * Patterns we saw
    * Differences between what users said and what they actually did
    * Workarounds (times users accomplished tasks in unexpected ways)
    * Anything else we noticed
  * Task completion rates, and whether or not users met our success criteria
  * Issues from our rolling issues log
  * Any questions about user needs that these tests raised (ending with better-informed questions is more valuable than ending with user-interface tweaks alone; the latter helps in the short term, but the former aligns the team on the difficult problems that will leave users happier in the long run)
- [ ] Conclude the meeting by determining how the team will use what it learned in service of future design decisions (as usability.gov puts it, “for a usability test to have any value, you must use what you learn to improve the site”). For example, we might end by prioritizing the issues in the team's rolling issues log by (1) the business value of the tasks they blocked and (2) how frequently users encountered them; a scoring sketch follows this checklist. Once we’ve prioritized the issues, get them into the product backlog.
- [ ] After the synthesis meeting, Tal writes and submits a brief summary documenting what we did, what we learned, and any decisions the team made.
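To make the prioritization step concrete, one simple approach is to score each logged issue on business value and frequency and rank by their product. A minimal sketch in Python; the issue titles and the 1–3 scales are illustrative placeholders, not real findings:

```python
"""Rough issue prioritization: business value of the blocked task x frequency encountered."""
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    business_value: int  # 1 (low) .. 3 (high): value of the task the issue blocked
    frequency: int       # 1 (rare) .. 3 (most participants hit it)

    @property
    def priority(self) -> int:
        """Simple score: higher means fix sooner."""
        return self.business_value * self.frequency

# Placeholder issues; in practice these come from the rolling issues log.
issues = [
    Issue("Search filters reset after login", business_value=3, frequency=2),
    Issue("Unclear label on export button", business_value=2, frequency=3),
    Issue("Typo on help page", business_value=1, frequency=1),
]

# Highest-priority issues go to the top of the product backlog.
for issue in sorted(issues, key=lambda i: i.priority, reverse=True):
    print(f"priority {issue.priority}: {issue.title}")
```

Whatever scale the team settles on, the point is a shared, explicit ordering it can carry into the backlog.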
---
References:
- [18f Methods: Design Validation, Usability Testing](https://methods.18f.gov/validate/usability-testing/)
- [18f/GSA Guidance: Remote Usability Testing](https://18f.gsa.gov/2018/11/20/introduction-to-remote-moderated-usability-testing-part-2-how/)