The Formula for Usability Testing: Part 2
In Part One of our series on usability testing, we discussed what testing is all about, why it’s important, and the best times to do it. Now, let’s explore some of our UX agency’s favorite strategies and best practices for gathering the data needed to make smart, user-centered product decisions.
Step 1: Determine User and Business Objectives
Why are customers using your product? What tasks are they trying to complete, and what problems are they trying to solve? These are your user objectives.
What are your organization’s goals for usability testing? Why are you considering doing it in the first place? What are you hoping to learn? These are your business objectives.
Documenting both sets of goals is key. If you focus solely on business objectives, you may not know whether users actually want or need your product. But if you focus solely on user objectives, you may build an amazing product that customers love but isn’t viable for your business. By outlining what both stakeholders are trying to accomplish and why, your team will be able to ask the right questions during testing and determine if your product is meeting those goals.
Step 2: Screen Usability Testing Participants
Before diving into user testing, it’s crucial to make sure that the participants are part of your product’s target audience so you can get relevant user feedback. For example, if you’re designing a website hosting and management product, you wouldn’t want to test someone who has never managed a website before because that person wouldn’t be able to relate to the day-to-day tasks, challenges, and feelings of your target audience. Instead, you’d want to screen a group of participants to ensure you’re talking to the right people in the first place.
During this screening phase, distribute a short screener survey to find users who represent your ideal customers. We like SurveyMonkey, Google Forms, and Google Consumer Surveys because they make the survey creation, distribution, and analysis process easy. Plus, if you don’t already have a list of potential participants to survey, you can use these services to purchase responses from users who fit your target demographic.
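Once responses come in, you can filter them down to qualified participants programmatically rather than by hand. Here is a minimal Python sketch; the field names and qualification criteria are hypothetical placeholders for your own screener questions (using the website-management product example):

```python
# Hypothetical screener responses -- the field names below are assumptions,
# not a real survey export format.
screener_responses = [
    {"email": "a@example.com", "manages_site": True,  "sites_managed": 3, "uses_cms_weekly": True},
    {"email": "b@example.com", "manages_site": False, "sites_managed": 0, "uses_cms_weekly": False},
    {"email": "c@example.com", "manages_site": True,  "sites_managed": 1, "uses_cms_weekly": True},
]

def qualifies(response):
    """Keep respondents whose *behavior* matches the target audience:
    they actively manage at least one site and use a CMS regularly."""
    return (response["manages_site"]
            and response["sites_managed"] >= 1
            and response["uses_cms_weekly"])

participants = [r["email"] for r in screener_responses if qualifies(r)]
print(participants)
```

The point of the sketch is that qualification hinges on behavioral answers, not demographic ones, which foreshadows the question list below.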
As you’re gathering your survey group, start brainstorming questions that focus on behaviors (not demographics), such as:
- Occupation
- Work responsibilities
- Personal interests
- Digital behaviors and proficiency
- Purchase-related intentions
- Workspace and environmental conditions
- Cultural norms and biases
The answers to these questions will help you home in on the group of people who are the best fit for your test. Although you can usually identify 80% of major UX issues with as few as six participants, the more data you can gather, the better.
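That rule of thumb comes from the cumulative problem-discovery model, found(n) = 1 − (1 − p)^n, where p is the probability that a single participant uncovers any given issue. A minimal Python sketch follows; the p value is an assumption (published estimates vary by product and study design), chosen so that six participants surface roughly 80% of problems, matching the figure above:

```python
def problems_found(n_participants, p_discovery=0.24):
    """Expected share of usability problems uncovered by n participants,
    per the cumulative problem-discovery model: 1 - (1 - p)^n.
    p_discovery is an assumed per-participant discovery rate; real-world
    estimates vary widely from study to study."""
    return 1 - (1 - p_discovery) ** n_participants

for n in (1, 3, 6, 10):
    print(f"{n:2d} participants -> ~{problems_found(n):.0%} of problems found")
```

Note the diminishing returns: each additional participant uncovers fewer new issues, which is why small rounds of testing repeated across iterations tend to beat one large study.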
Step 3: Conduct Usability Tests and Analyze Results
As you design your usability test, keep in mind that your primary goal is to observe how real people would use your product in a real environment. These tests can be conducted remotely or in person, as long as the user can perform tasks naturally.
There are plenty of tools that make it easy to moderate the test, record sessions, and analyze results. Some of our favorites include:
- Remote testing: GoToMeeting
- In-person testing: TechSmith Morae
- Testing prototypes: InVision for designing clickable prototypes, Lookback for recording sessions during testing
- Testing existing sites and apps: FullStory and Hotjar
Before the test begins, set the user’s expectations about what the test is for and approximately how long it will take. It’s also important to reassure the user that the interface is being tested, not them. Otherwise, your results may be skewed: participants may not admit when they can’t find something, or they’ll give you the answer they think you’re looking for.
During user testing, use a “think aloud” protocol: ask participants non-leading questions and have them talk through how they would complete basic tasks without any guidance. Tell them to wait for permission to click through to the next step. Then, ask follow-up questions to learn more about whether the experience aligned with their expectations, how they’re feeling at that moment, and more.
For instance, let’s say you’re designing an app and want to test if the navigation is clear. Your conversation would go something like this:
Researcher: “Show me how you would change your notification settings. Where would you look first?”
Participant: “I would click on my avatar because that’s usually where settings are.”
Researcher: “Ok, please do that.”
[Participant clicks on avatar]
Researcher: “Is that what you expected to see?”
Participant: “Kind of. It has my contact and billing information, but not my notification settings.”
Researcher: “What did you hope to see?”
Participant: “I hoped I could update other settings here, like my notifications and password.”
During this test, the researchers would see that the interface didn’t work as intended or meet the user’s goals, and the experience caused frustration. If they hear similar feedback from other users, they should consider moving the notification settings to the main account settings screen, making the navigation clearer so users know where to find them, or using a completely different tactic to meet their hopes and expectations. As a designer or product owner, it may not be the feedback you wanted to hear, but it’s the feedback you needed to hear.
Once the testing is complete, review the recorded sessions and data with your team, map the findings back to the objectives outlined in Step 1, and see where there are gaps. Then, you’ll have the information you need to improve the experience and guide your product to success.
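Mapping findings back to your Step 1 objectives can be as lightweight as tallying task outcomes per objective. A minimal Python sketch, using entirely hypothetical session data (the objective names and outcomes are illustrative, echoing the notification-settings example above):

```python
from collections import defaultdict

# Hypothetical session findings: each record ties a task outcome back to
# one of the user/business objectives defined in Step 1.
findings = [
    {"objective": "find notification settings", "participant": "P1", "success": False},
    {"objective": "find notification settings", "participant": "P2", "success": False},
    {"objective": "complete checkout",          "participant": "P1", "success": True},
    {"objective": "complete checkout",          "participant": "P2", "success": True},
]

# Group outcomes by objective so gaps stand out at a glance.
by_objective = defaultdict(list)
for f in findings:
    by_objective[f["objective"]].append(f["success"])

for objective, outcomes in by_objective.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{objective}: {rate:.0%} success across {len(outcomes)} sessions")
```

An objective with a low success rate across several sessions is exactly the kind of gap worth prioritizing in the next design iteration.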
Best Practices From the Pros
The process of facilitating usability tests is a science in and of itself. A well-designed and executed test will produce useful feedback, but making common mistakes could lead to biased or inadequate results. Here are some of the best practices we’ve learned over the years.
When conducting usability testing, DO:
- Contact way more users than you will need. You won’t hear back from everyone you contact, so it’s good to have backup testers.
- Provide a gift to thank users for their participation. However, be careful not to bribe them, or they may give skewed answers. We’ve found that something like a $25-50 Amazon gift card usually strikes the right balance.
- Write a script to follow during testing, but be prepared to deviate from it to follow the user’s natural path. These detours are full of gold and a-ha moments, so don’t be a slave to the script.
- Use a “think aloud” protocol during testing, where you encourage users to talk through their thought process and share whatever comes to mind.
- Ask open-ended follow-up questions during testing (e.g., “Can you tell us more about that?”).
When conducting usability testing, DON’T:
- Focus too heavily on demographics in the screening process. Things like age and gender don’t impact a user’s decisions as much as their context and behavior.
- Lead users down a certain path or explain the interfaces during user testing (e.g., “This is helping you, right?” or “Let me explain what this does…”). This will skew the data.
- Have different user segments (e.g., current customers and potential customers) complete the same test. Different segments often have unique needs and workflows, so separating them makes it easier to get focused results.
- Force users to perform tasks that aren’t relevant to them. Irrelevant questions lead to irrelevant answers that cloud the rest of the data.
- Make users feel like they are the ones being tested. It’s crucial to keep them in a positive, productive mindset and avoid making them feel like they need to tell you what you want to hear or lie when they don’t know an answer.
- Go longer than 45 minutes per test. Users get fatigued after that amount of time and may stop providing quality feedback.
Now You Know…and Knowing is Half the Battle
Don’t you want to know that users love your product and that it meets their needs? Wouldn’t you want to realize sooner rather than later if you’re drifting off course so you can avoid wasting resources and putting your company at risk?
You may be able to find out if your product functionally works just by trying it yourself, but you can’t know the answers to these questions without putting on your lab coat and testing it with real users. Only then can you validate your hypothesis and know without a doubt that you’re designing products that will lead your users and business to success.
Thinking about doing usability testing for your product? Gathered some initial results, but not sure what to do with them? Reach out to our UX team at Drawbackwards for help navigating the process and taking your product from good to great.