A short story on Usability Tests

Usability studies are the way to go to learn how users actually use and interact with your product. It’s a straightforward method to see whether users understand how things work, whether they use the product as intended, and to identify bottlenecks, issues, and bugs. These tests are behavioral, so you learn what people do, not just what they say, whether the test is quantitative (unmoderated) or qualitative (moderated).

Since there are two types—moderated and unmoderated—I will share how to conduct each, and provide a clear comparison between them to help you choose the right approach for your research needs.

What is a Usability Study?

A usability study assesses how easy user interfaces are to use. It involves observing users as they attempt to complete tasks on a device or software and is pivotal in identifying design issues that affect user experience. The primary goal is to ensure that users can complete tasks efficiently and effectively without frustration. 

Usability studies often involve a variety of methods, such as task completion tests and think-aloud protocols. These methods help you gather both quantitative and qualitative data, providing a comprehensive understanding of user interactions. 

Usability Study vs. User Interview

While both methods aim to gather user insights, they differ significantly.

Usability Study: Focuses on user behavior and interaction with the product. It’s about watching what users do and how they interact with your interface. For example, during a usability study, participants might be asked to complete specific tasks while being observed, with the researcher noting any difficulties or confusion.
User Interview: Concentrates on users’ attitudes, perceptions, and needs. It’s more about listening to what users say about their experiences and expectations. In user interviews, participants might be asked open-ended questions about their overall experience, preferences, and pain points.
The key difference lies in the nature of the data collected. Usability studies provide direct evidence of user behavior, while user interviews offer insights into user attitudes and motivations. Both methods are valuable, but they serve different purposes and can be used in a complementary way to gain a holistic understanding of user experience.

When to Use a Usability Study

Usability studies are particularly useful when you need answers to questions about user behavior, such as:

  • "Can users easily navigate through the menu to find a specific section on the website?"
  • "Do users understand how to use the new feature we introduced?"
  • "Where do users encounter the most friction in our checkout process?"

These kinds of questions help pinpoint usability issues that could hinder the user experience. Usability studies are especially crucial during the design and development phases of a product life cycle. If you identify and address usability issues early on, you can save time and resources by avoiding costly redesigns later.

For instance, consider a scenario where a company is developing a new e-commerce website. A usability study might reveal that users are struggling to find the shopping cart button because it is too small or not prominently placed. By addressing this issue based on the study's findings, the company can improve the user experience, potentially leading to higher conversion rates.

Behavioral vs. Attitudinal

Understanding the distinction between behavioral and attitudinal data is crucial for effective user research:

Behavioral: Observes the actions of users (what they do). For example, tracking how long it takes for users to complete a task or where they click on a webpage. Behavioral data is objective and can be quantified, making it valuable for identifying patterns and measuring the effectiveness of design changes.
Attitudinal: Focuses on users' opinions and feelings (what they say). For example, asking users how they feel about the overall look and feel of a website or their level of satisfaction with a specific feature. Attitudinal data is subjective and provides context to the behavioral data, helping you understand the "why" behind user actions.

By combining both types of data, you gain a more holistic understanding of the user experience. Behavioral data reveals what users do, while attitudinal data explains why they do it. This combination allows for more informed decision-making and targeted improvements.
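To make the behavioral side concrete, here is a minimal sketch of how two common behavioral metrics, task completion rate and time on task, could be computed from session records. The record fields are illustrative assumptions, not the export format of any specific testing tool.

```python
from statistics import median

# Hypothetical session records for one task (field names are illustrative).
sessions = [
    {"participant": "P1", "completed": True,  "seconds": 42},
    {"participant": "P2", "completed": True,  "seconds": 67},
    {"participant": "P3", "completed": False, "seconds": 120},
    {"participant": "P4", "completed": True,  "seconds": 55},
    {"participant": "P5", "completed": False, "seconds": 180},
]

# Share of participants who finished the task.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)

# Median time on task among participants who completed it
# (median is less sensitive to outliers than the mean).
median_time = median(s["seconds"] for s in sessions if s["completed"])

print(f"Completion rate: {completion_rate:.0%}")  # → Completion rate: 60%
print(f"Median time on task: {median_time}s")     # → Median time on task: 55s
```

Attitudinal data (for example, a post-task satisfaction rating) would then give context to these numbers, explaining why two of the five participants gave up.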

Types of Usability Studies: Remote Moderated vs. Remote Unmoderated

Both types share the goal of enhancing user experience but differ in execution:

Remote Moderated Usability Testing

Involves real-time interaction with the user, allowing immediate follow-up questions and clarification. This setup helps in understanding the user’s thought process as they navigate the interface. For example, it is common to ask questions like "Can you explain why you chose to click that button?" to gain insight into the user's decision-making process.

Remote Unmoderated Usability Testing

Users are given tasks to complete at their convenience, without a moderator present. This method provides more natural user behavior but lacks the ability to probe deeper during the session. Participants might be asked to complete a series of tasks while their interactions are recorded, and the data is analyzed later.

Pros and Cons

Moderated:

  • Pros: Direct interaction, immediate probing, detailed qualitative feedback. Moderators can ask follow-up questions and provide real-time assistance if participants encounter difficulties.
  • Cons: More resource-intensive, scheduling can be challenging. It requires the presence of a moderator, which can limit the number of sessions conducted simultaneously.

Unmoderated:

  • Pros: More cost-effective, scalable, and less prone to moderator bias, since users work in their natural environment. Participants can complete the tasks at their own pace, providing a more authentic user experience.
  • Cons: No control over the test environment, no immediate clarifications. If participants encounter issues, they cannot seek immediate assistance, which might affect the quality of the data collected.

How Each Works

Remote Moderated: You connect with the participant via a tool that allows screen sharing and communication, and conduct the session in real time. You guide the participant through the tasks, asking questions and observing their interactions. This setup allows for immediate feedback and clarification.
Remote Unmoderated: Participants complete tasks independently while their interactions are recorded. Data is collected automatically through the software used. This method relies on detailed task instructions and automated data collection tools to gather insights.

Moderated Usability Studies

When and Why to Use This Method

Moderated remote usability testing is ideal when you need in-depth insights into user behavior and thought processes. Use this method when you want to:

  • Launch a new product or feature to understand initial user interactions and gather immediate feedback.
  • Identify and solve usability issues by directly observing where users struggle and addressing these pain points.
  • Gain qualitative insights by probing into users’ motivations, preferences, and frustrations.
  • Refine prototypes by testing early designs and iterating based on user feedback before final development.

Moderated remote usability testing is particularly useful because it allows for real-time interaction. This means you can ask follow-up questions, clarify user actions, and gather rich qualitative data. The moderator can guide the session, ensuring that all critical areas are covered and that any issues are thoroughly explored.

How to set up a detailed usability study plan

Step-by-step guide:

  • Define Objectives:

Clearly state what you aim to achieve with the usability study. Examples include identifying navigation issues, understanding user satisfaction, or testing new features.

  • Determine the Scope

Define which parts of the product will be tested. This could be a specific feature, a user flow, or the entire interface.

  • Develop User Tasks

Create realistic tasks that users are likely to perform. Each task should have a clear objective and be relevant to the study’s goals.

  • Choose Participants

Identify the target user group that matches your product’s user base. Use screening surveys to select participants who meet specific criteria.

  • Prepare the Script

Write a script that outlines how you will conduct the session. Include a brief introduction, task instructions, and any follow-up questions you plan to ask.

  • Pilot Testing

Conduct a pilot test with a colleague to ensure that the tasks and tools work smoothly. Make any necessary adjustments based on this trial run.

  • Schedule Sessions

Arrange sessions with participants at times that are convenient for them. Send reminders and ensure they have all necessary information and tools ready.

  • Conduct the Test

Follow the script, guide participants through tasks, and ask follow-up questions as needed. Ensure that you create a comfortable environment where participants feel free to share their thoughts.

Involving Stakeholders

Involve stakeholders early to align on goals and expectations. Share the usability test plan with them to get feedback and ensure all necessary aspects are covered. Conduct a kickoff meeting to discuss the study’s objectives, methodology, and expected outcomes. This helps build buy-in and ensures that everyone is on the same page.

Selecting Participants

Choose participants that represent your target audience. Use screening surveys to ensure that participants meet the criteria relevant to your product.

Generally, 5 participants from the same user group are sufficient to uncover most usability issues (if you have multiple personas, plan for 5 users per persona). This number balances the need for diverse perspectives with the practical constraints of time and resources.
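A screening survey can be reduced to a simple filter: keep only respondents who match your criteria, then cap each persona at five participants. The sketch below is a toy illustration; the respondent fields, the `shops_online_monthly` criterion, and the persona labels are all assumptions for the example.

```python
# Hypothetical screener responses (fields are illustrative).
respondents = [
    {"name": "Ana",  "persona": "shopper", "shops_online_monthly": True},
    {"name": "Ben",  "persona": "shopper", "shops_online_monthly": False},
    {"name": "Caro", "persona": "seller",  "shops_online_monthly": True},
    {"name": "Dan",  "persona": "shopper", "shops_online_monthly": True},
]

PER_PERSONA = 5  # ~5 users per persona uncovers most usability issues

selected = {}
for r in respondents:
    if not r["shops_online_monthly"]:   # screening criterion: drop mismatches
        continue
    group = selected.setdefault(r["persona"], [])
    if len(group) < PER_PERSONA:        # cap each persona at 5 participants
        group.append(r["name"])

print(selected)  # → {'shopper': ['Ana', 'Dan'], 'seller': ['Caro']}
```

In practice the same filtering is usually done inside a recruiting or survey tool, but the logic, criteria first, then a per-persona cap, stays the same.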

How to organize User Tasks effectively

Organizing user tasks effectively is crucial for a successful usability test. Here’s a step-by-step approach:

  • Identify key user goals

Determine the primary goals users have when interacting with your product. These goals should align with the objectives of your usability study.

  • Break down tasks

Break down each goal into specific tasks. For example, if a goal is to purchase a product, tasks might include searching for the product, adding it to the cart, and completing the checkout process.

  • Create clear instructions

Write clear, concise instructions for each task. Avoid technical jargon and ensure that users understand what they need to do.

  • Sequence tasks logically

Arrange tasks in a logical order that reflects the natural flow of user interactions. This helps users navigate the tasks smoothly without unnecessary confusion.

  • Include realistic scenarios

Frame tasks within realistic scenarios that users might encounter. This makes the tasks more relatable and provides context for their actions.

  • Pilot test tasks

Run the tasks with a colleague or a small group of users to ensure they are understandable and achievable. Make adjustments based on feedback.

  • Prepare to adapt and be flexible 

Be ready to adapt tasks during the session based on user behavior. Sometimes users may take unexpected paths, providing valuable insights.

Template: https://media.nngroup.com/media/editor/2022/05/12/nng-example-facilitator-guide.pdf 

How to analyze the results to inform product design decisions

Analyzing the results of a moderated remote usability test involves several steps. Here’s how to do it:

Review recordings

Watch the session recordings and take detailed notes on user behavior, comments, and any issues they encounter.

Identify patterns

Look for common themes and patterns across different sessions. Note recurring issues or behaviors that multiple users exhibit.

Categorize findings

Organize the findings into categories such as navigation issues, content clarity, functionality problems, and user preferences.

Prioritize issues

Determine the severity and impact of each issue. Prioritize issues that significantly hinder the user experience or prevent task completion.
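One simple way to operationalize this prioritization is to score each issue by frequency (how many participants hit it) times severity (how badly it blocks them). The sketch below assumes findings have already been coded into issue labels with a 1–3 severity scale; both the labels and the scale are illustrative assumptions.

```python
from collections import Counter

# Hypothetical coded findings from several sessions: (issue, severity 1-3).
findings = [
    ("cart button hard to find", 3),
    ("cart button hard to find", 3),
    ("unclear error message", 2),
    ("cart button hard to find", 3),
    ("slow page load", 1),
    ("unclear error message", 2),
]

frequency = Counter(issue for issue, _ in findings)   # participants affected
severity = {issue: sev for issue, sev in findings}    # 1 = minor, 3 = blocker

# Rank by impact: how many people hit the issue x how badly it blocks them.
ranked = sorted(frequency, key=lambda i: frequency[i] * severity[i], reverse=True)

for issue in ranked:
    print(f"{issue}: frequency={frequency[issue]}, severity={severity[issue]}")
```

A weighted ranking like this is only a starting point; a low-frequency issue that blocks task completion entirely may still deserve the top spot after human review.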

Summarize insights

Summarize the key insights and findings in a clear and concise manner. Highlight the most critical issues and their potential impact on user experience.

Create recommendations

Based on the findings, provide actionable recommendations for improving the product. Each recommendation should address specific issues identified during the test.

Use visual aids

Include visuals such as screenshots, video clips, and heat maps to support your findings and make them more compelling.

Using the Template for Moderated Usability Studies

  1. Before the Study:
    • Ensure the spreadsheet is prepared with participant IDs and tasks.
    • Prepare the tasks and scenarios you will share with participants during the session.
  2. During the Study:
    • As the moderator, guide participants through each task.
    • Take notes directly into the spreadsheet or transcribe from recorded sessions later.
    • Use the "Pain Points," "Positive Feedback," and "Additional Comments" columns to capture real-time insights.
  3. After the Study:
    • Review recordings and notes to fill in any gaps.
    • Analyze the data to identify common themes and issues.

Reporting Results

Create a report that highlights key findings, issues, and recommendations. Include both qualitative and quantitative data to provide a comprehensive overview of the study’s results. Use visuals like video clips to support findings and make them more comprehensible. Visual aids can help convey complex information more effectively and engage stakeholders.

Unmoderated Remote Usability Testing

The foundation for usability testing remains the same, requiring a well-defined plan and clear objectives. However, unmoderated usability testing differs significantly in execution and analysis.

Key steps in unmoderated usability Testing

  1. Define Study Goals and Participant Criteria:
    • Clearly outline what you aim to learn and identify the type of participants needed.
  2. Select Testing Software:
    • Choose a tool that supports your study's goals, such as UserTesting, Lookback, or Maze.
  3. Write Task Instructions and Follow-up Questions:
    • Craft clear, specific instructions. Include follow-up questions to gather qualitative feedback.
  4. Pilot Test:
    • Conduct a trial run to identify and fix any issues with task instructions or technical setup.
  5. Recruit Participants:
    • Use screening surveys to ensure participants match your target audience.
  6. Analyze Results:
    • Review recordings, identify patterns, and categorize findings. Use both qualitative and quantitative data for comprehensive analysis.
    • Use the same analysis template as for Moderated Tests

Reporting Results

Create a report highlighting key findings, issues, and recommendations. Use visuals like screenshots and video clips to support your insights and make them more comprehensible.

Use the same report template as for the moderated tests.

For detailed guidance on each step, refer to the full article by Nielsen Norman Group.

All the details you need about usability testing are on the NN/g website:

  • Unmoderated User Tests: How and Why to Do Them — the 6 steps for running unmoderated usability testing are: define study goals, select testing software, write task descriptions, pilot the test, recruit participants, and analyze the results.
  • Usability Testing 101 — UX researchers use this popular observational methodology to uncover problems and opportunities in designs.

If you want to become a true expert in remote usability testing, moderated or unmoderated, as well as in other remote user research methods, take the NN/g courses on Remote User Research and Usability Testing.