
Usability testing: qualitative studies

How to use usability testing to evaluate your digital health product.

This page is part of a collection of guidance on evaluating digital health products.

Usability testing looks at whether a digital product or service is usable, effective and acceptable to users.

Participants are asked to complete specific tasks using the product you have designed. They may be asked to think aloud as they complete tasks or reflect on their experience of using the product or prototype afterwards. The insights you get are usually qualitative, but you can do quantitative usability testing by introducing metrics such as task completion time or clicks to completion.
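If you do introduce metrics, it helps to record them consistently for every participant. As a minimal sketch (in Python, with invented field names and data – nothing here comes from a specific tool), you might summarise session logs like this:

    # Illustrative only: summarise simple quantitative usability metrics
    # from hypothetical session logs. Field names and values are invented.
    sessions = [
        {"participant": "P1", "completed": True, "seconds": 42.0, "clicks": 5},
        {"participant": "P2", "completed": True, "seconds": 71.5, "clicks": 9},
        {"participant": "P3", "completed": False, "seconds": 120.0, "clicks": 14},
    ]

    # Average time and clicks only over participants who completed the task,
    # and report the completion rate alongside for context.
    completers = [s for s in sessions if s["completed"]]
    completion_rate = len(completers) / len(sessions)
    mean_seconds = sum(s["seconds"] for s in completers) / len(completers)
    mean_clicks = sum(s["clicks"] for s in completers) / len(completers)

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Mean time on task (completers only): {mean_seconds:.1f}s")
    print(f"Mean clicks to completion: {mean_clicks:.1f}")

Averaging only over participants who completed the task avoids mixing abandoned or timed-out attempts into time-on-task figures.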

What to use it for

Usability testing can be used at all stages of product development, from early prototypes to iterative improvement of existing products. Use it to:

  • learn more about your users and their needs
  • test whether you are meeting user needs
  • work out how to make your product better

If you want your product to be published on the NHS Apps Library, you will need to meet their standards. Answering the NHS Apps Library Digital Assessment Questions is the first step. These include questions about the usability of the product.

Pros

Benefits include:

  • usability testing is a well-established way to identify common usability issues
  • it is relatively easy to do as you do not need any special equipment or many participants
  • it is a low-cost, low-risk way to gather feedback

Cons

Drawbacks include:

  • because you are asking users to complete a task in a controlled environment, it may not reflect how they naturally use the product or the challenges they would face in real life
  • it can be time consuming to conduct usability testing with participants, especially if you need to travel to where they are located
  • it can be challenging to find the right participants – if they are not right, the insights you gather can have lower value or be misleading

How to carry out usability testing

Have clear tasks and success criteria. For example, you might ask a user to find a specific piece of information; the success criterion is that they find it. The tasks you ask participants to complete and the success criteria should reflect the product's stage of development.

Your prototype product should be suitable for the question you want to answer. If the product is at a very early stage you will have high-level questions, so the prototype can be simple and low-tech – for example, it could even be on paper. If the product is at the later stages then it should be closer to the final product – for example, it could be the live app.

Recruit participants who are potential users. It is good to recruit people who have been in a situation in the last 6 months that would have led them to use your product. This helps to make sure participants base their interactions on real experience rather than a hypothetical future.

For qualitative usability testing, recruit 5 to 6 participants. If you are doing quantitative usability testing, you should recruit more.
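The usual justification for small qualitative samples is Nielsen and Landauer's problem-discovery model: the proportion of usability problems found by n participants is roughly 1 − (1 − p)^n, where p is the probability that any one participant encounters a given problem (commonly estimated at about 0.31). A quick sketch of what that implies:

    # Nielsen and Landauer's problem-discovery model:
    # proportion of problems found = 1 - (1 - p)**n.
    # p = 0.31 is the commonly quoted estimate, not a universal constant.
    p = 0.31
    for n in (1, 3, 5, 10, 15):
        found = 1 - (1 - p) ** n
        print(f"{n:>2} participants: about {found:.0%} of problems found")

On these assumptions, 5 participants surface roughly 85% of problems, which is why several small rounds of testing with fixes in between tend to beat one large round.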

The GOV.UK Service Manual has guidance on recruiting participants and conducting usability testing.

When running the session, it is best to have one person asking the questions (the facilitator) and one person taking notes (the support). You may want to record the session so that it is easy to refer back to. You could record the audio, the screen and the users’ hands as they use the screen.

You can carry out usability testing in person or remotely. There is software you can use for remote usability testing. However, it can be harder to guide participants or understand exactly how they are using the prototype, so it is best to reserve remote testing for the later stages of product development.

Give the participant a task and then let them complete it. Try to resist influencing how they engage with the prototype or giving them too many instructions.

Take notes on how they use the prototype and the comments they make.

After all the sessions are complete, look back at whether users completed the tasks and if there were any common errors.
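A simple way to do this is to tally outcomes and observed errors across sessions. As a sketch (again in Python, with invented notes purely for illustration):

    from collections import Counter

    # Hypothetical per-session notes: task outcome plus observed errors.
    notes = [
        {"participant": "P1", "completed": True, "errors": ["missed 'next' button"]},
        {"participant": "P2", "completed": False, "errors": ["missed 'next' button", "misread label"]},
        {"participant": "P3", "completed": True, "errors": []},
        {"participant": "P4", "completed": False, "errors": ["misread label"]},
    ]

    completion_rate = sum(n["completed"] for n in notes) / len(notes)
    error_counts = Counter(e for n in notes for e in n["errors"])

    print(f"Tasks completed: {completion_rate:.0%}")
    for error, count in error_counts.most_common():
        print(f"{count} of {len(notes)} participants: {error}")

The errors seen most often are the strongest candidates for the design changes described below.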

Common challenges should be translated into design opportunities or changes that can be tested again.

Example: EPIC HIV

The team wanted to develop a digital behaviour change intervention that would encourage men in rural South Africa to test for HIV and access care. They wanted to make sure that the app they designed was usable, engaging and effective.

They carried out 4 rounds of usability testing on the app. Each round of testing helped the team improve the design of the app before the intervention went live. This was particularly important because some of the users had low literacy, low technical literacy and low health literacy. Some were not able to read, had not used touch screens before or were not familiar with the information about HIV that was being communicated.

Usability testing happened ‘in the wild’. Researchers recruited participants from the community and conducted the tests in their car or in public spaces (depending on the participant’s preference) to reflect how participants would use the app in the future.

In total, 29 participants were recruited across the 4 rounds. During each round of testing, recruitment became more selective as the team learnt that the older and more rural participants had a harder time using the app. Later testing focused on these groups.

Participants were asked to use the app to complete the task they were given. Researchers observed them using the app and took notes on any usability challenges they experienced, such as not being able to find a ‘next’ button or not listening to important content. Afterwards, they were asked questions about their experience and their understanding of the app in order to assess whether the app was acceptable and communicated the correct clinical messages about testing for HIV.

The interviews were administered in isiZulu by a bilingual (isiZulu and English) facilitator. Between each round of testing, feedback relating to comprehension and key messages was translated into English, synthesised and communicated to the broader team (including a UX researcher, a developer and a clinician) to decide what design changes should happen. Changes were made to the app and it was then retested until the major comprehension and usability issues were addressed.

More information and resources

Usability Testing 101 on the Nielsen Norman Group website.

The Encyclopedia of Human-Computer Interaction, 2nd edition on the Interaction Design Foundation website.

Preece, Rogers and Sharp (2002), Interaction Design.

Diamantidis and others (2015), Remote Usability Testing and Satisfaction with a Mobile Health Medication Inquiry System in CKD.

Colligan and others (2015), Cognitive workload changes for nurses transitioning from a legacy system with paper documentation to a commercial electronic health record – this was a quantitative study of usability using standard measures of cognitive workload.

Updates to this page

Published 30 January 2020
