You’ve got your product built and out to market. Or maybe you’ve got a clickable prototype built in Adobe XD or another tool, or even some paper wireframes. Either way, you’re now ready to get it under the noses of real people, test your idea and, hopefully, improve your business.

Scary times indeed. 

Why do you need a user testing plan?

A testing plan prepares you for user testing. At its simplest, it helps you decide the areas you are going to concentrate on. At its most complex and most useful, it’s a practical guide that ensures your tests are consistent and thorough, and that they give your team actionable results.

Crucially, it also helps you focus on watching your users. A testing plan will allow you to uncover more about their problems, their needs and how your product might help them. Which is exactly why you’re doing this to begin with.

What goes in a user testing plan?

  • Scenarios
  • Tasks
  • Hoped-for outcomes and a way of scoring them
  • Requirements / assets needed to run each task
  • Notes and comments
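
To make these concrete, a single entry in your plan might look something like this (the content is illustrative, borrowing the travel example used later in this article):

  • Scenario: You need to travel to Oslo by Saturday morning.
  • Task: Go to the MyTickets website and book a good ticket.
  • Hoped-for outcome: the user finds and books a ticket they are happy with, unaided.
  • Requirements: a prototype (or live site) covering the search and booking journey.
  • Notes: space for observations captured during the session.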

So how do you start putting these together?

1. Choose what to test

To create your plan, first make a broad list of all the things you want to test.

Good ways of doing this include:

  1. Identify any assumptions you’ve made about your product or users that you want to verify. Question the most obvious assumptions. It’s strangely hard to do, but invaluable.
    For example, ‘on our website, parents will be able to join parenting groups in their local area’, or ‘users will know they can save their work and edit it later’ etc.
  2. Think about the business objectives which your product is meant to deliver (e.g. increase the number of accounts created by our online shoppers).
  3. (A favourite of mine) think about those aspects of your product which are meant to give value to your users. Be it social, emotional, economic or whatever. Do they really generate value for your users? Do they even know these aspects exist in your product?

Now shortlist.

Whittle them down to a good number to test – and definitely use your business’s objectives to help you prioritise.

Ten things are plenty to test within an hour, per user. Any more risks fatiguing your users, you and any observers.

2. Write your scenarios and tasks

Test scenarios provide users with a real-life context that they might reasonably expect to find themselves in when using your product. They also provide a structure that gives your user a seamless, authentic-feeling experience – i.e. as close a simulation of real life as you can give.

For each scenario you need a general structure something like this:

  • first the framing, or user motivation;
  • then an actionable task for the user. This gives you something to measure, or a completion you can validate as a success.

Define your scenarios in advance!

Defining these scenarios – i.e. writing them down before you get into testing – gives you structure and reliability.

It means you can standardise the tasks, and therefore the test overall, making your results more reliable.

Writing scenarios in advance also helps you avoid changing the way you describe them between users, which would make your tests inconsistent.

Finally, writing scenarios and tasks down helps you avoid poor or leading phrasing when you describe them to your user.

Simple things to avoid include:

  • Unintentionally revealing the steps required to complete an action;
  • Questions that use the same language as the on-screen elements;
  • Turns of phrase that unwittingly reveal the aspects of the product you’re hoping to test;
  • Making the user operate in an unnatural way in order to please you (i.e. doing what they think you are pushing them towards, rather than solving the challenge in their natural way).

And of course, writing test scenarios allows you to identify in advance all the assets you’ll need to run that scenario. For example, if testing paper prototypes, all the paper assets needed to illustrate that user journey and any offshoots the user goes on.

What makes good user testing scenarios and tasks?

As already mentioned, each scenario needs:

  • the framing / user motivation
  • then at least one actionable task

For example, “You need to travel to Oslo by Saturday morning. Go to the MyTickets website and book a good ticket.”

Keep tasks ‘human’ to reap insights

The phrasing here of “book a good ticket” allows the user to find and book whatever they think is the best approach to travel.

Had you just said “book a train ticket for Saturday morning”, they might have done that, but you would have missed discovering that they’re the kind of person who would love to travel by boat for a change; or who always hunts out deals; or who hates overnight travel; etc.

These would be missed business and design opportunities.

Make each task a real mission

Note in this example that the user was asked to find their own way to the MyTickets website. It was not already open and ready for them. 

Seeing how users find the product, navigate to it, and react when it first opens should also be part of the test. It makes the scenario feel real, like a mission, and the route they take may surprise you.

Outcomes and scoring

In order to track how well users are able to complete the tasks, you need a scoring grid. Nothing fancy. Just a simple grid, with the leftmost column containing the tasks and their hoped-for outcomes, one per row, and then a column for each tester.

For example, with two tasks and three testers (the results here are purely illustrative):

  Task and hoped-for outcome                            | User 1 | User 2 | User 3
  Book a good ticket to Oslo (books a suitable ticket)  |   ✓    |        |   ✗
  Create an account (completes sign-up unaided)         |   ✓    |   ✓    |   ✓

For each user, if they successfully complete a task or answer a question according to the expected outcomes, they get a tick. If not, an X. If they only partially complete it, you leave it blank.

Success Rates

Another way of scoring is to give user tasks a % success rate. This allows you to work out averages and identify the areas of your product that need improving.

A typical scoring system would be:

  • 100% completion rate = user completed the task
  • 50% rate = a partial success in completing the task
  • 0% rate = the user was unable to complete the task at all
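
If you record these scores in a simple spreadsheet or script, the averages fall out easily. Here’s a minimal Python sketch (the task names and scores are made up purely for illustration):

```python
# Minimal sketch: average each task's success rate across testers.
# Task names and scores below are illustrative, not real data.

scores = {
    "Book a good ticket to Oslo": [100, 50, 100, 0],  # one score per tester
    "Create an account": [100, 100, 50, 50],
}

for task, results in scores.items():
    average = sum(results) / len(results)
    print(f"{task}: {average:.0f}% average success rate")
```

The tasks with the lowest averages point you at the areas of your product most in need of attention.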

A note on terminology:

Using “success rates” instead of something like “failure rates” has a more positive connotation and is based on empathy for our users – which is fundamental to good design! “Failure rates” has an inherent critical aspect to it, both of the test subjects themselves and of the design decisions. Or, putting it another way, of the design team!

However, product development is about iterating and improving. It’s aspirational. And it’s much more motivating for the people making design decisions, prototyping and iterating to talk about success rates rather than failure rates.

Finally, I believe “failure rate” has a focus that is more internal to the organisation and doesn’t celebrate users. There is an undercurrent suggesting that ‘users are failing to understand our designs’ rather than ‘we aren’t meeting users’ needs’.

Notes and comments

Of course, there’s more to user testing than generating scores and ticking boxes. The greatest amount of learning comes from simply watching your users and understanding what they do and how they feel. For this, someone needs to take notes.

If you’re the person facilitating the test, don’t worry about taking notes yourself. As the facilitator, your role is to concentrate on engaging with the tester: making them feel comfortable, keeping them talking and helping them verbalise what they’re doing throughout.

Instead, have an observer do the note-taking – ideally, several observers. Set up a webcam and microphone and send the feed to another room. If you can, focus a second camera on the product itself as well, so the observers can watch as the tester navigates, clicks around, etc.

The observers should capture:

  1. Noted behaviours (e.g. body language, shaking their head, flipping back and forwards between screens)
  2. Likes
  3. Dislikes
  4. Feelings
  5. Questions
  6. Ideas

Capture them one per Post-It note, keeping each short and clear. Be sure to also note which category each belongs to (i.e. add “Likes”, “Behaviour” etc. in the corner) and set them aside.

Post-test results

Once all the tests are complete, it’s time to bring everything together. 

Take all your notes and do a ‘Saturate and Group’ exercise: as a team, grab notes and post them on a wall. Then, collectively look for common themes within the feedback and group them, moving the Post-It notes accordingly. Finally, give each grouping a title.

Taken together with the task scoring grid, you’ll now have at least a good half dozen clear opportunities for improving your product.

Testing complete!