The usability of a product is contextual: it depends on who the users are, the environments they work in, and the tasks they need to complete. It’s important to measure usability relative to this context, and to collect both quantitative and qualitative data so we can compare results against benchmarks.
3 Metrics to Measure and Quantify Usability
In my work I use the following metrics to quantify usability: task completion rates, time on task, and post-test satisfaction ratings, supplemented by qualitative data about the user experience. There are many more metrics you could measure, but there’s no point spending time on them until you have experience with the basics; often they won’t be relevant to what you’re doing, or the best use of your time. In this tutorial we’ll look at three metrics you can use to measure usability.
What is Usability?
Usability is the intersection of effectiveness, efficiency and satisfaction in a context of use. In other words, it’s all about getting things done, and how happy a user is while trying to get things done. If users are completing their tasks, it isn’t taking too long, they aren’t making too many errors, and they think highly of the application after using it, then it’s a usable product.
How to Measure Usability
We can measure usability by testing a product (or a competitor’s product). An initial test gives us a “before” state, which is just as valuable as the “after” state: it lets you compare your designs, or see how you stack up against your competitors. Once you have an idea of the effectiveness, efficiency, and satisfaction of the current experience, you can set out to create a new design that improves those measures.
The “success rate” (or completion rate) refers to the percentage of participants who correctly achieve each goal. Ideally, before you undertake testing, you will have identified a number of scenarios to test, and participants should complete the tasks without assistance from the test moderator. There are other measures of effectiveness you can take, but this is the core one. In my experience a completion rate of 100% is great, but anything above 78% is acceptable!
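Computing the completion rate is straightforward. Here’s a minimal sketch in Python, using hypothetical task outcomes (the data and function name are illustrative, not from a real study):

```python
# Hypothetical results: True = participant achieved the goal unassisted.
results = [True, True, False, True, True]

def completion_rate(outcomes):
    """Return the percentage of participants who completed the task."""
    return 100 * sum(outcomes) / len(outcomes)

rate = completion_rate(results)
print(f"Completion rate: {rate:.0f}%")  # 4 of 5 participants -> 80%
```

With 4 of 5 participants succeeding, the rate is 80%, which clears the 78% threshold mentioned above.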
How to Resolve Effectiveness Issues
If users are struggling with completing the task, either through frustration or inability to find the next step, you may consider the following:
- Make sure that buttons look like buttons
- Make click-through options obvious (clicking on a tile might seem obvious to you, but it may not be intuitive to users)
- Simplify your workflow
- Reduce the learning curve of using the system by including onboarding if necessary
The efficiency metric refers to the average time it takes to complete each task. Alongside this you can also calculate the range and standard deviation. Average time on task is the main metric you will typically look at, but there are numerous other metrics you can gather:
- Time taken on the first attempt
- Time to perform task compared to an expert
- Time correcting errors
This is not a comprehensive list and you should choose tasks that make sense to you. For example, if you’re testing a very short flow it might not be worth calculating something like time correcting errors.
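The average, range, and standard deviation mentioned above can all be computed with Python’s standard library. The timing data below is hypothetical, purely to show the calculation:

```python
from statistics import mean, stdev

# Hypothetical time-on-task values in seconds for five participants.
times = [42.0, 55.0, 38.0, 61.0, 47.0]

avg = mean(times)                 # average time on task
spread = max(times) - min(times)  # range: fastest vs. slowest participant
sd = stdev(times)                 # sample standard deviation

print(f"mean={avg:.1f}s  range={spread:.1f}s  sd={sd:.1f}s")
```

A large range or standard deviation relative to the mean is itself a finding: it suggests some participants are struggling far more than others.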
User errors are common; these may include unintended actions, slips, and mistakes. I normally record a short description and a severity rating for each error, and classify it under its respective category. Based on industry benchmarks I aim for no more than 0.7 errors per task.
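The errors-per-task figure is just the total error count divided by the number of task attempts observed. A small sketch, using a hypothetical error log (the task names, descriptions, and severity scale are invented for illustration):

```python
# Hypothetical error log: (task, short description, severity 1-3).
errors = [
    ("checkout", "clicked disabled button", 2),
    ("checkout", "mistyped promo code", 1),
    ("search", "applied wrong filter", 2),
]
task_attempts = 5  # total task attempts observed across the session

errors_per_task = len(errors) / task_attempts
print(f"{errors_per_task:.2f} errors per task")  # 0.60, under the 0.7 benchmark
```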
Efficiency metrics are also useful for comparing types of users, for example beginners vs. experts.
How to Resolve Efficiency Issues
If it’s taking users a long time to perform tasks you might consider the following:
- See if there is a mismatch between hyperlinks and the title of the page the link leads to.
- Make sure search results include a description of the link, in addition to the title of the page.
- Design your sitemap in a logical manner.
- Provide an alphabetical index which includes as many categories, content areas, departments and keywords as possible.
Satisfaction can be measured and calculated using the “System Usability Scale” (SUS). The standard scale has ten questions which measure the user’s overall impression of the usability of the software. You can add more questions to your questionnaire, but in my experience it’s best to stick with SUS: it’s so widely used that there are industry benchmarks to measure against. For example, if you run the test with four people and get an overall score of less than 78, it’s probably a good indicator that you need to keep redesigning the workflow or interface to improve users’ level of satisfaction.
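SUS scoring follows a fixed recipe: odd-numbered items are positively worded and contribute (rating − 1); even-numbered items are negatively worded and contribute (5 − rating); the sum is multiplied by 2.5 to give a 0–100 score. Here’s a sketch, with a hypothetical participant’s responses:

```python
def sus_score(responses):
    """Score one completed SUS questionnaire (10 items, each rated 1-5).

    Odd items (positively worded) contribute rating - 1;
    even items (negatively worded) contribute 5 - rating.
    The summed contributions are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical responses from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

To get a study-level score, average the individual participants’ scores.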
How to Resolve Satisfaction Issues
In my experience, the best way to resolve poor satisfaction ratings is to take on board the qualitative feedback and work with the business to make changes to the product or service offering.
If you’re new to measuring usability, start with the basics. Just measure the success rate: the number of people who can complete the task without assistance. If you want to go beyond this, then collect “time on task” data and satisfaction data (with a survey at the end of the study). These measures are nearly always sufficient. If you want to dig deeper make sure that any other additional metrics are relevant and a good use of your time.