Reduced to its most basic definition, a heuristic evaluation is an assessment of the user experience against established principles of usability, to see how the interface performs relative to industry standards and norms. The assessor’s job is to identify where the product departs significantly from the established principles (or heuristics) and how this impacts the user experience. It is often desirable to have more than one assessor (three to eight being the optimum), as each is likely to pick up on different aspects according to their own discipline and experience.
Heuristic analysis is usually employed quite early in the design process to ensure that the wireframe or prototype conforms to the standards, principles and objectives that have been set for it – and, if not, where it is failing and how.
How to Conduct a Heuristic Evaluation
The first step in heuristic evaluation is deciding on the principles you are going to assess against. There are a number of sources you can use for this, or you can compile your own set from existing principles or build your own framework. The usability heuristics compiled by Jakob Nielsen and Rolf Molich are a good place to start.
- Status visibility;
- Accessibility of language and concepts;
- User control;
- Consistency and standards;
- Eliminating errors;
- Visibility to aid recognition;
- Flexibility and efficiency;
- Minimalist design;
- Error recovery;
- Help facility and text.
As well as the principle set, you also need an up-to-date understanding of your users, their expectations, and how they interact with your business; any company research, statistics, and other relevant information should therefore be analysed and absorbed by your evaluators before the heuristic evaluation begins. You then need to devise a clear evaluation system so that all evaluators assess and report against the same criteria. Make sure you give them clear guidance on how the assessment should proceed, what you want them to assess, and exactly how to record their findings.
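One way to keep evaluators reporting to the same criteria is to record each finding against a named heuristic with a severity rating. The sketch below is purely illustrative: it assumes the commonly used 0–4 severity scale (0 = not a problem, 4 = usability catastrophe), and the names and structure are hypothetical, not a standard tool.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Finding:
    evaluator: str   # which assessor reported it (several are typical)
    heuristic: str   # which principle the interface departs from
    severity: int    # 0-4 rating on the assumed severity scale
    note: str        # where and how the problem shows up

def summarise(findings):
    """Group findings by heuristic, keeping a count and the worst
    severity, so results from different evaluators are comparable."""
    summary = defaultdict(lambda: {"count": 0, "worst": 0})
    for f in findings:
        entry = summary[f.heuristic]
        entry["count"] += 1
        entry["worst"] = max(entry["worst"], f.severity)
    return dict(summary)

findings = [
    Finding("A", "User control", 3, "No undo after deleting a record"),
    Finding("B", "User control", 2, "Back button loses form data"),
    Finding("A", "Status visibility", 1, "Upload shows no progress"),
]

report = summarise(findings)
```

A summary like this makes it easy to see at a glance which heuristics attract the most (and most severe) violations, which helps when prioritising fixes or follow-up research.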
Heuristic analysis confers many benefits, especially when used prior to, or in conjunction with, other research methodologies.
- It is a relatively swift and efficient way of gathering data about likely performance and delivery of objectives;
- It is a good means of establishing baselines or identifying issues that need addressing or further research to clarify and test;
- It provides a strong indication of likely usability performance and any issues likely to arise;
- It can provide a clear framework and reference point for all future usability testing.
It does, however, have a small number of possible drawbacks:
- You are limited by the available experts and their expertise and backgrounds;
- If specific skills are in short supply you might have to pay a lot to engage the relevant expertise;
- The evaluators are not necessarily your users or target groups so they might be using a different value set in their assessment;
- They can only assess within the framework and criteria you have given them – if the framework is faulty or limited, their evaluation will be too.
Despite these drawbacks, heuristic evaluation is still a revealing and helpful process – if only because it forces you to analyse and define your own objectives and principles. If you would like to learn more about this type of analysis, why not ring us on +44(0)800 0246247 or email us at firstname.lastname@example.org.