How to perform a heuristic evaluation

Orientation

An orientation or training session preceding the evaluation process gives the evaluators background on the heuristics, the evaluation method, and the system to be evaluated.

The orientation usually begins with an introduction to the heuristics and the evaluation method to be used. Naturally, it will not be necessary to teach the heuristics to evaluators who are UI specialists. Even so, the orientation gives the evaluators a common vocabulary for referring to usability problems and ensures they consider a wide variety of usability concerns when judging the interface [13].

More than likely the evaluators will not be domain experts, since double specialists (people expert in both usability and the domain) are difficult and expensive to find. To increase their knowledge of the system, the evaluators are given a short lecture on the system to be evaluated. This by no means makes them experts in the domain, but it gives them some notion of the purpose of the system.

If the evaluators are not knowledgeable about the domain, they will most likely use scenarios to perform the evaluation (see "Scenario Vs. Self-Guided Exploration" for more details). In that case, the evaluators are presented with the specific scenario that will be used to explore the system. To ensure the evaluators bring a fresh and unbiased perspective, they are not shown any screen dumps of the actual system [13].

The orientation session will generally last around two hours.

Evaluation Process

In principle, the evaluators decide on their own how they want to proceed with evaluating the interface. A general recommendation has them going through the interface at least twice [13]. The first pass helps the evaluator get a feel for the "flow" of the interaction and the general scope of the system; it also builds further understanding of the domain, given that the evaluators are not experts in it. The second pass allows the evaluator to focus on specific interface elements while knowing how they fit into the larger whole.

As the evaluators go through the interface, they inspect the various dialogue elements and compare them with a list of heuristics. Karat et al. [5] found that experts did not need the heuristics in front of them during the evaluation process: they were already familiar with the concepts from their experience with graphical user interfaces and other systems. They did, however, think that less experienced evaluators would find it very useful to have the heuristics present. Ultimately, whether the heuristics are kept at hand during the evaluation depends on the preference of the inspector performing it. The evaluator is not limited to finding problems that violate the general heuristics; if any additional usability problems outside the scope of the heuristics are found, they should also be noted.
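The exact checklist depends on the study; one common choice is Nielsen's ten usability heuristics [13]. Purely as an illustration (the method itself prescribes no particular tooling), a minimal Python sketch of such a checklist that an evaluator or observer could keep at hand during a pass:

    # Nielsen's ten usability heuristics [13], kept as a plain checklist
    # that an evaluator can consult while inspecting dialogue elements.
    NIELSEN_HEURISTICS = (
        "Visibility of system status",
        "Match between system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Help users recognize, diagnose, and recover from errors",
        "Help and documentation",
    )

    for number, heuristic in enumerate(NIELSEN_HEURISTICS, start=1):
        print(f"{number:2d}. {heuristic}")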

Results can be recorded either as written reports from each evaluator or by having the evaluators verbalize their comments to an observer as they go through the interface [13]. If observers are used, it is important that they have a good understanding of the interface so they can answer the evaluators' questions during the evaluation. This is distinctly different from usability testing, where observers do not interject during the evaluation process. The result is a list of usability problems in the interface, with references to the heuristics that were violated.
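As a sketch of what such a problem list might look like in structured form (the record layout and field names below are illustrative assumptions, not something the method prescribes):

    from dataclasses import dataclass, field

    @dataclass
    class UsabilityProblem:
        """One usability problem reported by one evaluator."""
        evaluator: str     # who reported the problem
        location: str      # dialogue element or screen involved
        description: str   # the problem, in the evaluator's words
        heuristics: list[str] = field(default_factory=list)  # violated heuristics, if identified

    # Problems outside the scope of the heuristics are recorded too,
    # simply with an empty heuristics list.
    example = UsabilityProblem(
        evaluator="Evaluator 1",
        location="Save dialog",
        description="No feedback is given after the Save button is pressed.",
        heuristics=["Visibility of system status"],
    )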

Nielsen [13] found that evaluators were generally not very good at stating which heuristic was violated by each usability problem, or at classifying the severity of the problem, during the evaluation session. They were more focused on inspecting the interface and finding new usability problems. Interruptions by observers asking for this information were found to interfere too much with the evaluators' workflow. For these reasons, it is recommended that finding usability problems and analysing them be treated as two separate activities and not combined in a single session.

Typically, a heuristic evaluation session for an individual evaluator lasts one or two hours.

Debriefing Session

After the last evaluation session, Nielsen [13] recommends holding a debriefing session. Participants should include the evaluators, any observers used during the evaluation sessions, and representatives of the design team. The debriefing is conducted primarily in a brainstorming mode, focusing on severity ratings of the usability problems, possible redesigns to address the major problems, and general problems of the design. It is also a good opportunity to discuss the positive aspects of the design, since heuristic evaluation does not otherwise address this important issue, and it allows the evaluators to form a group perspective on the problems found during inspection.
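Since the debriefing draws on severity ratings collected from several evaluators after the sessions, a small amount of bookkeeping can help order the discussion. A minimal sketch, assuming ratings on Nielsen's 0 (not a problem) to 4 (usability catastrophe) severity scale [13] and using the problem description as a merging key (both the sample data and the key choice are assumptions for illustration):

    from collections import defaultdict
    from statistics import mean

    # (problem, severity) pairs collected from the evaluators after the
    # sessions, on Nielsen's 0..4 severity scale [13]; sample data only.
    ratings = [
        ("No feedback after Save", 3),      # evaluator 1
        ("No feedback after Save", 2),      # evaluator 2
        ("Inconsistent menu wording", 1),   # evaluator 3
    ]

    by_problem = defaultdict(list)
    for problem, severity in ratings:
        by_problem[problem].append(severity)

    # Mean severity gives the debriefing a rough order of discussion,
    # most severe problems first.
    for problem, scores in sorted(by_problem.items(),
                                  key=lambda item: mean(item[1]),
                                  reverse=True):
        print(f"{mean(scores):.1f}  {problem}  (reported by {len(scores)} evaluator(s))")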
