Usability Techniques
Analyzing and Reporting Usability Data

by Chauncey Wilson
Usability Interface - October 1997

Just-In-Time Analysis

Just-In-Time (JIT) analysis involves having developers observe usability test sessions, record problems, and then immediately generate a prioritized master list of problems. In the ideal environment, developers take this list and immediately fix as many problems as possible. Usability specialists may or may not collaborate on the fixes to the problems.

The JIT method of data analysis has the virtues of immediacy, rapid turnaround, and team involvement; however, there are several disadvantages. First, this type of analysis is problem-focused rather than goal-focused: long lists of problems are generated, but they have no clear relation to specific usability goals. Second, developers may not be able to fix things immediately, so the context of a problem may be lost by the time it is addressed. Third, JIT analysis requires that the entire development team observe the testing sessions, since problems may occur that are the responsibility of different developers. This is a laudable goal, but perhaps a hard one to achieve with large teams. Strong management support helps ("Get thee all to the usability test.").
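
As a rough illustration of the bookkeeping a JIT session produces, the Python sketch below merges the problems recorded by several observers into one master list, keeps the highest priority assigned to each duplicate, and sorts the result. The observer notes and the 1-to-3 priority scale are hypothetical.

```python
# Minimal sketch of building a prioritized master list after a JIT session.
# The observer notes and the priority scale (1 = high, 3 = low) are hypothetical.

from collections import Counter

# Each observer hands in a list of (problem description, priority) pairs.
observer_notes = [
    [("Save button label is ambiguous", 2), ("No undo after delete", 1)],
    [("No undo after delete", 1), ("Error message offers no recovery advice", 2)],
    [("Font too small in the results list", 3), ("No undo after delete", 1)],
]

# Merge duplicate reports; keep the highest (numerically lowest) priority seen.
master = {}
report_count = Counter()
for notes in observer_notes:
    for problem, priority in notes:
        master[problem] = min(priority, master.get(problem, priority))
        report_count[problem] += 1

# Sort by priority, then by how many observers reported the problem.
for problem in sorted(master, key=lambda p: (master[p], -report_count[p])):
    print(f"P{master[problem]}  reported by {report_count[problem]} observer(s): {problem}")
```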

Impact Analysis

Impact analysis is the most powerful technique for analyzing usability data, but also the most time-consuming. It is appropriate when you collect quantitative measures, such as time to complete a task, time spent in errors, and number of errors. The analysis could, for example, indicate how much of the task time the user spent in errors compared to productive work. A typical result might be that the user spent 40% of the total task time in non-productive errors.
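
The underlying arithmetic is simple once the error episodes have been timed. A minimal sketch in Python, with hypothetical task and error-episode durations:

```python
# Sketch of an impact-analysis calculation: what fraction of total task time
# was spent in non-productive error episodes? All numbers are hypothetical.

total_task_seconds = 600                 # e.g., 10 minutes to complete the task
error_episodes_seconds = [90, 45, 105]   # durations of observed error episodes

error_time = sum(error_episodes_seconds)
productive_time = total_task_seconds - error_time
error_share = error_time / total_task_seconds

print(f"Time in errors:  {error_time} s ({error_share:.0%} of task time)")
print(f"Productive time: {productive_time} s")
```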

There are several advantages to impact analysis. First, it provides quantitative data that can be compared against usability goals and used in a cost-benefit analysis, something that is missing from many "discount" usability methods. "Discount" usability techniques like user interface inspections are cheap to run, which is great, but they do not yield detailed quantitative data. If your company is really interested in the impact of usability work on a product, then some performance testing and impact analysis would be beneficial. With impact analysis, you can show quantitative improvements as the product evolves. The disadvantages of impact analysis include the requirement for quantitative performance data and the time required to analyze the data.

Long Lists of Usability Problems

All usability tests generate lists of problems. Some development teams put each problem on a sticky note, stick the notes on a wall, group the notes into predefined or emergent categories, and then assign a priority to each problem (this is often referred to as "affinity analysis"). If the number of participants in a usability test is small, you can put each participant's problems on a different color of sticky note. When you group the problems, the color coding highlights whether many participants had the same problem or one participant had many problems. A minor variation on the sticky-note approach is to put the problems on 3" x 5" note cards and arrange them on a large flat surface for grouping.
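
The color coding translates directly into a tally: for each grouped category, count how many distinct participants contributed a note. A small sketch, with hypothetical participants and categories:

```python
# Sketch of the "colored sticky note" tally: for each problem category,
# how many distinct participants ran into it? The data below is hypothetical.

from collections import defaultdict

# (participant id, problem category) pairs produced by grouping the notes.
notes = [
    ("P1", "Navigation"), ("P2", "Navigation"), ("P3", "Navigation"),
    ("P1", "Terminology"), ("P1", "Terminology"),
    ("P2", "Error messages"), ("P4", "Error messages"),
]

participants_per_category = defaultdict(set)
for participant, category in notes:
    participants_per_category[category].add(participant)

# Categories hit by many different participants are broader problems than
# categories hit repeatedly by a single participant.
for category, people in sorted(participants_per_category.items(),
                               key=lambda item: -len(item[1])):
    print(f"{category}: {len(people)} participant(s) -> {sorted(people)}")
```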

To Report or Not to Report

There is an ongoing discussion in usability circles about the importance of formal reports that document the results of usability testing. I think that each usability evaluation should have a formal report that provides some context for the problems. Not all problems can be addressed immediately and memories fade. Usability reports are also important for showing what a usability specialist has done. They can also be used to determine some metrics, such as the number of problems addressed by development or the number of problems that occurred during successive prototypes or versions of a product.

One technique that I use for compiling formal reports is to write a short (two pages maximum) executive summary suitable for a senior manager, plus a detailed report that contains screen shots of user interface objects such as windows, menus, dialog boxes, and error messages. Each screen shot is followed by a table that lists the issues that arose with that object. The table has the following column headers: Problem/Issue, Frequency of Occurrence, Human Factors Principle Underlying This Problem, Priority, Recommended Solution, and Additional Notes. The Human Factors Principle column describes the principle or guideline that underlies the problem; principles might include consistency with user expectations or a style guide, poor feedback, no clear exit, and an error message with no solution. I think it is important to provide a human factors principle for each problem in the report. I often put verbatim comments from users in the Problem/Issue column to provide context for the readers.

If you want to list problems by priority rather than by interface component, you can create one large table and then sort on the priority column. This puts all high-priority items at the top. You can then split the table and add any graphics or headers that you need.
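
If the findings are also kept in a structured form rather than only in a word-processor table, sorting by priority is trivial. A sketch, assuming each row carries the columns described above; all field values are invented for illustration:

```python
# Sketch of sorting report findings by priority. Each dictionary mirrors the
# columns of the report table; the example rows are hypothetical.

findings = [
    {"issue": "Error message offers no solution", "frequency": 5,
     "principle": "Error messages should suggest a recovery path",
     "priority": 1, "recommendation": "Add a recovery step to the message"},
    {"issue": "Dialog box lacks a Cancel button", "frequency": 3,
     "principle": "Provide a clear exit", "priority": 2,
     "recommendation": "Add Cancel and map it to the Esc key"},
    {"issue": "Inconsistent capitalization in menus", "frequency": 2,
     "principle": "Consistency with the style guide", "priority": 3,
     "recommendation": "Follow the style guide's capitalization rules"},
]

# Sort by priority (1 = highest), then by how often the problem occurred.
for row in sorted(findings, key=lambda r: (r["priority"], -r["frequency"])):
    print(f"P{row['priority']} ({row['frequency']}x): {row['issue']}")
    print(f"    Principle:      {row['principle']}")
    print(f"    Recommendation: {row['recommendation']}")
```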

Videotaped Highlights as Reports

Videotaped highlights are a common form of usability report. Highlight tapes can educate developers who could not attend the usability testing and also corroborate the usability specialist's list of problems. Highlight tapes are powerful, but time-consuming to create: analysis of videotapes can take from three to ten hours per hour of testing. A more efficient approach is to note the approximate time that critical events occur and then fast-forward to the place on the tape that you want the developers to watch. If you know the approximate times and have some editing equipment, you can make a "rough cut" tape. Keep in mind that the length of a highlight tape should be matched to your audience: an executive tape might last ten to fifteen minutes, whereas a tape for the development team might last thirty to sixty minutes and be shown at a lunchtime seminar.
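
A small sketch of the "note the approximate time" approach: log critical events with their tape positions during the session, then hand the team an ordered viewing list. The events and the four hours of testing are hypothetical; the 3-to-10-hours-per-hour figure is the estimate quoted above.

```python
# Sketch of a "rough cut" viewing list: critical events logged with their
# approximate tape positions during the session. The events are hypothetical.

critical_events = [
    ("00:12:30", "Participant 2 cannot find the Print command"),
    ("00:41:05", "Participant 3 loses work after an unconfirmed delete"),
    ("01:03:45", "Participant 4 misreads the error message and retries several times"),
]

print("Suggested viewing order (fast-forward to each position):")
for position, description in critical_events:
    print(f"  {position}  {description}")

# Rough planning figure from the article: full analysis of a tape can take
# roughly 3 to 10 hours per hour of testing.
hours_of_testing = 4
print(f"\nFull analysis estimate: {hours_of_testing * 3}-{hours_of_testing * 10} hours "
      f"of work for {hours_of_testing} hours of tape")
```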

Usability Bug Databases

Bug or defect databases are a primary form of communication for many development teams. If your company accepts usability bugs as "real" bugs, then one form of reporting is to enter all the usability problems into the bug database. Entering problems into a bug database has the advantages that they can be tracked and that many people have access to them. The disadvantage is that many bug systems are complex and not set up for tracking usability bugs. Many companies do not have a description of how to categorize the severity of usability bugs, and usability bugs are often considered low-priority, cosmetic issues, or features. If you are considering using a bug database, some education about usability bugs will likely be necessary.
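
Each entry should stand on its own, with a severity the team recognizes. A sketch of what one usability problem might look like before it is entered; the field names and severity scale are assumptions for illustration, not any particular bug tracker's schema:

```python
# Sketch of a usability problem prepared for entry into a bug database.
# The field names and severity scale are hypothetical, not a real tracker's API.

usability_bug = {
    "summary": "Delete operation has no confirmation and no undo",
    "component": "File management",
    "severity": "major",          # e.g., blocker / major / minor / cosmetic
    "steps_to_reproduce": [
        "Select a document in the file list",
        "Press Delete",
    ],
    "observed": "Document is removed immediately; 4 of 6 participants lost work",
    "expected": "Confirmation prompt, or an Undo command after deletion",
    "principle": "Support error recovery",
}

for field, value in usability_bug.items():
    print(f"{field}: {value}")
```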

Most bug databases require that each entry represent a separate problem. A mistake that usability specialists sometimes make is putting an entire usability report into the database as a single entry. Make sure that you investigate the rules for entering and updating bugs, and find a mentor who will explain the "culture" of the database. I've gotten in trouble several times for not following the rules for bug databases and found that the best way to get out of trouble was to find someone in development or quality assurance who could help me avoid neophyte mistakes.

Get Usability Feedback on Your Reports

The content, format, and wording that you use for your reports should be evaluated for usability. You might want to create a template of your report format, verify that it will be useful to the intended readers when you first start working with them, and then get feedback on subsequent reports. Asking the development team to critique your report can also provide a small political benefit, assuming that you are willing to make changes to your next report. Some questions you might ask are whether the level of detail is appropriate, whether the priorities and recommendations are clear, and whether the format makes the findings easy to act on.

Cautionary Notes About Usability Reports

Usability reports have political consequences, so make sure that you are clear about who gets the usability report. I always ask who should be on the mailing list for the report and do not send it to anyone else unless I ask permission from the development manager.

Make sure that you allocate time for writing reports. Getting usability data back to the development team quickly enhances your credibility. When I make a contract to do some usability testing, I always try to include a statement such as "The usability report will be ready within five working days after the test is complete" and then stick religiously to that schedule.

Keep a library of reports that you can search quickly. Not all problems can be fixed immediately and you may need to revisit your usability data often. Being able to pull up old data quickly impresses people.
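
Even a flat directory of plain-text reports can be searched quickly. A minimal sketch, assuming the reports are stored as .txt files in a hypothetical reports/ directory:

```python
# Minimal sketch of searching a library of past usability reports.
# Assumes plain-text reports in a hypothetical "reports/" directory.

from pathlib import Path

def search_reports(keyword, report_dir="reports"):
    """Print each report file and line that mentions the keyword."""
    for path in sorted(Path(report_dir).glob("*.txt")):
        for number, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if keyword.lower() in line.lower():
                print(f"{path.name}:{number}: {line.strip()}")

# Example: find every past mention of problems with error messages.
search_reports("error message")
```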

Please do not forget to list the POSITIVE things that you find about a product during a usability evaluation, and include them in your executive summary and in any summary or conclusions in the long version of your report.
