This article was originally printed in the January 2003 issue (Vol 9, No. 3)


About the Author

Bill MacElroy is President of Socratic Technologies (San Francisco). Founded in 1994, Socratic is a research-based consultancy that builds proprietary, interactive tools that accelerate and improve research methods for the study of global markets. More information and animated demonstrations can be found at the company's Web site.

Usability Interface, the STC Usability SIG Newsletter

The Role of Online Surveys in the Usability Assessment Process

by Dr. William MacElroy

I have attended several conferences at which I witnessed a growing debate over the role of survey work in the field of usability. Some practitioners are of the opinion that "usability is usability" and "surveys are surveys", and only rarely do the two meet in a harmonious exchange. The more I have considered this viewpoint, the more convinced I am that it is probably valid, unless the usability specialist takes the lead in assimilating survey output into the process of evaluating the overall effectiveness of Web sites and online applications.

Initially, my opinion was that online surveys are appropriate for some phases of usability work, while the more traditional, ethnographic forms of observational work are better suited for others. The more one thinks about the role of surveys, however, the clearer it becomes that they are not well suited to measuring the usability of the human-computer interface itself. They are useful for profiling the background, market requirements, and perceptual feedback for various types of Web-, documentation-, and software-based systems.

Online surveys about Web sites provide context and post-design confirmatory measurements, but cannot assess usability as we have come to understand the mandates of the term.

To determine how online surveys and usability testing fit into a fully developed research program, I would like to suggest some nomenclature to describe the broader development process. I believe there are four key phases that together encompass one full cycle of development.

Environmental Scanning

Before usability can be accurately assessed, the researcher needs to know the context in which the site, documentation, or software must perform. Similar to the market-requirements phase of software development, survey work can be useful in determining the precursors to a successful product, such as relative demand for functionality, frequency-of-use projections, baseline satisfaction with current solutions, and anticipation of future needs. Because the target audiences for many of the systems we work with are already online, they form a natural population for Web-based surveys.

The output from survey work can attach a weighting to issues that helps the usability professional assess their relative criticality (i.e., if something rates very low on the relative importance scale, it should probably be low on the list of things to test or for which to recommend radical changes).
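One way such a weighting might work in practice is sketched below. The issue areas and scores are invented for illustration; the idea is simply that survey-derived importance scales the severity of each usability finding, so effort flows to the areas users actually care about.

```python
# Hypothetical illustration: weight usability issues by the mean
# importance that survey respondents assigned to the related area.
# All names and scores are invented for the example.

# Mean importance from the survey (1-5 scale) per feature area.
importance = {"search": 4.6, "checkout": 4.2, "help pages": 2.1}

# Severity of the usability problem found in each area (1-5 scale).
severity = {"search": 3.0, "checkout": 4.5, "help pages": 4.8}

# Priority = importance x severity: a severe problem in an area users
# rate as unimportant falls below a milder problem in a critical area.
priority = {area: importance[area] * severity[area] for area in importance}

for area, score in sorted(priority.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{area}: {score:.1f}")
```

Note that the severe "help pages" problem ends up last, because respondents rated that area as unimportant.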

In the same phase, survey work should be used to give usability assessors a sense of the competitive environment. All Web sites, technical documentation, and software applications are judged not in isolation, but within the context of all the other sites, manuals, and software that the user has experienced. Survey work can yield insights into preferences (and the reasons for them) that define the relevant use environment and experience set. Comparative assessments can be enhanced through multivariate statistical analysis to derive the drivers of satisfaction and the relative weights of performance-based variables.
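As a much-simplified stand-in for the multivariate analysis described above, the sketch below correlates each attribute rating with overall satisfaction to rank candidate drivers. The respondent data and attribute names are invented, and a real driver analysis would use a proper multivariate model (e.g., multiple regression) rather than one-at-a-time correlations.

```python
import math

# Hypothetical respondent data: attribute ratings and overall
# satisfaction on 1-5 scales. All values are invented.
responses = [
    # (navigation, speed, content, overall satisfaction)
    (4, 5, 3, 4),
    (2, 4, 3, 3),
    (5, 5, 4, 5),
    (1, 3, 2, 2),
    (3, 2, 4, 3),
]

def pearson(xs, ys):
    """Pearson correlation: a rough proxy for how strongly an
    attribute moves with overall satisfaction."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

overall = [r[3] for r in responses]
for i, name in enumerate(["navigation", "speed", "content"]):
    ratings = [r[i] for r in responses]
    print(f"{name}: r = {pearson(ratings, overall):+.2f}")
```

Attributes with a high correlation to overall satisfaction would be treated as candidate drivers and weighted accordingly in the comparative assessment.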

At a minimum, survey work should provide the usability professionals with a context in which task programming should be structured and a use-environment framework for interpreting feedback.

Strategy and System Design

Usability is ideally an iterative process. During the planning and design phase, myriad alternatives and options must be sifted through in order to devise the system that optimizes the performance of its parts. This complexity calls for a mix of feedback mechanisms: some survey work, some usability testing. Working with low-fidelity prototypes and other exploration techniques, the usability specialist is ideally equipped to brainstorm ideas and concepts with potential users.

It should be noted that surveys are notoriously ineffective for generating new ideas or approaches. However, when concepts can be articulated, surveys are ideal for measuring reactions to demonstrated functionality. This is a natural benefit of online survey methodology, because animations, prototype site functionality, links to information, and the like, can provide a much deeper level of understanding as to what the respondent is being asked to assess. In addition, survey work can be used to cull out non-starters in terms of general conceptual approaches, particularly when there are many options from which to choose.

When these option-reduction surveys are performed online, they are usually faster than other modes of collecting data. The information can also be reported in real time, which can radically reduce the time-to-decision cycle.

An example of real-time reporting is shown in Figure 1. This information, collected via an online survey about the use of a Web site for signing up for and attending events at a marathon race in San Francisco, showed the proportions of people who answered questions in a certain way. As each person completed a survey, the event planners could see the numbers change, and at the end of the survey (which lasted only four days), decisions were made immediately about other events accompanying the race itself.

Figure 1. Example of Real-Time Online Frequency Reporting
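The mechanics behind this kind of display are simple: a running tally is recomputed as each completed survey arrives, so decision makers can watch proportions shift during the field period. The sketch below illustrates the idea with an invented question and a small stream of answers; a production system would update a dashboard rather than print to the console.

```python
from collections import Counter

# Minimal sketch of real-time frequency reporting. The answers
# are invented for the example.
tally = Counter()
total = 0

def record(answer):
    """Fold one new response into the running frequency report."""
    global total
    tally[answer] += 1
    total += 1
    report = {a: round(100 * n / total) for a, n in tally.items()}
    print(f"after {total} responses: {report}")

# Responses streaming in as respondents finish the survey.
for answer in ["yes", "yes", "no", "yes", "undecided"]:
    record(answer)
```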

In this phase, the usability professional should be armed with survey work to limit the realm of possibilities and focus on areas of greatest potential that require additional exploration prior to final articulation of the design.

Classical Usability Assessment

During the mid- and post-development phase, there is no substitute for pure, observational, directed usability testing. In this arena, surveys are poorly suited to uncovering the root causes of perceived functionality problems, user confusion, or navigational inefficiencies. Without probing, in-depth exploration of these issues, the gross characterizations of "problems" that might be obtained through an online survey are an insufficient basis for recommendations and solutions. Surveys, even ones that allow the user to experience the site, should not be substituted for observational "usability testing" in this area.

Perceptual Affirmation

When the site, documentation, or software has been modified to incorporate the prescriptive input from usability testing, online survey work can once again measure how well the designers have captured the essence of the usability counsel. It is at this phase that confusion has arisen, with survey-based assessments being referred to as usability testing. In my opinion, this work is more correctly called perceptual affirmation, because although some online surveys include task-based stimuli, the feedback generally lacks the probing, iterative measurement found in the classical usability assessment procedure.

What should be obtainable through online survey work, however, is a general, quantifiable assessment of overall attitudes and perceptions: Did we capture the feel and content that the users need? Have we achieved a layout and information architecture that yields a satisfactory perception of ease-of-use? Are people able to complete certain targeted tasks successfully? Do their perceptions of "ease-of-use" and "success at meeting objectives" match the reality of the test outcome? How do our attempts compare to other, competitive options with which the users are familiar? And so on…
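The question of whether perceptions match reality can be checked with a simple side-by-side comparison, sketched below with invented task names and rates: the share of respondents who believed they completed each task against the share who actually did. A large gap flags tasks where survey feedback alone would mislead.

```python
# Hypothetical perception-vs-reality comparison per task.
# All task names and rates are invented for the example.
tasks = {
    # task: (reported success rate, observed completion rate)
    "find race schedule": (0.90, 0.85),
    "register online": (0.80, 0.55),
    "change entry details": (0.40, 0.35),
}

for task, (reported, observed) in tasks.items():
    gap = reported - observed
    flag = "  <-- investigate" if gap > 0.15 else ""
    print(f"{task}: reported {reported:.0%}, observed {observed:.0%}{flag}")
```

In this invented data, "register online" would be flagged: most users believe they succeeded, but observation tells a different story, which is precisely the kind of discrepancy that perceptual affirmation alone cannot explain.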


As usability practitioners, I believe we should be proactive in our requests for access to survey data where it helps set context for our work, focuses our attention and provides general feedback. The more we co-opt the full scope of assessment tools within our own process, the less likely that survey work will be used as a substitute for our own, unique contributions. By accepting, recommending, and participating in the design of survey work in those areas where we know it to be useful, we can more fully support and guide the development work to a successful conclusion.
