Copyright 1993, 1994: Please see the"shareware notice" at the front of the book.
This chapter gives an overview of the task-centered design process that the book recommends. The process is structured around specific tasks that the user will want to accomplish with the system being developed. These tasks are chosen early in the design effort, then used to raise issues about the design, to aid in making design decisions, and to evaluate the design as it is developed. The steps in the task-centered design process are as follows:

1. Figure out who's going to use the system to do what.
2. Choose representative tasks for task-centered design.
3. Plagiarize.
4. Rough out a design.
5. Think about it.
6. Create a mock-up or prototype.
7. Test it with users.
8. Iterate.
9. Build it.
10. Track it.
11. Change it.
* 1. Figure Out Who's Going to Use the System to Do What *

The industry terminology for this step is "task and user analysis." The need for the task analysis should be obvious: if you build an otherwise great system that doesn't do what's needed, it will probably be a failure. But beyond simply "doing what's needed," a successful system has to merge smoothly into the user's existing world and work. It should request information in the order that the user is likely to receive it; it should make it easy to correct data that's often entered incorrectly; its hardware should fit in the space that users have available and look like it belongs there. These and a multitude of other interface considerations are often lost in traditional requirements analysis, but they can be uncovered when the designer takes time to look into the details of tasks that users actually perform.
Understanding of the users themselves is equally important. An awareness of the users' background knowledge will help the designer answer questions such as what names to use for menu items, what to include in training packages and help files, and even what features the system should provide. A system designed for Macintosh users, for example, should provide the generic Mac features that the users have come to expect. This might mean including a feature like cut and paste even though cut and paste plays no important part in the system's main functionality. Less quantifiable differences in users, such as their confidence, their interest in learning new systems, or their commitment to the design's success, can affect decisions such as how much feedback to provide or when to use keyboard commands instead of on-screen menus.
Effective task and user analysis requires close personal contact between members of the design team and the people who will actually be using the system. Both ends of this link can be difficult to achieve. Designers may have to make a strong case to their managers before they are allowed to do on-site analysis, and managers of users may want to be the sole specifiers of the systems they are funding. It's certain, however, that early and continued contact between designers and users is essential for a good design.
* 2. Choose Representative Tasks for Task-Centered Design *

After establishing a good understanding of the users and their tasks, a more traditional design process might abstract away from these facts and produce a general specification of the system and its user interface. The task-centered design process takes a more concrete approach. The designer should identify several representative tasks that the system will be used to accomplish. These should be tasks that users have actually described to the designers. The tasks can initially be described in a few words, but because they are real tasks, they can later be expanded to any level of detail needed to answer design questions or analyze a proposed interface. Here are a few examples:

- for a word processor: "transcribe the memo dictated on this tape";
- for a spreadsheet: "produce revised salary figures for the coming year, starting from this year's salary list".
Again, these should be real tasks that users have faced, and the design team should collect the materials needed to do them: a copy of the tape on which the memo is dictated, a list of salaries for the current year and factors to be considered in their revision, etc.
The tasks selected should provide reasonably complete coverage of the functionality of the system, and the designer may want to make a checklist of functions and compare those to the tasks to ensure that coverage has been achieved. There should also be a mixture of simple and more complex tasks. Simple tasks, such as "check the spelling of 'ocassional'," will be useful for early design considerations, but many interface problems will only be revealed through complex tasks that represent extended real-world interactions. Producing an effective set of tasks will be a real test of the designer's understanding of the users and their work.
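As a concrete illustration, the coverage checklist can start as a simple set comparison between the system's planned functions and the functions each representative task exercises. The following is a minimal sketch in Python; all function and task names are invented for illustration.

    # Minimal sketch of a task-coverage checklist. Each representative
    # task is mapped to the system functions it exercises; any function
    # not exercised by some task is flagged as uncovered.

    system_functions = {"open_form", "edit_field", "spell_check",
                        "save_form", "print_form"}

    tasks = {
        "transcribe dictated memo":
            {"open_form", "edit_field", "spell_check", "save_form"},
        "check spelling of 'ocassional'":
            {"spell_check"},
    }

    covered = set().union(*tasks.values())
    print("Uncovered functions:", sorted(system_functions - covered))
    # -> Uncovered functions: ['print_form']

An uncovered function signals either a missing task or a feature that may not be needed.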
* 3. Plagiarize *

We don't mean plagiarize in the legal sense, of course. But you should find existing interfaces that work for users and then build ideas from those interfaces into your systems as much as practically and legally possible. This kind of copying can be effective both for high-level interaction paradigms and for low-level control/display decisions.
At the higher levels, think about representative tasks and the users who are doing them. What programs are those users, or people in similar situations, using now? If they're using a spreadsheet, then maybe your design should look like a spreadsheet. If they're using an object-oriented graphics package, maybe your application should look like that. You might be able to create a novel interaction paradigm that's better suited to your application, but the risk of failure is high. An existing paradigm will be quicker and easier to implement because many of the design decisions (e.g., how cut and paste will work) have already been made. More important, it will be easy and comfortable for users to learn and use because they will already know how much of the interface works.
Copying existing paradigms is also effective for the low-level details of an interface, such as button placement or menu names. Here's an example. You're writing a special-purpose forms management package and the specifications call for a spelling checker. You should look at the controls for spelling checkers in the word processing packages used by people who will use your system. That's almost certainly how the controls for your spelling checker interface should work as well.
This is an area where it's really common for designers to make the wrong decision because they don't look far enough beyond the requirements of their own system. Let's dig a little further into the example of a spelling checker for the forms package. Maybe your analysis has shown that the spelling checker will most often pick up misspelled names, and you can automatically correct those names using a customer database. So you decide the most efficient interaction would be to display the corrected name and let the user accept the correction by pressing the Return key. But the word processor your users use most frequently has a different convention: pressing Return retains the "wrong" spelling of a word. Do you follow the lead of the existing system ("plagiarize"), or do you create your own, more efficient convention? To an extent the answer depends on how often users will be running your system compared to how often they will be running systems they already know. But more often than not, the best answer is to stick with what the users know, even if it does require an extra keystroke or two.
* 4. Rough Out a Design *

The rough description of the design should be put on paper, which forces you to confront decisions that are easy to gloss over in your head. But it shouldn't be programmed into a computer (yet), because the effort of programming, even with the simplest prototyping systems, commits the designer to too many decisions too early in the process.
At this stage, a design team will be having a lot of discussion about what features the system should include and how they should be presented to the user. This discussion should be guided by the task-centered design approach. If someone on the team proposes a new feature, another team member should ask which of the representative tasks it supports. Features that don't support any of the tasks should generally be discarded, or the list of tasks should be modified to include a real task that exercises that feature.
The representative tasks should also be used as a sort of checklist to make sure the system is complete. If you can't work through each task with the current definition of the system, then the definition needs to be improved.
* 5. Think About It *

No aviation firm would design and build a new jet airliner without first doing an engineering analysis that predicted the plane's performance. The cost of construction and the risk of failure are too high. Similarly, the costs of building a complete user interface and testing it with enough users to reveal all its major problems are unacceptably high. Although interface design hasn't yet reached the level of sophistication of aircraft engineering, there are several structured approaches you can take to discover the strengths and weaknesses of an interface before building it.
One method is to count keystrokes and mental operations (decisions) for the tasks the design is intended to support. This will allow you to estimate task times and identify tasks that take too many steps. The procedures for this approach, called GOMS analysis, along with average times for things like decisions, keystrokes, mouse movements, etc. have been developed in considerable detail. We'll summarize the method later in the book.
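As a rough illustration of this kind of counting, the sketch below sums per-operator times over the steps of a task. The operator times are commonly cited keystroke-level model averages; the sample step sequence is an invented example, not an analysis from this book.

    # Keystroke-level sketch of a GOMS-style time estimate. Times are
    # in seconds; the task sequence is invented for illustration.

    OPERATOR_TIMES = {
        "K": 0.28,  # press a key (average typist)
        "B": 0.10,  # press or release a mouse button
        "P": 1.10,  # point with the mouse
        "H": 0.40,  # move hand between keyboard and mouse
        "M": 1.35,  # mental preparation / decision
    }

    def estimate_seconds(sequence):
        """Sum operator times over a sequence like 'MHPB'."""
        return sum(OPERATOR_TIMES[op] for op in sequence)

    # Decide to fix a word (M), reach for the mouse (H), point at the
    # word (P), click (B), return to the keyboard (H), type 4 letters.
    print(f"{estimate_seconds('MHPBH' + 'KKKK'):.2f} s")  # -> 4.47 s

Comparing such estimates across design alternatives is usually more informative than the absolute numbers.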
Another method is to use a technique called the cognitive walkthrough to spot places in the design where users might make mistakes. Like GOMS modelling, the cognitive walkthrough analyzes users' interactions with the interface as they perform specific tasks. We'll also explain how to do cognitive walkthroughs later in the book.
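Because the walkthrough is a question-asking exercise rather than a computation, the most useful support is simple record keeping. Below is a minimal sketch using the four failure-spotting questions commonly associated with the technique; the sample step and answers are invented.

    # Record-keeping sketch for a cognitive walkthrough of one task.
    # The four questions are those commonly used with the technique;
    # the example step and answers are invented.

    QUESTIONS = [
        "Will the user try to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the action with the desired effect?",
        "After the action, will the user see that progress was made?",
    ]

    def walk_step(description, answers):
        """Print any 'no' answers, which each need a failure story."""
        problems = [q for q, ok in zip(QUESTIONS, answers) if not ok]
        print(f"[{'OK' if not problems else 'PROBLEM'}] {description}")
        for q in problems:
            print("    needs a failure story:", q)

    # Invented step from a spelling-checker task:
    walk_step("Press Return to accept the suggested correction",
              [True, True, False, True])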
* 6. Create a Mock-Up or Prototype *

After thinking through the paper description of the design, it's time to build something more concrete that can be shown to users and that can act as a more detailed description for further work. In the early stages of a simple design, this concrete product might be as simple as a series of paper sketches showing the interface while a user steps through one of the representative tasks. A surprising amount of information can be gleaned by showing the paper mock-up to a few users. The mock-up may even reveal hidden misunderstandings among members of the design team.
For further analysis, the design can be prototyped using a system such as HyperCard, Dan Bricklin's Demo Package, or any of an increasing number of similar prototyping tools. It may even be possible to build a prototype using the User Interface Management System (UIMS) that will be the foundation of the final product. This approach can be especially productive: not only does it reduce the work needed to create the production system, it also avoids the risk that interface techniques tested in a stand-alone prototyping tool will prove difficult to duplicate in the production UIMS.
The entire design doesn't need to be implemented at this stage. Initial efforts should concentrate on parts of the interface needed for the representative tasks. Underlying system functionality, which may still be under development, can be emulated using "Wizard of Oz" techniques. That is, the designer or a colleague can perform the actions that the system can't, or the system can be preloaded with appropriate responses to actions that a user might take. (The design team needs to take care that users and management aren't misled into thinking the underlying system is finished.)
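One simple way to preload responses is to route the prototype's requests through a table of canned answers, falling back to a human "wizard" when no answer is stored. This is a minimal sketch; the request keys and responses are invented.

    # "Wizard of Oz" stub standing in for unfinished functionality.
    # Canned responses are looked up first; anything else is passed to
    # a human operator. All names and responses are invented.

    CANNED_RESPONSES = {
        "spell_check:ocassional": "occasional",
        "lookup_customer:Smith": "Smith, J. -- account 10442",
    }

    def backend_stub(request):
        if request in CANNED_RESPONSES:
            return CANNED_RESPONSES[request]
        return input(f"WIZARD, please answer '{request}': ")

    print(backend_stub("spell_check:ocassional"))  # -> occasional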
* 7. Test It With Users *

No matter how much analysis has been done in designing an interface, experience has shown that there will be problems that only appear when the design is tested with users. The testing should be done with people whose background knowledge and expectations approximate those of the system's real users. The users should be asked to perform one or more of the representative tasks that the system has been designed to support. They should be asked to "think aloud," a technique described in more detail in Chapter 5.
Videotape the tests, then analyze the videotapes for time to complete the task, actual errors, and problems or surprises that the user commented on even if they didn't lead to errors. The user's thinking-aloud statements will provide important clues to why the errors were made.
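Even a simple tally of the transcribed results makes patterns easier to see across test sessions. The records below are invented examples of what might be pulled from the tapes.

    # Sketch of tallying user-test results; session data is invented.

    sessions = [
        {"user": "P1", "seconds": 412, "errors": 3,
         "comments": ["couldn't find the spelling checker"]},
        {"user": "P2", "seconds": 388, "errors": 1, "comments": []},
    ]

    mean_time = sum(s["seconds"] for s in sessions) / len(sessions)
    total_errors = sum(s["errors"] for s in sessions)
    print(f"Mean time {mean_time:.0f} s, total errors {total_errors}")
    for s in sessions:
        for c in s["comments"]:
            print(f"  {s['user']}: {c}")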
* 8. Iterate *

The testing with users will always show some problems with the design. That's the purpose of testing: not to prove the interface, but to improve it. The designer needs to look at the test results, balance the costs of correction against the severity of each problem, then revise the interface and test it again. Severe problems may even require a re-examination of the tasks and users.
One thing to keep in mind during each iteration is that the features of an interface don't stand alone. Revising a menu to resolve a problem that occurs with one task may create problems with other tasks. Some of these interactions may be caught by reanalyzing the design without users, using techniques like the cognitive walkthrough. Others may not show up without user testing.
When should the iterations stop? If you've defined specific usability objectives (see hypertopic on Managing the Design Process), then iteration should be stopped when they are met. Otherwise, this will often be a management decision that balances the costs and benefits of further improvement against the need to get the product to market or, in in-house projects, into use.
* 9. Build It *

The key guideline in building the interface is to build it for change. If you've been using a UIMS for prototyping, then you're already close to a finished product. If you've been using some other prototyping system, now is the time to switch to a UIMS or, perhaps, to an object-oriented programming environment. Try to anticipate minor changes with easily changed variables. For example, if you have to write your own display routine for a specialized menu, don't hardcode parameters such as size, color, or number of items. And try to anticipate major changes with code that is cleanly modular. If a later revision of the design requires that your specialized menu be replaced by some more generic function, the code changes should be trivial. These sound like ordinary guidelines for good programming, and indeed they are. But they are especially important for the user interface, which often represents more than half the code of a commercial product.
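To make "build for change" concrete, here is a minimal sketch of a menu display routine whose size, color, and item count are parameters rather than hardcoded constants. All names are invented, and a real system would draw on screen rather than print.

    # A specialized menu routine built for change: presentation
    # parameters live in one structure instead of being hardcoded.

    from dataclasses import dataclass

    @dataclass
    class MenuStyle:
        width: int = 24       # label width in characters
        color: str = "black"  # placeholder for a real color value
        max_items: int = 10

    def draw_menu(items, style=MenuStyle()):
        """Render a menu; replacing this with a generic menu later
        should require changes only here."""
        for i, label in enumerate(items[:style.max_items], start=1):
            print(f"{i:>2}. {label:<{style.width}}")

    draw_menu(["Open form...", "Check spelling", "Print"],
              MenuStyle(width=20, color="blue"))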
* 10. Track It *

A fundamental principle of this book is that interface designers should not be a special group isolated from the rest of the system development effort. If this principle is to hold, then the designer must have contact with users after the design hits the street. In fact, it's easy to argue that this should be the case in any organization, because continued awareness of users and their real needs is a key requirement for a good designer.
One way to put designers in contact with users is to rotate them into temporary duty on the customer hotline. Another important point of contact for large systems is user group meetings. Managers also take advantage of these opportunities to see how real users react to the products they are selling.
Besides helping to answer the obvious question of whether the system is doing what it's designed to do, interactions with users can also yield surprises about other applications that have been found for the product, possibly opening up new market opportunities. This information can feed back into the design process as improved task descriptions for the next revision and better understanding on the part of the designer.
* 11. Change It *

In today's computer market there are few if any software products that can maintain their sales without regular upgrades. No matter how well the product is initially designed to fit its task and users, it will probably be inadequate in a few years. Tasks and users both change. Work patterns change because of the product itself, as well as because of other new hardware and software products. Users gain new skills and new expectations. Designers need to stay abreast of these changes, not only by watching the workplace in which their products are installed, but also by watching for developments in other parts of society, such as other work situations, homes, and the entertainment industry. The next revision of the design should be a response not only to problems but also to opportunities.
* Task-Centered vs. Waterfall Design *
The traditional "waterfall" model of software design starts with a requirements analysis step that is performed by systems analysts who are usually not the interface designers. These requirements are transformed into system specifications, and eventually the hardware, underlying software, and user interface are designed to meet those specifications.
The waterfall model has proven to be a poor approach to software that has an important user interface component. As this chapter describes, the successful interface designer needs a deep understanding of the user's task and how the task fits into the rest of the user's work. That understanding can't be derived from a set of abstract specifications. Further, our experience has shown that several design iterations are essential in producing an effective interface. The traditional waterfall model simply doesn't allow those iterations.
* The Design Team *
Because the task-centered design methodology spreads the activities of interface design throughout the software design life cycle, the interface can't be produced or analyzed at one point by a group of interface specialists. The job of building a good interface has to be taken on by the team that designs the product as a whole.
The design team needs to be composed of people with a variety of skills who share several common characteristics. They need to care about users, they need to have experience with both bad and good interfaces, and they need to be committed to and optimistic about creating an effective system. The team should include representatives from the entire range of interface-related areas: programmers, technical writers, training package developers, and marketing specialists. The team might include a user-interface analyst, but that's not essential. A shared commitment to interface quality, along with appropriate opportunities to interact with real users, will produce a high-quality interface for all but the most complex or critical systems.
* Responsibility *
Responsibility for the entire interface effort should be centralized. In particular, the designers who create the software shouldn't sign off on their product and hand it off to an entirely separate group that creates the manuals, who then hand off to another group that handles training. All of these activities need to be coordinated, and the only way to achieve that is through central management.
* Usability Objectives *
Serious corporate management efforts may require you to produce specific numbers that quantify usability. Usability objectives are target values for things such as speed to perform representative tasks and number of errors allowable. These can be used to motivate designers and support resource allocation decisions. The target values can be selected to beat the competition or to meet the functional needs of well-defined tasks.
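A sketch of how such objectives might be checked against test results appears below; the target values and measurements are invented examples.

    # Checking measured results against usability objectives.
    # All targets and measurements are invented for illustration.

    objectives = {  # task -> (max seconds, max errors)
        "transcribe memo": (300, 2),
        "revise salaries": (600, 1),
    }

    measured = {  # task -> (observed seconds, observed errors)
        "transcribe memo": (412, 3),
        "revise salaries": (540, 0),
    }

    for task, (max_s, max_err) in objectives.items():
        secs, errs = measured[task]
        met = secs <= max_s and errs <= max_err
        print(f"{task}: {'met' if met else 'NOT met'} "
              f"({secs}s vs {max_s}s, {errs} vs {max_err} errors)")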
(For more information on management, see Appendix M.)