Toward a Theoretical Base for Representation Design in the
Computer Medium: Ecological Perception and Aiding
David D. Woods
Cognitive Systems Engineering Laboratory, Ohio State University
The technological potential for data gathering and data manipulation has expanded rapidly, but our ability to
interpret this avalanche of data, that is, to extract meaning from this artificial data field, has expanded
much more slowly, if at all. Significant advances in machine information processing seem to offer hope for
advancing our interpretative capabilities, but in practice such systems become yet another voice in the data
cacophony around us. The computer as a medium for supporting cognitive work is omnipresent. Yet, where is the
theoretical base for understanding the impact of computerized information processing tools on cognitive work?
Research in cognitive science has focused on grand questions of what is mind; artificial intelligence research
continues to focus on building autonomous machine problem solvers; and the mainstream in both has assumed
that cognition can be studied and simulated without detailed consideration of perception and action.
Ironically, the current state of research on computer support for human cognition has many interesting
parallels to the state of perception research during the time when James J. Gibson developed the basis for
ecological perception. The concepts and research program that Gibson introduced for perception also provide
inspiration for needed concepts and research directions in aiding human cognition.
On the one hand, this chapter lays out the fundamental issues on aiding human cognition via the computer
medium, drawing on and pointing to parallels to Gibsonian concepts. Examples include agent–environment
mutuality which leads to the need for a new way to characterize problem solving habitats, the question of
what is informative, the multiplicity of cues available in natural problem solving habitats, and the need to
take into account the dynamism of real problems. If one takes the ecology of aided cognition seriously, it has
fundamental implications for research on human-computer interaction, models of human performance, and the
development of intelligent support systems.
The themes in this chapter, and, in fact, the title of this book, juxtapose concepts that are seen in the
conventional wisdom of cognitive psychology and human factors as contradictory. Terms such as representation,
cognition, problem solving, and support for cognitive work will be seen over and over again in the company of
ideas that relate to perception, especially for this forum — ecological perception. This juxtaposition is not the
result of momentary fashion; rather, it is indicative of a paradigm shift that has been underway for some time
in human factors and human-machine systems (this shift can be seen in a variety of published works that began
to appear in widely accessible forms starting in 1981, e.g., Norman, 1981; Rasmussen & Rouse, 1981; Reason &
Mycielska, 1982). The appeal of the analogy to ecological perception is, in part, based on the common ground of
researchers experiencing the joys and pains of questioning the assumptions behind the current conventional wisdom.
6.1 Representation Aiding: The Role of Perception in Cognition
Representation aiding is one strategy for aiding human cognition and performance in complex tasks. The basic
rationale is straightforward. Figure 6.1 depicts the pieces of the puzzle that combine to create this strategy. A
fundamental finding which has emerged from cognitive science and related research is that the representation
of the problem provided to a problem solver can affect his, her, or its task performance. There is a long
tradition in problem solving research holding that "solving a problem simply means representing it so as to make the
solution transparent" (Simon, 1969, p. 77). I refer to this as the problem representation principle, and it is a way
to summarize a widespread psychological result that the content and context of a problem can radically alter
subjects’ responses (examples can be seen in everything from text comprehension to deductive reasoning to
organizational effects in visual search; cf. Norman, 1993, for a recent summary).
A variety of researchers concerned with aiding human performance in complex, high consequence task
domains (energized in part by responses to the Three Mile Island accident) seized this fundamental finding and
turned it around (Goodstein, 1981; Rasmussen & Lind, 1981; Woods, Wise, & Hanes, 1981) — if the problem
representation affects performance, then, when the goal is to improve human performance, one can develop new
types of representations of the task domain to effectively support human problem solving — representation aiding.
This approach dovetails very neatly with another fundamental result that was gaining wider
appreciation at about the same time. Studies of human performance at complex high consequence tasks (again,
motivated in part by the need to improve safety and human performance in the aftermath of accidents) kept
pointing to the critical role of situation assessment in expert performance (cf., e.g., Woods, O’Brien, & Hanes,
1987, for a synthesis of results across several studies of operator decision making in both actual and simulated
nuclear power plant emergencies). The results in a variety of studies examining practitioners in a variety of
domains seemed to indicate that practitioners' behavior was based primarily on the ability to recognize the
kind of situation evolving in front of them, selecting actions appropriate to those circumstances from a
repertoire of doctrine about how to handle various situations and contingencies — recognition–driven processing
(see Klein, Orasanu, Calderwood, & Zsambok, 1993, for an extensive treatment of this result and its
implications). The critical role of recognition-driven processing in effective task performance points to the need
for aiding practitioner situation assessment through improved representations of the task world.
In human performance this result has been called recognition-driven processing, but the underlying concept
really has a longer history in cognitive psychology. The data that support the critical role of recognition-
driven processing reemphasize the view that perception is a part of cognition rather than an independent front-
end module that merely supplies input for cognition. Remember that Neisser (1976) called his fundamental
principle of cognition the perceptual cycle, emphasizing the dynamical interplay and mutual dependence of
perception, cognition, and action in contrast to linear information processing models in which independent
modules interact via parameter passing. Perception and cognition guide action; acting (and the possibilities for
action) and cognition direct perception; perceiving informs cognition (cf. Tenney, Adams, Pew, Huggins, &
Rogers, 1992, for one explicit application of the perceptual cycle to understand the dynamics of human
performance in one complex domain—pilot performance in commercial aviation).
Given the omnipresence of the representation principle in research on cognition, one may then ask how
representations affect human cognition; in other words, why do computer-based displays (representations)
make a difference in human performance? There are several (perhaps overlapping) mechanisms that have
been suggested to mediate the effect of external representation of a process or device or problem on the cognitive
activities of the human problem solver:
(a) Problem structuring — the representation changes the nature of the problem and therefore the kinds
of strategies that can be used to solve the problem;
(b) Overload/workload — a good representation shifts task-related cognitive activities to more
mentally economical forms of cognitive processing, such as more parallel, more 'perceptual,' more
automatic, drawing on different kinds of mental resources, and so on (note the inverse: a poor
representation forces reliance on more deliberative, serial, resource-consuming cognitive processing);
(c) Control of attention — good representation supports attention related cognitive processes including
switching attention, pre-attentive reference or peripheral access (Hutchins, 1991; Woods, 1992),
divided attention, knowing where to look next;
(d) New secondary tasks — poor representation creates new secondary tasks that increase workload
especially in high-criticality, high-tempo periods, that interrupt primary tasks, and that shift the
focus of attention away from the actual task and to secondary interface and data management tasks
(Woods et al., 1994; Woods, 1993b);
(e) Effort — the representation affects effort and therefore the effort-performance relationship for
that individual and that task context (cf. Johnson & Payne, 1985; Payne et al., 1988).
The fourth piece of the puzzle is the technological developments that have been underway, providing new
powers to develop representations via advances in the computer medium (computer-based graphics and
intelligent data processing) and via increased penetration of the computer medium into places in which
substantive cognitive work is performed. However, it is critical to note that representation aiding forces a
reinterpretation of what it means to design human-computer interaction — the display of data through the
medium of the computer should be considered in terms of how different types of representations vary in their
effect on the problem solver’s information processing activities and performance — representation design in the
computer medium (Woods, 1991).
Although the technological advances increase the potential for attempts at representation aiding, their
primary effect on research and development has been a kind of negative motivation. The technological changes
that are now in motion with regard to information technology in complex, high-consequence applications are
radically changing the kinds of problem representations available to practitioners (Cook, Woods, & Howie,
1990; Woods, Potter, Johannesen, & Holloway, 1991). Implicit representation re-design is widespread,
accompanying the advances and increased penetration of the computer medium into domains of cognitive work.
The implicit representation re-design going on should remind us that there is a flip side to the problem
representation principle: Poor representations will degrade task performance.
The problem representation principle allows for no a priori neutral representations. The representations of
the problem domain available to the practitioner can degrade or support information processing tasks and
strategies related to task performance. Thus, there is an increasingly pressing need to be concerned "with developing a
theoretical base for creating meaningful artifacts and for understanding their use and effects" (Winograd, 1987,
p. 10), in other words, to develop a theoretical base for representation design in the computer medium.
The concept for representation aiding as a means for computer-based decision support and some initial
attempts to construct such systems predate 1979 (and the history of science, mathematics, and technology is
replete with examples of non-computer-based representation aiding). However, the conjunction of several
factors, beginning about 1979–1980, both pushed and pulled representation aiding as an approach to support
cognitive work. The conjunction of need (failures in complex human-technological systems as energizer of work
on aiding dynamic cognitive work), opportunity (advances and increasing penetration of computer-based
graphic systems), and motive (the problem representation principle and recognition-based models of dynamic
cognitive work) together have energized the development of representation aiding as a means for computer-
based decision support.
6.2 The Computer as a Medium for Representation
The development of information displays within the computer medium is constrained and shaped by the
properties of computer and display technologies as a medium for representation.
6.2.1 The Symbol Mapping Principle
The computer can be seen as a referential medium in which visual and other elements are signs or tokens that
function as meaning carriers within a symbol system (Woods, 1991, in preparation). Peirce’s (1903/1955)
treatment of symbols points to representation as a three part relationship (Fig. 6.2) between (a) signs or tokens
in the medium for representation, (b) the referent objects and environment, and (c) the interpretants or observers
who extract meaning from the token-referent relationship given a larger context of goals and activities. What
is unique is that Peirce includes the observer as part of the symbolic link itself. This is essentially what Gibson
does in the mutuality assumption in which perceptual properties of the external world are specified with the
observer/actor as a fundamental part of the variable. This idea changes the concept of information for a
representation aiding approach, as it did for Gibson and ecological perception. Given the mutuality
assumption, information is seen as a relation between the data, the world the data refer to, and the observer’s
expectations, intentions, and interests and not as a thing in itself (Woods, 1986). Results in cognitive psychology
are replete with examples that reinforce this three-part relationship (e.g., Gonzalez & Kolers, 1982). For
example, change the perceptual characteristics of a scene in ways that change properties of perceptual
organization and visual search performance will be radically affected (e.g., the pop-out effect).
If the computer is a medium for representation, then we can think of representational forms as distinct from
visual forms. The physical form of sign tokens within a medium for representation does not by itself indicate a
mode of symbolizing. Pictures and words can both refer either propositionally or analogically (Gonzalez &
Kolers, 1982; Woods, in preparation). How sign tokens represent depends on a three part relationship — how
tokens map onto the structure and behavior of relevant objects in the referent domain for some agent or set of
agents in some context (Figure 6.2). See Hutchins, 1991, for an analysis of this three part relationship for one
particular referent (airspeed), set of representations (e.g., speed bugs), set of agents (flight crews), and
goal/task context (descent phase of commercial air flights).
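The speed-bug case can be caricatured in a few lines of code. The sketch below is purely illustrative and not from the chapter or from Hutchins's analysis: the reference speeds, scale limits, and function names are my assumptions. The point it makes is that placing reference marks ("bugs") on the same scale as the moving needle maps a referent (a critical airspeed for the current aircraft configuration) onto a token whose relation to the needle can be read off perceptually, rather than recalled and compared mentally.

```python
# Illustrative sketch (hypothetical values): a "speed bug" turns a memory and
# arithmetic task ("is current airspeed above the flap-limit speed?") into a
# perceptual comparison between two marks on the same scale.

def bug_positions(reference_speeds, scale_min=60, scale_max=360, scale_px=300):
    """Map each named reference speed (the referent) onto a pixel position
    (the token) on a linear airspeed scale."""
    span = scale_max - scale_min
    return {name: round((v - scale_min) / span * scale_px)
            for name, v in reference_speeds.items()}

def render_tape(airspeed, reference_speeds, scale_min=60, scale_max=360, scale_px=300):
    """Crude text rendering: the needle '|' and the bug markers '^' share one
    scale, so 'above/below the bug' is directly visible, not computed."""
    line = [' '] * (scale_px + 1)
    for px in bug_positions(reference_speeds, scale_min, scale_max, scale_px).values():
        line[px] = '^'
    needle = round((airspeed - scale_min) / (scale_max - scale_min) * scale_px)
    line[needle] = '|'
    return ''.join(line)

# Hypothetical reference speeds for one aircraft weight/configuration:
bugs = {'flaps_extend': 180, 'approach': 140}
print(render_tape(airspeed=150, reference_speeds=bugs))
```

The design choice being illustrated is exactly the chapter's three-part relationship: the mapping only works for a given set of agents and task context (a crew that knows what the marks stand for during descent).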
Thus, the fundamental principle for any discussion of types of displays from a representation point of view
is the symbol mapping principle: computer-based displays of data function as a representation of something for
someone in some task context. Or, more elaborately, representational form is defined in terms of how data on the
state and behavior of the domain is mapped into the syntax and dynamics of visual forms in order to produce
information transfer to the agent using the representation, given some task and goal context (Figure 6.2). The
symbol mapping principle means that one cannot understand computer–based information displays in terms of
purely visual characteristics; the critical properties relate to how data are mapped into the structure and
behavior of the visual elements. The dynamic aspect of the mapping is critical. The symbol mapping principle
means that to characterize or design a graphic form one must consider how the form behaves or changes as the
state of the referent changes.
Given the symbol mapping principle, representational form in the computer medium is defined in terms of
how data on the state and semantics of the domain is mapped into the syntax and dynamics of perceivable
tokens/forms in order to produce information transfer to the agent using the representation (Woods, 1991, in
preparation). Representational form cannot be assessed from visual appearance per se so that a single visual
format such as a bar chart can be used to represent in different ways. Conversely, bar charts, trends, and other
visual forms can be used to create the same representational form (or variants on a representational theme).
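A minimal sketch may make the distinction between visual format and representational form concrete. In the toy Python below (illustrative only; the functions, values, and scale limits are my assumptions, not the chapter's), the same bar format carries two different representational forms depending on what the bar length is mapped to: a raw measurement in one case, a deviation from an expected value in the other.

```python
# Sketch: one visual format (a bar), two representational forms, depending on
# what the bar length is mapped to. All numbers are hypothetical.

def bar(value, vmin, vmax, width=20):
    """Render a value as a horizontal bar on a fixed scale."""
    frac = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))
    n = round(frac * width)
    return '[' + '#' * n + '.' * (width - n) + ']'

# Form 1: bar length encodes the raw measurement (a "single datum" mapping).
def raw_form(pressure_psi):
    return bar(pressure_psi, vmin=0, vmax=1100)

# Form 2: the same bar format encodes deviation from an expected value, so the
# contrast (actual vs. expected) is what the length carries; mid-scale means
# "on the expected course."
def deviation_form(pressure_psi, expected_psi, tolerance_psi):
    return bar(pressure_psi - expected_psi, vmin=-tolerance_psi, vmax=+tolerance_psi)

print(raw_form(900))
print(deviation_form(900, expected_psi=880, tolerance_psi=100))
```

Visually both outputs are bars; representationally they answer different questions, which is the symbol mapping principle in miniature.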
There are design challenges raised by the symbol mapping principle. One part of the design challenge is to
set up the mapping between domain referents and visual tokens in the computer medium so that the
representation captures task-meaningful semantics. What are interesting properties and changes in the
monitored process or device? How are these properties and changes captured or reflected in the structure and
behavior of tokens in the computer medium? Note that meeting this part of the mapping principle requires
uncovering what are the task meaningful domain semantics to be mapped into the structure and behavior of the
representation. This is the problem of cognitive task analysis (Woods, 1988): identifying what is informative
and what are the interesting properties and changes of the domain referents given the goals and task context of
the team of practitioners.
The second part of the design challenge for creating effective representations is to set up this mapping so
that the observer can extract information about task-meaningful semantics. Becker and Cleveland (1987) call
this the decoding problem, that is, domain data may be cleverly encoded into the attributes of a visual form,
but, unless observers can effectively decode the representation to extract relevant information under the
conditions of actual task performance (attention switching, time pressure, risk, uncertainty), the representation
will fail to support the practitioner.
6.2.2 Virtual Perceptual Field
A fundamental property of the computer as a medium for representation is freedom from the physical
constraints acting on the referent real-world objects/systems (see Hochberg, 1986, p. 22-2 – 22-3 for an elegant
treatment of this property of the computer medium). In many media (e.g., most notably, cinema), the structure
and constraints operating in the physical world will ensure that much of the appropriate information about
relationships in the referent domain is preserved in the representation. On the other hand, in the computer
medium, the designer of computer displays of data must do all of the work to constrain or link attributes and
behaviors of the representation to the attributes and behaviors of the referent domain.
This property means that data displays in the computer medium can be thought of as a virtual perceptual
field. It is a perceivable set of stimuli, but it differs from a natural perceptual field and other media for
representation, because there is nothing inherent in the computer medium that constrains the relationship
between things represented and their representation. This freedom from the physical constraints acting on the
referent real world objects is a double-edged sword in human-computer interaction (HCI), providing at the same
time the potential for very poor representations (e.g., see Cook, Potter, Woods, & McDonald, 1991; Woods et al.,
1991) and the potential for radically new and more effective representations.
Note the parallels to the Gibsonian revolt against tachistoscopic perception. For Gibson, the problem was
to discover the higher order properties of the perceptual field (the optic array). Discovering potential sources
of information in the stimulus array is a critical part of understanding how the perceptual system functions. It
is difficult to debate specific cognitive processing mechanisms without having some understanding of what it is
the perceptual system needs to extract about its environment given its behavioral competencies.
The virtuality property of the computer medium creates a new twist on the Gibsonian agenda. The designer
creates the perceptual field and the mapping to the semantics of the domain. The designer’s choices
manipulate the properties of the virtual perceptual field at several levels to achieve an overall desired
phenomenological effect/result (Woods, in preparation). One level I call workspace coordination — the set of
viewports and classes of views that can be seen together in parallel or in series as a function of context (Woods,
1984). Another is the level of coherent process views where a kind of view is a coherent unit of representation of
a portion of the underlying process or systems that the observer could select for display in a viewport. A third
level of analysis is the level of graphic forms: the forms, objects, or groups and their perceptual context.
The designer’s choices affect the kind of cognitive processing that the practitioner must bring to bear to do
cognitive work through computer-based representations of the underlying system or process (e.g., mentally
effortful, serially bottlenecked, deliberative processes versus mentally economical, effectively parallel,
perceptual processes).
Another important property of the virtual perceptual field of computer-based display systems is that the
viewport size (the windows/VDUs available) is very small relative to the large size of the artificial data
space or number of data displays that potentially could be examined (Figure 6.3). In other words, the proportion
of the virtual perceptual field that can be seen at the same time (physically in parallel) is very, very small.
This property is often referred to as the keyhole effect (e.g., Woods, 1984). Given this property, shifting one’s
gaze within the virtual perceptual field is carried out by selecting another part of the artificial data space
and moving it into the limited viewport. But in the default design, the observer can see only one small portion
of the total field at a time or a very small number of the potentially available displays (cf. Cook et al., 1990;
and Woods et al., 1991, for examples). The consequences for the observer are exacerbated by the default
tendency in interface design that places each piece of data in only one location within the virtual perceptual
field (one "home").
However, how do we know where to look next in a virtual perceptual field like this (cf. Woods, 1984;
Woods, Watts, & Potter 1993)? Meaningful tasks involve knowing where to look next in the data space
available behind the limited viewports and extracting information across multiple views. Yet, the default
tendency in interface design is to leave out any orienting cues that indicate in mentally economical ways
whether something interesting may be going on in another part of the virtual perceptual field. Instead, the
processes involved in directing where to look next are forced into a mentally effortful, high memory load,
deliberative mode (in addition, the interface structure may create other cognitive problems in translating
intentions into specific actions — Norman’s (1988) Gulf of Execution). Observers must remember where the
desired data is located, and they must remember and execute the actions necessary to bring that portion of the
field into the viewport, given that they know what data are potentially interesting to examine next (cf. Woods et al.,
1991, for a detailed case). One can see the potential problems that derive from this keyhole property by
imagining what it would be like to function with no peripheral vision or without other orienting perceptual
systems to help determine where to look next, that is, where to direct focal attention next.
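One way to picture an antidote to this missing "peripheral vision" is a cheap orienting cue that summarizes the unseen parts of the data space. The sketch below is a hypothetical illustration (the view names, data structure, and limits are invented for the example): a one-character-per-view status strip plays the role of peripheral orienting information, hinting whether something interesting may be going on behind the keyhole.

```python
# Sketch (assumed structures): a minimal analogue of peripheral orienting cues
# for a keyholed display. Only one view fits the viewport, but a cheap status
# strip summarizes the rest of the data space, hinting where to look next
# instead of forcing a memory-driven, deliberative search.

def status_strip(views, visible):
    """One character per view: '*' = anomaly present, '.' = quiescent;
    brackets mark the view currently in the viewport."""
    cells = []
    for name, readings in views.items():
        anomalous = any(r['value'] < r['lo'] or r['value'] > r['hi'] for r in readings)
        mark = '*' if anomalous else '.'
        cells.append(f'[{mark}]' if name == visible else f' {mark} ')
    return ''.join(cells)

# Hypothetical data space: three views, one viewport.
views = {
    'electrical': [{'value': 28.0, 'lo': 24.0, 'hi': 32.0}],
    'cryo':       [{'value': 19.0, 'lo': 100.0, 'hi': 1000.0}],  # far out of range
    'comms':      [{'value': 0.7,  'lo': 0.5,  'hi': 1.0}],
}
print(status_strip(views, visible='electrical'))
```

Even this crude cue changes the observer's task from "remember where to look and go check" to "notice the flagged region and go look," which is the functional role peripheral vision plays in a natural perceptual field.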
Given this property of the computer medium and its implications for cognitive processing of the observer, it
is ironic that discussions, guidelines, and empirical studies in HCI are stuck in the design of single isolated
graphic forms/views. There is still virtually nothing available on the coordination of multiple views as a
virtual workspace, virtually nothing on the information extraction problems that occur at the workspace level
(keyhole effects, getting-lost effects), and virtually nothing on the tradeoff between searching one display and
searching within the larger network of displays (for one exception, see Henderson & Card, 1987).
But technological advances (both in power and in penetration of applications) proceed regardless of our
understanding of representation design. Windowing capabilities have resulted in an undisciplined
proliferation of displays that create navigation problems where users have difficulty finding related data,
especially when complicated by hidden windows and complex menu structures. The proliferation of windows
tends to fragment data across an increasingly complex structure of the virtual perceptual field. This forces
serial access to highly interrelated data and increases the cognitive load in deciding where to look next (see
Cook et al., 1990; Woods et al., 1991 for data and cases on poor window coordination at the workspace level of a
display system). Users may possess great flexibility to tailor their workspace by manipulating the number,
size, location, and other window parameters, but this flexibility creates extra data management burdens in
event-driven task domains that increase practitioner workload often during high-tempo operations.
Practitioner attention shifts to the interface (where is the desired data located in the display space?) and to
interface control (how do I navigate to that location in the display space?) at the very times when his or her
attention needs to be devoted most to assessing/managing the monitored process. This factor appears to be one
source of the phenomenon that Wiener (1989) has termed clumsy automation. Clumsy automation or the clumsy
use of technology is a form of poor coordination between the human and machine in which the benefits of the
automation accrue during workload troughs, and the costs of automation occur during high-criticality or high-tempo periods.
Directing perceptual exploration to potentially interesting data in dynamic and event-driven domains is a
fundamental competency of perceptual systems in natural perceptual fields. "The ability to look, listen, smell,
taste, or feel requires an animal capable of orienting its body so that its eyes, ears, nose, mouth, or hands can be
directed toward objects and relevant stimulation from objects. Lack of orientation to the ground or to the medium
surrounding one, or to the earth below and the sky above, means inability to direct perceptual exploration in an
adequate way" (Reed, 1988, p. 227, on perceptual exploration in Gibson, 1966). The problem of
building representations in the computer medium that support rather than undermine this fundamental
competency has barely been acknowledged, much less addressed (Woods, 1984).
6.2.3 Dynamic Reference
Another fundamental property of the computer as a medium for representation is that computer-based displays
can behave and change. This is one property that distinguishes the computer from traditional media for visual
expression. This property provides new potential and new challenges for developing representations.
Previously, discussions of symbol-referent relationships were based at least implicitly on static symbols. But
the computer medium creates the potential/challenge of dynamic reference — the behavior of the token is part
of the process linking symbol to referent. Thus, to characterize or design a graphic form, one must consider how
the form behaves or changes as the state of the referent changes.
In the current default in computer-based representations, the basic unit of display remains an individual
datum usually represented as a digital value, for example, oxygen tank pressure is 297 psi (cf. Woods, 1991, or
Woods et al., 1991, which contains examples of typical displays). No attempt is made in the design of the
representation of the monitored process to capture or highlight operationally interesting events — behaviors of
the monitored process over time, for example, the remaining cryogenics are deteriorating faster (for one
exception, see Woods & Elias, 1988). Furthermore, this failure to develop representations that reveal change
and highlight events in the monitored process has contributed to incidents in which practitioners using such
opaque representations miss operationally significant events (e.g., Freund & Sharar, 1990; Moll van Charante
et al, 1993).
In the most well known accident in which this representational deficiency contributed to the incident
evolution (cf. Murray & Cox, 1989), the Apollo 13 mission, an explosion occurred in the oxygen portion of the
cryogenics system (oxygen tank 2). The mission controller (electrical, environmental, and communication
controller) monitoring this system was examining a screen filled with digital values (display CSM ECS CRYO
TAB; Figure 6.4).
After other indications of trouble in the spacecraft, he noticed that oxygen tank 2 was depressurized (about
19 psi) as well as a host of other problems in the systems he monitored. It took a precious 54 minutes as a variety
of hypotheses were pursued before the team realized that the command module was dying and that an
explosion in the oxygen portion of the cryogenics system was responsible. The digital display had hidden the
critical event (only 2 of the 54 changing digital values on the display had behaved anomalously; compare Figures
6.4, 6.5, and 6.6). So none of the three noticed the numbers for oxygen tank 2 during 4 particularly crucial
seconds. At 55 hours, 54 minutes, and 44 seconds into the mission, the pressure stood at 996 psi, high but still
within accepted limits. One second later, it peaked at 1,008 psi. By 55:54:48, it had fallen to 19 psi. If one of
them had seen the pressure continue on through the outer limits, then plunge, he would have been able to
deduce that oxygen tank 2 had exploded (Figure 6.7). It would have been a comparatively small leap to have
put the whole puzzle of multiple disturbances across normally unconnected systems together (Murray & Cox,
1989, p. 406).
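The contrast between a digital readout and a representation that highlights events can be sketched directly from the pressure values in the account above. The code is illustrative only: the limit and rate thresholds are hypothetical, and a real design would render the detected events perceptually (as trends and annunciations) rather than as text.

```python
# Sketch: a minimal event highlighter over successive pressure samples.
# The three samples are the values reported above (996 -> 1008 -> 19 psi over
# four seconds); the limit and rate thresholds are hypothetical.

HIGH_LIMIT = 1000.0   # assumed alarm limit, psi
RATE_LIMIT = 50.0     # assumed "operationally interesting" change per sample, psi

def events(samples):
    """Return (time, description) pairs for operationally interesting behavior,
    instead of leaving the observer to notice 2 changing digits among 54."""
    out = []
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        if p1 > HIGH_LIMIT >= p0:
            out.append((t1, 'crossed high limit rising'))
        if abs(p1 - p0) > RATE_LIMIT:
            out.append((t1, f'abrupt change: {p1 - p0:+.0f} psi'))
    return out

samples = [('55:54:44', 996.0), ('55:54:45', 1008.0), ('55:54:48', 19.0)]
for t, desc in events(samples):
    print(t, desc)
```

Run over the tank 2 series, the sketch flags the rise through the limit and then the collapse, which is precisely the rise-then-plunge pattern the digital display left the controllers to reconstruct after the fact.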
It is reported that the relevant controller experienced a continuing nightmare for 2 weeks following the
incident, in which, when the astronauts reported a problem, "he looked at the screen only to see a mass of
meaningless numbers." Finally, a new version of the dream came — he looked at the critical digitals "before
the bang and saw the pressure rising. ... Then the tank blew, and he saw the pressure drop and told Flight
exactly what had happened" (Murray & Cox, 1989, p. 407).
The poor representation could be compensated for through human adaptability and knowledge; in other
words, as Norman (1988) likes to put it, knowledge-in-the-head can compensate for the absence of knowledge-
in-the-world. However, what is the point of the computer as a medium for the display of data if it does not
reduce practitioner memory loads? In fact, in computer system after computer system (e.g., Woods et al., 1991),
we find that despite the availability of new computational and graphic power, the end result is an increase in
demands on practitioner memory. The contrast could not be greater with studies of successful, but often
technologically simple, cognitive artifacts, such as Hutchins (1991), which reveal how effective cognitive tools
off-load memory demands, support attentional control, and support the coordination of cognitive work across
multiple agents.
To begin to move toward better representations that do not obscure the perception of events in the
underlying system, there are three inter-related critical criteria in representation design (Woods, in preparation):
1. Put data into context: (a) put a given datum into the context of related values; (b) collect and integrate data
about important domain issues. Data are informative based on relationships to other data, relationships to
larger frames of reference, and relationships to the interests and expectations of the observer. The challenge is
the context-sensitivity problem — what is interesting depends on the context in which it occurs.
2. Highlight changes and events. Representations should highlight change/events and help reveal the
dynamics of the monitored process. Events are temporally extended behaviors of the device or process involving
some type of change in an object or set of objects. One key question is to determine what are 'operationally
interesting' changes or sequences of behavior. Examples include highlighting approach to a limit,
highlighting movement and rate of change, emphasizing what event will happen next, and highlighting
significant domain events (e.g., Woods & Elias, 1988). Representing change and events is critical because the
computer medium affords the possibility of dynamic reference -- the behavior of the representation can refer to
the structure and behavior of the referent objects and processes.
3. Highlight contrasts. Representations should highlight and support observer recognition of contrasts.
Meaning lies in contrasts -- some departure from a reference or expected course. Representing contrast means that
one indicates the relationship between the contrasting objects, states, or behaviors. One shows how the actual
course of behavior follows or departs from a reference or expected sequence of behavior given the relevant
context. Representing contrast signals both the contrasting states of behavior and their relationship (how
behavior departs from or conforms to the contrasting case). In this way, one is, in effect, highlighting
anomalies. By this, I mean one indicates what is anomalous (the contrast), as opposed to simply indicating that
some unspecified thing in general is anomalous. For example, coding a number or icon red shows that some
anomaly is present, but it does not show the contrast of what is anomalous relative to what (cf. Woods, 1992).
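As a rough illustration of these criteria (a hypothetical sketch, not drawn from any of the systems discussed here; the names and values are invented), a display element can carry a datum together with its reference context, so that the representation itself expresses what is anomalous relative to what, rather than coding a lone value red:

```python
from dataclasses import dataclass

@dataclass
class ContextualReading:
    """A datum displayed with its context, not as an isolated signal."""
    name: str
    value: float
    expected: float   # reference/expected value in this context
    limit: float      # operational limit being approached

    def describe(self) -> str:
        # Criterion 1: the datum is shown relative to related values.
        # Criterion 3: the contrast (actual vs. expected) is explicit,
        # instead of an unspecified "something is anomalous" red icon.
        departure = self.value - self.expected
        margin = self.limit - self.value
        return (f"{self.name}: {self.value} "
                f"(departs {departure:+.1f} from expected {self.expected}; "
                f"{margin:.1f} below limit {self.limit})")

reading = ContextualReading("tank pressure", 92.0, expected=80.0, limit=100.0)
print(reading.describe())
```

The point of the sketch is only that the relationship (actual versus reference, margin to limit) is computed and represented, not left for the observer to derive datum by datum.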
The relation to ecological perception is obvious: Event perception is at the heart of situation assessment and
recognition-driven processing. As researchers in ecological perception have pointed out, events and sequence are
more than a mere succession of static views. Ongoing events are dynamic patterns that directly specify the type
of event. When the pattern is available (represented in the medium), there is no need to abstract or derive it,
and the event structure is available even with discontinuous eye fixations (there is minimal cognitive load in
re-orienting after a glance away).
But given that the computer representation is free from the physical constraints acting on the referent
objects, support for event perception in the computer medium requires the designer to actively identify
operationally interesting changes or sequences of behavior and to actively develop representations that
highlight these events to the observer given the actual task context. The default representations typically
available do not make interesting events directly available for the practitioner to observe. Instead, the typical
computer displays of data force the practitioner into a serial deliberative mode of cognition to abstract change
and events from the displayed data (typically digital representations of sensed data).
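To make the point concrete (an illustrative sketch only; the thresholds and data are invented, and a real design would derive "operationally interesting" from a cognitive task analysis of the domain): detecting an event such as a rapid rise toward a limit requires computation over successive samples, and this is exactly the abstraction that a series of bare digital readouts leaves to the practitioner.

```python
def detect_events(samples, limit, rate_threshold):
    """Scan successive sensor samples and flag operationally
    interesting events: rapid change, and approach to a limit.
    (Hypothetical thresholds for illustration only.)"""
    events = []
    for i in range(1, len(samples)):
        rate = samples[i] - samples[i - 1]
        if abs(rate) >= rate_threshold:
            events.append((i, f"rapid change ({rate:+d}/sample)"))
        # crossing 90% of the limit counts as "approaching limit"
        if samples[i - 1] < limit * 0.9 <= samples[i]:
            events.append((i, "approaching limit"))
    return events

pressures = [70, 72, 75, 86, 93]   # rising toward a limit of 100
for i, event in detect_events(pressures, limit=100, rate_threshold=10):
    print(f"sample {i}: {event}")
```

A representation built on such event detection can refer to the behavior of the process over time; the default digital display forces the observer to perform this computation mentally.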
6.3 Towards a Context-Bound Science of Human-Machine Systems
6.3.1 What is Informative?
The ubiquitous computerization of the modern world has tremendously advanced our ability to collect, transmit,
and transform data. In all areas of human endeavor, we are bombarded with computer processed data,
especially when anomalies occur. The problem of our day is data overload — computerization of process control
centers generates huge networks of computer displays (a European computerized nuclear power control room
containing over 16,000 displays is scheduled to come online in a few years; Easter, 1991), huge databases are
generated about the performance of telecommunication networks, and so on. But our ability to digest and
interpret these data has failed to keep pace with our capacity to generate and manipulate them. Practitioners get
lost in large networks of computer-based display frames; they experience keyhole effects in trying to monitor
dynamic systems through the narrow viewports of windowed computer screens; they are overwhelmed by the
massive field of available data and fail to focus on the data critically important to a specific context.
Fundamentally, ecological perception sees the problem of perception in terms of understanding how meaning
is specified and how we find what is meaningful or significant in the perceptual field (von Uexkull, 1934), for
example, the perception of affordances. In the ecological view, perceptual systems are concerned with
extracting meaningful information, rather than serving as a mere encoding and transmission front end for cognition.
Similarly, the problem for human-machine systems is that the technological view emphasizes data encoding,
manipulation, and transmission, not the processes involved in meaning extraction. Given the properties of the
computer as a medium — narrow keyhole, the accumulation of ever larger amounts of data — the critical
research issue is understanding how we focus in on what is interesting or significant (Doyle, Sellers, & Atkinson,
1989; Woods, 1984, 1986).
All of this turns on how one answers the question of what is informative. Information is not a thing-in-
itself, but rather a relation between the data, the world the data refer to, and the observer's expectations,
intentions, and interests. As a result, informativeness is not a property of the data field alone, but is a relation
between the observer and the data field. The important point is that there is a significant difference between
the available data and the meaning or information that a person extracts from that data (e.g., Woods, 1986).
The available data are raw materials or evidence that the observer uses and evaluates to answer questions
(questions that can be vague or well formed, general, or specific). The degree to which the data help answer
those questions determines the informativeness or inferential value of the data. Thus, the meaning associated
with a given datum depends on its relationship to the context or field of surrounding data, including its
relationship to the objects or units of description of the domain (what object and state of the object is referred
to), to the set of possible actions, and to perceived task goals (after Gibson, what that object/state affords the observer).
Thus, all issues in representation design (Woods, in preparation) revolve around putting data into context.
The significance or meaning extracted from an observed datum depends on the relationship of that datum to a
larger set of data about the state of the domain, goals, past experience, possible actions, and so on — the context
in which the datum occurs. This means that processing of an observed value or change on one data channel
(extracting meaning from such observations) depends on contact with a variety of other information derived
from checking, being aware of, remembering, or assuming the state of other data.
Not only does a datum gain significance in relation to a set of other data, but also what data belong in this
relevance set will change as a function of the state of the domain and the state of the problem solving process.
The above points — (a) a set of contextual data is needed to extract meaning from a datum, and (b) the data in
this relevance set change with both system state and the state of the problem solving process — define the
context-sensitivity problem (Woods, 1986, 1991). The amount of context-sensitivity present in a particular
domain of application is a major cognitive demand factor that has profound implications for representation
design and human performance. If the relevance sets are limited and do not change much with context, then
many of the challenges in representation design are relaxed, that is, practitioners will be able to extract needed
information, even from otherwise problematical representations. However, if the relevance sets are larger and
are sensitive to changes in context (as in time-pressured, high-consequence applications), then the problem
representation principle and its inverse become particularly important contributors to human interaction with
the computer display system and human performance at domain tasks. The quality of a representation depends
on how it affects the cognitive processes involved in extracting meaning given the context-sensitivity problem.
Poor representations present available data as individual signals without relating a signal to its context. The
observer must acquire, remember, or build the context of interpretation datum by datum with the associated
possibilities for incompleteness or error and with the associated mental workload (cf. Doyle et al., 1989;
Woods & Elias, 1988, for examples of context-sensitive representations).
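The context-sensitivity problem can be caricatured in a few lines of code (a toy sketch; the states and channel names are invented for the example): the set of data relevant to interpreting one channel is a function of system state, so a display that always presents the same fixed ensemble of signals will be wrong in some contexts.

```python
# Toy illustration: which data belong in the "relevance set" for
# interpreting a given reading depends on the state of the process.
# (States and channel names are hypothetical.)
RELEVANCE_SETS = {
    "startup":    ["heater power", "flow rate"],
    "full power": ["reactor power", "flow rate", "pressure"],
    "shutdown":   ["decay heat", "residual flow"],
}

def context_for(datum: str, system_state: str) -> list:
    """Return the supporting data needed to interpret `datum`
    in the current system state (empty if the state is unknown)."""
    return RELEVANCE_SETS.get(system_state, [])

# The same observed value calls for different supporting context:
print(context_for("coolant temperature", "startup"))
print(context_for("coolant temperature", "full power"))
```

The design implication is the one stated above: when relevance sets are large and shift with context, the representation must do the work of assembling the context of interpretation, or the observer must.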
6.3.2 Agent-Environment Mutuality
Ecological perception has also pointed out that the problem in perception is not that the scene is underspecified,
requiring cognition to make up for an impoverished retinal image. Instead, if one considers actual visual scenes,
the issue is that there is a multiplicity of cues available for perceptual systems; that is, the critical
properties of the scene are overspecified (Cutting, 1986). Similarly, the problem in human-machine system
performance is not a lack of data, but rather the difficulties associated with finding the right data at the right
time out of a huge field of potentially relevant data (Woods, 1986).
The multiplicity of cues available in natural problem-solving domains creates the problem of deciding
what counts as the effective stimulus. Psychologists cannot simply assert the grain of analysis or the specific
driving stimuli in a particular behavioral situation without risking falling into the psychologist’s fallacy of
William James in which the psychologist’s reality is confused with the psychological reality of the human in
his or her problem-solving world. As Gibson saw, meeting this challenge required new ways and efforts to
characterize the stimulus world based on an organism-environment mutuality. Meeting this challenge in
person-machine systems also requires new ways and efforts to characterize the demands of problem solving
domains based on an agent-environment mutuality (Rasmussen, 1986; Woods, 1988). Whereas Gibson called this
ecological physics, in human-machine systems it has generally gone under the name cognitive task analysis.
This is a set of empirically based investigations to model the mutual shaping between (a) the cognitive
demands/constraints of the domain and (b) the strategies evolved by groups of practitioners given (c) the
cognitive tools and resources available in that field of practice (for an outstanding example, see Hutchins, in press).
Remember that the design challenge for creating effective representations is to set up the mapping between
domain referents and visual tokens in the computer medium so that the observer can extract information about
task-meaningful semantics. One kind of error in representation design is failure to map task meaningful domain
semantics into the structure and behavior of the representation. To avoid this, however, requires that designers
directly confront the problem of cognitive task analysis (Rasmussen, 1986): identifying what is informative,
what are the interesting properties and changes of the domain referents given the goals and task context of the
team of practitioners.
Note the seeming paradox at the heart of the agent-environment mutuality assumption. In order to
understand the psychology of human information processing, one must understand the nature of actual problems
(i.e., information processing tasks) as it relates to the possibilities for information processing. For ecological
perception, agent-environment mutuality meant the abandonment of the minimalist research strategy in
perception based on excessive simplification of perceptual stimuli (e.g., tachistoscopic research) and a
commitment to studying the properties of complex perceptual situations as they relate to the potential for
perception by an organism. As ecological perception led to the need for a better understanding of the stimulus
world (ecological physics), progress in understanding aided cognition requires progress in understanding the
properties of its stimulus world: an ecology of problems. To do this will require development of new methods
and approaches to understand the dynamic interplay of people and technology (e.g., Hutchins, 1990).
Adopting the agent-environment mutuality assumption for human-machine systems forces a revolutionary
shift with regard to normal disciplinary boundaries and research agendas. Technologists work on problems
related to expanding the boundaries of what machines can do autonomously. However, data overload is a
problem concerned with the interaction of people and technology in cognitive work. Experimental psychologists
study human information processing, but almost always divorced from any technological context. They do not
study how people create, use, and adapt cognitive tools to assist them in solving problems, or, what could be
called, aided information processing. But problems like data overload or mode errors only exist at the
intersection of people and technology.
6.3.3 The Context-Bound Approach to Human-Machine Systems
At the risk of oversimplifying, one can think of human factors as divided into two types. One is a context-
free approach to studying human-machine systems. It is characterized by studies of generic tasks; user modeling is
a critical focus; results are organized by domain of application (aviation, consumer products, medical, etc.) or
technological developments (e.g., hypertext). The research methods are laboratory–based hypothesis testing.
The subjects tend to be naive and passive relative to the test tasks. The test tasks bear no or only a superficial
relationship to the tasks and task contexts in the actual target domain of reference. The relationship between
basic and applied research is seen as a pipeline in which basic work eventually flows toward applications.
The context-bound approach, on the other hand, is based on a commitment to study complex human-system
interactions directly. Its methods, data, concepts, and theories are all bound to the situations under study, that
is, to the context (Hutchins, in press; Woods, 1993a). In this approach it is axiomatic that one cannot separate
the study of problem solving from analysis of the situations in which it occurs (Lave, 1988, p. 42). Hence, it has
become popular to refer to the context-bound approach as situated cognition (cf. Suchman, 1987). This approach
is characterized by studies of specific meaningful tasks as carried out by actual practitioners; models of errors
and expertise in context are a critical focus; results are organized by cognitive characteristics of the interaction
between people and technology (e.g., distributed or cooperative cognition, or how practitioners shape the
information processing tools that they use). The research methods are based on field study techniques and
detailed protocol analysis of the process of solving a problem. The study participants are active skilled
practitioners. Understanding the cognitive task—practitioner strategy relationship and how practitioners
adapt cognitive tools to aid them in their work— is fundamental. The relationship between basic and applied
research is seen as complementary in which growing the research base and developing effective applications
are mutually interdependent (Woods, 1993a). Recent examples of the context-bound approach are Mitchell’s
studies of satellite control centers (e.g., Mitchell & Saisi, 1987), my studies of nuclear power emergency
operations and new support systems (e.g., Woods, O'Brien, & Hanes, 1987); Klein’s studies of recognition-driven
decision making in several domains (e.g., Klein et al., 1993); Cook’s studies of computerized surgical operating
room devices (e.g., Cook et al., 1990, 1991; Moll van Charante et al., 1993), Hutchins’s studies of navigation
(e.g., Hutchins, 1990; in press); P. Smith and J. Smith’s studies revolving around blood matching decisions in
immunohematology (e.g., Smith et al., 1991); and several studies of cognitive activities in commercial airline
cockpits (Hutchins, 1991; Sarter & Woods, 1994).
To say that the study of human-machine systems can and should be context bound is not simply to call for
more applied studies in particular domains. "It is . . . the fundamental principle of cognition that the
universal can be perceived only in the particular, while the particular can be thought of only in reference to the
universal” (Cassirer, 1923/1953, p. 86). As Hutchins puts it:
"There are powerful regularities to be described at a level of analysis that transcends the details of the
specific domain. It is not possible to discover these regularities without understanding the details of
the domain, but the regularities are not about the domain specific details, they are about the nature of
human cognition in human activity" (Hutchins, 1992, personal communication).
This reveals the proper complementarity between so-called basic and applied work in which the experimenter
functions as designer and the designer as experimenter (Woods, 1993a). "New technology is a kind of
experimental investigation into fields of ongoing activity. If we truly understand cognitive systems, then we
must be able to develop designs that enhance the performance of operational systems; if we are to enhance the
performance of operational systems, we need conceptual looking glasses that enable us to see past the unending
variety of technology and particular domains” (Woods & Sarter, 1993).
Each approach is subject to risks and dangers. The challenges for a context-bound science of human-machine
systems are (a) methods and results for building cognitive task models (the ecology of problems), (b) methods
and theory building to generate generalizable results from context-bound studies (producing distilled results
transportable across scenarios, participants, and domains rather than just diluted motherhood
generalizations), and (c) methods and theories to stimulate critical growth of knowledge across context-bound studies.
It is the context-bound approach that can draw on analogies to ecological perception. For example, what
follows is a listing of the research program of a context-bound human factors expressed in terms parallel to the
ecological perception and action research program (based on Pittenger, 1991):
1. Discovery of the events/information that serves to guide the observer's action in the domain.
2. Specification of the information in the domain which supports perception/identification of events.
3. Design of a representation of the domain that supports perception and extraction of the
information/events important to action.
4. Testing that the information/events are perceivable/extractable and that they are useful in guiding action.
6.3.4 The Adaptive Practitioner
In developing new computer-based information technology and automation, the conventional view seems to be
that new technology makes for better ways of doing the same task activities. We often act as if the domain
practitioner were a passive recipient of the resulting operator aids, the user of what the technologist provides.
However, this view overlooks the fact that the introduction of new technology represents a change from one
way of doing things to another. The design of new technology is always an intervention into an ongoing world
of activity. It alters what is already going on — the everyday practices and concerns of a community of people
— and leads to a resettling into new practices (Flores, Graves, Hartfield, & Winograd, 1988, p. 154).
Practitioners are not passive in this process of accommodation to change. Rather, they are an active adaptive
element in the person-machine ensemble, usually the critical adaptive portion. Studies show that practitioners
adapt information technology provided for them to the immediate tasks at hand in a locally pragmatic way,
usually in ways not anticipated by the designers of the information technology (Cook et al., 1990; Roth,
Bennett, & Woods, 1987; Flores et al., 1988; Hutchins, 1990). Tools are shaped by their users.
One of the forces that drive user adaptations is clumsy automation (Wiener, 1989). One of several forms of
the clumsy use of technology occurs when the benefits of the automation accrue during workload troughs, and
the costs of automation occur during high-criticality or high-tempo operations (Woods et al., 1994; Woods, 1993b).
Practitioners (commercial pilots, anesthesiologists, nuclear power operators, operators in space control
centers) are responsible not just for device operation, but also for the larger system and performance goals of the
overall system. Practitioners tailor their activities to insulate the larger system from device deficiencies and
peculiarities of the technology. This occurs, in part, because practitioners inevitably are held accountable for
failure to correctly operate equipment, diagnose faults, or respond to anomalies, even if the device setup,
operation, and performance are ill suited to the demands of the environment. This creates the paradoxical
situation in which practitioners’ adaptive, coping responses often help to hide the corrosive effects of clumsy
technology from designers.
Again there is a parallel to research in perception and, especially, ecological perception. The human is an
adaptive, active perceiver, not a passive element. We will begin to understand human-machine systems only
when we begin to understand the adaptive interplay of practitioner and tools in the course of meeting task
demands. Unfortunately, this requires, just as Gibson demanded of the minimalist tachistoscopic school of work
in perception, a paradigm shift for work on human-machine systems. It demands that researchers examine
problem solving in situ — in complex settings, in which significant information processing tools are available to
support the practitioner and in which domain-knowledgeable people are the appropriate study participants.
The parallel to Gibson and ecological perception can be overdrawn with respect to the study of human-machine
systems. The agent-environment mutuality assumption is (or should be) common to both endeavors. It is fairly
easy to draw analogies between concepts in ecological perception and the ideas of some researchers in human-
machine systems: the user as an active adaptive practitioner, the shift in the sense of what is informative, the
search for meaning as a fundamental parameter in human-computer interaction, the need for a new way to
characterize problem-solving habitats, the equivalent of an ecological physics, the multiplicity of cues
available in natural problem-solving habitats, and the need to take into account the dynamism of real
problems. But at another level the appeal of the term ecology of human-machine systems is based on the
perceived need for a paradigm shift in human-machine systems — a parallel to the Gibsonian paradigm shift
in research on perception and action. The paradigm shift is an abandonment of the context-free approach and
methods in the study of human-machine systems and a commitment to the methods and agenda of a context-
bound science of human-machine systems.
Acknowledgments
This work was supported by the Aerospace Human Factors Research Division of the NASA Ames Research
Center under Grant NCC2-592 (Dr. Everett Palmer, technical monitor) and by the NASA Johnson Space
Center (Dr. Jane Malin, technical monitor).
References
Becker, R. A., & Cleveland, W. S. (1987). Brushing scatterplots. Technometrics, 29, 127–142.
Cassirer, E. (1953). The philosophy of symbolic forms, Vol. 1: Language (R. Manheim, Trans.). New Haven,
CT: Yale University Press. (Original work published 1923).
Cook, R. I., Potter, S. D., Woods, D., & McDonald, J.S. (1991). Evaluating the human engineering of
microprocessor controlled operating room devices. Journal of Clinical Monitoring, 7, 217–226.
Cook, R. I., Woods, D. D., & Howie, M. B. (1990). The natural history of introducing new information
technology into a dynamic high-risk environment. Proceedings of the Human Factors Society, 34th
Annual Meeting. Santa Monica, CA.
Cutting, J. E. (1986). Perception with an eye for motion. Cambridge MA: MIT Press.
Doyle, R., Sellers, S., & Atkinson, D. (1989). A focused, context sensitive approach to monitoring. Proceedings
of the Eleventh International Joint Conference on Artificial Intelligence. IJCAI.
Easter, J. R. (1991). The role of the operator and control room design. In J. White and D. Lanning (Eds.),
European nuclear instrumentation and controls, (Rep. PB92–100197). World Technology Evaluation
Center, Loyola College, National Technical Information Service.
Flores, F., Graves, M., Hartfield, B., & Winograd, T. (1988). Computer systems and the design of
organizational interaction. ACM Transactions on Office Information Systems, 6, 153–172 .
Freund, P. R., & Sharar, S. R. (1990). Hyperthermia alert caused by unrecognized temperature monitor
malfunction. Journal of Clinical Monitoring, 6, 257.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston: Houghton-Mifflin.
Gonzalez, E. G., & Kolers, P. A. (1982). Mental manipulation of arithmetic symbols. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 8, 308–319.
Goodstein, L. (1981). Discriminative display support for process operators. In J. Rasmussen and W. Rouse (Eds.),
Human detection and diagnosis of system failures. New York: Plenum Press.
Henderson, A., & Card, S. (1987). Rooms: The use of multiple virtual workspaces to reduce space contention in a
window-based graphical interface. ACM Transactions on Graphics, 5, 211–243.
Hochberg, J. (1986). Representation of motion and space in video and cinematic displays. In K. R. Boff, L.
Kaufman, & J. P. Thomas, (Eds.), Handbook of human perception and performance, I. New York: Wiley.
Hutchins, E. (1980). Culture and inference. Cambridge, MA: Harvard University Press.
Hutchins, E. (1990). The technology of team navigation. In J. Galegher, R. Kraut, and C. Egid (Eds.),
Intellectual teamwork: Social and technological foundations of cooperative work. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Hutchins, E. (1991). How a cockpit remembers its speed (Technical Report). La Jolla, CA: Distributed Cognition
Laboratory, University of California, San Diego.
Hutchins, E. (in press). Cognition in the wild. Cambridge, MA: MIT Press.
Johnson, E., & Payne, J. W. (1985). Effort and accuracy in choice. Management Science, 31, 395–414.
Klein, G., Orasanu, J., Calderwood, R., and Zsambok, C. E. (1993). (Eds.), Decision making in action: Models and
methods. Norwood, NJ: Ablex.
Lave, J. (1988). Cognition in practice. New York: Cambridge University Press.
Mitchell, C., & Saisi, D. (1987). Use of model-based qualitative icons and adaptive windows in workstations
for supervisory control systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-17, 573-593.
Moll van Charante, E., Cook, R. I., Woods, D. D., Yue L., & Howie, M. B. (1993). Human-computer interaction
in context: Physician interaction with automated intravenous controllers in the heart room. In H.G.
Stassen (Ed.), Analysis, design and evaluation of man-machine systems 1992. New York: Pergamon Press.
Murray, C., & Cox, C. B. (1989). Apollo: The race to the moon. New York: Simon & Schuster.
Neisser, U. (1976). Cognition and reality. San Francisco: W. H. Freeman.
Norman, D. A. (1981). Categorization of action slips. Psychological Review, 88, 1-15.
Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.
Norman, D. A. (1993). Things that make us smart. Reading, MA: Addison-Wesley.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 14, 534-552.
Peirce, C. S. (1955). Abduction and induction. In J. Buchler (Ed.), Philosophical writings of Peirce. London:
Dover. (Original work published, 1903).
Pittenger, J. B. (1991). Cognitive physics and event perception: Two approaches to the assessment of people's
knowledge of physics. In R. R. Hoffman and D. S. Palmero (Eds.), Cognition and the symbolic processes:
Applied and ecological approaches. Hillsdale, NJ: Lawrence Erlbaum Associates.
Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive
engineering. New York: North-Holland.
Rasmussen, J., & Lind, M. (1981). Coping with complexity. In H. G. Stassen (Ed.), First European annual
conference on human decision making and manual control. New York: Plenum Press.
Rasmussen, J., & Rouse, W. (1981). (Eds.), Human detection and diagnosis of system failures. New York: Plenum Press.
Reason, J., & Mycielska, K. (1982). Absent minded? The psychology of mental lapses and everyday errors.
Englewood Cliffs, NJ: Prentice-Hall.
Reed, E. S. (1988). James J. Gibson and the psychology of perception. New Haven, CT: Yale University Press.
Roth, E. M., Bennett, K., & Woods, D. D. (1987). Human interaction with an intelligent machine. International
Journal of Man-Machine Studies, 27, 479–525.
Sarter, N., & Woods, D. D. (1994). Pilot interaction with cockpit automation II: An experimental study of
pilots' models and awareness of the flight management system. International Journal of Aviation
Psychology, 4, 1-28.
Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Smith, P. J., Smith, J. W., Svirbely, J., Krawczak, D., Fraser, J., Rudman, S., Miller, T., & Blazina, J. (1991).
Coping with the complexities of multiple-solution problems. International Journal of Man-Machine
Studies, 35, 429–453.
Suchman, L. A. (1987). Plans and situated actions: The problem of human-machine communication. Cambridge,
England: Cambridge University Press.
Tenney, Y. J., Jager Adams, M., Pew, R. W., Huggins, A. W. F., & Rogers, W. H. (1992). A principled approach
to the measurement of situation awareness in commercial aviation. (NASA Contractor Report No.
NAS1-18788). Hampton, VA: NASA Langley Research Center.
von Uexkull, J. (1957). A stroll through the worlds of animals and men. In C. Schiller, Ed. and Trans.,
Instinctive behavior. New York: International Universities Press. (Original work published, 1934).
Wiener, E. L. (1989). Human factors of advanced technology (glass cockpit) transport aircraft. (Technical
Report 117528), Washington, D. C.: NASA.
Winograd, T. (1987). Three responses to situation theory. Technical Report CSLI-87-106, Center for the Study of
Language and Information, Stanford University.
Woods, D. D. (1984). Visual momentum: A concept to improve the cognitive coupling of person and computer.
International Journal of Man-Machine Studies, 21, 229–244.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini, and D. D. Woods
(Eds.), Intelligent decision support in process environments. (pp. 153–174). New York: Springer-Verlag.
Woods, D. D. (1988). Coping with complexity: The psychology of human behavior in complex systems. In L. P.
Goodstein, H. B. Andersen, and S. E. Olsen (Eds.), Mental models, tasks and errors. London: Taylor & Francis.
Woods, D. D. (1991). The cognitive engineering of problem representations. In G. R. S. Weir and J. L. Alty
(Eds.), Human-computer interaction and complex systems. London: Academic Press.
Woods, D. D. (1992). The alarm problem and directed attention. Cognitive Systems Engineering Laboratory
Report 92-TR-06, Department of Industrial and Systems Engineering, The Ohio State University, Columbus, OH.
Woods, D. D. (1993a). Process tracing methods for the study of cognition outside of the experimental
psychology laboratory. In G. A. Klein, J. Orasanu, R. Calderwood, and C. E. Zsambok (Eds.), Decision
making in action: Models and methods. Norwood, NJ: Ablex.
Woods, D. D. (1993b). The price of flexibility in intelligent interfaces. Knowledge-Based Systems, 6, 1-8.
Woods, D. D. (in preparation). Visualizing function: The theory and practice of representation design in the
computer medium. Columbus, OH: Cognitive Systems Engineering Laboratory, Ohio State University.
Woods, D. D., & Elias, G. (1988). Significance messages: An integral display concept. Proceedings of the
Human Factors Society, 32nd Annual Meeting. Santa Monica, CA.
Woods, D. D., Johanssen, L., Cook, R. I., & Sarter, N. (1994). Behind human error: Cognitive systems, computers
and hindsight. Crew Systems Ergonomic Information and Analysis Center, Wright-Patterson AFB, OH
(State of the Art Report).
Woods, D. D., O'Brien, J., & Hanes, L. F. (1987). Human factors challenges in process control: The case of nuclear
power plants. In G. Salvendy (Ed.), Handbook of Human Factors/Ergonomics. New York: Wiley.
Woods, D. D., Potter, S. S., Johannesen, L., & Holloway, M. (1991). Human interaction with intelligent
systems: Trends, problems, new directions. Cognitive Systems Engineering Laboratory Report, prepared
for NASA Johnson Space Center, Washington, D. C.
Woods, D. D., & Sarter, N. (1993). Evaluating the impact of new technology on human-machine cooperation. In
J. Wise, V. D. Hopkin, and P. Stager (Eds.), Verification and validation of complex and integrated
human-machine systems. New York: Springer-Verlag.
Woods, D. D., Watts, J., & Potter, S. D. (1993). How not to have to navigate through way too many displays.
Cognitive Systems Engineering Laboratory Report 93-TR-02, Department of Industrial and Systems
Engineering, The Ohio State University, Columbus, OH.
Woods, D. D., Wise, J. A., & Hanes, L. F. (1981). An evaluation of nuclear power plant safety parameter
display systems. Proceedings of the Human Factors Society, 25th Annual Meeting. Human Factors
Society, Santa Monica, CA.
Figure 6.2. Symbol Mapping Principle