HUMAN FACTORS, 1995, 37(1), 5-19
How in the World Did We Ever Get into That
Mode? Mode Error and Awareness in
Supervisory Control
NADINE B. SARTER,1 and DAVID D. WOODS, Cognitive Systems Engineering Laboratory, Ohio State University
New technology is flexible in the sense that it provides practitioners with a large
number of functions and options for carrying out a given task under different
circumstances. However, this flexibility has a price. Because the human supervisor
must select the mode best suited to a particular situation, he or she must know
more than before about the operation of the system as well
as satisfy new monitoring and attentional demands to track which mode the au-
tomation is in and what it is doing to manage the underlying processes. When
designers proliferate modes without supporting these new cognitive demands, new
mode-related error forms and failure paths can result. Mode error has been dis-
cussed in human-computer interaction for some time; however, the increased ca-
pabilities and the high level of autonomy of new automated systems appear to
have created new types of mode-related problems. We explore these new aspects
based on results from our own and related studies of human-automation interac-
tion. In particular, we draw on empirical data from a series of studies of pilot-
automation interaction in commercial glass cockpit aircraft to illustrate the na-
ture, circumstances, and potential consequences of mode awareness problems in
supervisory control of automated resources. The result is an expanded view of
mode error that takes into account the new demands imposed by more automated
systems.
INTRODUCTION
New technology is increasing the potential for
automated resources to support human supervi-
sory controllers. The technology's inherent flex-
ibility allows designers to add a wide range of
capabilities so as to provide the practitioner
with a set of tools that can be used to optimize
system performance across a wide range of
circumstances.
1 Requests for reprints should be sent to Nadine B. Sarter, Ohio State University, Department of Industrial and Systems Engineering, 210 Baker Systems Building, 1971 Neil Ave., Columbus, OH 43210.
However, the same flexibility tends to create
and proliferate various modes of operation. This
proliferation of modes that can so easily accom-
pany new levels of automation in complex sys-
tems (i.e., systems involving a large number of
highly dynamic interacting subcomponents)
also creates new cognitive demands on practi-
tioners (Woods, 1993). Practitioners must know
more than they did before about how the system
works in each different mode and about how to
manage the new set of options in different oper-
ational contexts (Sarter and Woods, 1992, 1994).
New attentional demands are created as the
practitioner must keep track of which mode the
device is in so as to select the correct inputs
when communicating with the automation and
to track what the automation is doing, why it is
doing it, and what it will do next.
For example, an automated cockpit system
such as the Flight Management System (FMS) is
flexible in the sense that it provides pilots with a
large number of functions and options for car-
rying out a given flight task under different cir-
cumstances. Pilots can choose from at least five
different methods at different levels of automa-
tion to change altitude. This flexibility is usually
portrayed as a benefit that allows the pilot to
select the mode best suited to a particular flight
situation. However, this flexibility has a price:
The pilot must know about the functions of the
different modes, which mode to use when, how
to "bumplessly" switch from one mode to an-
other, and how each mode is set up to fly the
aircraft as well as keep track of which mode is
active. These new cognitive demands can easily
congregate at high-tempo and high-criticality
periods, thereby adding new workload at pre-
cisely those times when practitioners are most
in need of effective support systems. This situa-
tion has been described by Wiener (1989) as
"clumsy automation." The clumsy use of tech-
nological possibilities, such as the proliferation
of modes, creates the potential for new forms of
human-machine system failure, such as the air
crashes at Bangalore (e.g., Lenorovitz, 1990) and
Strasbourg (Ministere de l'Equipement, du Lo-
gement, des Transports et de l'Espace, 1992).
In a variety of studies, we have investigated
human-automation interaction in the contexts
of commercial "glass" cockpits (Sarter and
Woods, 1992, 1994), anesthesia management for
surgery (Cook, Potter, Woods, and McDonald,
1991; Moll van Charante, Cook, Woods, Yue, and
Howie, 1992), and nuclear power plant process
control (Woods, O'Brien, and Hanes, 1987).
Based on these and other studies, we think that
the classic concept of mode error as described in
the following section is inadequate to describe
the problems in human interaction with today's
automated resources. In this paper we extend
the concept of mode error to take into ac-
count the problems caused by new automation
capabilities.
MODE ERROR AND MODE AWARENESS
Classic Forms of Mode-Related Problems
Multiple modes in devices can create the po-
tential for mode error, which has been estab-
lished as one kind of problem in human interac-
tion with computerized devices (Lewis and
Norman, 1986) and as a basic type of erroneous
action in psychological taxonomies of error
forms (Norman, 1981). Norman (1988) summa-
rized the source of mode error simply by sug-
gesting that if one wishes to create or increase
the possibilities for erroneous actions, one way
is to "change the rules. Let something be done
one way in one mode and another way in an-
other mode" (p. 179). When this is the case, a
human user can commit an erroneous action by
executing an intention in a way that is appro-
priate to one mode when the device is actually in
another mode.
Note that mode error is inherently a human-
machine system breakdown, in that it requires
that the users lose track of which mode the de-
vice is in (or confuse which methods or actions
are appropriate to which mode) and requires a
machine for which the same actions and indica-
tions mean different things in different modes of
operation. Several studies have shown that hu-
man-computer interface design and evaluation
should identify computerized devices that have
a high potential for mode error (e.g., Cook et al.,
1991; Lewis and Norman, 1986), and several de-
sign techniques have been proposed to reduce
the chances for mode errors to occur (Monk,
1986; Sellen, Kurtenbach, and Buxton, 1992).
The original work on mode error was done pri-
marily with reference to relatively simple com-
puterized devices, such as word processors. The
erroneous actions in question were acts of com-
mission in carrying out self-paced tasks with de-
vices that reacted only to user inputs and com-
mands. Increases in the complexity and
autonomy of automated systems for event-
driven, dynamic task environments such as
commercial aviation flight decks and anesthesia
management for surgery have resulted in a pro-
liferation of system and interface modes.
Human supervisory control of automated re-
sources in event-driven task domains is quite
different from the applications in the original
research on mode error. Automation is often in-
troduced as a resource that provides the human
supervisor with a large number of modes of op-
eration for carrying out tasks under different cir-
cumstances. The human's role is to select the
mode best suited to a particular situation, but to
accomplish this, he or she must know more and
must meet new monitoring and attentional de-
mands to track which mode the automation is in
and what it is doing to manage the underlying
process. These cognitive demands can be partic-
ularly challenging in the context of highly auto-
mated systems, which can change modes on
their own, based on environmental inputs or for
protection purposes, independent of direct and
immediate instructions from the human super-
visor. This capability of highly automated sys-
tems drives the demand for mode awareness-
that is, for the ability of a supervisor to track
and to anticipate the behavior of automated
systems.
What is involved in maintaining mode aware-
ness is determined to a large extent by the de-
sign and capabilities of the automated resources
and especially the interface between the auto-
mation and the people in the system. How have
changes in automation and in the interface
between person and automated resources af-
fected mode awareness? How have the human's
role and tasks changed, and how can they be
supported?
The Complexity of Modes in Automated Systems
and the Challenge to Mode Awareness
In the past, most automated systems were
characterized by a fairly small number of
modes. In most cases, these modes provided the
passive background on which the operator
would act by entering target data and request-
ing system operations. Another characteristic of
these systems was that they would have only one
overall mode setting for each function to be per-
formed. Consequently, mode annunciations (in-
dications of the currently active mode and of
transitions between modes) could be dedicated
to one spatial location on the display. Finally,
consequences of a breakdown in mode aware-
ness tended to be fairly small, in part because of
the short time-constant feedback loops involved
in these systems. The operator seemed to be able
to detect and recover from erroneous actions rel-
atively quickly.
The flexibility of more advanced technology
allows automation designers to develop much
more complicated, mode-rich systems. Modes
proliferate as designers provide multiple levels
of automation and various optional methods for
many individual functions. The result is numer-
ous mode indications distributed over multiple
displays, each containing just that portion of the
mode status data corresponding to a particular
system or subsystem. Furthermore, the designs
allow for interactions across the various modes.
The increased capability of the automated re-
sources creates increased delays between user
input and feedback about system behavior.
These longer time-constant feedback loops make
it more difficult to detect and recover from er-
rors and challenge the human's ability to main-
tain awareness of the active and the armed
modes (in the active state the mode provides
guidance, whereas in the armed state the mode
has been preselected without providing guid-
ance yet), the contingent interactions between
environmental status and mode behavior, and
the contingent interactions across modes.
A very important trend relates to the input
sources that can evoke changes in system status
and behavior. Early systems would change their
mode status and behavior only in response to
operator input. More advanced technology,
however, responds to operator input as well as
situational and system factors. In the case of the
Flight Management System in highly automated
cockpits, a mode transition can occur as an im-
mediate consequence of operator input, but it
can also happen when a preprogrammed inter-
mediate target (e.g., a target altitude) is reached
or when the system changes its mode in order to
prevent the pilot from putting the aircraft into
an unsafe configuration.
Even the aspect of operator input has itself
become more complicated as the complexity of
the system of automated resources has in-
creased. Incidents and accidents have shown
that there is an increased risk of inadvertent ac-
tivation of modes by the operator. Of course, a
mode can be activated through deliberate, ex-
plicit selection by pushing the corresponding
button. In addition, however, taking an action
can also result in the inadvertent activation of a
different mode, depending on the system status
and environmental parameters at the time of
manipulation. The resulting system behavior
can be disastrous but may be missed by the op-
erator because of gaps or misconceptions in his
or her mental model of the system combined
with a lack of adequate feedback concerning
mode status and transitions.
An example of such an inadvertent mode ac-
tivation contributed to a major recent aviation
accident (the Bangalore crash; e.g., Lenorovitz,
1990). In that case, the pilot put the automation
into a mode called OPEN DESCENT during an
approach without realizing it. In this mode, air-
speed is controlled by pitch rather than thrust
(i.e., the throttles go to idle). In the desirable
mode for this phase of flight (i.e., SPEED), air-
speed is controlled by thrust. As a consequence
of going into OPEN DESCENT, the aircraft
could not sustain the glide path and maintain
the pilot-selected target speed at the same time.
The flight director bars commanded the pilot to
fly the aircraft well below the required profile to
try to maintain airspeed. It was not until 10 s
before impact that the crew discovered what
had happened-too late for them to recover with
engines at idle. How could this happen?
One contributing factor in this accident may
have been that there are at least five different
ways of activating the OPEN DESCENT mode.
The first two options involve the explicit manual
selection of the OPEN DESCENT mode. In one
of these cases, activation of OPEN DESCENT is
dependent on the automation being in a partic-
ular state. It can be selected by pulling the AL-
TITUDE knob after selecting a lower altitude, or
it can be activated by pulling the SPEED knob,
provided the aircraft is in the EXPEDITE mode
at that time.
The other three methods of activating the
OPEN DESCENT mode are indirect in that they
do not require the explicit manual selection of a
mode. Rather, they are related to the selection of
a new target altitude in a specific context or to
protections that prevent the aircraft from ex-
ceeding a safe airspeed. In the case of the Ban-
galore accident, for example, the fact that the
automation was in the ALTITUDE ACQUISI-
TION phase resulted in the activation of the
OPEN DESCENT mode when the pilot selected
a lower altitude. The pilot may not have been
aware of the fact that the aircraft was within 200
feet of the previously entered target altitude
(which is the definition of ALTITUDE ACQUISI-
TION). Consequently, he may not have expected
that the selection of a lower altitude at that
point would result in a mode transition. Because
he did not expect any mode change, he may not
have closely monitored his mode annunciations,
and he missed the transition.
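To make this kind of indirect, state-dependent activation concrete, the following sketch illustrates the logic described above. It is not drawn from any actual FMS specification: the function, parameters, and example altitudes are illustrative assumptions; only the 200-foot acquisition band and the mode names come from the text.

```python
# Hypothetical sketch, simplified from the description above: which vertical
# mode becomes active when the pilot selects a lower altitude depends on
# whether the automation is already in its ALTITUDE ACQUISITION phase
# (within 200 ft of the previously entered target). All names, parameters,
# and numbers below are illustrative assumptions, not an actual FMS design.

ACQUISITION_BAND_FT = 200  # "within 200 feet of the previously entered target"

def vertical_mode_after_lower_altitude(current_alt_ft, previous_target_ft):
    in_altitude_acquisition = (
        abs(current_alt_ft - previous_target_ft) <= ACQUISITION_BAND_FT
    )
    if in_altitude_acquisition:
        # Indirect activation: no explicit mode selection by the pilot, yet the
        # automation transitions to OPEN DESCENT (thrust to idle, speed on pitch).
        return "OPEN DESCENT"
    # Otherwise the recommended approach configuration, SPEED mode, remains,
    # with airspeed controlled by the autothrottles.
    return "SPEED"

# With illustrative numbers for a Bangalore-like scenario, the aircraft is
# within 200 ft of the old target, so selecting a lower altitude silently
# yields OPEN DESCENT rather than the expected SPEED mode:
print(vertical_mode_after_lower_altitude(current_alt_ft=4150,
                                         previous_target_ft=4000))
```

The point of the sketch is simply that the same pilot action maps to different modes depending on automation state the pilot may not be tracking.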
The Bangalore accident also illustrates an-
other mode-related problem: the multiple re-
quirements imposed on the pilots not only to
track the current active mode configuration and
to understand its implications but also to keep
track of other status information on the automa-
tion that may result in the indirect activation of
modes. In this case, the nature of feedback con-
cerning the status of the autoflight-related sys-
tems may have played a role. The pilot-flying
had disengaged his flight director at some point
during the approach and was assuming that the
pilot-not-flying would do the same thing. This
would have resulted in a mode configuration in
which airspeed would automatically have been
controlled by the autothrottles (the SPEED
mode, which is the recommended procedure for
the approach phase). However, the pilot-not-
flying never turned off his flight director. There-
fore, the OPEN DESCENT mode became active
when a lower altitude was selected.
The problem in this case is that each pilot re-
ceived an indication of the status of his own
flight director only. In other words, when the
pilot-flying turned off his flight director, the FD1
message on his primary flight display disap-
peared. However, he never had an indication of
the status of FD2 (the other pilot's flight direc-
tor) and therefore could not verify via his dis-
play whether FD2 was turned off as well.
The example of the Bangalore accident illus-
trates how a variety of factors can contribute to
a lack of mode awareness. Gaps or misconcep-
tions in the operator's mental model may pre-
vent him or her from predicting and tracking
indirect mode transitions or from understand-
ing the interactions between different modes. A
lack of salient feedback on mode status and tran-
sitions can also make it difficult to maintain
awareness of the current and future system con-
figuration. In addition to allocating attention to
the different displays of system status and be-
havior, the operator has to monitor environmen-
tal states and events, remember past instruc-
tions to the system, and consider possible inputs
to the system by another operator. If he or she
manages to monitor, integrate, and interpret all
this information, system behavior will appear
deterministic and transparent. However, current
system design and training do not seem to support
the operator in this task, and depending on cir-
cumstances, missing just one of these factors can
be sufficient to result in an automation surprise
and the impression of an animate system that acts
independently of operator input and intent.
Display modes are another factor aggravating
the problem of mode awareness. In some de-
vices, not only does the current mode configura-
tion determine which control functions become
activated by a given input, but these devices also
interpret user-entered target values differently
depending on the active display mode. In the
following example, one can see how this may
result in unintended system behavior.
In one current glass cockpit aircraft, pilots en-
ter a desired vertical speed or a desired flight
path angle via the same display. The interpreta-
tion of the entered value depends on the active
display mode. Although the verbal expressions
for different targets differ considerably (for ex-
ample, a vertical speed of 2500 feet per minute
versus a flight path angle of 2.5 deg), these two
targets look almost the same on the display. This
design requires the pilot to know about the pos-
sibility and implications of different display
modes, to remember the indications associated
with these modes, to check the currently active
setting, and to interpret the displayed target
value in light of the active mode. The design
thus creates a cognitively demanding task in-
stead of supporting an intuitive, mentally eco-
nomical apprehension of the active mode (see
Figure 1). In this case, the problem is further
aggravated by the fact that the pilot is increas-
ingly removed from the ongoing process, as pre-
viously available cues about system behavior,
such as moving throttle or engine noise, may have
been reduced or removed in the design process.
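A minimal sketch of the interpretation problem just described appears below. The display-mode labels follow the vertical speed and flight path angle modes discussed in the text, but the field name, the numeric encoding of the entry, and the function itself are assumptions for illustration, not the actual avionics logic.

```python
# Illustrative sketch only: the same entered target is interpreted differently
# depending on the active display mode, as described in the text. The encoding
# (hundreds of ft/min vs. tenths of a degree) is an assumption for illustration.

def interpret_vertical_target(entered_value, display_mode):
    """Return a human-readable reading of the entered vertical target."""
    if display_mode == "HDG V/S":
        # Vertical-speed mode: the entry is read in hundreds of feet per minute.
        return f"vertical speed of {entered_value * 100} feet per minute"
    if display_mode == "TRK FPA":
        # Flight-path-angle mode: the same entry is read in tenths of a degree.
        return f"flight path angle of {entered_value / 10:.1f} degrees"
    raise ValueError(f"unknown display mode: {display_mode}")

# The same dial entry stands for very different targets in the two modes:
print(interpret_vertical_target(25, "HDG V/S"))   # vertical speed of 2500 feet per minute
print(interpret_vertical_target(25, "TRK FPA"))   # flight path angle of 2.5 degrees
```

The sketch makes the design point visible: the burden of disambiguating nearly identical indications falls entirely on the pilot's knowledge of the active display mode.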
The behavior and capabilities of the machine
agent in human-machine systems have changed
considerably. In simpler devices, each system
activity was dependent on operator input; con-
sequently, in order for a lack of mode awareness
to become operationally significant, the opera-
tor had to act to evoke undesired system behav-
ior. In more automated systems, the level of an-
imacy of machine agents has dramatically
increased. Once activated, systems are capable
of carrying out long sequences of tasks autono-
mously. For example, advanced Flight Manage-
ment Systems can be programmed to automat-
ically control the aircraft from takeoff through
landing. Inadvertent mode settings and selec-
tions may not produce visible consequences for
a long time, thereby complicating the process of
error or failure detection. This creates the pos-
sibility of errors of omission (i.e., failure to in-
tervene) in addition to errors of commission as a
consequence of a lack of mode awareness.
Another complicating factor that makes it dif-
ficult to maintain awareness of the active mode
configuration is the fact that many systems al-
low for simultaneous use by multiple practitio-
ners rather than input by only one user. Track-
ing system status and behavior becomes more
Figure 1. Example of subtle differences in display modes from the flight deck of an advanced-technology aircraft (see text for discussion of the particular mode problem in this example).
difficult if it is possible for other users to inter-
act with the system without the need for consent
by all of the practitioners involved. This prob-
lem is most obvious when two experienced op-
erators have developed different strategies of
system use. When they have to cooperate, it can
be particularly difficult for them to maintain
awareness of the history of interaction with the
system, which may determine the effect of the
next system input.
All of the foregoing factors challenge a human
supervisor's ability to maintain mode awareness
in highly automated systems. The results of a
number of studies of human-automation inter-
action in a variety of domains have indicated
that problems in mode awareness are often a
consequence of technology-centered automation
(e.g., Cook et al., 1991; Moll van Charante et al.,
1992; Sarter and Woods, 1992, 1994). In the
following section, we will examine in more de-
tail the results of a series of studies on pilot-
automation interaction that illustrate the trends
in mode awareness problems in the context of
the mode-rich cockpit environment.
Some Empirical Results on Mode Awareness in
Pilot-Automation Interaction
The role of pilots in the modern glass cockpit
aircraft has shifted from direct control of the
aircraft to supervisory control of automated ma-
chine agents. One of the core automation sys-
tems in these cockpits is the Flight Management
System (FMS), which can be programmed by
the pilot to automatically follow a desired flight
path and profile from takeoff through landing.
To maintain awareness of the status and behav-
ior of the various modes of operation within the
FMS, pilots have to gather and integrate a vari-
ety of data from numerous displays in the cock-
pit. In addition to monitoring these nominal in-
dications of system targets and status, pilots
need to be able to interpret the indications to
extract what is implied about current and future
system and aircraft behavior. In other words,
the automation itself is becoming more of a dy-
namic process in which the indications are raw
data that require interpretation in order to be-
come information about current or future states.
Interpreting the raw indications requires that
the human supervisor has an adequate mental
model of the various automated modes and their
interrelationships and that he or she is knowl-
edgeable about how to use these as resources in
various contexts.
Given the fairly low rate of change in aircraft
behavior throughout large parts of the flight, the
pilot does not have to continuously monitor the
mode annunciations. Rather, he or she has to be
able to predict the occurrence of transitions in
system behavior in order to attend to the right
indications at the right time. During busy
phases of flight (e.g., final approach), numerous
changes in system status and behavior can occur
in a very short period. During this high-tempo
phase of flight, with a large number of concur-
rent tasks, the crew now has another set of cog-
nitive tasks to perform: monitoring and inter-
preting mode annunciations relative to expected
behavior.
In a series of studies of pilot-automation in-
teraction, we had the opportunity to investigate
the nature and circumstances of mode-related
problems in highly automated glass cockpit air-
craft. In one investigation, we built a corpus of
FMS-related problems that were encountered in
line operations (Sarter and Woods, 1992). For
this purpose, descriptions of automation sur-
prises were collected from experienced airline
pilots. A second converging line of study was to
observe pilots during their transition training
from a conventional to a glass cockpit aircraft-
that is, before they had a chance to adapt to the
system. Analysis of these corpus data suggested
that difficulties in mode awareness and gaps in
pilots' understanding of all of the modes of op-
eration and their interactions contributed to au-
tomation surprises and related supervisory con-
trol difficulties.
Based on the results from the corpus-
gathering studies, a field experiment was car-
ried out to examine pilot-automation interac-
tion more closely (Sarter and Woods, 1994).
Twenty airline pilots were asked to fly a mission
on a part-task flight simulator. The scenario was
designed to contain numerous tasks and events
that served as probes of pilots' mode awareness
and of their mental model of the automation.
This phenomenon-driven scenario design per-
mitted on-line data collection on various issues
regarding pilot-automation interaction. In addi-
tion, we were able to question the pilots about
their knowledge and assessments of the status
and behavior of the automation during low-
workload phases of the simulated flight and af-
ter completion of the simulation.
These studies provided consistent and con-
verging data that helped us to understand why
and under what circumstances pilots encounter
problems related to the interaction with cockpit
automation. Most of the observed difficulties
were related to lack of mode awareness and to
gaps in mental models of how the various auto-
mated modes work and interact. The problems
in coordination between pilot and automation
(e.g., automation surprises) occurred primarily
in the context of nonnormal, time-critical situa-
tions: for example, aborted takeoff, disengage-
ment from an automatic mode during approach
for collision avoidance, and loss of the glide
slope signal during final approach. In the case of
the aborted takeoff, 65% of the pilots were not
aware that the autothrottle system was in
charge of thrust control. Consequently, they did
not disengage the autothrottles in order to have
full manual control of the throttle setting. In the
debriefing, 15% of these pilots could describe
the active mode settings and the system activi-
ties during takeoff, but their knowledge was in-
ert; they had not been capable of applying this
knowledge to the ongoing situation. Overall,
only 4 of 20 participants responded completely
correctly in managing the automation during
the aborted takeoff, and 1 of these 4 pilots ex-
plained that he did so because he was trying to
comply with standard procedures, not because
he understood what was going on within the
automation.
In the second nonnormal situation, the pilots
had to comply quickly with an air traffic control
(ATC) request to disengage the automatic AP-
PROACH mode after localizer and glide slope
capture in order to change heading and altitude
to avoid a collision. Most of the pilots knew only
one of the several methods to disengage the
mode, and 14 pilots also "knew" at least one
inappropriate method that could lead to a de-
layed response to the ATC request. In the case of
the glide slope loss during final approach, about
half of the pilots were not aware of the conse-
quences of this event in terms of FMS behavior.
They could not explain the effects in the debrief-
ing, and some even had difficulty detecting the
occurrence of the problem during the ongoing
simulation.
Another interesting result of this study was
related to the future-oriented aspect of mode
awareness. Pilots sometimes had problems an-
ticipating system behavior and the associated
mode annunciations. For example, only five par-
ticipants knew when to expect the indication
that the GO-AROUND mode would be available.
Such failures to anticipate mode status and
transitions indicate a lack of mode awareness,
which degrades the pilot's ability to allocate at-
tention effectively and to detect errors, failures,
or miscommunications between pilot and
automation prior to explicit flight events-
automation surprises. The more experience pi-
lots had with the automation, the more they
were capable of applying their knowledge about
the advantages and shortcomings of the differ-
ent modes to manage the automated resources
in different contexts. Pilots with less glass
cockpit experience tended to utilize a single
strategy or mode over a wide range of flight
circumstances.
One could interpret this as an attempt to cope
with the complexity of the automation by ignor-
ing some modes and options, even in situations
in which the stereotypical strategy was less than
ideal relative to other strategies for managing
the automated resources. Finally, in several in-
stances, pilots instructed the automation by en-
tering new flight path targets but did not acti-
vate a mode of the automation to work on
acquiring these targets. They were surprised
when the aircraft did not respond as expected;
they did not realize or understand why their in-
structions to the automation had not resulted in
the desired change.
In a sense, this is a good example of how pilots
try to communicate with the system in a way
analogous to communication with another hu-
man agent. They assume that entering a desired
target value is sufficient for the system (as it
would be for a human crew member) to under-
stand that it is supposed to achieve this new tar-
get and which method it should use.
These investigations into one specific field of
activity illustrate a trend in human-machine co-
operation (Woods, 1993). Technology allows a
proliferation of modes of increasing complexity
and capability for autonomous activity, as illus-
trated in Figure 2. These changes create new
cognitive demands for human supervisory con-
trollers, demands that tend to congregate at
higher-tempo epochs, at which workload de-
mands are highest (see also Moll van Charante et
al., 1992, for similar results from the field of an-
esthesiology). The complexity of modes chal-
lenges the human supervisor's ability to track
and anticipate the behavior of the automation.
Difficulties in maintaining mode awareness oc-
cur primarily in situations in which mode be-
havior is complex or transitions are frequent.
Sources of Problems in Mode Awareness
The data on problems in mode awareness im-
ply that there are two kinds of contributing fac-
tors: "buggy" mental models and opaque indi-
cations of the status and behavior of the
automation. The former derives from a failure of
the designers to anticipate the new kinds of
knowledge demands created by the automation
and a failure to provide mechanisms to help
practitioners acquire and maintain this knowl-
edge in ways usable in operational contexts
(Spiro, Vispoel, Schmitz, Samarapungavan, and
Boerger, 1987). Buggy mental models can also
Figure 2. Example of the proliferation of modes from the primary flight display of an advanced-technology aircraft.
result from an inappropriate approach to train-
ing that does not acknowledge the need for ex-
ploration and experimentation in the process of
learning how these new systems work and how
to work these systems.
The problem of opaque, low-observability in-
terfaces derives from a failure of designers to
support the supervisor's increasingly challeng-
ing cognitive task of tracking the state and be-
havior of the automation as another kind of dy-
namic process within his or her scope of
responsibility (e.g., Norman, 1990). The indica-
tions of the nominal status of the automation are
a kind of data; the practitioner must interpret
these data to develop and maintain an assess-
ment of the automation as a process over time.
The data on problems in mode awareness
strongly imply that this cognitive demand is
poorly supported by the kinds of displays on the
automation currently provided to practitioners.
As Wiener (1989) puts it, the three most com-
monly asked questions on the highly automated
flight deck are what is it doing, why is it doing
that, and what will it do next? We would like to
add to this list a fourth question: How in the
world did we (i.e., all human and machine
agents cooperating in the cockpit) get into that
mode? The interpretation of data on the auto-
mation as process is apparently a cognitively de-
manding task rather than a mentally economi-
cal one, given the "strong and silent" character
of the machine agent. This is troublesome,
as this cognitive task is important during
high-tempo, high-workload, high-criticality
situations.
Coping with Mode Error and Aiding
Mode Awareness
This examination of mode awareness leads to
several strategic directions for responding to
problems in this cognitive task. First, one can
say that mode awareness problems are induced
by the complexity of the technological system.
Technological powers for automation are used
clumsily when cognitive and other demands on
the operational system created by new automa-
tion are ignored (Norman, 1990; Woods, 1993;
Woods, Johannesen, Cook, and Sarter, 1994).
This is what we mean by technology-centered automation
(see Billings, 1991, for an extensive dis-
cussion of technology-centered vs. human-
centered automation). Then one way to improve
the human-machine system is to reduce the op-
erational complexity induced by how technol-
ogy is deployed. In the case of mode awareness,
this can be stated clearly: Reduce the number
and complexity of the modes. However, there
may be a variety of pressures, such as market-
ing demands from a diverse set of customers or
the desire to optimize parameters such as preci-
sion or efficiency across different operational
circumstances, that reduce the designer's abil-
ity to counter mode proliferation through
simplification.
Two other directions for change probably are
tightly coupled in their implementation in a real
field of practice, though they can be discussed
separately. One is to support the new knowledge
demands created by increasingly complex auto-
mated resources through new approaches to
training human supervisory controllers. This re-
quires much more than simply adding a new list
of facts about how the automation works. In-
stead, training must be focused on knowledge
activation in context in order to avoid what we
are already seeing-inert knowledge, demon-
strated by the fact that the user can recite the
facts but fails to use the knowledge effectively in
real operational contexts (e.g., Feltovich, Coul-
son, Spiro, and Dawson-Saunders, in press). Train-
ing to enhance skill at controlling attention
would also be relevant here (Gopher, 1991).
The new knowledge demands require greater
investments in developing and teaching strate-
gies for working the system of automated re-
sources in varying operational contexts. Finally,
the knowledge demands of new levels of auto-
mation are strongly conditioned by a major con-
straint: If the automation is well engineered in a
narrow sense, it will work well in a variety of
routine situations; the problems of supervisory
control will be manifest in situations with com-
plicating factors that go beyond these routines.
However, these will be relatively infrequent, at
least for the individual practitioner.
Thus meeting the knowledge demands will re-
quire an investment in maintaining usable
knowledge relevant to the more difficult but in-
frequently occurring situations. In this, as in
many other cases of introducing new levels of
automation (e.g., Adler, 1986; Bereiter and
Miller, 1988), new automation produces new
training requirements.
Another direction for change is to develop new
forms of aiding mode awareness itself through
changes in the interface and displays that reveal
what the automation is doing, why it is doing it,
and what it will do next. The strategy is to pro-
vide better indications of what mode the system
is in and how future conditions may produce
changes without direct practitioner interven-
tion, as well as to support better detection of and
recovery from mode misassessments and mode
errors when they do occur. Some attempts to do
this have been made by changing the overall
format of a display in different modes or by
providing other salient visual indications of
transitions between modes. However, because
the practitioner's visual channels are often heav-
ily loaded in some fields of practice, signaling
mode changes through nonvisual channels such
as aural or kinesthetic feedback may be more
useful (Monk, 1986; Sellen et al., 1992). Another
concept is to provide displays that capture past
instructions to the automation and the corre-
sponding system behavior. Such "history of in-
teraction" displays could provide a visual trace
of past and even projected system behavior un-
der the current mode configuration. However,
such displays would have to be "intelligent," as
the future behavior of the automated systems is
contingent on future events in the environment.
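As a sketch of the kind of record such a "history of interaction" display might draw on, consider a simple log pairing each instruction, from any agent, with the mode configuration that followed. The record structure, field names, and example entries below are our own illustrative assumptions, not a fielded design.

```python
# Illustrative sketch of data a "history of interaction" display might keep:
# each entry pairs an instruction (from any human or machine agent) with the
# mode configuration that resulted. Structure and field names are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionEvent:
    time_s: float          # elapsed time of the event
    agent: str             # "pilot flying", "pilot not flying", "automation", ...
    instruction: str       # e.g., "selected altitude 2000 ft"
    resulting_mode: str    # active mode configuration afterwards

@dataclass
class InteractionHistory:
    events: List[InteractionEvent] = field(default_factory=list)

    def record(self, event: InteractionEvent) -> None:
        self.events.append(event)

    def trace(self) -> str:
        """A chronological trace answering 'how did we get into that mode?'"""
        return "\n".join(
            f"{e.time_s:7.1f}s  {e.agent:18s} {e.instruction:28s} -> {e.resulting_mode}"
            for e in self.events
        )

history = InteractionHistory()
history.record(InteractionEvent(410.0, "pilot flying", "selected altitude 3000 ft",
                                "ALTITUDE ACQUISITION"))
history.record(InteractionEvent(415.5, "pilot flying", "selected altitude 2000 ft",
                                "OPEN DESCENT"))
print(history.trace())
```

Even this simple trace shows why such a display would need to be "intelligent" to project forward: the past is a log, but the future depends on events that have not yet occurred.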
Research can offer several suggestions con-
cerning potentially fruitful countermeasures.
These include general strategies to effectively
and clearly signal mode status and changes,
such as using orienting auditory or tactile sig-
nals (Monk, 1986; Sellen et al., 1992), as well as
particular tips, such as displaying active targets
with mode annunciations. However, the human
error, cognitive engineering, and human-
computer interaction communities have barely
begun to study the relevant issues to provide the
necessary research base to drive or support
practical advice to designers. Developing such
advice probably requires that we advance our
understanding of how attention shifts across the
perceptual field in dynamic multitask domains
(e.g., Eriksen and Murphy, 1987; Jonides and
Yantis, 1988).
In this field of activity, shifting the focus of
attention does not refer to initial adoption of a
focus from some neutral waiting state (Kahne-
man, 1973). Instead, one reorients attentional fo-
cus to a newly relevant event from a previous
state in which attention was focused on other
data channels or on other cognitive activities
(such as diagnostic search, response planning,
and communication to other agents). We need to
understand how some practitioners develop a
facility in reorienting attention rapidly to new,
potentially relevant stimuli (Woods, 1992).
Thus, investigating how to aid mode awareness
and how to provide cognitive tools to avoid or
cope with mode-related problems is a fruitful
avenue for advancing our understanding of gen-
eral issues, such as the cognitive processes (e.g.,
control of attention, workload management, and
mental simulation) or, more simply, the panoply
of cognitive processes that go under the generic
label of situation awareness.
Another design-aiding path that has been pro-
posed to deal with mode-related problems is
forcing functions. Forcing functions are defined
as "something that prevents the behavior from
continuing until the problem has been cor-
rected" (Lewis and Norman, 1986, p. 420). Forc-
ing functions can take a variety of forms: the
system can prevent (or "gag") the user from ex-
pressing impossible intentions, it can react to
illegal actions by doing nothing, or it can ex-
plore with the user what the user's intention was
and then help translate this intention into a le-
gal action ("self-correct," "teach me," or "let's
talk about it"). The problem with such forcing
functions is that they require (a) that there is
only one legal action/strategy for each intention
or (b) that a system is capable of inferring the
user's intention and can compare that with his
or her input in order to judge the acceptability
of the input. Such a system would also need ac-
cess to information on the overall context, which
can help to determine whether or not an action
is appropriate. Without these capabilities, it
would have to question almost any action, just
in case, and run the hazard of overinterrupting
at the wrong times.
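The following sketch illustrates the styles of forcing function named above (gag the user, do nothing, or engage the user about intent). The function names and the crude rule used to judge a request are invented for illustration; the stand-in for intent inference is deliberately simplistic, since that inference is exactly the capability the argument above identifies as missing.

```python
# Illustrative sketch of forcing-function styles described in the text.
# The policy names mirror the text; the rule for judging a request and all
# function and variable names are assumptions for illustration only.

def forcing_function(requested_mode, inferred_intent, policy):
    """Decide how to respond to a mode request the system judges questionable."""
    acceptable = requested_mode == inferred_intent  # crude stand-in for intent inference
    if acceptable:
        return f"activate {requested_mode}"
    if policy == "gag":
        # Prevent the questionable request from being expressed at all.
        return "request blocked until the conflict is resolved"
    if policy == "ignore":
        # React to the illegal action by doing nothing.
        return "no action taken"
    if policy == "clarify":
        # Explore the user's intention and help translate it into a legal action.
        return f"did you mean {inferred_intent}? confirm to proceed"
    raise ValueError(f"unknown policy: {policy}")

print(forcing_function("OPEN DESCENT", "SPEED", policy="clarify"))
```

Without a trustworthy way to compute `inferred_intent` and the surrounding context, any such function must either interrupt too often or let questionable requests through.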
The last direction is to consider supervisory
control of automated resources as a cooperative
or distributed multiagent architecture. One co-
operative agent concept, "management by con-
sent," requires that the human members of the
team agree to changes in target or mode of con-
trol before they are activated. This cooperative
architecture could help the people in the system
to stay involved and informed about the activi-
ties of their automated partners. Forcing practi-
tioners to attend to automated system actions
could help them maintain a model or trace of all
prior system interactions and lead to better pre-
diction of future automated system behavior.
However, having all changes checked and
agreed to by every member of the team may
have limits. Is the automation offloading bur-
dens in this style of coordination? Are new bur-
dens created? If human checks and consent are
required for too many "obvious" changes, will
the task become mindless and automatic?
Should only "critical" changes require consent?
How does one determine which changes in
which contexts are the critical ones?
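A minimal sketch of the "management by consent" idea follows, with the notion of a "critical" change left as an explicit open parameter, since, as the questions above suggest, deciding which changes require consent is itself the hard design problem. All names, the criticality predicate, and the example policy are illustrative assumptions.

```python
# Illustrative sketch of "management by consent": proposed mode changes are
# activated only after the crew agrees, and which changes count as "critical"
# is an open policy parameter. All names here are assumptions for illustration.

from typing import Callable

def propose_mode_change(new_mode: str,
                        is_critical: Callable[[str], bool],
                        ask_crew_consent: Callable[[str], bool]) -> str:
    """Activate a non-critical change directly; otherwise require crew consent."""
    if not is_critical(new_mode):
        return f"{new_mode} activated automatically"
    if ask_crew_consent(new_mode):
        return f"{new_mode} activated with crew consent"
    return f"{new_mode} held; crew declined or did not respond"

# Example policy: treat changes of the vertical mode during approach as critical.
result = propose_mode_change(
    "OPEN DESCENT",
    is_critical=lambda mode: mode in {"OPEN DESCENT", "EXPEDITE"},
    ask_crew_consent=lambda mode: False,  # crew declines in this example
)
print(result)
```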
Note that all of these recommendations are
human-centered in the sense that the costs of
clumsy automation are seen in human error.
Thus the way to reduce perceived problems in
the human element is to recognize that they are
symptoms of the complexities produced by the
clumsy use of technological possibilities (Woods
et al., 1994).
THE RELATIONSHIP BETWEEN MODE AND
SITUATION AWARENESS
Our results suggest that the design and capa-
bilities of advanced automated systems make it
more important and at the same time more dif-
ficult for users to maintain awareness of the sta-
tus and behavior across the system's different
modes of operation. Despite the fact that vectors
of technology change are increasingly challeng-
ing mode awareness, little research has been
done to better understand the relevant human-
machine questions. Without this understanding,
it will not be possible to develop effective coun-
termeasures to mode-related problems. The
same deficit can be observed for the issue of sit-
uation awareness in general; a long tradition of
research has not brought us much closer to be-
ing able to understand and support the phenom-
enon. What kind of research agenda is needed so
that the research base can be expanded and
practical countermeasures developed before
technology change creates mode-related prob-
lems of such magnitude that industries cry out
for immediate answers?
First, extended efforts to develop the "right"
definition or a consensus definition of situation
(and mode) awareness will probably not be con-
structive. Rather, the term
situation awareness
should be viewed as just a label for a variety of
cognitive processing activities that are critical
to dynamic, event-driven, and multitask fields of
practice. A few of the cognitive processes that
may be involved when one invokes the label of
situation awareness are control of attention (Go-
pher, 1991), mental simulation (Klein and Cran-
dall, in press), directed attention (Woods, 1992),
and contingency planning (Orasanu, 1990). An-
alyzing these cognitive processes and under-
standing what factors affect these processes
should be the focus in the attempt to support
situation and mode awareness (see Endsley,
1988; Sarter and Woods, 1991; and Tenney,
Jager Adams, Pew, Huggins, and Rogers, 1992,
for some initial steps). Second, it appears to be
futile to try to determine the most important
contents of situation awareness, because the sig-
nificance and meaning of any data are depen-
dent on the context in which they appear.
Measuring Situation or Mode Awareness
Conceptual or theoretical developments about
the cognitive processes peculiarly associated
with supervisory control of dynamic processes
are of critical importance to the eventual devel-
opment of effective measures of mode or situa-
tion awareness. Measurement techniques need
to be developed based on such theoretical devel-
opments in order to ensure that they capture the
phenomenon of interest.
Currently, there are three major categories of
measurement of situation awareness: subjective
ratings, explicit performance measures, and im-
plicit performance measures. The use of subjec-
tive measures (e.g., Situation Awareness Rating
Technique, or SART; Vidulich, 1992), in which
the operator is expected to rate his or her own
level of awareness, is problematic on a variety of
grounds-for instance, they confuse process and
product, and field data exist that show that
misassessments color a person's whole under-
standing and recall of the incident evolution.
(For example, video replay of the participant's
behavior coupled with replay of the actual
state of affairs is often necessary to get partici-
pants to recognize their own misassessments.)
Subjective measures seem to make sense only
when combined with other measurement tech-
niques-for example, in order to learn about
how well calibrated were the participants in the
evolving incident (extending the concept of cal-
ibration to control of attentional focus).
An example of an explicit performance mea-
sure to assess situation awareness is the Situa-
tion Awareness Global Assessment Technique
(SAGAT; Endsley, 1988). This method requires
that subjects, typically pilots, fly a given mis-
sion on a flight simulator. At some random point
or points in time, the simulation is halted and
the cockpit displays and outside view are
blanked. The pilot is then asked a series of ques-
tions about the existing situation. The pilot's an-
swers are later compared with what was actu-
ally going on in the scenario. The agreement
between the two serves as a measure of the pi-
lot's situation awareness.
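As a sketch of the comparison step in such an explicit measure, the pilot's answers at a freeze point can be matched against the simulator's ground truth. The question set, tolerances, and scoring rule below are our illustrative assumptions; see Endsley (1988) for the actual technique.

```python
# Illustrative sketch of the comparison step in an explicit measure such as
# SAGAT: at a simulation freeze, the pilot's answers are compared with the
# actual scenario state. Probed items, tolerances, and scoring are assumptions.

def awareness_score(pilot_answers: dict, ground_truth: dict, tolerances: dict) -> float:
    """Fraction of probed items on which the pilot's answer matched reality."""
    correct = 0
    for item, true_value in ground_truth.items():
        answer = pilot_answers.get(item)
        if answer is None:
            continue  # unanswered probes count as misses
        if isinstance(true_value, (int, float)):
            if abs(answer - true_value) <= tolerances.get(item, 0):
                correct += 1
        elif answer == true_value:
            correct += 1
    return correct / len(ground_truth)

ground_truth  = {"altitude_ft": 9800,  "active_vertical_mode": "OPEN DESCENT"}
pilot_answers = {"altitude_ft": 10000, "active_vertical_mode": "SPEED"}
print(awareness_score(pilot_answers, ground_truth, tolerances={"altitude_ft": 300}))  # 0.5
```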
The most important problem associated with
this technique is that halting the simulation and
prompting the pilot for information concerning
particular aspects of the situation is likely to dis-
turb the very phenomena the investigator
wishes to observe. All research methods for the
study of attentional- and awareness-related pro-
cesses suffer from this dilemma-the methods of
observation disturb and change or eliminate the
phenomenon under observation. For example,
one of the important cognitive constituents of
situation awareness is the ability to activate rel-
evant knowledge during the process of handling
an evolving incident. Prompting the participant
for knowledge concerning particular aspects of a
situation is itself a retrieval cue and relevance
marker that can change what the participant
will call to mind. This will reveal what knowl-
edge the pilot can activate when prompted with
investigator cues as to relevance, but it will not
shed light on what knowledge the participant
would activate on his or her own or see as rele-
vant in a particular situation.
The third approach-implicit performance
measures-involves the design of experimental
scenarios that include tasks and events that
probe the subject's situation awareness (e.g.,
Sarter and Woods, 1994). In order for this tech-
nique to work, the probes have to be operation-
ally significant in the sense that they should pro-
vide cues to the operator that, if perceived,
should lead to an observable change in behavior.
The shortcoming of this technique is that it as-
sumes a direct relationship between situation
awareness and performance. This problem can
be addressed in part, however, by means of de-
briefings in which the attempt is made to deter-
mine why a certain behavior did or did not oc-
cur. The major advantage of the approach is that
it allows for a focused on-line collection of data
while minimizing the disruptive impact of
probes on the subject's behavior-or allowing
no more than is inevitable in any simulated sit-
uation. (For a more comprehensive critique of
techniques for measuring situation awareness
see, e.g., Sarter and Woods, 1991; Tenney et al.,
1992.)
MODE AWARENESS IN SUPERVISORY
CONTROL: CONCLUSIONS AND CRITICAL
RESEARCH ISSUES
As technology allows the proliferation of more
automated modes of operation in a vast variety
of systems across numerous domains, human su-
pervisory control faces new challenges. The flex-
ibility achieved through these mode-rich sys-
tems has a price: It increases the need for mode
awareness in that human supervisory control-
lers must track what their machine counterparts
are doing, what they will do, and why they are
doing it. Data from field experiments and inci-
dent and accident reports show that this leads to
new kinds of mode-related problems.
At this stage, research results point to general
classes of countermeasures that can be useful.
However, applications of the currently available
knowledge about mode error and mode aware-
ness are lagging. Designers and trainers have
barely begun to confront the issues created by
systems with more numerous and more complex
modes. Although a great deal is understood
about mode problems, the research to examine
specific classes of countermeasures in more de-
tail-and to determine what is required to use
them effectively, either singly or in combina-
tion-is just beginning.
In this paper we have laid out the multiple
cognitive demands involved in maintaining
mode awareness in supervisory control. Opera-
tors dealing with advanced technology need to
know how their system works-that is, they
need to have an appropriate mental model of the
device. They need to track environmental states
and events as well as past instructions that were
given to the automation by themselves or by
other operators. Finally, they need to monitor,
integrate, and interpret numerous indications of
the active mode configuration.
This last aspect of utilizing system-provided
feedback involves a complex set of issues. It is
not simply a question of how much is enough
feedback (though this is one of the most fre-
quently raised questions in discussions of situa-
tion awareness). In hindsight, it seems that all
the necessary data are available if only the user
attends to and interprets them properly, based
on complete knowledge of how the automation
works, on perfect memory for past instructions,
and on an accurate assessment of all relevant
environmental parameters. The question is,
what approaches to training and feedback de-
sign can enhance skill at this task? Is it impor-
tant to improve the feedback on the status of the
automation as a process in itself, or should feed-
back be enriched concerning actual system be-
havior? When are more indications informative,
and when do they merely add more clutter to an
already crowded data field? How can nonvisual
sensory channels be utilized to enhance periph-
eral awareness and reduce demands on focal at-
tention?
Our research results as well as recent incident
and accident reports concerning glass cockpit
aircraft suggest initial answers to these ques-
tions. They show, for example, that when pilots
experience automation surprises, the timely de-
tection of these surprises is, in most cases, based
on observations of a discrepancy between de-
sired and actual aircraft behavior, not on indi-
cations of the nominal status of the automated
systems.
The proliferation of systems with multiple
modes and limited observability is not limited
to aviation, though aviation is an excellent nat-
ural laboratory in which to find and study new
mode-related problems. Cook et al. (1991) and
Moll van Charante et al.
(1992)
studied com-
puter-based devices used in critical care medi-
cine and found many of the same kinds of mode-
related issues. Similarly, the conceptual and
methodological issues surrounding mode and
situation awareness extend beyond aviation to
other domains in which practitioners confront
evolving problems, multiple changing data
streams, and multiple interleaved tasks with
changing priorities. The following paper in this
special issue (Gaba, Howard, and Small) shows
how the problem of situation awareness arises
in the domain of anesthesiology.
Our work shows that practitioners in a variety
of domains need assistance to meet the demands
imposed by systems with numerous complex
and apparently autonomous modes. Those who
provide support to these practitioners need to
better understand the set of cognitive processing
activities involved in maintaining situation and
mode awareness. In other words, a domain-
independent, process-oriented approach, not a
product-oriented approach, must be taken in or-
der to analyze the phenomena of mode and sit-
uation awareness. This will make it possible to determine why and under what circumstances
breakdowns in mode and situation awareness
occur. This in turn will help point the way to-
ward how to train human supervisory control-
lers; how to provide them with applicable sys-
tem knowledge; how to design cognitive tools
that support the monitoring, assessment, and
awareness of system behavior; and how to im-
prove the coordination between the human and
machine agents in the system.
ACKNOWLEDGMENTS
The preparation of this manuscript was supported in part by NASA-Ames Research Center under cooperative research agreement NCC 2-592. Special thanks go to our technical monitors, Ev Palmer and Kevin Corker, for their challenging questions and continual support. We would also like to thank the pilots and instructors who collaborated with us on the reported research.
REFERENCES
Adler, P. (1986). New technologies, new skills. California Management Review, 29, 9-28.
Bereiter, S., and Miller, S. M. (1988). Sources of difficulty in troubleshooting automated manufacturing systems. In W. Karwowski, H. Parsaei, and M. R. Willhelm (Eds.), Ergonomics of hybrid automated systems. Amsterdam: Elsevier.
Billings, C. E. (1991). Human-centered aircraft automation philosophy (NASA Tech. Memorandum 103885). Moffett Field, CA: NASA-Ames Research Center.
Cook, R. I., Potter, S. S., Woods, D. D., and McDonald, J. M. (1991). Evaluating the human engineering of microprocessor-controlled operating room devices. Journal of Clinical Monitoring, 7, 217-226.
Endsley, M. R. (1988, May). Situation awareness global assess-
ment technique (SAGAT). Presented at the National Aero-
space and Electronic Conference, Dayton, OH.
Eriksen, C. W., and Murphy, T. D. (1987). Movement of atten-
tional focus across the visual field: A critical look at the
evidence. Perception and Psychophysics, 42, 299-305.
Feltovich, P. J., Coulson, R. L., Spiro, R. J., and Dawson-Saunders, B. K. (in press). Knowledge application and transfer for complex tasks in ill-structured domains: Implications for instruction and testing in biomedicine. In D. Evans and V. Patel (Eds.), Advanced models of cognition for medical training and practice. New York: Springer-Verlag.
Gopher, D. (1991). The skill of attention control: Acquisition
and execution of attention strategies. In D. Meyer and S.
Kornblum (Eds.), Attention and performance XIV. Hills-
dale, NJ: Erlbaum.
Jonides, J., and Yantis, S. (1988). Uniqueness of abrupt visual onset in capturing attention. Perception and Psychophysics, 43, 346-354.
Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.
Klein, G., and Crandall, B. (in press). The role of mental simulation in problem solving and decision making. In J. Flach, P. Hancock, J. Caird, and K. Vicente (Eds.), The ecology of human-machine systems. Hillsdale, NJ: Erlbaum.
Lenorovitz, J. M. (1990). Indian A320 crash probe data show
crew improperly configured the aircraft. Aviation Week &
Space Technology, 132(6/25/90), 84-85.
Lewis, C., and Norman, D. A. (1986). Designing for error. In D. A. Norman and S. W. Draper (Eds.), User-centered system design: New perspectives on human-computer interaction (pp. 411-432). Hillsdale, NJ: Erlbaum.
Ministère de l'Équipement, du Logement, des Transports et de l'Espace (1992, February). Rapport préliminaire relatif à l'accident survenu le 20 janvier 1992 près du Mont Sainte-Odile (Bas-Rhin) à l'Airbus A320 de la Commission d'Enquête Administrative [Preliminary report of the Inquiry Commission on the Airbus A-320 accident near Mont Sainte-Odile on January 20, 1992]. Paris: Author.
Moll van Charante, E., Cook, R. I., Woods, D. D., Yue, L., and Howie, M. B. (1992). Human-computer interaction in context: Physician interaction with automated intravenous controllers in the heart room. In Proceedings of the Fifth Symposium on Analysis, Design and Evaluation of Man-Machine Systems. The Hague, Netherlands.
Monk, A. (1986). Mode errors: A user-centered analysis and some preventative measures using key-contingent sound. International Journal of Man-Machine Studies, 24, 313-327.
Norman, D. A. (1981). Categorization of action slips. Psycho-
logical Review, 88(1), 1-15.
Norman, D. A. (1988). The psychology of everyday things. New York: Basic Books.
Norman, D. A. (1990). The "problem" of automation: Inappropriate feedback and interaction, not "over-automation." Philosophical Transactions of the Royal Society of London, B327, 585-593.
Orasanu, J. M. (1990). Shared mental models and crew decision making (CSL Report 46). Princeton, NJ: Princeton University, Cognitive Science Laboratory.
Sarter, N. B., and Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1, 45-57.
Sarter, N. B., and Woods, D. D. (1992). Pilot interaction with cockpit automation: Operational experiences with the Flight Management System. International Journal of Aviation Psychology, 2, 303-322.
Sarter, N. B., and Woods, D. D. (1994). Pilot interaction with
cockpit automation: II. An experimental study of pilots'
model and awareness of the Flight Management System.
International Journal of Aviation Psychology, 4, 1-28.
Sellen, A. J., Kurtenbach, G. P., and Buxton, W. A. S. (1992). The prevention of mode errors through sensory feedback. Human-Computer Interaction, 7, 141-164.
Spiro, R. J., Vispoel, W., Schmitz, J., Samarapungavan, A., and Boerger, A. (1987). Knowledge acquisition for application: Cognitive flexibility and transfer in complex content domains. In B. C. Britton (Ed.), Executive control processes (pp. 177-199). Hillsdale, NJ: Erlbaum.
Tenney, Y. J., Jager Adams, M., Pew, R. W., Huggins, A. W. F., and Rogers, W. H. (1992). A principled approach to the measurement of situation awareness in commercial aviation (NASA Contractor Report NAS1-18788). Hampton, VA: NASA Langley Research Center.
Vidulich, M. A. (1992). Measuring situation awareness. In Proceedings of the Human Factors Society 36th Annual Meeting (pp. 40-41). Santa Monica, CA: Human Factors and Ergonomics Society.
Wiener, E. L. (1989). Human factors of advanced technology
("glass cockpit") transport aircraft (Tech. Report 177528).
Moffett Field, CA: NASA Ames Research Center.
Woods, D. D. (1992). The alarm problem and directed attention
in dynamic fault management (Cognitive Systems Engi-
neering Laboratory Report CSEL 92-TR-06). Columbus:
Ohio State University.
Woods, D. D. (1993). The price of flexibility in intelligent in-
terfaces. Knowledge-Based Systems, 6, 1-8.
Woods, D. D., Johannesen, L. J., Cook, R. I., and Sarter, N. B.
(1994, December). Behind human error: Cognitive systems,
computers, and hindsight (CSERIAC State-of-the-Art Re-
port 94-01). Dayton, OH: Crew Systems Ergonomic Infor-
mation and Analysis Center.
Woods, D. D., O'Brien, J. F., and Hanes, L. F. (1987). Human factors challenges in process control: The case of nuclear power plants. In G. Salvendy (Ed.), Handbook of human factors (pp. 1724-1770). New York: Wiley.
Date received: February 2, 1993
Date accepted: December 21, 1993