1. HOW TO MAKE AUTOMATED SYSTEMS TEAM PLAYERS
Klaus Christoffersen and David D. Woods
Interface (noun): an arbitrary line of demarcation set up in order to apportion the blame for
malfunctions.
(Kelly-Bootle, 1995, p. 101).
HUMAN-AUTOMATION COOPERATION: WHAT HAVE WE LEARNED?
Advances in technology and new levels of automation have had many effects
in operational settings. There have been positive effects from both an economic
and a safety point of view. Unfortunately, operational experience, field research,
simulation studies, incidents, and occasionally accidents have shown that new
and surprising problems have arisen as well. Breakdowns that involve the inter-
action of operators and computer-based automated systems are a notable and
dreadful path to failure in these complex work environments.
Over the years, Human Factors investigators have studied many of the “natural
experiments” in human-automation cooperation – observing the consequences in
cases where an organization or industry shifted levels and kinds of automation.
One notable example has been the many studies of the consequences of new
levels and types of automation on the flight deck in commercial transport aircraft
(from Wiener & Curry, 1980 to Billings, 1996). These studies have traced how
episodes of technology change have produced many surprising effects on many
aspects of the systems in question.
New settings are headed into the same terrain (e.g. free flight in air traffic
management, unmanned aerial vehicles, aero-medical evacuation, naval opera-
tions, space mission control centers, medication use in hospitals). What can we
offer to jump start these cases of organizational and technological change from
more than 30 years of investigations on human-automation cooperation
(from supervisory control studies in the 1970s to intelligent software agents in
the 1990s)?
Ironically, despite the numerous past studies and attempts to synthesize the
research, a variety of myths, misperceptions, and debates continue. Furthermore,
some stakeholders, aghast at the apparent implications of the research on human-
automation problems, contest interpretations of the results and demand even
more studies to replicate the sources of the problems.
Escaping from Attributions of Human Error versus Over-Automation
Generally, reactions to evidence of problems in human-automation cooperation
have taken one of two directions (cf. Norman, 1990). There are those who argue
that these failures are due to inherent human limitations and that with just a
little more automation we can eliminate the “human error problem” (e.g. “clear
misuse of automation . . . contributed to crashes of trouble free aircraft”, La
Burthe, 1997). Others argue that our reach has exceeded our grasp – that the
problem is over-automation and that the proper response is to revert to lesser
degrees of automated control (often this position is attributed to researchers by
stakeholders who misunderstand the research results – e.g. “. . . statements
made by . . . Human Factors specialists against automation ‘per se’ ”, La Burthe,
1997). We seem to be locked into a mindset of thinking that technology and
people are independent components – either this electronic box failed or that
human box failed.
This opposition is a profound misunderstanding of the factors that influence
human performance (hence, the commentator’s quip quoted in the epigraph).
The primary lesson from careful analysis of incidents and disasters in a
large number of industries is that many accidents represent a breakdown in
coordination between people and technology (Woods & Sarter, 2000). People
cannot be thought about separately from the technological devices that are
supposed to assist them. Technological artifacts can enhance human expertise
or degrade it, “make us smart” or “make us dumb” (Norman, 1993).
The bottom line of the research is that technology cannot be considered in
isolation from the people who use and adapt it (e.g. Hutchins, 1995). Automation
and people have to coordinate as a joint system, a single team (Hutchins, 1995;
Billings, 1996). Breakdowns in this team’s coordination are an important path
towards disaster. The real lessons of this type of scenario, and the potential for
constructive progress, come from developing better ways to coordinate the
human and machine team – human-centered design (Winograd & Woods, 1997).
The overarching point from the research is that for any non-trivial level
of automation to be successful, the key requirement is to design for fluent,
coordinated interaction between the human and machine elements of the system.
In other words, automation and intelligent systems must be designed to
participate in team play (Malin et al., 1991; Malin, 1999).
The Substitution Myth
One of the reasons the introduction of automated technologies into complex
work environments can fail or have surprising effects is an implicit belief on
the part of designers that automated activities can simply be substituted for
human activities without otherwise affecting the operation of the system. This
belief is predicated on an assumption that the tasks performed within the system
are basically independent. However, when we look closely at these environ-
ments, what we actually see is a network of interdependent and mutually adapted
activities and artifacts (e.g. Hutchins, 1995). The cognitive demands of the work
domain are not met simply by the sum of the efforts of individual agents working
in isolation, but are met through the interaction and coordinated efforts of
multiple people and machine agents.
Adding or expanding the role of automation changes the nature of the
interactions in the system, often affecting the humans’ role in profound ways
(one summary is in Woods & Dekker, 2000). For example, the introduction of
a partially autonomous machine agent to assist a human operator in a high
workload environment is, in many respects, like adding a new team member.
This entails new coordination demands for the operator – they must ensure that
their own actions and those of the automated agent are synchronized and
consistent. Designing to support this type of coordination is a post-condition
of more capable, more autonomous automated systems. However, meeting this
post-condition receives relatively little attention in development projects. The
result can be automation which leaves its human partners perplexed, asking
Wiener’s (1989) now-familiar questions: What is it doing? Why is it doing that?
What is it going to do next?
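As an illustration only, the following Python sketch (all class, field, and value names are invented here, not drawn from the chapter or the systems it cites) shows the kind of status an automated agent could continuously publish so that its human partner can answer these three questions at a glance:

```python
# Hypothetical sketch: a status record an automated agent could publish so that
# operators can answer "What is it doing? Why? What will it do next?" without
# having to reverse-engineer its behavior.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentStatus:
    current_activity: str                  # What is it doing?
    rationale: str                         # Why is it doing that?
    next_intended_actions: List[str] = field(default_factory=list)  # What next?

def brief(status: AgentStatus) -> str:
    """Render a short, scannable update rather than a raw event log."""
    next_actions = "; ".join(status.next_intended_actions) or "none declared"
    return (f"Doing:   {status.current_activity}\n"
            f"Because: {status.rationale}\n"
            f"Next:    {next_actions}")

# Example update from a fictional flight-path management agent.
print(brief(AgentStatus(
    current_activity="Climbing to capture target altitude 12,000 ft",
    rationale="Altitude constraint on the active flight-plan segment",
    next_intended_actions=["Level off at 12,000 ft", "Hold until next waypoint"],
)))
```

The particular fields matter less than the principle: the agent volunteers its activity, rationale, and intentions instead of leaving them to be inferred from the behavior of the controlled process.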
As designers, we clearly want to take advantage of the power of computa-
tional technologies to automate certain kinds of cognitive work. However,
we must realize that the introduction of automation into a complex work
environment is equivalent to the creation of a new cognitive system of distrib-
uted human and machine agents and new artifacts. We must also realize that
the coordination across agents in the system is at least as important as the
performance of the individual agents taken in isolation, especially when
situations deviate from textbook cases. The attention we give to designing
support for this coordination as incidents evolve and escalate can be the
determining factor in the success or failure of the human-machine system
(Woods & Patterson, 2000).
How to Design for Coordination: Observability and Directability
More sophisticated automated systems or suites of automation represent an
increase in autonomy and authority (Woods, 1996). Increasing the autonomy
and authority of machine agents is not good or bad in itself. The research results
indicate that increases in this capability create the demand for greater
coordination. The kinds of interfaces and displays sufficient to support human
performance for systems with lower levels of autonomy or authority are no
longer sufficient to support effective coordination among people and more
autonomous machine agents. When automated systems increase autonomy or
authority without new tools for coordination, we find automation surprises
contributing to incidents and accidents (for summaries see Woods, 1993; Woods,
Sarter & Billings, 1997; Woods & Sarter, 2000).
The field research results are clear – the issue is not the level of autonomy
or authority, but rather the degree of coordination. However, the design impli-
cations of this result are less clear. What do research results tell us about how
to achieve high levels of coordination between people and machine agents?
What is necessary for automated systems to function as cooperative partners
rather than as mysterious and obstinate black boxes? The answer, in part, can
be stated simply: cooperating automation is both observable and directable.
OBSERVABILITY: OPENING UP THE BLACK BOX
One of the foundations of any type of cooperative work is a shared represen-
tation of the problem situation (e.g. Grosz, 1981; McCarthy et al., 1991). In
human-human cooperative work, a common finding is that people continually
work to build and maintain a “common ground” of understanding in order to
support coordination of their problem solving efforts (e.g. Patterson et al., 1999).
We can break the concept of a shared representation into two basic (although
interdependent) parts: (1) a shared representation of the problem state, and
(2) representations of the activities of other agents. The first part, shared
representation of the problem situation, means that the agents need to maintain
a common understanding of the nature of the problem to be solved. What type
of problem is it? Is it a difficult problem or a routine problem? Is it high priority
or low priority? What types of solution strategies are appropriate? How is the
problem state evolving? The second part, shared representation of other agents’
activities, involves access to information about what other agents are working
on, which solution strategies they are pursuing, why they chose a particular
strategy, the status of their efforts (e.g. are they having difficulties? why? how
long will they be occupied?), and their intentions about what to do next.
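Purely as an illustration (the field names below are assumptions made for this sketch, not a schema proposed in the chapter), the two parts can be pictured as a small shared data structure:

```python
# Minimal sketch of a shared representation factored into its two
# interdependent parts: the problem state and the activities of other agents.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ProblemState:
    kind: str          # what type of problem is it?
    difficulty: str    # difficult or routine?
    priority: str      # high or low priority?
    trend: str         # how is the problem state evolving?

@dataclass
class AgentActivity:
    working_on: str    # what is the agent working on?
    strategy: str      # which solution strategy it is pursuing, and why
    status: str        # progress, difficulties, how long it will be occupied
    intentions: List[str] = field(default_factory=list)  # what it plans to do next

@dataclass
class CommonGround:
    problem: ProblemState
    activities: Dict[str, AgentActivity] = field(default_factory=dict)  # keyed by agent name
```

Keeping both parts current, for machine agents as well as people, is the crux of maintaining common ground.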
Together with a set of stable expectations about the general strategies and
behavior of other agents across contexts, mutual knowledge about the current
situation supports efficient and effective coordination among problem solving
agents (Patterson et al., 1999). Agents can anticipate and track the problem
solving efforts of others in light of the problem status and thus coordinate their
own actions accordingly. The communicative effort required to correctly inter-
pret others’ actions can be greatly reduced (e.g. short updates can replace lengthy
explanations). The ability to understand changes in the state of the monitored
process is facilitated (e.g. discerning whether changes are due to a new problem
or to the compensatory actions of others). An up-to-date awareness of the
situation also prepares agents to assist one another if they require help.
Notice how much of the knowledge discussed here is available at relatively
low cost in “open” work environments involving multiple human agents. For
example, in older, hardwired control centers, individual controllers can often
infer what other controllers are working on just by observing which displays
or control panels they are attending to. In the operating room, surgical team
members can observe the activities of other team members and have relatively
direct, common access to information about the problem (patient) state. The
open nature of these environments allows agents to make intelligent judgments
about what actions are necessary and when they should be taken, often without
any explicit communication. However, when we consider automated team
members, this information no longer comes for free – we have to actively design
representations to generate the shared understandings which are needed to
support cooperative work.
Data Availability Does Not Equal Informativeness
Creating observable machine agents requires more than just making data about
their activities available (e.g. O’Regan, 1992). As machine agents increase
in complexity and autonomy, simple presentations of low-level data become
insufficient to support effective interaction with human operators. For example,
many early expert systems “explained” their behavior by providing lists of the
individual rules which had fired while working through a problem. While
the data necessary to interpret the system’s behavior was, in a literal sense,
available to operators, the amount of cognitive work required to extract a useful,
integrated assessment from such a representation was often prohibitive. A more
useful strategy was to provide access to the intermediate computations and
partial conclusions that the machine agent generated as it worked on a problem.
These were valuable because they summarized the machine agent’s conception
of the problem and the bases for its decisions at various points during the
solution process.
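The contrast can be made concrete with a small, entirely fabricated example (the rule names, hypotheses, and process are invented): the same diagnostic episode shown first as a raw rule-firing trace and then as the intermediate conclusions the machine agent has actually committed to.

```python
# Fabricated example. A raw trace forces the operator to mentally integrate
# the reasoning ...
raw_rule_trace = [
    "R104 fired: pressure_low AND pump_on -> suspect_leak",
    "R221 fired: suspect_leak AND tank_level_falling -> leak_likely",
    "R307 fired: leak_likely -> recommend_isolate_segment_2",
]

# ... whereas a summary of intermediate computations and partial conclusions
# conveys the agent's current conception of the problem and the basis for it.
integrated_view = {
    "current hypothesis": "leak in segment 2",
    "supporting evidence": ["pressure low with pump running", "tank level falling"],
    "recommendation": "isolate segment 2",
    "open question": "segment 2 flow sensor not yet cross-checked",
}

for label, value in integrated_view.items():
    print(f"{label}: {value}")
```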
In general, increases in the complexity and autonomy of machine agents
require a proportionate increase in the feedback they provide to their human
partners about their activities. Representations to support this feedback process
must emphasize an integrated, dynamic picture of the current situation, agent
activities, and how these may evolve in the future. Otherwise, mis-assessments
and miscommunications may persist between the human and machine agents
until they become apparent through resulting abnormal behavior in the process
being controlled. For example, the relatively crude mode indicators in the current
generation of airliner cockpits have been implicated in at least one major air
disaster. It is clearly unacceptable if the first feedback pilots receive about a
miscommunication with automation is the activation of the ground proximity
alarm (or worse).
Human agents need to be able to maintain an understanding of the problem
from the machine agent’s perspective. For instance, it can be very valuable to
provide a representation of how hard the machine agent is having to work
to solve a problem. Is a problem proving especially difficult? Why? If the
automated agent has a fixed repertoire of solution tactics, which have been
tried? Why did they fail? What other options are being considered? How
close is the automation to the limits of its competence? Having this sort
of information at hand can be extremely important to allow a human agent to
intervene appropriately in an escalating critical situation.
Providing effective feedback to operators in complex, highly automated
environments represents a significant challenge to which there are no ready-
made solutions. Answering this challenge for the current and future generations
of automation will require fundamentally new approaches to designing
representations of automation activity (e.g. Sarter, 1999; Sklar & Sarter, 1999;
Nikolic & Sarter, 2001). While the development of these approaches remains
to be completed, we can at least sketch some of the characteristics of these
representation strategies (Woods & Sarter, 2000). The new concepts will
need to be:
Event-based: representations will need to highlight changes and events in ways that the current generation of state-oriented display techniques do not.
Future-oriented: in addition to historical information, new techniques will need to include explicit support for anticipatory reasoning, revealing information about what should/will happen next and when.
Pattern-based: operators must be able to quickly scan displays and pick up possible abnormalities or unexpected conditions at a glance, rather than having to read and mentally integrate many individual pieces of data.
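As a sketch only (the event names, states, and timings below are invented), the first two characteristics suggest that the data handed to a display layer should itself be organized around change events and expectations:

```python
# Hypothetical feed for an event-based, future-oriented representation:
# discrete change events, each paired with what is expected to happen next
# and roughly when, rather than a snapshot of current state alone.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomationEvent:
    time_s: float                          # when the change occurred
    what_changed: str                      # the event itself, not just the new state
    new_state: str
    expected_next: Optional[str] = None    # anticipatory element: what should happen next
    expected_in_s: Optional[float] = None  # ... and approximately when

timeline = [
    AutomationEvent(0.0, "Automation commanded mode change", "CLIMB",
                    expected_next="Capture altitude 12,000 ft", expected_in_s=180.0),
    AutomationEvent(95.0, "Climb rate reduced (performance limit)", "CLIMB (degraded)",
                    expected_next="Capture altitude 12,000 ft", expected_in_s=260.0),
]

for ev in timeline:
    note = ""
    if ev.expected_next is not None:
        note = f" -> expect: {ev.expected_next} in ~{ev.expected_in_s:.0f}s"
    print(f"[{ev.time_s:6.1f}s] {ev.what_changed} (now {ev.new_state}){note}")
```

Pattern-based presentation is then a display-design question layered on top of such a feed: the entries should form a recognizable pattern so that an abnormal or unexpected one can be picked up at a glance.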
DIRECTABILITY: WHO OWNS THE PROBLEM?
Giving human agents the ability to observe the automation’s reasoning processes
is only one side of the coin in shaping machine agents into team players. Without
also giving the users the ability to substantively influence the machine agent’s
activities, their position is not significantly improved. One of the key issues
which quickly emerges in trying to design a cooperative human-machine system
is the question of control. Who is really in charge of how problems are solved?
As Billings (1996) pointed out, as long as some humans remain responsible
for the outcomes, they must also be granted effective authority and therefore
ultimate control over how problems are solved. Giving humans control over
how problems are solved entails that we, as designers, view the automation as
a resource that exists to assist human agents in their problem-solving efforts.
While automation and human activities may integrate smoothly during
routine situations, unanticipated problems are a fact of life in complex work
environments such as those where we typically find advanced automation. It
is impossible in practice, if not in principle, to design automated systems
which account for every situation they might encounter. While entirely novel
problems may be quite rare, a more common and potentially more troublesome
class of situations comprises those which present complicating factors on top of
typical, “textbook” cases (cf. studies of the brittleness of automated systems,
including Roth et al., 1987; Guerlain et al., 1996; Smith et al., 1997). These cases
challenge the assumptions on which the pre-defined responses are based, calling
for strategic and tactical choices which are, by definition, outside the scope
of the automation’s repertoire. The relevant question is, when these sorts of
problems or surprises arise, can the joint system adapt successfully?
Traditionally, one response to this need has been to allow human operators
to interrupt the automation and take over a problem manually. Conceiving of
control in this way, an all-or-nothing fashion, means that the system is limited
to operating in essentially one of two modes – fully manual or fully automatic.
This forces people to buy control of the problem at the price of the considerable
computational power and many potentially useful functions which the automa-
tion affords. What is required is a set of intermediate, cooperative modes of interaction
that allow human operators to focus the power of the automation on particular
sub-problems, or to specify solution methods that account for unique aspects
of the situation of which the automated agent may be unaware. In simple
terms, automated agents need to be flexible and they need to be good at taking
direction.
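One way to picture such intermediate modes is sketched below; the interface and its methods are invented for illustration and are not drawn from any of the systems cited here. The point is that the operator can direct the automation without resorting to an on/off switch.

```python
# Illustrative sketch of a directable agent: the operator can focus it on a
# sub-problem and rule solution methods in or out, retaining its computational
# power instead of buying control by switching it off.
from typing import List

class DirectableAgent:
    def __init__(self) -> None:
        self.delegated: List[str] = []
        self.constraints: List[str] = []

    def delegate(self, sub_problem: str) -> None:
        """Focus the automation's power on a particular sub-problem."""
        self.delegated.append(sub_problem)

    def constrain(self, rule: str) -> None:
        """Inject situation knowledge the automation does not have on its own."""
        self.constraints.append(rule)

    def describe_plan(self) -> str:
        scope = ", ".join(self.delegated) or "full problem"
        limits = ", ".join(self.constraints) or "no operator constraints"
        return f"Working on: {scope} | subject to: {limits}"

agent = DirectableAgent()
agent.delegate("re-plan a fuel-efficient descent from present position")
agent.constrain("avoid routings through sector 12 (unreported weather)")
print(agent.describe_plan())
```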
Part of the reason that directability is so important is that the penalties for
its absence tend to accrue during those critical, rapidly deteriorating situations
where the consequences can be most severe. One of the patterns that we see
in the dynamic behavior of complex human-machine systems during abnormal
situations is an escalation in the cognitive and coordinative demands placed on
human operators (Woods & Patterson, 2000). When a suspicious or anomalous
state develops, monitoring and attentional demands increase; diagnostic activi-
ties may need to be initiated; actions to protect the integrity of the process may
have to be undertaken and monitored for success; coordination demands increase
as additional personnel/experts are called upon to assist with the problem;
others may need to be informed about impacts to processes under their control;
plans must be modified, contingencies considered; critical decisions need to be
formulated and executed in synchronization with other activities. All of this can
occur under time pressure (Klein et al., 2000).
These results do not imply that automation should work only as a passive adjunct
to the human agent. That would fall right back into the false dichotomy of people
versus automation. Clearly, it would be a waste of both humans’ and automa-
tion’s potential to put the human in the role of micro-managing the machine
agent. At the same time however, we need to preserve the ability of human
agents to act in a strategic role, managing the activities of automation in ways
that support the overall effectiveness of the joint system. As was found for the
case of observability, one of the main challenges is to determine what levels
and modes of interaction will be meaningful and useful to practitioners. In some
cases human agents may want to take very detailed control of some portion of
a problem, specifying exactly what decisions are made and in what sequence,
while in others they may want only to make very general, high level correc-
tions to the course of the solution in progress. Accommodating all of these
possibilities is difficult and requires careful iterative analysis of the interactions
between system goals, situational factors, and the nature of the machine agent.
However, this process is crucial if the joint system is to perform effectively in
the broadest possible range of scenarios (Roth et al., 1997; Dekker & Woods,
1999; Guerlain et al., 1999; Smith et al., 2000; Smith, in press).
In contrast to this, technology-driven designs tend to isolate the activities of
humans and automation in the attempt to create neatly encapsulated, pseudo-
independent machine agents. This philosophy assumes that the locus of expertise
in the joint human-machine system lies with the machine agent, and that
the human’s role is (or ought to be)1 largely peripheral. Such designs give
de facto control over how problems are solved to the machine agent. However,
experience has shown that when human agents are ultimately responsible for
the performance of the system, they will actively devise means to influence it.
For example, pilots in highly automated commercial aircraft have been known
to simply switch off some automated systems in critical situations because they
have either lost track of what the automation is doing, or cannot reconcile the
automation’s activities with their own perception of the problem situation.
Rather than trying to sort out the state of the automation, they revert to manual
or direct control as a way to reclaim understanding of and control over the
situation. The uncooperative nature of the automated systems forces the pilots
to buy this awareness and control at the price of abandoning the potentially
useful functions that the automation performs, thus leaving them to face the
situation unaided.
Whither Automated Agents? Invest in Design for Team Play
Repeatedly, performance demands and resource pressures lead mission organi-
zations to invest in increasing the autonomy and authority of automated systems.
Because of unquestioned assumptions that people and automated systems are
independent and interchangeable, organizations fail to make parallel investments
in design for observability and directability. Often in the process of recruiting
resources for new levels of automation, advocates vigorously promote the claim
that the more autonomous the machine, the less the required investment in team
play and the greater the savings for the organization.
The operational effects of this pattern of thinking are strikingly consistent.
Inevitably, situations arise requiring team play; inevitably, the automation is
brittle at the boundaries of its capabilities; inevitably, coordination breakdowns
occur when designs fail to support collaborative interplay; and inevitably,
operational personnel must scramble to work around clumsy automation which
is ill-adapted to the full range of problems or to working smoothly with
other agents. Meanwhile, cycling in the background, commentators from various
perspectives bicker about crediting one or another agent as the sole cause of
system failures (Woods & Sarter, 2000).
We have no need to witness or document more of these natural experiments
in strong, silent, difficult to direct automation. Experience has provided us with
ample evidence for the shallowness, error, and sterility of these conventional
beliefs. If we simply drop the blinders of the Substitution Myth, the scene
comes into clear focus (Woods & Tinapple, 1999). The analysis of past natural
experiments reveals ways to go forward. Because of the increasing capabilities of
automated systems, the design issue is collaboration within the joint human-
machine system as this joint system copes with the variety and dynamics of
situations that can occur. For this joint human-machine system to operate
successfully, automated agents need to be conceived and designed as “team
players”. Two of the key elements needed to support this coordinated cognitive
work are observability and directability.
SUMMARY
When designing a joint system for a complex, dynamic, open environment,
where the consequences of poor performance by the joint system are potentially
grave, the need to shape the machine agents into team players is critical.
Traditionally, the assumption has been that if a joint system fails to perform
adequately, the cause can be traced to so-called “human error.” However,
if one digs a little deeper, one finds that the only reason many of these
joint systems perform adequately at all is the resourcefulness and
adaptability that the human agents display in the face of uncommunicative
and uncooperative machine agents. The ability of a joint system to perform
effectively in the face of difficult problems depends intimately on the ability
of the human and machine agents to coordinate and capitalize upon the unique
abilities and information to which each agent has access.
For automated agents to become team players, there are two fundamental
characteristics which need to be designed in from the beginning: observability
and directability. In other words, users need to be able to see what the automated
agents are doing and what they will do next relative to the state of the process,
and users need to be able to re-direct machine activities fluently in instances
where they recognize a need to intervene. These two basic capabilities are the
keys to fostering a cooperative relationship between the human and machine
agents in any joint system.
NOTE
1. Recall that intelligent automation has often been introduced as an attempt to replace
“inefficient” or “error-prone” human problem solvers.
REFERENCES
Billings, C. E. (1996). Aviation Automation: The Search for a Human-Centered Approach. Hillsdale,
NJ: Erlbaum.
Dekker, S. W. A., & Woods, D. D. (1999). To Intervene or Not to Intervene: The Dilemma of
Management by Exception. Cognition, Technology and Work, 1, 86–96.
Grosz, B. J. (1981). Focusing and description in natural language dialogues. In: A. K. Joshi,
B. L. Webber & I. A. Sag (Eds), Elements of Discourse Understanding. Cambridge:
Cambridge University Press.
Guerlain, S., Smith, P. J., Obradovich, J. H., Rudmann, S., Strohm, P., Smith, J. W., Svirbely, J.,
& Sachs, L. (1999). Interactive critiquing as a form of decision support: An empirical
evaluation. Human Factors, 41, 72–89.
Guerlain, S., Smith, P. J., Obradovich, J. H., Rudmann, S., Strohm, P., Smith, J., & Svirbely, J.
(1996). Dealing with brittleness in the design of expert systems for immunohematology.
Immunohematology, 12(3), 101–107.
Hutchins, E. (1995). Cognition in the Wild. Cambridge, MA: MIT Press.
Kelly-Bootle, S. (1995). The Computer Contradictionary (2nd ed.). Cambridge, MA: MIT Press.
Klein, G., Armstrong A., Woods, D., Gokulachandra, M., & Klein, H. A. (2000). Cognitive
Wavelength: The Role of Common Ground in Distributed Replanning. Prepared for
AFRL/HECA, Wright Patterson AFB, September.
La Burthe, C. (1997). Human Factors perspective at Airbus Industrie. Presentation at International
Conference on Aviation Safety and Security in the 21st Century. January 13–16, Washington,
D.C.
Malin, J. T., Schreckenghost, D. L., Woods, D. D., Potter, S. S., Johannesen, L., Holloway, M.,
& Forbus, K. D. (1991). Making Intelligent Systems Team Players: Case Studies and Design
Issues. (NASA Technical Memorandum 104738). Houston, TX: NASA Johnson Space
Center.
Malin, J. T. (1999). Preparing for the Unexpected: Making Remote Autonomous Agents Capable
of Interdependent Teamwork. In: Proceedings of ???.
McCarthy, J. C., Miles, V. C., & Monk, A. F. (1991). An experimental study of common ground
in text-based communication. In: Proceedings of the 1991 Conference on Human Factors
in Computing Systems (CHI’91). New York, NY: ACM Press.
Nikolic, M. I., & Sarter, N. B. (2001). Peripheral Visual Feedback: A Powerful Means of Supporting
Attention Allocation and Human-Automation Coordination In Highly Dynamic Data-Rich
Environments. Human Factors, in press.
Norman, D. A. (1990). The ‘problem’ of automation: Inappropriate feedback and interaction, not
‘over-automation.’ Philosophical Transactions of the Royal Society of London, B, 327, 585–593.
Norman, D. A. (1993). Things that Make us Smart. Reading, MA: Addison-Wesley.
O’Regan, J. K. (1992). Solving the “real” mysteries of visual perception: The world as an outside
memory. Canadian Journal of Psychology, 46, 461–488.
Patterson, E. S., Watts-Perotti, J. C., & Woods, D. D. (1999). Voice Loops as Coordination Aids
in Space Shuttle Mission Control. Computer Supported Cooperative Work, 8, 353–371.
Roth, E. M., Bennett, K., & Woods, D. D. (1987). Human interaction with an ‘intelligent’ machine.
International Journal of Man-Machine Studies, 27, 479–525.
Roth, E. M., Malin, J. T., & Schreckenghost, D. L. (1997). Paradigms for Intelligent Interface
Design. In: M. Helander, T. Landauer & P. Prabhu (Eds) Handbook of Human-Computer
Interaction (2nd ed.) (pp. 1177–1201). Amsterdam: North-Holland.
Sarter, N. B. (1999). The Need for Multi-sensory Feedback in Support of Effective Attention
Allocation in Highly Dynamic Event-Driven Environments: The Case of Cockpit
Automation. International Journal of Aviation Psychology, 10(3), 231–245.
Sklar, A. E., & Sarter, N. B. (1999). “Good Vibrations”: The Use of Tactile Feedback in Support
of Mode Awareness on Advanced Technology Aircraft. Human Factors, 41(4), 543–552.
Smith, P. J., McCoy, E., & Layton, C. (1997). Brittleness in the design of cooperative problem-
solving systems: The effects on user performance. IEEE Transactions on Systems, Man and
Cybernetics, 27, 360–371.
Smith, P. J., Billings, C., Chapman, R. J., Obradovich, J. H., McCoy, E., & Orasanu, J. (2000).
Alternative architectures for distributed cooperative problem solving in the national airspace
system. Proceedings of the 5th International Conference on Human Interaction with Complex
Systems. Urbana, IL, 203–207.
Smith, P. J., McCoy, E., & Orasanu, J. (in press). Distributed cooperative problem-solving in the
air traffic management system. In: G. Klein & E. Salas (Eds), Naturalistic Decision Making
(pp. 369–384). Mahwah, NJ: Erlbaum.
Wiener, E. L. (1989). Human factors of advanced technology (“Glass Cockpit”) transport aircraft.
(NASA Contractor Report No. 177528). Moffett Field, CA: NASA Ames Research Center.
Wiener, E. L., & Curry, R. E. (1980). Flight-deck automation: Promises and pitfalls. Ergonomics,
23, 995–1011.
Winograd, T., & Woods, D. D. (1997). Challenges for Human-Centered Design. In: J. Flanagan,
T. Huang, P. Jones & S. Kasif (Eds), Human-Centered Systems: Information, Interactivity,
and Intelligence. Washington, D.C.: National Science Foundation, July.
Woods, D. D. (1993). Price of flexibility in intelligent interfaces. Knowledge-Based Systems, 6(4),
189–196.
Woods, D. D. (1996). Decomposing Automation: Apparent Simplicity, Real Complexity. In:
R. Parasuraman & M. Mouloua (Eds), Automation Technology and Human Performance.
Erlbaum.
Woods, D. D., & Dekker, S. W. A. (2000). Anticipating the Effects of Technological Change: A
New Era of Dynamics for Human Factors. Theoretical Issues in Ergonomics Science.
Woods, D. D., & Patterson, E. S. (2000). How Unexpected Events Produce an Escalation of
Cognitive and Coordinative Demands. In: P. A. Hancock & P. Desmond (Eds), Stress
Workload and Fatigue. Hillsdale NJ: Lawrence Erlbaum.
Woods, D. D., & Sarter, N. B. (2000). Learning from Automation Surprises and Going Sour
Accidents. In: N. Sarter & R. Amalberti (Eds), Cognitive Engineering in the Aviation
Domain. Hillsdale NJ: Erlbaum.
Woods, D. D., & Tinapple, D. (1999). W3: Watching Human Factors Watch People at Work.
Presidential Address, 43rd Annual Meeting of the Human Factors and Ergonomics Society,
September 28, 1999. Multimedia Production at http://csel.eng.ohio-state.edu/hf99/
Woods, D. D., Sarter, N. B., & Billings, C. E. (1997). Automation Surprises. In: G. Salvendy (Ed.),
Handbook of Human Factors/Ergonomics (2nd ed.). New York, NY: Wiley.