Abstract

As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they adhere to certain myths of autonomy that are damaging not only in their own right but also through their continued propagation. This article busts those myths and gives reasons why each should be called out and cast aside.
HISTORIES AND FUTURES
HUMAN-CENTERED COMPUTING
The Seven Deadly Myths of “Autonomous Systems”

Jeffrey M. Bradshaw, Robert R. Hoffman, Matthew Johnson, and David D. Woods

Editors: Robert R. Hoffman, Florida Institute for Human and Machine Cognition, rhoffman@ihmc.us

In this article, we explore some widespread misconceptions surrounding the topic of “autonomous systems.” The immediate catalyst for this article, like a previous article that appeared in this department,1 is a recent US Defense Science Board (DSB) Task Force Report on “The Role of Autonomy in DoD Systems.” This report affords an opportunity to examine the concept of autonomous systems in light of the new DSB findings. This theme will continue in a future column, in which we’ll outline a constructive approach to designing autonomous capabilities based on a human-centered computing perspective. But to set the stage, in this essay we bust some “myths” of autonomy.
Myths of Autonomy
The reference in our title to the “seven deadly
myths” of autonomous systems alludes to the
seven deadly sins. The latter were so named not
only because of their intrinsic seriousness but also
because the commission of one of them would
engender further acts of wrongdoing. As design-
ers conceive and implement what are commonly
(but mistakenly) called autonomous systems, they
have succumbed to myths of autonomy that are
not only damaging in their own right but are also
damaging by their continued propagation—that
is, because they engender a host of other serious
misconceptions and consequences. Here, we pro-
vide reasons why each of these myths should be
called out and cast aside.
Myth 1: “Autonomy” is unidimensional. There is a myth that autonomy is some single thing and that everyone understands what it is. However, the word is employed with different meanings and intentions.2 “Autonomy” is straightforwardly derived from a combination of Greek terms signifying self (auto) governance (nomos), but it has two different senses in everyday usage. In the first sense, it denotes self-sufficiency—the capability of an entity to take care of itself. This sense is present in the French term autonome when, for example, it’s applied to an individual who is capable of independent living. The second sense refers to the quality of self-directedness, or freedom from outside control, as we might say of a political district that has been identified as an “autonomous region.”
The two different senses affect the way autonomy is conceptualized, and influence tacit claims about what “autonomous” machines can do. For example, in a chapter from a classic volume on agent autonomy, Sviatoslav Brainov and Henry Hexmoor3 emphasize how varying degrees of autonomy serve as a relative measure of self-directedness—that is, independence of an agent from its physical environment or social group. On the other hand, in the same volume Michael Luck and his colleagues,4 unsatisfied with defining autonomy in such relative terms, argue that the self-generation of goals should be the defining characteristic of autonomy. Such a perspective characterizes the machine in absolute terms that reflect these researchers’ belief in autonomy as self-sufficiency.
It should be evident that independence from outside control doesn’t entail the self-sufficiency of an autonomous machine. Nor do a machine’s autonomous capabilities guarantee that it will be allowed to operate in a self-directed manner. In fact, human-machine systems involve a dynamic balance of self-sufficiency and self-directedness. We will now elaborate on some of the subtleties relating to this balance.
Figure 1 illustrates some of the challenges faced by designers of machine capabilities. A major motivation for such capabilities is to reduce the burden on human operators by increasing a machine’s self-sufficiency to the point that it can be trusted to operate in a self-directed manner. However, when the self-sufficiency of the machine capabilities is seen as inadequate for the circumstances, particularly in situations where the consequences
of error may be disastrous, it is com-
mon to limit the self-directedness of
the machine. For example, in such
circumstances a human may take
control manually, or an automated
policy may come into play that pre-
vents the machine from doing harm
to itself or others through faulty ac-
tions. Such a scenario brings to mind
early NASA Mars rovers whose capa-
bilities for autonomous action weren’t
fully exercised due to concerns about
the high cost of failure. Because their
advanced capabilities for autonomous
action weren’t fully trusted, NASA
decided to micromanage the rov-
ers through a sizeable team of engi-
neers. This example also highlights
that the capabilities machines have
for autonomous action interact with
the responsibility for outcomes and
delegation of authority. Only people
are held responsible for consequences
(that is, only people can act as prob-
lem holders) and only people de-
cide on how authority is delegated to
automata.5
When self-directedness is reduced to the point that the machine is prevented from fully exercising its capabilities (as in the Mars rover example), the result can be described as under-reliance on the technology. That is, although a machine may be sufficiently competent to perform a set of actions in the current situation, human practice or policy may prevent it from doing so. The flip side of this error is to allow a machine to operate too freely in a situation that outstrips its capabilities (such as high self-directedness with low self-sufficiency). This error can be described as over-trust.
In light of these observations, we can characterize the primary challenge for the designers of autonomous machine capabilities as a matter of moving upward along a 45-degree diagonal in Figure 1—increasing machine capabilities while maintaining a (dynamic) balance between self-directedness and self-sufficiency. However, even when the self-directedness and self-sufficiency of autonomous capabilities are balanced appropriately for the demands of the situation, humans and machines working together frequently encounter potentially debilitating problems relating to insufficient observability or understandability (upper right quadrant of Figure 1). When highly autonomous machine capabilities aren’t well understood by people or other machines working with them, work effectiveness suffers.7,8
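To make the two-dimensional view concrete, the following sketch (Python; the function, thresholds, and ratings are our own illustrative assumptions, not something proposed in the article) classifies a machine's operating regime from rough ratings of self-sufficiency, self-directedness, and how well its behavior is understood, mirroring the regions of Figure 1.

def operating_regime(self_sufficiency: float,
                     self_directedness: float,
                     understood_by_team: bool = True,
                     threshold: float = 0.5) -> str:
    """Classify a machine's regime on the two autonomy dimensions.

    Ratings are assumed to lie in [0, 1]; 'threshold' separates
    'low' from 'high'. An illustrative sketch, not a proposed metric.
    """
    low_suff = self_sufficiency < threshold
    low_dir = self_directedness < threshold

    if low_suff and low_dir:
        return "burden on human operators"   # little can be delegated
    if not low_suff and low_dir:
        return "under-reliance"              # capable but held back
    if low_suff and not low_dir:
        return "over-trust"                  # freedom outstrips competence
    # High on both dimensions: balanced, but only if teammates can follow it.
    return "balanced" if understood_by_team else "not well understood"

# Example: a capable rover that policy keeps under close manual control.
print(operating_regime(self_sufficiency=0.9, self_directedness=0.2))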
Whether human or machine, a
“team player” must be able to ob-
serve, understand, and predict the
state and actions of others.9 Many
examples can be found in the litera-
ture of inadequate observability and
understandability as a problem in
human-machine interaction.5,10 The
problem with what David Woods
calls “strong silent automation” is
that it fails to communicate effec-
tively those things that would allow
humans to work interdependently
with it—signals that allow opera-
tors to predict, control, understand,
and anticipate what the machine is
or will be doing. As anyone who has
wrestled with automation can at-
test, there’s nothing worse than a
so-called smart machine that can’t
tell you what it’s doing, why it’s do-
ing something, or when it will finish.
Even more frustrating—or danger-
ous—is a machine that’s incapable of
responding to human direction when
something (inevitably) goes wrong.
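As a contrast to "strong silent automation," here is a minimal sketch (Python; the class and field names are illustrative assumptions) of the kind of status a machine teammate could expose so that operators can observe, predict, and redirect what it is doing.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TeammateStatus:
    """Minimal observability report a machine agent might publish."""
    current_activity: str                     # what it is doing
    rationale: str                            # why it is doing it
    estimated_completion_s: Optional[float]   # when it expects to finish
    next_planned_steps: List[str] = field(default_factory=list)
    accepting_redirection: bool = True        # can a human redirect it now?

    def summary(self) -> str:
        eta = ("unknown" if self.estimated_completion_s is None
               else f"{self.estimated_completion_s:.0f} s")
        return (f"Doing: {self.current_activity} | Why: {self.rationale} | "
                f"ETA: {eta} | Redirectable: {self.accepting_redirection}")

# Example: a survey robot announcing its state to its human teammates.
print(TeammateStatus("mapping corridor B",
                     "coverage goal from the mission plan",
                     estimated_completion_s=240,
                     next_planned_steps=["return to dock"]).summary())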
To sum up our discussion of the first myth: First, “autonomy” isn’t a unidimensional concept—it’s more useful to describe autonomous systems at least in terms of the two dimensions of self-directedness and self-sufficiency. Second, aspects of self-directedness and self-sufficiency must be balanced appropriately. Third, to maintain desirable properties of human-machine teamwork, particularly when advanced machine capabilities exhibit a significant degree of competence and self-governance, team players must be able to communicate effectively those aspects of their behavior that allow others to understand them and to work interdependently with them.
Myth 2: The conceptualization of “levels of autonomy” is a useful scientific grounding for the development of autonomous system roadmaps. Since we’ve just argued for discarding the myth that autonomy is unidimensional, we shouldn’t have to belabor the related myth that machine autonomy can be measured on a single ordered scale of increasing levels. However, because this second myth is so pervasive, it merits separate discussion.
A recent survey of human-robot interaction concluded that “perhaps the most strongly human-centered application of the concept of autonomy is in the notion of level of autonomy.”11 However, one of the most striking recommendations of the DSB report on the role of autonomy is its recommendation that the Department of Defense (DoD) should abandon the debate over definitions of levels of autonomy.12 The committee received input from multiple organizations on how some variation of definitions across levels of autonomy could guide new designs. The retired flag officers, technologists, and academics on the task force overwhelmingly and unanimously found the definitions irrelevant to the real problems, cases of success, and missed opportunities for effectively utilizing increases in autonomous capabilities for defense missions.

[Figure 1. Challenges faced by designers of autonomous machine capabilities.6 When striving to maintain an effective balance between self-sufficiency and self-directedness for highly capable machines, designers encounter the additional challenge of making the machine understandable. The figure plots self-directedness against self-sufficiency (low to high on each axis), with regions labeled burden, under-reliance, over-trust, and not well understood.]
The two paragraphs (from pp. 23–24) summarizing the DSB’s rationale for this recommendation are worth citing verbatim:

An … unproductive course has been the numerous attempts to transform conceptualizations of autonomy made in the 1970s into developmental roadmaps. ... Sheridan’s taxonomy [of levels of automation] ... is often incorrectly interpreted as implying that autonomy is simply a delegation of a complete task to a computer, that a vehicle operates at a single level of autonomy and that these levels are discrete and represent scaffolds of increasing difficulty. Though attractive, the conceptualization of levels of autonomy as a scientific grounding for a developmental roadmap has been unproductive. ... The levels served as a tool to capture what was occurring in a system to make it autonomous; these linguistic descriptions are not suitable to describe specific milestones of an autonomous system. ... Research shows that a mission consists of dynamically changing functions, many of which can be executing concurrently as well as sequentially. Each of these functions can have a different allocation scheme to the human or computer at a given time.12
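The DSB's point that each mission function can have its own, time-varying allocation (rather than the mission sitting at one overall "level") can be sketched roughly as follows (Python; the mission functions and allocation rules are invented purely for illustration).

from enum import Enum

class Performer(Enum):
    HUMAN = "human"
    MACHINE = "machine"
    SHARED = "shared"

def allocate(function: str, minute: int) -> Performer:
    """Toy allocation policy: who performs each concurrent mission
    function at a given time. Illustrative only; real allocations
    depend on context, workload, and competence, not a fixed table."""
    if function == "route_planning":
        return Performer.MACHINE if minute < 30 else Performer.SHARED
    if function == "target_identification":
        return Performer.HUMAN          # judgment-heavy
    if function == "stability_control":
        return Performer.MACHINE        # fast, precise
    return Performer.SHARED

mission_functions = ["route_planning", "target_identification", "stability_control"]
for t in (0, 45):
    snapshot = {f: allocate(f, t).value for f in mission_functions}
    print(f"t={t} min: {snapshot}")

Note that no single "level" describes the whole mission: different functions run concurrently under different allocations, and the scheme changes over time.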
There are additional reasons why the levels-of-automation notion is problematic.

First, functional differences matter. The common understanding of the levels assumes that significantly different kinds of work can be handled equivalently (such as task work and teamwork; reasoning, decisions, and actions). This reinforces the erroneous notion that “automation activities simply can be substituted for human activities without otherwise affecting the operation of the system.”13
Second, levels aren’t consistently ordinal. It isn’t always clear whether a given action should be characterized as “lower” or “higher” than another on the scale of autonomy. Moreover, a given machine capability operating in a specific situation may simultaneously be “low” on self-sufficiency while being “high” on self-directedness.7
Third, autonomy is relative to the
context of activity. Functions can’t
be automated effectively in isolation
from an understanding of the task,
the goals, and the context.
Fourth, levels of autonomy encour-
age reductive thinking. For example,
they facilitate the perspective that ac-
tivity is sequential when it’s actually
simultaneous.14
Fifth, the concept of levels of autonomy is insufficient to meet both current and future challenges. This was one of the most significant findings of the DoD report. For example, many challenges facing human-machine interaction designers involve teamwork rather than the separation of duties between the human and the machine.9 Effective teamwork involves more than effective task distribution; it looks for ways to support and enhance each member’s performance,6 and this need isn’t addressed by the levels-of-autonomy conceptualization.
Sixth, the concept of levels of autonomy isn’t “human-centered.” If it were, it wouldn’t force us to recapitulate the requirement that technologies be useable, useful, understandable, and observable.
Last, the levels provide insufficient guidance to the designer. The challenge of bridging the gap from cognitive engineering products to software engineering results is one of the most daunting of current challenges, and the concept of levels of autonomy provides no assistance in dealing with this issue.
Myth 3: Autonomy is a widget. The DSB report points (on p. 23) to the fallacy of “treating autonomy as a widget”:

The competing definitions for autonomy have led to confusion among developers and acquisition officers, as well as among operators and commanders. The attempt to define autonomy has resulted in a waste of both time and money spent debating and reconciling different terms and may be contributing to fears of unbounded autonomy. The definitions have been unsatisfactory because they typically try to express autonomy as a widget or discrete component, rather than a capability of the larger system enabled by the integration of human and machine abilities.12
In other words, autonomy isn’t a
discrete property of a work system,
nor is it a particular kind of technol-
ogy; it’s an idealized characterization
of observed or anticipated interac-
tions between the machine, the work
to be accomplished, and the situation.
To the degree that autonomy is actu-
ally realized in practice, it’s through
the combination of these interactions.
The myth of autonomy as a widget
engenders the misunderstandings im-
plicit in the next myth.
Myth 4: Autonomous systems are autonomous. Strictly speaking, the term “autonomous system” is a misnomer. No entity—and, for that matter, no person—is capable enough to be able to perform competently in every task and situation. On the other hand, even the simplest machine can seem to function “autonomously” if the task and context are sufficiently constrained. A thermostat exercises an admirable degree of self-sufficiency and self-directedness with respect to the limited tasks it’s designed to perform through the use of a simple form of automation (at least until it becomes miscalibrated).
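For illustration, here is a minimal bang-bang thermostat loop (Python; the sensor and switch functions are stand-ins we invented). Within its narrow, fixed envelope it looks "autonomous"; outside that envelope (a failed sensor, an Antarctic winter) it has no way to recognize or recover from the mismatch.

import random
import time

def read_temperature_c() -> float:
    """Stand-in for a real sensor; returns a simulated room temperature."""
    return 18.0 + random.random() * 6.0

def set_heater(on: bool) -> None:
    """Stand-in for switching a real heater relay."""
    print("heater", "ON" if on else "OFF")

SETPOINT_C = 21.0
HYSTERESIS_C = 0.5

for _ in range(5):                      # a few control cycles
    temp = read_temperature_c()
    if temp < SETPOINT_C - HYSTERESIS_C:
        set_heater(True)
    elif temp > SETPOINT_C + HYSTERESIS_C:
        set_heater(False)
    # No model of why the temperature is what it is, no awareness of
    # sensor faults, and no way to ask for help: "autonomy" holds only
    # within the designer's assumed envelope.
    time.sleep(0.1)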
The DSB report wisely observes
that “… there are no fully autono-
mous systems just as there are no
fully autonomous soldiers, sailors,
airmen, or Marines. … Perhaps the
most important message for com-
manders is that all machines are su-
pervised by humans to some degree,
and the best capabilities result from
the coordination and collaboration of
humans and machines” (p. 24).12
Machine designs are always created
with respect to a context of design as-
sumptions, task goals, and boundary
conditions. At the boundaries of the
operating context for which the ma-
chine was designed, maintaining ad-
equate performance might become
a challenge. For instance, a typical
home thermostat isn’t designed to
work as an outdoor sensor in the significantly subzero climate of Antarctica. Consider also the work context
of a Navy Seal whose job it is to per-
form highly sensitive operations that
require human knowledge and rea-
soning skills. A Seal doing his job is
usually thought of as being highly au-
tonomous. However, a more careful
examination reveals his interdepen-
dence with other members of his Seal
team to conduct team functions that
can’t be performed by a single indi-
vidual, just as the team is interdepen-
dent with the overall Navy mission
and with the operations of other co-
located military or civilian units.
What’s the result of belief in this fourth myth? People in positions of responsibility and authority might focus too much on autonomy-related problems and fixes while failing to understand that self-sufficiency is always relative to a situation. Sadly, in most cases machine capabilities are not only relative to a set of predefined tasks and goals, they are relative to a set of fixed tasks and goals. A software system might perform gloriously without supervision in circumstances within its competence envelope (itself a reflection of the designer’s intent), but fail miserably when the context changes to some circumstance that pushes the larger work system over the edge.15 Although some tasks might be worked with high efficiency and accuracy, the potential for disastrous fragility is ever present.16 Speaking of autonomy without adequately characterizing assumptions about how the task is embedded in the situation is dangerously misguided.
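One way to read the competence-envelope point in engineering terms: before acting on its own, a system should check whether the current situation lies within the envelope its designers assumed, and otherwise hand off to a person. The sketch below (Python; the envelope bounds and task names are hypothetical) illustrates that pattern; it is not a method prescribed by the article.

from dataclasses import dataclass

@dataclass
class CompetenceEnvelope:
    """Designer-declared bounds within which unsupervised action is trusted."""
    known_tasks: frozenset
    max_wind_mps: float
    min_visibility_m: float

    def covers(self, task: str, wind_mps: float, visibility_m: float) -> bool:
        return (task in self.known_tasks
                and wind_mps <= self.max_wind_mps
                and visibility_m >= self.min_visibility_m)

def act(task: str, wind_mps: float, visibility_m: float,
        envelope: CompetenceEnvelope) -> str:
    if envelope.covers(task, wind_mps, visibility_m):
        return f"executing '{task}' autonomously"
    # Outside the assumed envelope: degrade gracefully and involve people.
    return f"escalating '{task}' to a human operator (outside competence envelope)"

envelope = CompetenceEnvelope(frozenset({"survey", "return_home"}),
                              max_wind_mps=12.0, min_visibility_m=200.0)
print(act("survey", wind_mps=8.0, visibility_m=500.0, envelope=envelope))
print(act("survey", wind_mps=20.0, visibility_m=500.0, envelope=envelope))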
Myth 5: Once achieved, full auton-
omy obviates the need for human-
machine collaboration. Much of the
early research on autonomous sys-
tems was motivated by situations in
which autonomous systems were re-
quired to replace humans, in theory
minimizing the need for consider-
ing the human aspects of such solu-
tions. For example, one of the earliest
high-consequence applications of so-
phisticated agent technologies was in
NASA’s Remote Agent Architecture,
designed to direct the activities of un-
manned spacecraft engaged in distant
planetary exploration.17 The Remote
Agent Architecture was expressly de-
signed for use in human-out-of-the-
loop situations where response laten-
cies in the transmission of round-trip
control sequences from earth would
have impaired a spacecraft’s ability
to respond to urgent problems or to
take advantage of unexpected science
opportunities.
Since those early days, most autonomy research has been pursued in a technology-centric fashion, as if full machine autonomy—complete independence and self-sufficiency—were a holy grail. A primary, ostensible reason for the quest is to reduce manning needs, since salaries are the largest fraction of the costs of sociotechnical systems. An example is the US Navy’s “Human Systems Integration” program, initially founded on a belief that an increase in autonomous machine capabilities (typically developed without adequate consideration for the complexities of interdependence in mixed human-machine teams) would enable the Navy to crew large vessels with smaller human complements. However, reflection on the nature of human work reveals the shortsightedness of such a singular and short-term focus: What could be more troublesome to a group of individuals engaged in dynamic, fast-paced, real-world collaboration coping with complex tasks and shifting goals than a colleague who is perfectly able to perform tasks alone but lacks the expertise required to coordinate his or her activities with those of others?
Of course, there are situations
where the goal of minimizing hu-
man involvement with autonomous
systems can be argued effectively—
for example, some jobs in industrial
manufacturing. However, it should
be noted that virtually all of the most
challenging deployments of autono-
mous systems to date—such as mili-
tary unmanned air vehicles, NASA
rovers, unmanned underwater ve-
hicles, and disaster inspection ro-
bots—have involved people in crucial
roles where expertise is a must. Such
involvement hasn’t been merely to
make up for the current limitations
on machine capabilities, but also be-
cause their jointly coordinated efforts
with humans were—or should have
been—intrinsically part of the mis-
sion planning and operations itself.
What’s the result of belief in this
myth? Researchers and their sponsors
begin to assume that “all we need is
more autonomy.” This kind of sim-
plistic thinking engenders the even
more grandiose myth that human
factors can be avoided in the design
and deployment of machines. Care-
ful consideration will reveal that, in
addition to more machine capabilities
for task work, there’s a need for the
kinds of breakthroughs in human-
machine teamwork that would en-
able autonomous systems not merely
to do things for people, but also to
work together with people and other
systems. This capacity for teamwork,
not merely the potential for expanded
task work, is the inevitable next leap
forward required for more effective
design and deployment of autono-
mous systems operating in a world
full of people.18
Myth 6: As machines acquire more
autonomy, they will work as simple
substitutes (or multipliers) of human
capability. The concept of automa-
tion began with the straightforward
objective of replacing whenever fea-
sible any tedious, repetitive, dirty, or
dangerous task currently performed
by a human with a machine that
could do the same task better, faster,
or cheaper. This was a core concept
of the Industrial Revolution. The en-
tire field of human factors emerged circa World War I in recognition of
the need to consider the human oper-
ator in industrial design. Automation
became one of the rst issues to at-
tract the notice of cyberneticists and
human factors researchers during and
immediately after World War II. Pio-
neering researchers attempted to sys-
tematically characterize the general
strengths and weaknesses of humans
and machines. The resulting disci-
pline of “function allocation” aimed
to provide a rational means of deter-
mining which system-level functions
should be carried out by humans and
which by machines.
Obviously, the suitability of a par-
ticular human or machine to take on a
particular task will vary over time and
in different situations. Hence, the con-
cepts of adaptive or dynamic function
allocation and adjustable autonomy
emerged with the hope that shifting
responsibilities between humans and
machines would lead to machine and
work designs more appropriate for the
emerging sociotechnical workplace.2
Of course, certain tasks, such as those
requiring sophisticated judgment,
couldn’t be shifted to machines, and
other tasks, such as those requiring ul-
tra-precise movement, couldn’t be per-
formed by humans. But with regard
to tasks where human and machine
capabilities overlapped—the area of
variable task assignment—software-
based decision-making schemes were
proposed to allow tasks to be allo-
cated according to the potential per-
former’s availability.
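The "variable task assignment" idea described above can be illustrated as a simple availability-based allocator (Python; the task list, capability sets, and availability flags are invented). Tasks only a human can do (judgment) or only a machine can do (ultra-precise movement) are fixed; tasks in the overlap go to whichever performer is currently free.

def assign(task: str, human_capable: set, machine_capable: set,
           human_busy: bool) -> str:
    """Toy variable task assignment in the human/machine overlap region."""
    can_human = task in human_capable
    can_machine = task in machine_capable
    if can_human and not can_machine:
        return "human"
    if can_machine and not can_human:
        return "machine"
    if can_human and can_machine:
        return "machine" if human_busy else "human"   # allocate by availability
    return "unassignable"

human_capable = {"sensor calibration", "anomaly judgment"}
machine_capable = {"sensor calibration", "micro-positioning"}

for task in ("anomaly judgment", "micro-positioning", "sensor calibration"):
    print(task, "->", assign(task, human_capable, machine_capable, human_busy=True))

As the surrounding discussion notes, this availability-only view is exactly what later proved too simple, since reallocating a task also changes the interdependencies around it.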
Over time, it became plain to re-
searchers that things weren’t this sim-
ple. For example, many functions in
complex systems are shared by hu-
mans and machines; hence, the need
to consider synergies and goal conflicts among the various performers
of joint actions. Function allocation
isn’t a simple process of transfer-
ring responsibilities from one com-
ponent to another. When system de-
signers automate a subtask, what
they’re really doing is performing a
type of task distribution and, as such,
have introduced novel elements of
interdependence within the work sys-
tem.7 This is the lesson to be learned
from studies of the “substitution myth,”13 which conclude that reduc-
ing or expanding the role of automa-
tion in joint human-machine systems
may change the nature of interdepen-
dent and mutually adapted activities
in complex ways. To effectively ex-
ploit the capabilities that automation
provides (versus merely increasing au-
tomation), the task work—and the
interdependent teamwork it induces
among players in a given situation—
must be understood and coordinated
as a whole.
It’s easy to fall prey to the fallacy
that automated assistance is a simple
substitute or multiplier of human ca-
pability because, from the point of
view of an outsider observing the as-
sisted humans, it seems that—in suc-
cessful cases, at least—the people are
able to perform the task better or
faster than they could without help.
In reality, however, help of whatever
kind doesn’t simply enhance our abil-
ity to perform the task: it changes
the nature of the task.13,19 To take
a simple example, the use of a com-
puter rather than a pencil to compose
a document can speed up the task of
writing an essay in some respects,
but sometimes can slow it down in
other respects—for example, when
electrical power goes out. The es-
sential point is that it requires a dif-
ferent configuration of human skills.
Similarly, a robot used to perform a
household task might be able to do
many things “on its own,” but this
doesn’t eliminate the human’s role, it
changes that role. The human respon-
sibility is now the cognitive task of
goal setting, monitoring, and control-
ling the robot’s progress (or regress).16
Increasing the autonomy of autono-
mous systems requires different kinds
of human expertise and not always
fewer humans. Humans and artificial
agents are two disparate kinds of enti-
ties that exist in very different sorts of
worlds. Humans have rich knowledge
about the world that they’re trying to
understand and influence, while ma-
chines are much more limited in their
understanding of the world that they
model and affect. This isn’t a matter
of distinguishing ways that machines
can compensate for things that hu-
mans are bad at. Rather, it’s a mat-
ter of characterizing interdependence:
things that machines are good at and
ways in which they depend on hu-
mans (and other agents) in joint activ-
ity; and things that humans are good
at and ways in which they depend on
the machines (and other humans).20
For the foreseeable future this fun-
damental asymmetry, or duality, will
remain. The brightest machine agents
will be limited in the generality, if not
the depth, of their inferential, adap-
tive, social, and sensory capabilities.
Humans, though fallible, are function-
ally rich in reasoning strategies and
their powers of observation, learn-
ing, and sensitivity to context. These
are the things that make adaptability
and resilience of work systems possi-
ble. Adapting to appropriate mutually
interdependent roles that take advan-
tage of the respective strengths of hu-
mans and machines—and crafting
natural and effective modes of inter-
action—are key challenges for tech-
nology—not merely the creation of in-
creasingly capable widgets.
What’s the result of belief in the myth of machines as simple multipliers of human ability? Because design approaches based on this myth don’t adequately take into consideration the significant ways in which the introduction of autonomous capabilities can change the nature of the work itself, they lead to “clumsy automation.” And trying to solve this problem by adding more poorly designed autonomous capabilities is, in effect, piling clumsy automation onto clumsy automation, thereby exacerbating the problem that the increased autonomy was intended to solve.
Myth 7: “Full autonomy” is not only possible, but is always desirable. In refutation of the substitution myth, Table 1 contrasts the putative benefits of automated assistance with the empirical results. Ironically, even when technology succeeds in making tasks more efficient, the human workload isn’t reduced accordingly. David Woods and Eric Hollnagel5 summarized this phenomenon as the law of stretched systems: “every system is stretched to operate at its capacity; as soon as there is some improvement, for example in the form of new technology, it will be exploited to achieve a new intensity and tempo of activity.”

As Table 1 shows, the decision to increase the role of automation in general, and autonomous capabilities in particular, is one that should be made in light of its complex effects along a variety of dimensions. In this article, we’ve tried to make the case that full autonomy, in the simplistic sense in which the term is usually employed, is barely possible. The table summarizes the reasons why increased automation isn’t always desirable.
Table 1. Putative benefits of automation versus actual experience.21

Putative benefit: Increased performance is obtained from “substitution of machine activity for human activity.”
Real complexity: Practice is transformed; the roles of people change; old and sometimes beloved habits and familiar features are altered—the envisioned world problem.

Putative benefit: Frees up human by offloading work to the machine.
Real complexity: Creates new kinds of cognitive work for the human, often at the wrong times; every automation advance will be exploited to require people to do more, do it faster, or in more complex ways—the law of stretched systems.

Putative benefit: Frees up limited attention by focusing someone on the correct answer.
Real complexity: Creates more threads to track; makes it harder for people to remain aware of and integrate all of the activities and changes around them—with coordination costs, continuously.

Putative benefit: Less human knowledge is required.
Real complexity: New knowledge and skill demands are imposed on the human, and the human might no longer have a sufficient context to make decisions, because they have been left out of the loop—automation surprise.

Putative benefit: Agent will function autonomously.
Real complexity: Team play with people and other agents is critical to success—principles of interdependence.

Putative benefit: Same feedback to human will be required.
Real complexity: New levels and types of feedback are needed to support people’s new roles—with coordination costs, continuously.

Putative benefit: Agent enables more flexibility to the system in a generic way.
Real complexity: Resulting explosion of features, options, and modes creates new demands, types of errors, and paths toward failure—automation surprises.

Putative benefit: Human errors are reduced.
Real complexity: Both agents and people are fallible; new problems are associated with human-agent coordination breakdowns; agents now obscure information necessary for human decision making—principles of complexity.

Although continuing research to make machines more active, adaptive, and functional is essential, the point of increasing such proficiencies
isn’t merely to make the machines
more independent during times
when unsupervised activity is desir-
able or necessary (autonomous), but
also to make them more capable of
sophisticated interdependent activ-
ity with people and other machines
when such is required (teamwork).
Research in joint activity highlights
the need for autonomous systems to
support not only the fluid orchestra-
tion of task handoffs among people
and machines, but also combined
participation on shared tasks requir-
ing continuous and close interaction
(coactivity).6,9 Indeed, in situations
of simultaneous human-agent col-
laboration on shared tasks, people
and machines might be so tightly in-
tegrated in the performance of their
work that interdependence is a con-
tinuous phenomenon, and the very
idea of task handoffs becomes in-
congruous. We see this, for exam-
ple, in the design of work systems to support cyber sensemaking that aim to combine the efforts of human
analysts with software agents in un-
derstanding, anticipating, and re-
sponding to unfolding events in near
real-time.22
The points mentioned here, like the findings of the DSB, focus on how to make effective use of the expanding power of machines. The myths we’ve discussed lead developers to introduce new machine capabilities in ways that predictably lead to unintended negative consequences and user-hostile technologies. We need to discard the myths and focus on developing coordination and adaptive mechanisms that turn platform capabilities into new levels of mission effectiveness, enabled through genuine human-centeredness. In complex domains characterized by uncertainty, machines that are merely capable of performing independent work aren’t enough. Instead, we need machines that are also capable of working interdependently.6 We commend the thoughtful work of the DSB in recognizing and exemplifying some of the significant problems caused by the seven deadly myths of autonomy, and hope these and similar efforts will lead all of us to sincere repentance and reformation.
References

1. R.R. Hoffman et al., “Trust in Automation,” IEEE Intelligent Systems, vol. 28, no. 1, 2013, pp. 84–88.
2. J.M. Bradshaw et al., “Dimensions of Adjustable Autonomy and Mixed-Initiative Interaction,” Agents and Computational Autonomy: Potential, Risks, and Solutions, LNCS, vol. 2969, Springer-Verlag, 2004, pp. 17–39.
3. S. Brainov and H. Hexmoor, “Quantifying Autonomy,” Agent Autonomy, Kluwer, 2002, pp. 43–56.
4. M. Luck, M. D’Inverno, and S. Munroe, “Autonomy: Variable and Generative,” Agent Autonomy, Kluwer, 2002, pp. 9–22.
5. D.D. Woods and E. Hollnagel, Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, Taylor & Francis, 2006, chapter 11.
6. M. Johnson et al., “Autonomy and Interdependence in Human-Agent-Robot Teams,” IEEE Intelligent Systems, vol. 27, no. 2, 2012, pp. 43–51.
7. M. Johnson et al., “Beyond Cooperative Robotics: The Central Role of Interdependence in Coactive Design,” IEEE Intelligent Systems, vol. 26, no. 3, 2011, pp. 81–88.
8. D.D. Woods and E.M. Roth, “Cognitive Systems Engineering,” Handbook of Human-Computer Interaction, North-Holland, 1988.
9. G. Klein et al., “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity,” IEEE Intelligent Systems, vol. 19, no. 6, 2004, pp. 91–95.
10. D.A. Norman, “The ‘Problem’ of Automation: Inappropriate Feedback and Interaction, Not ‘Over-Automation,’” Philosophical Trans. Royal Soc. of London B, vol. 327, 1990, pp. 585–593.
11. M.A. Goodrich and A.C. Schultz, “Human-Robot Interaction: A Survey,” Foundations and Trends in Human-Computer Interaction, vol. 1, no. 3, 2007, pp. 203–275.
12. R. Murphy et al., The Role of Autonomy in DoD Systems, Defense Science Board Task Force Report, July 2012, Washington, DC.
13. K. Christoffersen and D.D. Woods, “How to Make Automated Systems Team Players,” Advances in Human Performance and Cognitive Engineering Research, vol. 2, Elsevier Science, 2002, pp. 1–12.
14. P.J. Feltovich et al., “Keeping It Too Simple: How the Reductive Tendency Affects Cognitive Engineering,” IEEE Intelligent Systems, vol. 19, no. 3, 2004, pp. 90–94.
15. R.R. Hoffman and D.D. Woods, “Beyond Simon’s Slice: Five Fundamental Tradeoffs That Bound the Performance of Macrocognitive Work Systems,” IEEE Intelligent Systems, vol. 26, no. 6, 2011, pp. 67–71.
16. J.K. Hawley and A.L. Mares, “Human Performance Challenges for the Future Force: Lessons from Patriot after the Second Gulf War,” Designing Soldier Systems: Current Issues in Human Factors, Ashgate, 2012, pp. 3–34.
17. N. Muscettola et al., “Remote Agent: To Boldly Go Where No AI System Has Gone Before,” Artificial Intelligence, vol. 103, nos. 1–2, 1998, pp. 5–48.
18. J.M. Bradshaw et al., “Sol: An Agent-Based Framework for Cyber Situation Awareness,” Künstliche Intelligenz, vol. 26, no. 2, 2012, pp. 127–140.
19. D.A. Norman, “Cognitive Artifacts,” Designing Interaction: Psychology at the Human-Computer Interface, Cambridge Univ. Press, 1992, pp. 17–38.
20. R.R. Hoffman et al., “A Rose by Any Other Name … Would Probably Be Given an Acronym,” IEEE Intelligent Systems, vol. 17, no. 4, 2002, pp. 72–80.
21. N. Sarter, D.D. Woods, and C.E. Billings, “Automation Surprises,” Handbook of Human Factors/Ergonomics, 2nd ed., John Wiley, 1997.
22. J.M. Bradshaw et al., “Introduction to Special Issue on Human-Agent-Robot Teamwork (HART),” IEEE Intelligent Systems, vol. 27, no. 2, 2012, pp. 8–13.
Jeffrey M. Bradshaw is a senior research scientist at the Florida Institute for Human and Machine Cognition. Contact him at jbradshaw@ihmc.us.

Robert R. Hoffman is a senior research scientist at the Florida Institute for Human and Machine Cognition. Contact him at rhoffman@ihmc.us.

David D. Woods is a professor at The Ohio State University in the Institute for Ergonomics. Contact him at woods.2@osu.edu.

Matthew Johnson is a research scientist at the Florida Institute for Human and Machine Cognition. Contact him at mjohnson@ihmc.us.