Abstract

As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they adhere to certain myths of autonomy that are not only damaging in their own right but also through their continued propagation. This article busts such myths and gives reasons why each should be called out and cast aside.
HUMAN-CENTERED COMPUTING
Editors: Robert R. Hoffman, Florida Institute for Human and Machine Cognition, rhoffman@ihmc.us

The Seven Deadly Myths of "Autonomous Systems"

Jeffrey M. Bradshaw, Robert R. Hoffman, Matthew Johnson, and David D. Woods
In this article, we explore some widespread misconceptions surrounding the topic of "autonomous systems." The immediate catalyst for this article, like a previous article that appeared in this department,1 is a recent US Defense Science Board (DSB) Task Force Report on "The Role of Autonomy in DoD Systems." This report affords an opportunity to examine the concept of autonomous systems in light of the new DSB findings. This theme will continue in a future column, in which we'll outline a constructive approach to designing autonomous capabilities based on a human-centered computing perspective. But to set the stage, in this essay we bust some "myths" of autonomy.
Myths of Autonomy
The reference in our title to the “seven deadly
myths” of autonomous systems alludes to the
seven deadly sins. The latter were so named not
only because of their intrinsic seriousness but also
because the commission of one of them would
engender further acts of wrongdoing. As design-
ers conceive and implement what are commonly
(but mistakenly) called autonomous systems, they
have succumbed to myths of autonomy that are
not only damaging in their own right but are also
damaging by their continued propagation—that
is, because they engender a host of other serious
misconceptions and consequences. Here, we pro-
vide reasons why each of these myths should be
called out and cast aside.
Myth 1: “Autonomy” is unidimensional. There
is a myth that autonomy is some single thing and
that everyone understands what it is. However,
the word is employed with different meanings and
intentions.2 “Autonomy” is straightforwardly de-
rived from a combination of Greek terms signi-
fying self (auto) governance (nomos), but it has
two different senses in everyday usage. In the first
sense, it denotes self-sufficiency—the capability of
an entity to take care of itself. This sense is pres-
ent in the French term autonome when, for ex-
ample, it’s applied to an individual who is capable
of independent living. The second sense refers to
the quality of self-directedness, or freedom from
outside control, as we might say of a political dis-
trict that has been identified as an "autonomous
region.”
The two different senses affect the way autonomy
is conceptualized, and influence tacit claims about
what “autonomous” machines can do. For exam-
ple, in a chapter from a classic volume on agent au-
tonomy, Sviatoslav Brainov and Henry Hexmoor3
emphasize how varying degrees of autonomy serve
as a relative measure of self-directedness—that is,
independence of an agent from its physical envi-
ronment or social group. On the other hand, in the
same volume Michael Luck and his colleagues,4
unsatisfied with defining autonomy in such rela-
tive terms, argue that the self-generation of goals
should be the defining characteristic of autonomy.
Such a perspective characterizes the machine in ab-
solute terms that reflect the belief of these research-
ers in autonomy as self-sufficiency.
It should be evident that independence from
outside control doesn't entail the self-sufficiency
of an autonomous machine. Nor do a machine’s
autonomous capabilities guarantee that it will be
allowed to operate in a self-directed manner. In
fact, human-machine systems involve a dynamic
balance of self-sufficiency and self-directedness.
We will now elaborate on some of the subtleties
relating to this balance.
Figure 1. Challenges faced by designers of autonomous machine capabilities.6 When striving to maintain an effective balance between self-sufficiency and self-directedness for highly capable machines, designers encounter the additional challenge of making the machine understandable. (The figure plots self-sufficiency against self-directedness, each running from low to high, with regions labeled burden, under-reliance, over-trust, and not well understood.)

Figure 1 illustrates some of the challenges faced by designers of machine capabilities. A major motivation for such capabilities is to reduce the burden on human operators by increasing a machine's self-sufficiency to the point that it can be trusted to operate in a self-directed manner. However, when the self-sufficiency of the machine capabilities is seen as inadequate for the circumstances, particularly in situations where the consequences
of error may be disastrous, it is com-
mon to limit the self-directedness of
the machine. For example, in such
circumstances a human may take
control manually, or an automated
policy may come into play that pre-
vents the machine from doing harm
to itself or others through faulty ac-
tions. Such a scenario brings to mind
early NASA Mars rovers whose capa-
bilities for autonomous action weren’t
fully exercised due to concerns about
the high cost of failure. Because their
advanced capabilities for autonomous
action weren’t fully trusted, NASA
decided to micromanage the rov-
ers through a sizeable team of engi-
neers. This example also highlights
that the capabilities machines have
for autonomous action interact with
the responsibility for outcomes and
delegation of authority. Only people
are held responsible for consequences
(that is, only people can act as prob-
lem holders) and only people de-
cide on how authority is delegated to
automata.5
When self-directedness is reduced
to the point that the machine is pre-
vented from fully exercising its ca-
pabilities (as in the Mars rover ex-
ample), the result can be described
as under-reliance on the technology.
That is, although a machine may be
sufciently competent to perform a
set of actions in the current situation,
human practice or policy may prevent
it from doing so. The ipside of this
error is to allow a machine to operate
too freely in a situation that outstrips
its capabilities (such as high self-di-
rectedness with low self- sufciency).
This error can be described as
over-trust.
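To make this two-dimensional framing concrete, here is a minimal Python sketch (our own illustration, not drawn from the article or the DSB report) that treats self-sufficiency and self-directedness as separate scores and names the mismatch regions discussed above; the scores and the threshold are arbitrary assumptions.

```python
from dataclasses import dataclass
from enum import Enum


class Region(Enum):
    BURDEN = "low capability, little delegation: burden stays on the operators"
    UNDER_RELIANCE = "capable but not permitted to act: under-reliance"
    OVER_TRUST = "permitted to act beyond its competence: over-trust"
    NOT_WELL_UNDERSTOOD = "capable and trusted, but must remain observable and understandable"


@dataclass
class AutonomyProfile:
    """Two illustrative scores in [0, 1]; the scale is an assumption, not a standard."""
    self_sufficiency: float   # how well the machine can take care of the task itself
    self_directedness: float  # how much freedom from outside control it is granted


def classify(p: AutonomyProfile, threshold: float = 0.5) -> Region:
    capable = p.self_sufficiency >= threshold
    free = p.self_directedness >= threshold
    if capable and free:
        return Region.NOT_WELL_UNDERSTOOD
    if capable:
        return Region.UNDER_RELIANCE
    if free:
        return Region.OVER_TRUST
    return Region.BURDEN


# Example: a highly capable rover that is nevertheless micromanaged from the ground.
print(classify(AutonomyProfile(self_sufficiency=0.8, self_directedness=0.2)))
```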
In light of these observations, we
can characterize the primary chal-
lenge for the designers of autono-
mous machine capabilities as a mat-
ter of moving upward in a 45-degree
diagonal on Figure 1—increasing
machine capabilities while main-
taining a (dynamic) balance between
self-directedness and self-sufficiency.
However, even when the self-direct-
edness and self-sufficiency of au-
tonomous capabilities are balanced
appropriately for the demands of
the situation, humans and machines
working together frequently encoun-
ter potentially debilitating problems
relating to insufficient observabil-
ity or understandability (upper right
quadrant of Figure 1). When highly
autonomous machine capabilities
aren’t well understood by people or
other machines working with them,
work effectiveness suffers.7, 8
Whether human or machine, a
“team player” must be able to ob-
serve, understand, and predict the
state and actions of others.9 Many
examples can be found in the litera-
ture of inadequate observability and
understandability as a problem in
human-machine interaction.5,10 The
problem with what David Woods
calls “strong silent automation” is
that it fails to communicate effec-
tively those things that would allow
humans to work interdependently
with it—signals that allow opera-
tors to predict, control, understand,
and anticipate what the machine is
or will be doing. As anyone who has
wrestled with automation can at-
test, there’s nothing worse than a
so-called smart machine that can’t
tell you what it’s doing, why it’s do-
ing something, or when it will finish.
Even more frustrating—or danger-
ous—is a machine that’s incapable of
responding to human direction when
something (inevitably) goes wrong.
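As a thought experiment, the observability and directability requirements just described can be written down as a minimal interface; the class and method names below are hypothetical, not an API from the article or from any existing framework.

```python
from abc import ABC, abstractmethod


class ObservableTeammate(ABC):
    """Hypothetical interface sketch; the method names are ours, not the article's."""

    @abstractmethod
    def report_activity(self) -> str:
        """What am I doing right now, and why?"""

    @abstractmethod
    def predict(self) -> str:
        """What do I expect to do next, and when will I finish?"""

    @abstractmethod
    def accept_direction(self, instruction: str) -> bool:
        """Can a human redirect me when something (inevitably) goes wrong?"""
```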
To sum up our discussion of the
first myth: First, "autonomy" isn't a
unidimensional concept—it’s more
useful to describe autonomous sys-
tems at least in terms of the two di-
mensions of self-directedness and
self-sufciency. Second, aspects of
self-directedness and self-sufficiency
must be balanced appropriately.
Third, to maintain desirable proper-
ties of human-machine teamwork,
particularly when advanced machine
capabilities exhibit a significant
degree of competence and self-gover-
nance, team players must be able to
communicate effectively those aspects
of their behavior that allow others to
understand them and to work inter-
dependently with them.
Myth 2: The conceptualization
of “levels of autonomy” is a use-
ful scientific grounding for the de-
velopment of autonomous system
roadmaps. Since we’ve just argued
for discarding the myth that auton-
omy is unidimensional, we shouldn’t
have to belabor the related myth that
machine autonomy can be measured
on a single ordered scale of increasing
levels. However, because this second
myth is so pervasive, it merits sepa-
rate discussion.
A recent survey of human-robot in-
teraction concluded that “perhaps
the most strongly human-centered
application of the concept of auton-
omy is in the notion of level of au-
tonomy."11 However, one of the most
striking recommendations of the DSB
report on the role of autonomy is its
recommendation that the Department
of Defense (DoD) should abandon the
debate over denitions of levels of au-
tonomy.12 The committee received in-
put from multiple organizations on
how some variation of denitions
across levels of autonomy could guide
new designs. The retired ag ofcers,
technologists, and academics on the
task force overwhelmingly and unani-
mously found the denitions irrelevant
to the real problems, cases of success,
and missed opportunities for effectively
utilizing increases in autonomous ca-
pabilities for defense missions.
The two paragraphs (from pp. 23–
24) summarizing the DSB’s rationale
for this recommendation are worth
citing verbatim:
An … unproductive course has been the
numerous attempts to transform concep-
tualizations of autonomy made in the
1970s into developmental roadmaps. ...
Sheridan’s taxonomy [of levels of auto-
mation] ... is often incorrectly interpreted
as implying that autonomy is simply a
delegation of a complete task to a com-
puter, that a vehicle operates at a single
level of autonomy and that these levels are
discrete and represent scaffolds of increas-
ing difculty. Though attractive, the con-
ceptualization of levels of autonomy as a
scientic grounding for a developmental
roadmap has been unproductive. ... The
levels served as a tool to capture what was
occurring in a system to make it autono-
mous; these linguistic descriptions are not
suitable to describe specic milestones
of an autonomous system. ... Research
shows that a mission consists of dynami-
cally changing functions, many of which
can be executing concurrently as well as
sequentially. Each of these functions can
have a different allocation scheme to the
human or computer at a given time.12
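The DSB's point that a mission is a set of concurrently executing functions, each with its own time-varying allocation, can be sketched in a few lines; the function names and allocation fractions below are invented solely for illustration.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class MissionSnapshot:
    """Allocation of each concurrently executing function at one moment in time.

    Values are the (hypothetical) fraction of the function delegated to the
    machine; the remainder stays with the human.
    """
    time_s: float
    allocation: Dict[str, float]


# Not one scalar "level of autonomy," but several functions, each with its own
# allocation scheme that changes as the mission unfolds (numbers invented).
snapshots = [
    MissionSnapshot(0.0,   {"navigate": 0.9, "sense": 0.7, "communicate": 0.3}),
    MissionSnapshot(120.0, {"navigate": 0.4, "sense": 0.7, "communicate": 0.8}),
]

for snap in snapshots:
    print(snap.time_s, snap.allocation)
```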
There are additional reasons why
the levels of automation notion is
problematic.
First, functional differences matter.
The common understanding of the
levels assumes that significantly dif-
ferent kinds of work can be handled
equivalently (such as task work and
teamwork; reasoning, decisions, and
actions). This reinforces the errone-
ous notion that “automation activities
simply can be substituted for human
activities without otherwise affecting
the operation of the system."13
Second, levels aren’t consistently
ordinal. It isn’t always clear whether
a given action should be character-
ized as “lower” or “higher” than
another on the scale of autonomy.
Moreover, a given machine capabil-
ity operating in a specific situation
may simultaneously be "low" on self-
sufficiency while being "high" on
self-directedness.7
Third, autonomy is relative to the
context of activity. Functions can’t
be automated effectively in isolation
from an understanding of the task,
the goals, and the context.
Fourth, levels of autonomy encour-
age reductive thinking. For example,
they facilitate the perspective that ac-
tivity is sequential when it’s actually
simultaneous.14
Fifth, the concept of levels of au-
tonomy is insufficient to meet both
current and future challenges. This
was one of the most significant find-
ings of the DoD report. For exam-
ple, many challenges facing human-
machine interaction designers involve
teamwork rather than the separa-
tion of duties between the human
and the machine.9 Effective team-
work involves more than effective
task distribution; it looks for ways
to support and enhance each mem-
ber's performance6—this need isn't
addressed by the levels of autonomy
conceptualization.
Sixth, the concept of levels of auton-
omy isn't "human-centered." If it were,
it wouldn’t force us to recapitulate
the requirement that technologies be
useable, useful, understandable, and
observable.
Last, the levels provide insufficient
guidance to the designer. The chal-
lenge of bridging the gap from cog-
nitive engineering products to soft-
ware engineering results is one of the
most daunting of current challenges
and the concept of levels of autonomy
provides no assistance in dealing with
this issue.
Myth 3: Autonomy is a widget. The
DSB report points (on p. 23) to the
fallacy of “treating autonomy as a
widget":
The competing denitions for autonomy
have led to confusion among develop-
ers and acquisition ofcers, as well as
among operators and commanders. The
attempt to dene autonomy has resulted
in a waste of both time and money spent
debating and reconciling different terms
and may be contributing to fears of un-
bounded autonomy. The denitions have
been unsatisfactory because they typi-
cally try to express autonomy as a wid-
get or discrete component, rather than
a capabilit y of the larger system enabled
by the integration of human and machine
abilities.12
In other words, autonomy isn’t a
discrete property of a work system,
nor is it a particular kind of technol-
ogy; it’s an idealized characterization
of observed or anticipated interac-
tions between the machine, the work
to be accomplished, and the situation.
To the degree that autonomy is actu-
ally realized in practice, it’s through
the combination of these interactions.
The myth of autonomy as a widget
engenders the misunderstandings im-
plicit in the next myth.
Myth 4: Autonomous systems are
autonomous. Strictly speaking, the
term “autonomous system” is a mis-
nomer. No entity—and, for that mat-
ter, no person—is capable enough
to be able to perform competently
in every task and situation. On the
other hand, even the simplest ma-
chine can seem to function “autono-
mously” if the task and context are
sufciently constrained. A thermostat
exercises an admirable degree of self-
sufciency and self-directedness with
respect to the limited tasks it’s de-
signed to perform through the use of
a simple form of automation (at least
until it becomes miscalibrated).
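A minimal sketch of the thermostat point, with invented numbers: the controller is fully "self-directed" inside a narrow design envelope, and its apparent autonomy evaporates the moment conditions fall outside the assumptions it was built on.

```python
class Thermostat:
    """Minimal sketch: self-directed only inside a narrow, fixed design envelope.

    The envelope bounds and setpoint logic are assumptions made for illustration.
    """

    DESIGN_ENVELOPE_C = (-10.0, 45.0)  # ambient range the device is assumed to be rated for

    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c

    def command(self, ambient_c: float) -> str:
        low, high = self.DESIGN_ENVELOPE_C
        if not (low <= ambient_c <= high):
            # Outside the design assumptions (say, an Antarctic winter), the
            # device's apparent autonomy collapses and a human must step in.
            return "fault: outside design envelope, request human attention"
        return "heat on" if ambient_c < self.setpoint_c else "heat off"


print(Thermostat(setpoint_c=20.0).command(ambient_c=-40.0))
```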
The DSB report wisely observes
that “… there are no fully autono-
mous systems just as there are no
fully autonomous soldiers, sailors,
airmen, or Marines. … Perhaps the
most important message for com-
manders is that all machines are su-
pervised by humans to some degree,
and the best capabilities result from
the coordination and collaboration of
humans and machines” (p. 24).12
Machine designs are always created
with respect to a context of design as-
sumptions, task goals, and boundary
conditions. At the boundaries of the
operating context for which the ma-
chine was designed, maintaining ad-
equate performance might become
a challenge. For instance, a typical
home thermostat isn’t designed to
work as an outdoor sensor in the sig-
nicantly subzero climate of Antarc-
tica. Consider also the work context
of a Navy SEAL whose job it is to per-
form highly sensitive operations that
require human knowledge and rea-
soning skills. A SEAL doing his job is
usually thought of as being highly au-
tonomous. However, a more careful
examination reveals his interdepen-
dence with other members of his SEAL
team to conduct team functions that
can’t be performed by a single indi-
vidual, just as the team is interdepen-
dent with the overall Navy mission
and with the operations of other co-
located military or civilian units.
What’s the result of belief in this
fourth myth? People in positions of
responsibility and authority might
focus too much on autonomy-related
problems and xes while failing to
understand that self-sufciency is al-
ways relative to a situation. Sadly,
in most cases machine capabilities
are not only relative to a set of pre-
dened tasks and goals, they are
relative to a set of xed tasks and
goals. A software system might per-
form gloriously without supervision
in circumstances within its compe-
tence envelope (itself a reflection of
the designer’s intent), but fail misera-
bly when the context changes to some
circumstance that pushes the larger
work system over the edge.15 Al-
though some tasks might be worked
with high efficiency and accuracy, the
potential for disastrous fragility is
ever present.16 Speaking of autonomy
without adequately characterizing as-
sumptions about how the task is em-
bedded in the situation is dangerously
misguided.
Myth 5: Once achieved, full auton-
omy obviates the need for human-
machine collaboration. Much of the
early research on autonomous sys-
tems was motivated by situations in
which autonomous systems were re-
quired to replace humans, in theory
minimizing the need for consider-
ing the human aspects of such solu-
tions. For example, one of the earliest
high-consequence applications of so-
phisticated agent technologies was in
NASA’s Remote Agent Architecture,
designed to direct the activities of un-
manned spacecraft engaged in distant
planetary exploration.17 The Remote
Agent Architecture was expressly de-
signed for use in human-out-of-the-
loop situations where response laten-
cies in the transmission of round-trip
control sequences from earth would
have impaired a spacecraft’s ability
to respond to urgent problems or to
take advantage of unexpected science
opportunities.
Since those early days, most auton-
omy research has been pursued in a
technology-centric fashion, as if full
machine autonomy—complete inde-
pendence and self-sufficiency—were
a holy grail. A primary, ostensible
reason for the quest is to reduce man-
ning needs, since salaries are the larg-
est fraction of the costs of sociotech-
nical systems. An example is the US
Navy’s “Human Systems Integration”
program, initially founded on a belief
that an increase in autonomous ma-
chine capabilities (typically developed
without adequate consideration for
the complexities of interdependence
in mixed human-machine teams)
would enable the Navy to crew large
vessels with smaller human comple-
ments. However, reection on the
nature of human work reveals the
shortsightedness of such a singular
and short-term focus: What could
be more troublesome to a group of
individuals engaged in dynamic, fast-
paced, real-world collaboration cop-
ing with complex tasks and shifting
goals than a colleague who is per-
fectly able to perform tasks alone but
lacks the expertise required to coor-
dinate his or her activities with those
of others?
Of course, there are situations
where the goal of minimizing hu-
man involvement with autonomous
systems can be argued effectively—
for example, some jobs in industrial
manufacturing. However, it should
be noted that virtually all of the most
challenging deployments of autono-
mous systems to date—such as mili-
tary unmanned air vehicles, NASA
rovers, unmanned underwater ve-
hicles, and disaster inspection ro-
bots—have involved people in crucial
roles where expertise is a must. Such
involvement hasn’t been merely to
make up for the current limitations
on machine capabilities, but also be-
cause their jointly coordinated efforts
with humans were—or should have
been—intrinsically part of the mis-
sion planning and operations itself.
What’s the result of belief in this
myth? Researchers and their sponsors
begin to assume that “all we need is
more autonomy.” This kind of sim-
plistic thinking engenders the even
more grandiose myth that human
factors can be avoided in the design
and deployment of machines. Care-
ful consideration will reveal that, in
addition to more machine capabilities
for task work, there’s a need for the
kinds of breakthroughs in human-
machine teamwork that would en-
able autonomous systems not merely
to do things for people, but also to
work together with people and other
systems. This capacity for teamwork,
not merely the potential for expanded
task work, is the inevitable next leap
forward required for more effective
design and deployment of autono-
mous systems operating in a world
full of people.18
Myth 6: As machines acquire more
autonomy, they will work as simple
substitutes (or multipliers) of human
capability. The concept of automa-
tion began with the straightforward
objective of replacing whenever fea-
sible any tedious, repetitive, dirty, or
dangerous task currently performed
by a human with a machine that
could do the same task better, faster,
or cheaper. This was a core concept
of the Industrial Revolution. The en-
tire eld of human factors emerged
circa World War 1 in recognition of
the need to consider the human oper-
ator in industrial design. Automation
became one of the rst issues to at-
tract the notice of cyberneticists and
human factors researchers during and
immediately after World War II. Pio-
neering researchers attempted to sys-
tematically characterize the general
strengths and weaknesses of humans
and machines. The resulting disci-
pline of “function allocation” aimed
to provide a rational means of deter-
mining which system-level functions
should be carried out by humans and
which by machines.
Obviously, the suitability of a par-
ticular human or machine to take on a
particular task will vary over time and
in different situations. Hence, the con-
cepts of adaptive or dynamic function
allocation and adjustable autonomy
emerged with the hope that shifting
responsibilities between humans and
machines would lead to machine and
work designs more appropriate for the
emerging sociotechnical workplace.2
Of course, certain tasks, such as those
requiring sophisticated judgment,
couldn’t be shifted to machines, and
other tasks, such as those requiring ul-
tra-precise movement, couldn’t be per-
formed by humans. But with regard
to tasks where human and machine
capabilities overlapped—the area of
variable task assignment—software-
based decision-making schemes were
proposed to allow tasks to be allo-
cated according to the potential per-
former’s availability.
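A minimal sketch of such an availability-based allocation scheme appears below (the performer names and competencies are invented); as the next paragraph notes, this is exactly the kind of simple transfer-of-responsibility logic that turned out to be inadequate.

```python
from dataclasses import dataclass
from typing import List, Optional, Set


@dataclass
class Performer:
    name: str
    competencies: Set[str]  # tasks this human or machine can perform
    busy: bool = False


def allocate(task: str, performers: List[Performer]) -> Optional[Performer]:
    """Naive availability-based assignment of an overlapping task.

    Deliberately ignores the interdependence and coordination costs discussed
    in the surrounding text; it only illustrates the idea being critiqued.
    """
    for p in performers:
        if task in p.competencies and not p.busy:
            p.busy = True
            return p
    return None


team = [Performer("operator", {"judge", "monitor"}),
        Performer("robot", {"monitor", "precise_motion"})]
print(allocate("monitor", team).name)  # -> operator (first available performer)
```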
Over time, it became plain to re-
searchers that things weren’t this sim-
ple. For example, many functions in
complex systems are shared by hu-
mans and machines; hence, the need
to consider synergies and goal con-
flicts among the various performers
of joint actions. Function allocation
isn’t a simple process of transfer-
ring responsibilities from one com-
ponent to another. When system de-
signers automate a subtask, what
they’re really doing is performing a
type of task distribution and, as such,
have introduced novel elements of
interdependence within the work sys-
tem.7 This is the lesson to be learned
from studies of the “substitution
myth,"13 which conclude that reduc-
ing or expanding the role of automa-
tion in joint human-machine systems
may change the nature of interdepen-
dent and mutually adapted activities
in complex ways. To effectively ex-
ploit the capabilities that automation
provides (versus merely increasing au-
tomation), the task work—and the
interdependent teamwork it induces
among players in a given situation—
must be understood and coordinated
as a whole.
It’s easy to fall prey to the fallacy
that automated assistance is a simple
substitute or multiplier of human ca-
pability because, from the point of
view of an outsider observing the as-
sisted humans, it seems that—in suc-
cessful cases, at least—the people are
able to perform the task better or
faster than they could without help.
In reality, however, help of whatever
kind doesn’t simply enhance our abil-
ity to perform the task: it changes
the nature of the task.13,19 To take
a simple example, the use of a com-
puter rather than a pencil to compose
a document can speed up the task of
writing an essay in some respects,
but sometimes can slow it down in
other respects—for example, when
electrical power goes out. The es-
sential point is that it requires a dif-
ferent conguration of human skills.
Similarly, a robot used to perform a
household task might be able to do
many things “on its own,” but this
doesn’t eliminate the human’s role, it
changes that role. The human respon-
sibility is now the cognitive task of
goal setting, monitoring, and control-
ling the robot’s progress (or regress).16
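A minimal sketch of that changed role, with invented status values: the robot performs the task work, but the human still carries the cognitive work of goal setting, monitoring, and redirection.

```python
import random


def robot_step(goal: str) -> str:
    """Stand-in for one increment of autonomous task work (purely illustrative)."""
    return random.choice(["progress", "stuck", "done"])


def supervise(goal: str, max_steps: int = 10) -> None:
    """The human's changed role: set the goal, monitor, and redirect as needed."""
    for step in range(max_steps):
        status = robot_step(goal)        # the robot acts "on its own"
        print(f"step {step}: {status}")  # the human monitors progress (or regress)
        if status == "done":
            return
        if status == "stuck":
            goal = "recover and resume"  # the human redirects with a revised goal
    print("escalate: goal not reached, human takes over")


supervise("tidy the kitchen")
```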
Increasing the autonomy of autono-
mous systems requires different kinds
of human expertise and not always
fewer humans. Humans and artificial
agents are two disparate kinds of enti-
ties that exist in very different sorts of
worlds. Humans have rich knowledge
about the world that they’re trying to
understand and influence, while ma-
chines are much more limited in their
understanding of the world that they
model and affect. This isn’t a matter
of distinguishing ways that machines
can compensate for things that hu-
mans are bad at. Rather, it’s a mat-
ter of characterizing interdependence:
things that machines are good at and
ways in which they depend on hu-
mans (and other agents) in joint activ-
ity; and things that humans are good
at and ways in which they depend on
the machines (and other humans).20
For the foreseeable future this fun-
damental asymmetry, or duality, will
remain. The brightest machine agents
will be limited in the generality, if not
the depth, of their inferential, adap-
tive, social, and sensory capabilities.
Humans, though fallible, are function-
ally rich in reasoning strategies and
their powers of observation, learn-
ing, and sensitivity to context. These
are the things that make adaptability
and resilience of work systems possi-
ble. Adapting to appropriate mutually
interdependent roles that take advan-
tage of the respective strengths of hu-
mans and machines—and crafting
natural and effective modes of inter-
action—are key challenges for tech-
nology—not merely the creation of in-
creasingly capable widgets.
What’s the result of belief in the
myth of machines as simple multi-
pliers of human ability? Because de-
sign approaches based on this myth
don’t adequately take into consider-
ation the significant ways in which
the introduction of autonomous capa-
bilities can change the nature of the
work itself, they lead to "clumsy au-
tomation.” And trying to solve this
problem by adding more poorly de-
signed autonomous capabilities is, in
effect, adding more clumsy automa-
tion onto clumsy automation, thereby
exacerbating the problem that the in-
creased autonomy was intended to
solve.
Myth 7: “Full autonomy” is not only
possible, but is always desirable. In
refutation of the substitution myth,
Table 1 contrasts the putative benefits
of automated assistance with the em-
pirical results. Ironically, even when
technology succeeds in making tasks
more efcient, the human work-
load isn’t reduced accordingly. Da-
vid Woods and Eric Hollnagel5 sum-
marized this phenomenon as the law
of stretched systems: “every system
is stretched to operate at its capac-
ity; as soon as there is some improve-
ment, for example in the form of new
technology, it will be exploited to
achieve a new intensity and tempo of
activity."
As Table 1 shows, the decision to
increase the role of automation in
general, and autonomous capabili-
ties in particular, is one that should
be made in light of its complex ef-
fects along a variety of dimensions.
In this article, we’ve tried to make
the case that full autonomy, the sim-
plistic sense in which the term is usu-
ally employed, is barely possible. This
table summarizes the reasons why
increased automation isn’t always
desirable.
Table 1. Putative benefits of automation versus actual experience.21

Putative benefit: Increased performance is obtained from "substitution of machine activity for human activity."
Real complexity: Practice is transformed; the roles of people change; old and sometimes beloved habits and familiar features are altered—the envisioned world problem.

Putative benefit: Frees up human by offloading work to the machine.
Real complexity: Creates new kinds of cognitive work for the human, often at the wrong times; every automation advance will be exploited to require people to do more, do it faster, or in more complex ways—the law of stretched systems.

Putative benefit: Frees up limited attention by focusing someone on the correct answer.
Real complexity: Creates more threads to track; makes it harder for people to remain aware of and integrate all of the activities and changes around them—with coordination costs, continuously.

Putative benefit: Less human knowledge is required.
Real complexity: New knowledge and skill demands are imposed on the human and the human might no longer have a sufficient context to make decisions, because they have been left out of the loop—automation surprise.

Putative benefit: Agent will function autonomously.
Real complexity: Team play with people and other agents is critical to success—principles of interdependence.

Putative benefit: Same feedback to human will be required.
Real complexity: New levels and types of feedback are needed to support peoples' new roles—with coordination costs, continuously.

Putative benefit: Agent enables more flexibility to the system in a generic way.
Real complexity: Resulting explosion of features, options, and modes creates new demands, types of errors, and paths toward failure—automation surprises.

Putative benefit: Human errors are reduced.
Real complexity: Both agents and people are fallible; new problems are associated with human-agent coordination breakdowns; agents now obscure information necessary for human decision making—principles of complexity.

Although continuing research to make machines more active, adaptive, and functional is essential, the point of increasing such proficiencies
isn’t merely to make the machines
more independent during times
when unsupervised activity is desir-
able or necessary (autonomous), but
also to make them more capable of
sophisticated interdependent activ-
ity with people and other machines
when such is required (teamwork).
Research in joint activity highlights
the need for autonomous systems to
support not only the uid orchestra-
tion of task handoffs among people
and machines, but also combined
participation on shared tasks requir-
ing continuous and close interaction
(coactivity).6,9 Indeed, in situations
of simultaneous human-agent col-
laboration on shared tasks, people
and machines might be so tightly in-
tegrated in the performance of their
work that interdependence is a con-
tinuous phenomenon, and the very
idea of task handoffs becomes in-
congruous. We see this, for exam-
ple, in the design of work systems
to support cyber sensemaking that
aim to combine the efforts of human
analysts with software agents in un-
derstanding, anticipating, and re-
sponding to unfolding events in near
real-time.22
The points mentioned here, like
the ndings of the DSB, focus on
how to make effective use of the ex-
panding power of machines. The
myths we’ve discussed lead devel-
opers to introduce new machine ca-
pabilities in ways that predictably
lead to unintended negative conse-
quences and user-hostile technolo-
gies. We need to discard the myths
and focus on developing coordina-
tion and adaptive mechanisms that
turn platform capabilities into new
levels of mission effectiveness—en-
abled through genuine human-cen-
teredness. In complex domains
characterized by uncertainty, ma-
chines that are merely capable of
performing independent work aren’t
enough. Instead, we need machines
that are also capable of working in-
terdependently.6 We commend the
thoughtful work of the DSB in rec-
ognizing and exemplifying some of
the significant problems caused by
the seven deadly myths of auton-
omy, and hope these and similar
efforts will lead all of us to sincere
repentance and reformation.
References
1. R.R. Hoffman et al., "Trust in Automa-
tion," IEEE Intelligent Systems, vol.
28, no. 1, 2013, pp. 84–88.
2. J.M. Bradshaw et al., “Dimensions
of Adjustable Autonomy and Mixed-
Initiative Interaction," Agents and
Computational Autonomy: Potential,
Risks, and Solutions, LNCS, vol. 2969,
Springer-Verlag, 2004, pp. 17–39.
3. S. Brainov and H. Hexmoor, “Quan-
tifying Autonomy,” Agent Autonomy,
Kluwer, 2002, pp. 43–56.
4. M. Luck, M. D’Inverno, and S. Mun-
roe, “Autonomy: Variable and Genera-
tive,” Agent Autonomy, Kluwer, 2002,
pp. 9–22.
5. D.D. Woods and E. Hollnagel, Joint
Cognitive Systems: Patterns in Cogni-
tive Systems Engineering, Taylor &
Francis, 2006, chapter 11.
6. M. Johnson et al., “Autonomy and In-
terdependence in Human-Agent-Robot
Teams," IEEE Intelligent Systems,
vol. 27, no. 2, 2012, pp. 43–51.
7. M. Johnson et al., “Beyond Coop-
erative Robotics: The Central Role of
Interdependence in Coactive Design,”
IEEE Intelligent Systems, vol. 26, no.
3, 2011, pp. 81–88.
8. D.D. Woods and E.M. Roth, “Cogni-
tive Systems Engineering,” Handbook
of Human-Computer Interaction,
North-Holland, 1988.
9. G. Klein et al., “Ten Challenges for
Making Automation a 'Team Player'
in Joint Human-Agent Activity,” IEEE
Intelligent Systems, vol. 19, no. 6,
2004, pp. 91–95.
10. D.A. Norman, “The ‘Problem’ of Au-
tomation: Inappropriate Feedback and
Interaction, Not ‘Over-Automation.’”
Philosophical Trans. Royal Soc. of
London B, vol. 327, 1990,
pp. 585–593.
11. M.A. Goodrich and A.C. Schultz,
“Human-Robot Interaction: A Survey,”
Foundations and Trends in Human-
Computer Interaction, vol. 1, no. 3,
2007, pp. 203–275.
12. R. Murphy et al., The Role of Auton-
omy in DoD Systems, Defense Science
Board Task Force Report, July 2012,
Washington, DC.
13. K. Christoffersen and D.D. Woods,
“How to Make Automated Systems
Team Players,” Advances in Human
Performance and Cognitive Engineer-
ing Research, vol. 2, Elsevier Science,
2002, pp. 1–12.
14. P.J. Feltovich et al., "Keeping It Too
Simple: How the Reductive Tendency
Affects Cognitive Engineering," IEEE
Intelligent Systems, vol. 19, no. 3,
2004, pp. 90–94.
15. R.R. Hoffman and D.D. Woods, “Be-
yond Simon’s Slice: Five Fundamental
Tradeoffs That Bound the Performance
of Macrocognitive Work Systems,”
IEEE Intelligent Systems, vol. 26,
no. 6, 2011, pp. 67–71.
16. J.K. Hawley and A.L. Mares, “Human
Performance Challenges for the Future
Force: Lessons from Patriot after the
Second Gulf War,” Designing Soldier
Systems: Current Issues in Human
Factors, Ashgate, 2012, pp. 3–34.
17. N. Muscettola et al., "Remote Agent:
To Boldly Go Where No AI System Has
Gone Before," Artificial Intelligence,
vol. 103, nos. 1–2, 1998, pp. 5–48.
18. J.M. Bradshaw et al., “Sol: An Agent-
Based Framework for Cyber Situation
Awareness,” Künstliche Intelligenz,
vol. 26, no. 2, 2012, pp. 127–140.
19. D.A. Norman, “Cognitive Artifacts,”
Designing Interaction: Psychology at
the Human-Computer Interface, Cam-
bridge Univ. Press, 1992, pp. 17–38.
20. R.R. Hoffman et al., “A Rose by Any
Other Name … Would Probably Be
Given an Acronym,” IEEE Intelligent
Systems, vol. 17, no. 4, 2002, pp. 72–80.
21. N. Sarter, D.D. Woods, and C.E. Bill-
ings, “Automation Surprises,” Hand-
book of Human Factors/Ergonomics,
2nd ed., John Wiley, 1997.
22. J.M. Bradshaw et al., “Introduction to
Special Issue on Human-Agent-Robot
Teamwork (HART),” IEEE Intelligent
Systems, vol. 27, no. 2, 2012,
pp. 8–13.
Jeffrey M. Bradshaw is a senior research
scientist at the Florida Institute for Human
and Machine Cognition. Contact him at
jbradshaw@ihmc.us.
Robert R. Hoffman is a senior research
scientist at the Florida Institute for Human
and Machine Cognition. Contact him at
rhoffman@ihmc.us.
David D. Woods is a professor at The Ohio
State University in the Institute for Ergo-
nomics. Contact him at woods.2@osu.edu.
Matthew Johnson is a research scientist
at the Florida Institute for Human and Ma-
chine Cognition. Contact him at
mjohnson@ihmc.us.