Abstract

As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they adhere to certain myths of autonomy that are not only damaging in their own right, but also by their continued propagation. This article busts such myths and gives reasons why each of these myths should be called out and cast aside.
HUMAN-CENTERED COMPUTING
The Seven Deadly Myths
of “Autonomous Systems”
Jeffrey M. Bradshaw, Robert R. Hoffman, Matthew Johnson, and David D. Woods

Editors: Robert R. Hoffman, Florida Institute for Human and Machine Cognition, rhoffman@ihmc.us

In this article, we explore some widespread misconceptions surrounding the topic of “autonomous systems.” The immediate catalyst for this article, like a previous article that appeared in this department,1 is a recent US Defense Science Board (DSB) Task Force Report on “The Role of Autonomy in DoD Systems.” This report affords an opportunity to examine the concept of autonomous systems in light of the new DSB findings. This theme will continue in a future column, in which we’ll outline a constructive approach to designing autonomous capabilities based on a human-centered computing perspective. But to set the stage, in this essay we bust some “myths” of autonomy.
Myths of Autonomy
The reference in our title to the “seven deadly myths” of autonomous systems alludes to the seven deadly sins. The latter were so named not only because of their intrinsic seriousness but also because the commission of one of them would engender further acts of wrongdoing. As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they have succumbed to myths of autonomy that are not only damaging in their own right but are also damaging by their continued propagation—that is, because they engender a host of other serious misconceptions and consequences. Here, we provide reasons why each of these myths should be called out and cast aside.
Myth 1: “Autonomy” is unidimensional. There is a myth that autonomy is some single thing and that everyone understands what it is. However, the word is employed with different meanings and intentions.2 “Autonomy” is straightforwardly derived from a combination of Greek terms signifying self (auto) governance (nomos), but it has two different senses in everyday usage. In the first sense, it denotes self-sufficiency—the capability of an entity to take care of itself. This sense is present in the French term autonome when, for example, it’s applied to an individual who is capable of independent living. The second sense refers to the quality of self-directedness, or freedom from outside control, as we might say of a political district that has been identified as an “autonomous region.”
The two different senses affect the way autonomy is conceptualized, and influence tacit claims about what “autonomous” machines can do. For example, in a chapter from a classic volume on agent autonomy, Sviatoslav Brainov and Henry Hexmoor3 emphasize how varying degrees of autonomy serve as a relative measure of self-directedness—that is, independence of an agent from its physical environment or social group. On the other hand, in the same volume Michael Luck and his colleagues,4 unsatisfied with defining autonomy in such relative terms, argue that the self-generation of goals should be the defining characteristic of autonomy. Such a perspective characterizes the machine in absolute terms that reflect the belief of these researchers in autonomy as self-sufficiency.

It should be evident that independence from outside control doesn’t entail the self-sufficiency of an autonomous machine. Nor do a machine’s autonomous capabilities guarantee that it will be allowed to operate in a self-directed manner. In fact, human-machine systems involve a dynamic balance of self-sufficiency and self-directedness. We will now elaborate on some of the subtleties relating to this balance.
Figure 1 illustrates some of the challenges faced by designers of machine capabilities. A major motivation for such capabilities is to reduce the burden on human operators by increasing a machine’s self-sufficiency to the point that it can be trusted to operate in a self-directed manner. However, when the self-sufficiency of the machine capabilities is seen as inadequate for the circumstances, particularly in situations where the consequences of error may be disastrous, it is common to limit the self-directedness of the machine. For example, in such circumstances a human may take control manually, or an automated policy may come into play that prevents the machine from doing harm to itself or others through faulty actions. Such a scenario brings to mind early NASA Mars rovers whose capabilities for autonomous action weren’t fully exercised due to concerns about the high cost of failure. Because their advanced capabilities for autonomous action weren’t fully trusted, NASA decided to micromanage the rovers through a sizeable team of engineers. This example also highlights that the capabilities machines have for autonomous action interact with the responsibility for outcomes and delegation of authority. Only people are held responsible for consequences (that is, only people can act as problem holders) and only people decide on how authority is delegated to automata.5
When self-directedness is reduced to the point that the machine is prevented from fully exercising its capabilities (as in the Mars rover example), the result can be described as under-reliance on the technology. That is, although a machine may be sufficiently competent to perform a set of actions in the current situation, human practice or policy may prevent it from doing so. The flipside of this error is to allow a machine to operate too freely in a situation that outstrips its capabilities (such as high self-directedness with low self-sufficiency). This error can be described as over-trust.
In light of these observations, we can characterize the primary challenge for the designers of autonomous machine capabilities as a matter of moving upward along a 45-degree diagonal on Figure 1—increasing machine capabilities while maintaining a (dynamic) balance between self-directedness and self-sufficiency. However, even when the self-directedness and self-sufficiency of autonomous capabilities are balanced appropriately for the demands of the situation, humans and machines working together frequently encounter potentially debilitating problems relating to insufficient observability or understandability (upper right quadrant of Figure 1). When highly autonomous machine capabilities aren’t well understood by people or other machines working with them, work effectiveness suffers.7,8
Whether human or machine, a “team player” must be able to observe, understand, and predict the state and actions of others.9 Many examples can be found in the literature of inadequate observability and understandability as a problem in human-machine interaction.5,10 The problem with what David Woods calls “strong silent automation” is that it fails to communicate effectively those things that would allow humans to work interdependently with it—signals that allow operators to predict, control, understand, and anticipate what the machine is or will be doing. As anyone who has wrestled with automation can attest, there’s nothing worse than a so-called smart machine that can’t tell you what it’s doing, why it’s doing something, or when it will finish. Even more frustrating—or dangerous—is a machine that’s incapable of responding to human direction when something (inevitably) goes wrong.
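To make this requirement concrete, the sketch below (our own illustration, not drawn from any particular system) shows the minimal reporting interface such a “team player” would need: a way to say what it is doing, why, and when it expects to finish, plus a way to accept human redirection. The class and method names are hypothetical.

```python
# A minimal sketch of an "observable teammate" interface, assuming hypothetical
# method names; it is not taken from any real robotics framework.

from typing import Protocol

class ObservableTeammate(Protocol):
    def current_activity(self) -> str: ...          # what it's doing
    def rationale(self) -> str: ...                 # why it's doing it
    def estimated_completion_s(self) -> float: ...  # when it expects to finish
    def redirect(self, new_goal: str) -> bool: ...  # can a human re-direct it?

class SurveyRobot:
    """Trivial stand-in implementation, used only to make the sketch runnable."""
    def __init__(self) -> None:
        self._goal = "map sector B"

    def current_activity(self) -> str:
        return f"surveying waypoint 3 of 7 for goal '{self._goal}'"

    def rationale(self) -> str:
        return "waypoint 3 covers the area with the stalest map data"

    def estimated_completion_s(self) -> float:
        return 240.0

    def redirect(self, new_goal: str) -> bool:
        self._goal = new_goal
        return True

if __name__ == "__main__":
    robot: ObservableTeammate = SurveyRobot()
    # The three questions operators most need answered, plus a redirection path.
    print(robot.current_activity(), "|", robot.rationale(), "|", robot.estimated_completion_s())
    robot.redirect("return to base")
```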
To sum up our discussion of the first myth: First, “autonomy” isn’t a unidimensional concept—it’s more useful to describe autonomous systems at least in terms of the two dimensions of self-directedness and self-sufficiency. Second, aspects of self-directedness and self-sufficiency must be balanced appropriately. Third, to maintain desirable properties of human-machine teamwork, particularly when advanced machine capabilities exhibit a significant degree of competence and self-governance, team players must be able to communicate effectively those aspects of their behavior that allow others to understand them and to work interdependently with them.
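As a way of summarizing the two-dimensional framing, the following sketch treats self-sufficiency and self-directedness as separate, situation-relative properties and reads off the Figure 1 quadrants from their combination. It is our own illustration; the numeric scales, threshold, and labels are assumptions made only for the example.

```python
# A minimal sketch (not from the article) of the two-dimensional view of
# "autonomy": self-sufficiency and self-directedness are assessed per situation,
# and their combination yields the Figure 1 quadrants. Values are illustrative.

from dataclasses import dataclass

@dataclass
class CapabilityInSituation:
    self_sufficiency: float   # 0.0-1.0: competence for *this* task and context
    self_directedness: float  # 0.0-1.0: freedom to act without outside control
    understandability: float  # 0.0-1.0: how observable/predictable it is to teammates

def assess(cap: CapabilityInSituation, threshold: float = 0.5) -> str:
    """Rough quadrant classification mirroring Figure 1 (illustrative only)."""
    sufficient = cap.self_sufficiency >= threshold
    directed = cap.self_directedness >= threshold
    if not sufficient and not directed:
        return "burden: the machine mostly adds work for its human operators"
    if sufficient and not directed:
        return "under-reliance: competent capability held back by practice or policy"
    if not sufficient and directed:
        return "over-trust: the machine operates too freely for its actual competence"
    # Both high: the balance is right, but teamwork still fails if the machine is opaque.
    if cap.understandability < threshold:
        return "not well understood: 'strong silent automation' undermines teamwork"
    return "balanced: capable, appropriately delegated, and observable"

if __name__ == "__main__":
    early_rover = CapabilityInSituation(0.8, 0.2, 0.6)  # capable but micromanaged
    print(assess(early_rover))
```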
Myth 2: The conceptualization of “levels of autonomy” is a useful scientific grounding for the development of autonomous system roadmaps. Since we’ve just argued for discarding the myth that autonomy is unidimensional, we shouldn’t have to belabor the related myth that machine autonomy can be measured on a single ordered scale of increasing levels. However, because this second myth is so pervasive, it merits separate discussion.
A recent survey of human-robot interaction concluded that “perhaps the most strongly human-centered application of the concept of autonomy is in the notion of level of autonomy.”11
Figure 1. Challenges faced by designers of autonomous machine capabilities.6 When striving to maintain an effective balance between self-sufficiency and self-directedness for highly capable machines, designers encounter the additional challenge of making the machine understandable. (The figure plots self-directedness against self-sufficiency, each ranging from low to high; the quadrants are labeled Burden where both are low, Under-reliance where self-sufficiency is high but self-directedness is low, Over-trust where self-directedness is high but self-sufficiency is low, and Not well understood in the upper right quadrant where both are high.)
However, one of the most striking recommendations of the DSB report on the role of autonomy is that the Department of Defense (DoD) should abandon the debate over definitions of levels of autonomy.12 The committee received input from multiple organizations on how some variation of definitions across levels of autonomy could guide new designs. The retired flag officers, technologists, and academics on the task force overwhelmingly and unanimously found the definitions irrelevant to the real problems, cases of success, and missed opportunities for effectively utilizing increases in autonomous capabilities for defense missions.
The two paragraphs (from pp. 23–24) summarizing the DSB’s rationale for this recommendation are worth citing verbatim:

An … unproductive course has been the numerous attempts to transform conceptualizations of autonomy made in the 1970s into developmental roadmaps. ... Sheridan’s taxonomy [of levels of automation] ... is often incorrectly interpreted as implying that autonomy is simply a delegation of a complete task to a computer, that a vehicle operates at a single level of autonomy and that these levels are discrete and represent scaffolds of increasing difficulty. Though attractive, the conceptualization of levels of autonomy as a scientific grounding for a developmental roadmap has been unproductive. ... The levels served as a tool to capture what was occurring in a system to make it autonomous; these linguistic descriptions are not suitable to describe specific milestones of an autonomous system. ... Research shows that a mission consists of dynamically changing functions, many of which can be executing concurrently as well as sequentially. Each of these functions can have a different allocation scheme to the human or computer at a given time.12
There are additional reasons why the notion of levels of automation is problematic.
First, functional differences matter. The common understanding of the levels assumes that significantly different kinds of work can be handled equivalently (such as task work and teamwork; reasoning, decisions, and actions). This reinforces the erroneous notion that “automation activities simply can be substituted for human activities without otherwise affecting the operation of the system.”13
Second, levels aren’t consistently ordinal. It isn’t always clear whether a given action should be characterized as “lower” or “higher” than another on the scale of autonomy. Moreover, a given machine capability operating in a specific situation may simultaneously be “low” on self-sufficiency while being “high” on self-directedness.7
Third, autonomy is relative to the
context of activity. Functions can’t
be automated effectively in isolation
from an understanding of the task,
the goals, and the context.
Fourth, levels of autonomy encourage reductive thinking. For example, they facilitate the perspective that activity is sequential when it’s actually simultaneous.14
Fifth, the concept of levels of autonomy is insufficient to meet both current and future challenges. This was one of the most significant findings of the DoD report. For example, many challenges facing human-machine interaction designers involve teamwork rather than the separation of duties between the human and the machine.9 Effective teamwork involves more than effective task distribution; it looks for ways to support and enhance each member’s performance;6 this need isn’t addressed by the levels of autonomy conceptualization.
Sixth, the concept of levels of autonomy isn’t “human-centered.” If it were, it wouldn’t force us to recapitulate the requirement that technologies be useable, useful, understandable, and observable.
Last, the levels provide insufficient guidance to the designer. The challenge of bridging the gap from cognitive engineering products to software engineering results is one of the most daunting of current challenges and the concept of levels of autonomy provides no assistance in dealing with this issue.
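The DSB passage quoted earlier makes the same point in operational terms: a mission decomposes into functions that run concurrently, each with its own allocation to human or computer at a given time. The sketch below (our construction, with hypothetical function names) shows why a single ordered “level” assigned to a whole platform has no place to attach in such a description.

```python
# A small sketch (our illustration, not the DSB's) of a mission as a set of
# concurrently executing functions, each with its own, time-varying allocation.
# Function names and moments are hypothetical.

from enum import Enum

class Allocation(Enum):
    HUMAN = "human"
    MACHINE = "machine"
    SHARED = "shared"

# Allocation of each mission function at two different moments in the mission.
mission_timeline = {
    "t0 (transit)": {
        "navigation": Allocation.MACHINE,
        "sensor_management": Allocation.MACHINE,
        "target_identification": Allocation.HUMAN,
        "communications": Allocation.SHARED,
    },
    "t1 (on-station, degraded link)": {
        "navigation": Allocation.SHARED,
        "sensor_management": Allocation.MACHINE,
        "target_identification": Allocation.HUMAN,
        "communications": Allocation.HUMAN,
    },
}

def summarize(timeline: dict) -> None:
    """Show that allocation varies per function and per moment, not per platform."""
    for moment, functions in timeline.items():
        print(moment)
        for alloc in Allocation:
            funcs = [f for f, a in functions.items() if a is alloc]
            print(f"  {alloc.value:8s}: {', '.join(funcs) if funcs else '-'}")

if __name__ == "__main__":
    summarize(mission_timeline)
```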
Myth 3: Autonomy is a widget. The DSB report points (on p. 23) to the fallacy of “treating autonomy as a widget”:

The competing definitions for autonomy have led to confusion among developers and acquisition officers, as well as among operators and commanders. The attempt to define autonomy has resulted in a waste of both time and money spent debating and reconciling different terms and may be contributing to fears of unbounded autonomy. The definitions have been unsatisfactory because they typically try to express autonomy as a widget or discrete component, rather than a capability of the larger system enabled by the integration of human and machine abilities.12
In other words, autonomy isn’t a discrete property of a work system, nor is it a particular kind of technology; it’s an idealized characterization of observed or anticipated interactions between the machine, the work to be accomplished, and the situation. To the degree that autonomy is actually realized in practice, it’s through the combination of these interactions. The myth of autonomy as a widget engenders the misunderstandings implicit in the next myth.
Myth 4: Autonomous systems are autonomous. Strictly speaking, the term “autonomous system” is a misnomer. No entity—and, for that matter, no person—is capable enough to be able to perform competently in every task and situation. On the other hand, even the simplest machine can seem to function “autonomously” if the task and context are sufficiently constrained. A thermostat exercises an admirable degree of self-sufficiency and self-directedness with respect to the limited tasks it’s designed to perform through the use of a simple form of automation (at least until it becomes miscalibrated).
The DSB report wisely observes that “… there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen, or Marines. … Perhaps the most important message for commanders is that all machines are supervised by humans to some degree, and the best capabilities result from the coordination and collaboration of humans and machines” (p. 24).12
Machine designs are always created with respect to a context of design assumptions, task goals, and boundary conditions. At the boundaries of the operating context for which the machine was designed, maintaining adequate performance might become a challenge. For instance, a typical home thermostat isn’t designed to work as an outdoor sensor in the significantly subzero climate of Antarctica. Consider also the work context of a Navy SEAL whose job it is to perform highly sensitive operations that require human knowledge and reasoning skills. A SEAL doing his job is usually thought of as being highly autonomous. However, a more careful examination reveals his interdependence with other members of his SEAL team to conduct team functions that can’t be performed by a single individual, just as the team is interdependent with the overall Navy mission and with the operations of other co-located military or civilian units.
What’s the result of belief in this fourth myth? People in positions of responsibility and authority might focus too much on autonomy-related problems and fixes while failing to understand that self-sufficiency is always relative to a situation. Sadly, in most cases machine capabilities are not only relative to a set of predefined tasks and goals, they are relative to a set of fixed tasks and goals. A software system might perform gloriously without supervision in circumstances within its competence envelope (itself a reflection of the designer’s intent), but fail miserably when the context changes to some circumstance that pushes the larger work system over the edge.15 Although some tasks might be worked with high efficiency and accuracy, the potential for disastrous fragility is ever present.16 Speaking of autonomy without adequately characterizing assumptions about how the task is embedded in the situation is dangerously misguided.
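The thermostat example can be made concrete with a small sketch of a controller that is self-directed only inside the envelope its designer assumed, and that escalates to a human supervisor outside it. This is our own illustration; the temperature ranges and names are assumptions, not drawn from any real device.

```python
# A minimal sketch of "autonomy relative to a design envelope": the controller
# acts on its own within the context it was designed for and defers to a human
# outside it. Ranges and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DesignEnvelope:
    min_ambient_c: float = -10.0  # assumed indoor/temperate design range
    max_ambient_c: float = 40.0

def thermostat_step(ambient_c: float, setpoint_c: float,
                    envelope: DesignEnvelope = DesignEnvelope()) -> str:
    """One control step: act autonomously inside the envelope, escalate outside it."""
    if not (envelope.min_ambient_c <= ambient_c <= envelope.max_ambient_c):
        # Outside the context the designer assumed (say, an Antarctic winter):
        # competent-looking automation becomes fragile, so hand off rather than guess.
        return "out_of_envelope: alert human supervisor"
    if ambient_c < setpoint_c - 0.5:
        return "heat_on"
    if ambient_c > setpoint_c + 0.5:
        return "heat_off"
    return "hold"

if __name__ == "__main__":
    print(thermostat_step(18.0, 21.0))   # heat_on: well within its designed context
    print(thermostat_step(-55.0, 21.0))  # out_of_envelope: never in scope for this design
```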
Myth 5: Once achieved, full autonomy obviates the need for human-machine collaboration. Much of the early research on autonomous systems was motivated by situations in which autonomous systems were required to replace humans, in theory minimizing the need for considering the human aspects of such solutions. For example, one of the earliest high-consequence applications of sophisticated agent technologies was in NASA’s Remote Agent Architecture, designed to direct the activities of unmanned spacecraft engaged in distant planetary exploration.17 The Remote Agent Architecture was expressly designed for use in human-out-of-the-loop situations where response latencies in the transmission of round-trip control sequences from earth would have impaired a spacecraft’s ability to respond to urgent problems or to take advantage of unexpected science opportunities.
Since those early days, most autonomy research has been pursued in a technology-centric fashion, as if full machine autonomy—complete independence and self-sufficiency—were a holy grail. A primary, ostensible reason for the quest is to reduce manning needs, since salaries are the largest fraction of the costs of sociotechnical systems. An example is the US Navy’s “Human Systems Integration” program, initially founded on a belief that an increase in autonomous machine capabilities (typically developed without adequate consideration for the complexities of interdependence in mixed human-machine teams) would enable the Navy to crew large vessels with smaller human complements. However, reflection on the nature of human work reveals the shortsightedness of such a singular and short-term focus: What could be more troublesome to a group of individuals engaged in dynamic, fast-paced, real-world collaboration coping with complex tasks and shifting goals than a colleague who is perfectly able to perform tasks alone but lacks the expertise required to coordinate his or her activities with those of others?
Of course, there are situations where the goal of minimizing human involvement with autonomous systems can be argued effectively—for example, some jobs in industrial manufacturing. However, it should be noted that virtually all of the most challenging deployments of autonomous systems to date—such as military unmanned air vehicles, NASA rovers, unmanned underwater vehicles, and disaster inspection robots—have involved people in crucial roles where expertise is a must. Such involvement hasn’t been merely to make up for the current limitations on machine capabilities, but also because their jointly coordinated efforts with humans were—or should have been—intrinsically part of the mission planning and operations itself.
What’s the result of belief in this myth? Researchers and their sponsors begin to assume that “all we need is more autonomy.” This kind of simplistic thinking engenders the even more grandiose myth that human factors can be avoided in the design and deployment of machines. Careful consideration will reveal that, in addition to more machine capabilities for task work, there’s a need for the kinds of breakthroughs in human-machine teamwork that would enable autonomous systems not merely to do things for people, but also to work together with people and other systems. This capacity for teamwork, not merely the potential for expanded task work, is the inevitable next leap forward required for more effective design and deployment of autonomous systems operating in a world full of people.18
Myth 6: As machines acquire more autonomy, they will work as simple substitutes (or multipliers) of human capability. The concept of automation began with the straightforward objective of replacing whenever feasible any tedious, repetitive, dirty, or dangerous task currently performed by a human with a machine that could do the same task better, faster, or cheaper. This was a core concept of the Industrial Revolution. The entire field of human factors emerged circa World War I in recognition of the need to consider the human operator in industrial design. Automation became one of the first issues to attract the notice of cyberneticists and human factors researchers during and immediately after World War II. Pioneering researchers attempted to systematically characterize the general strengths and weaknesses of humans and machines. The resulting discipline of “function allocation” aimed to provide a rational means of determining which system-level functions should be carried out by humans and which by machines.
Obviously, the suitability of a particular human or machine to take on a particular task will vary over time and in different situations. Hence, the concepts of adaptive or dynamic function allocation and adjustable autonomy emerged with the hope that shifting responsibilities between humans and machines would lead to machine and work designs more appropriate for the emerging sociotechnical workplace.2 Of course, certain tasks, such as those requiring sophisticated judgment, couldn’t be shifted to machines, and other tasks, such as those requiring ultra-precise movement, couldn’t be performed by humans. But with regard to tasks where human and machine capabilities overlapped—the area of variable task assignment—software-based decision-making schemes were proposed to allow tasks to be allocated according to the potential performer’s availability.
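For illustration, the following sketch shows the kind of naive, availability-based allocation scheme just described, in which any nominally capable and available performer can be assigned a task from the overlap region. It is our own toy example with hypothetical capability names; the article’s point, developed next, is that such schemes overlook the interdependence that reallocating a task creates.

```python
# A toy sketch (our illustration) of naive "function allocation": for tasks in
# the human/machine overlap region, pick whichever performer is available and
# nominally capable. It shows only the naive scheme itself; names are hypothetical.

from typing import Optional

performers = {
    "human_operator": {"capabilities": {"judgment", "target_identification", "route_planning"},
                       "available": True},
    "autopilot":      {"capabilities": {"route_planning", "station_keeping"},
                       "available": True},
}

def allocate(task: str) -> Optional[str]:
    """Assign a task to the first available performer claiming the needed capability."""
    for name, p in performers.items():
        if task in p["capabilities"] and p["available"]:
            return name
    return None  # nobody available: the problem belongs to the work system, not a component

if __name__ == "__main__":
    print(allocate("route_planning"))   # either could do it; availability decides
    print(allocate("judgment"))         # only the human
    print(allocate("station_keeping"))  # only the machine
```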
Over time, it became plain to researchers that things weren’t this simple. For example, many functions in complex systems are shared by humans and machines; hence, the need to consider synergies and goal conflicts among the various performers of joint actions. Function allocation isn’t a simple process of transferring responsibilities from one component to another. When system designers automate a subtask, what they’re really doing is performing a type of task distribution and, as such, they have introduced novel elements of interdependence within the work system.7 This is the lesson to be learned from studies of the “substitution myth,”13 which conclude that reducing or expanding the role of automation in joint human-machine systems may change the nature of interdependent and mutually adapted activities in complex ways. To effectively exploit the capabilities that automation provides (versus merely increasing automation), the task work—and the interdependent teamwork it induces among players in a given situation—must be understood and coordinated as a whole.
It’s easy to fall prey to the fallacy that automated assistance is a simple substitute or multiplier of human capability because, from the point of view of an outsider observing the assisted humans, it seems that—in successful cases, at least—the people are able to perform the task better or faster than they could without help. In reality, however, help of whatever kind doesn’t simply enhance our ability to perform the task: it changes the nature of the task.13,19 To take a simple example, the use of a computer rather than a pencil to compose a document can speed up the task of writing an essay in some respects, but sometimes can slow it down in other respects—for example, when electrical power goes out. The essential point is that it requires a different configuration of human skills. Similarly, a robot used to perform a household task might be able to do many things “on its own,” but this doesn’t eliminate the human’s role; it changes that role. The human responsibility is now the cognitive task of goal setting, monitoring, and controlling the robot’s progress (or regress).16
Increasing the autonomy of autonomous systems requires different kinds of human expertise and not always fewer humans. Humans and artificial agents are two disparate kinds of entities that exist in very different sorts of worlds. Humans have rich knowledge about the world that they’re trying to understand and influence, while machines are much more limited in their understanding of the world that they model and affect. This isn’t a matter of distinguishing ways that machines can compensate for things that humans are bad at. Rather, it’s a matter of characterizing interdependence: things that machines are good at and ways in which they depend on humans (and other agents) in joint activity; and things that humans are good at and ways in which they depend on the machines (and other humans).20
For the foreseeable future this fundamental asymmetry, or duality, will remain. The brightest machine agents will be limited in the generality, if not the depth, of their inferential, adaptive, social, and sensory capabilities. Humans, though fallible, are functionally rich in reasoning strategies and their powers of observation, learning, and sensitivity to context. These are the things that make adaptability and resilience of work systems possible. Adapting to appropriate mutually interdependent roles that take advantage of the respective strengths of humans and machines—and crafting natural and effective modes of interaction—are key challenges for technology, not merely the creation of increasingly capable widgets.
What’s the result of belief in the myth of machines as simple multipliers of human ability? Because design approaches based on this myth don’t adequately take into consideration the significant ways in which the introduction of autonomous capabilities can change the nature of the work itself, they lead to “clumsy automation.” And trying to solve this problem by adding more poorly designed autonomous capabilities is, in effect, adding more clumsy automation onto clumsy automation, thereby exacerbating the problem that the increased autonomy was intended to solve.
Myth 7: “Full autonomy” is not only possible, but is always desirable. In refutation of the substitution myth, Table 1 contrasts the putative benefits of automated assistance with the empirical results. Ironically, even when technology succeeds in making tasks more efficient, the human workload isn’t reduced accordingly. David Woods and Eric Hollnagel5 summarized this phenomenon as the law of stretched systems: “every system is stretched to operate at its capacity; as soon as there is some improvement, for example in the form of new technology, it will be exploited to achieve a new intensity and tempo of activity.”

As Table 1 shows, the decision to increase the role of automation in general, and autonomous capabilities in particular, is one that should be made in light of its complex effects along a variety of dimensions. In this article, we’ve tried to make the case that full autonomy, in the simplistic sense in which the term is usually employed, is barely possible. This table summarizes the reasons why increased automation isn’t always desirable.
Table 1. Putative benefits of automation versus actual experience.21

Putative benefit: Increased performance is obtained from “substitution” of machine activity for human activity.
Real complexity: Practice is transformed; the roles of people change; old and sometimes beloved habits and familiar features are altered (the envisioned world problem).

Putative benefit: Frees up human by offloading work to the machine.
Real complexity: Creates new kinds of cognitive work for the human, often at the wrong times; every automation advance will be exploited to require people to do more, do it faster, or in more complex ways (the law of stretched systems).

Putative benefit: Frees up limited attention by focusing someone on the correct answer.
Real complexity: Creates more threads to track; makes it harder for people to remain aware of and integrate all of the activities and changes around them (coordination costs, continuously).

Putative benefit: Less human knowledge is required.
Real complexity: New knowledge and skill demands are imposed on the human, and the human might no longer have a sufficient context to make decisions because they have been left out of the loop (automation surprise).

Putative benefit: Agent will function autonomously.
Real complexity: Team play with people and other agents is critical to success (principles of interdependence).

Putative benefit: Same feedback to human will be required.
Real complexity: New levels and types of feedback are needed to support people’s new roles (coordination costs, continuously).

Putative benefit: Agent enables more flexibility to the system in a generic way.
Real complexity: Resulting explosion of features, options, and modes creates new demands, types of errors, and paths toward failure (automation surprises).

Putative benefit: Human errors are reduced.
Real complexity: Both agents and people are fallible; new problems are associated with human-agent coordination breakdowns; agents now obscure information necessary for human decision making (principles of complexity).

Although continuing research to make machines more active, adaptive, and functional is essential, the point of increasing such proficiencies
isn’t merely to make the machines more independent during times when unsupervised activity is desirable or necessary (autonomous), but also to make them more capable of sophisticated interdependent activity with people and other machines when such is required (teamwork). Research in joint activity highlights the need for autonomous systems to support not only the fluid orchestration of task handoffs among people and machines, but also combined participation on shared tasks requiring continuous and close interaction (coactivity).6,9 Indeed, in situations of simultaneous human-agent collaboration on shared tasks, people and machines might be so tightly integrated in the performance of their work that interdependence is a continuous phenomenon, and the very idea of task handoffs becomes incongruous. We see this, for example, in the design of work systems to support cyber sensemaking that aim to combine the efforts of human analysts with software agents in understanding, anticipating, and responding to unfolding events in near real-time.22
The points mentioned here, like the findings of the DSB, focus on how to make effective use of the expanding power of machines. The myths we’ve discussed lead developers to introduce new machine capabilities in ways that predictably lead to unintended negative consequences and user-hostile technologies. We need to discard the myths and focus on developing coordination and adaptive mechanisms that turn platform capabilities into new levels of mission effectiveness, enabled through genuine human-centeredness. In complex domains characterized by uncertainty, machines that are merely capable of performing independent work aren’t enough. Instead, we need machines that are also capable of working interdependently.6 We commend the thoughtful work of the DSB in recognizing and exemplifying some of the significant problems caused by the seven deadly myths of autonomy, and hope these and similar efforts will lead all of us to sincere repentance and reformation.
References
1. R.R. Hoffman et al., “Trust in Automation,” IEEE Intelligent Systems, vol. 28, no. 1, 2013, pp. 84–88.
2. J.M. Bradshaw et al., “Dimensions of Adjustable Autonomy and Mixed-Initiative Interaction,” Agents and Computational Autonomy: Potential, Risks, and Solutions, LNCS, vol. 2969, Springer-Verlag, 2004, pp. 17–39.
3. S. Brainov and H. Hexmoor, “Quantifying Autonomy,” Agent Autonomy, Kluwer, 2002, pp. 43–56.
4. M. Luck, M. D’Inverno, and S. Munroe, “Autonomy: Variable and Generative,” Agent Autonomy, Kluwer, 2002, pp. 9–22.
5. D.D. Woods and E. Hollnagel, Joint Cognitive Systems: Patterns in Cognitive Systems Engineering, Taylor & Francis, 2006, chapter 11.
6. M. Johnson et al., “Autonomy and Interdependence in Human-Agent-Robot Teams,” IEEE Intelligent Systems, vol. 27, no. 2, 2012, pp. 43–51.
7. M. Johnson et al., “Beyond Cooperative Robotics: The Central Role of Interdependence in Coactive Design,” IEEE Intelligent Systems, vol. 26, no. 3, 2011, pp. 81–88.
8. D.D. Woods and E.M. Roth, “Cognitive Systems Engineering,” Handbook of Human-Computer Interaction, North-Holland, 1988.
9. G. Klein et al., “Ten Challenges for Making Automation a ‘Team Player’ in Joint Human-Agent Activity,” IEEE Intelligent Systems, vol. 19, no. 6, 2004, pp. 91–95.
10. D.A. Norman, “The ‘Problem’ of Automation: Inappropriate Feedback and Interaction, Not ‘Over-Automation,’” Philosophical Trans. Royal Soc. of London B, vol. 327, 1990, pp. 585–593.
11. M.A. Goodrich and A.C. Schultz, “Human-Robot Interaction: A Survey,” Foundations and Trends in Human-Computer Interaction, vol. 1, no. 3, 2007, pp. 203–275.
12. R. Murphy et al., The Role of Autonomy in DoD Systems, Defense Science Board Task Force Report, Washington, DC, July 2012.
13. K. Christoffersen and D.D. Woods, “How to Make Automated Systems Team Players,” Advances in Human Performance and Cognitive Engineering Research, vol. 2, Elsevier Science, 2002, pp. 1–12.
14. P.J. Feltovich et al., “Keeping It Too Simple: How the Reductive Tendency Affects Cognitive Engineering,” IEEE Intelligent Systems, vol. 19, no. 3, 2004, pp. 90–94.
15. R.R. Hoffman and D.D. Woods, “Beyond Simon’s Slice: Five Fundamental Tradeoffs That Bound the Performance of Macrocognitive Work Systems,” IEEE Intelligent Systems, vol. 26, no. 6, 2011, pp. 67–71.
16. J.K. Hawley and A.L. Mares, “Human Performance Challenges for the Future Force: Lessons from Patriot after the Second Gulf War,” Designing Soldier Systems: Current Issues in Human Factors, Ashgate, 2012, pp. 3–34.
17. N. Muscettola et al., “Remote Agent: To Boldly Go Where No AI System Has Gone Before,” Artificial Intelligence, vol. 103, nos. 1–2, 1998, pp. 5–48.
18. J.M. Bradshaw et al., “Sol: An Agent-Based Framework for Cyber Situation Awareness,” Künstliche Intelligenz, vol. 26, no. 2, 2012, pp. 127–140.
19. D.A. Norman, “Cognitive Artifacts,” Designing Interaction: Psychology at the Human-Computer Interface, Cambridge Univ. Press, 1992, pp. 17–38.
20. R.R. Hoffman et al., “A Rose by Any Other Name … Would Probably Be Given an Acronym,” IEEE Intelligent Systems, vol. 17, no. 4, 2002, pp. 72–80.
21. N. Sarter, D.D. Woods, and C.E. Billings, “Automation Surprises,” Handbook of Human Factors/Ergonomics, 2nd ed., John Wiley, 1997.
22. J.M. Bradshaw et al., “Introduction to Special Issue on Human-Agent-Robot Teamwork (HART),” IEEE Intelligent Systems, vol. 27, no. 2, 2012, pp. 8–13.
Jeffrey M. Bradshaw is a senior research scientist at the Florida Institute for Human and Machine Cognition. Contact him at jbradshaw@ihmc.us.

Robert R. Hoffman is a senior research scientist at the Florida Institute for Human and Machine Cognition. Contact him at rhoffman@ihmc.us.

David D. Woods is a professor at The Ohio State University in the Institute for Ergonomics. Contact him at woods.2@osu.edu.

Matthew Johnson is a research scientist at the Florida Institute for Human and Machine Cognition. Contact him at mjohnson@ihmc.us.
... contexte national et international fortement concurrentiel. L'automatisation des tâches par des dispositifs techniques autonomes peut alors apparaître comme la voie inéluctable dans une logique de réduction des coûts et, par voie de conséquence, des effectifs humains (Bradshaw et al., 2013). 6 Enfin, la popularité des assistants vocaux à base d'IA a pu conduire les entreprises à vouloir se saisir de nouveaux canaux d'interaction (Budiu, 2018). ...
... Une démarche de conception tournée vers le pouvoir d'agir plutôt que vers l'innovation 95 L'une des raisons ostensibles de l'introduction des chatbots dans l'entreprise est la réduction des effectifs humains (Bradshaw et al., 2013). Pour les industriels, les enjeux financiers et stratégiques relatifs à la mise en oeuvre des chatbots sont extrêmement élevés (Budiu, 2018) : non seulement les chatbots constitueraient des solutions techniques susceptibles d'optimiser les gains en matière de temps et d'argent, mais ils sont aussi le signe d'une entreprise tournée vers l'avenir et l'innovation. ...
... L'analyse de l'activité médiatisée par le chatbot propose de repositionner ces dispositifs techniques interactifs en posant la question de l'aide à l'activité humaine. Il est alors possible de voir que les chatbots ne suppriment pas l'activité des « experts » métier, mais qu'ils la transforment et introduisent de nouvelles tâches « invisibles », comme l'ont déjà montré des travaux sur l'automation (Bradshaw et al., 2013 ;Dekker & Woods, 2002) et plus particulièrement sur les agents conversationnels (Lahoual & Fréjus, 2019 ;Velkovska & Zouinar, 2018). 96 Dans ce contexte, remettre l'activité des sujets au coeur du processus de conception est urgent, car les organisations, animées par une vision déformée de l'automatisation des tâches, continuent de réduire les ressources humaines. ...
Article
Full-text available
In what way are chatbots new resources for humans? What are the conditions for their successful implementation? How do chatbots transform human activity? To answer these questions, this paper investigates how four chatbots were implemented and utilized in a professional context based on real user activity. We first show that the current design approach does not take sufficient account of the real activity and its multiple determinants, or of the users' point of view on their activity. Yet these interactive devices participate in a pre-existing socio-technical system made up of a diversity of subjects engaged in activities with multiple purposes. Through the prism of instrumental geneses, we therefore propose to identify how these chatbots can help or hinder human activity, so as to document, for design purposes, the conditions leading to the emergence of chatbot uses. After a synthetic presentation of the empirical results, we discuss the relevance of the instrumental approach when characterizing the contributions and limitations of chatbots. Finally, we argue that a design perspective supporting the postulate of human-machine asymmetry is prolific in supporting a design approach that is more focused on human power to act than on innovation.
... The meteorological community must maintain clarity concerning the language that is typically used to discuss computers and AI, as well as understand the nature of the human-computer relationship. A number of misconceptions in terminology have arisen (Bradshaw et al. 2013;Hoffman et al. 2014). One misconception surrounds the word "automation." ...
Article
Full-text available
A series of webinars and panel discussions were conducted on the topic of the evolving role of humans in weather prediction and communication, in recognition of the 100th anniversary of the founding of the AMS. One main theme that arose was the inevitability that new tools using artificial intelligence will improve data analysis, forecasting, and communication. We discussed what tools are being created, how they are being created, and how the tools will potentially affect various duties for operational meteorologists in multiple sectors of the profession. Even as artificial intelligence increases automation, humans will remain a vital part of the forecast process as that process changes over time. Additionally, both university training and professional development must be revised to accommodate the evolving forecasting process, including addressing the need for computing and data skills (including artificial intelligence and visualization), probabilistic and ensemble forecasting, decision support, and communication skills. These changing skill sets necessitate that both the U.S. government’s Meteorologist General Schedule-1340 requirements and the AMS standards for a bachelor’s degree need to be revised. Seven recommendations are presented for student and forecaster preparation and career planning, highlighting the need for students and operational meteorologists to be flexible life-long learners, acquire new skills, and be engaged in the changes to forecast technology in order to best serve the user community throughout their careers. The article closes with our vision for the ways that humans can maintain an essential role in weather prediction and communication, highlighting the interdependent relationship between computers and humans.
... This article aims to contribute to closing the gap between the theory of meaningful human control, as proposed by 1 "Level of autonomy" is a complex construct. In line with Bradshaw's seven deadly myths of autonomy [10], we acknowledge that measuring autonomy on a single ordered scale of increasing levels is insufficient because it lacks context, is not human-centred, and disregard functional differences, among other reasons. 2 Meaningful human control relates not only to the engineering of the AI agent, but also to the design of the socio-technical environment that surrounds it, including social and institutional practices [11,27,28]. As [12] elaborate, "[intelligent] devices themselves play an important role but cannot be considered without accounting for the numerous human agents, their physical environment, and the social, political and legal infrastructures in which they are embedded." ...
Article
Full-text available
How can humans remain in control of artificial intelligence (AI)-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits - but also undesirable situations where moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans; however, clear requirements for researchers, designers, and engineers are yet inexistent, making the development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying, through an iterative process of abductive thinking, four actionable properties for AI-based systems under meaningful human control, which we discuss making use of two applications scenarios: automated vehicles and AI-based hiring. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human’s ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and actions of humans who are aware of their moral responsibility. We argue that these four properties will support practically minded professionals to take concrete steps toward designing and engineering for AI systems that facilitate meaningful human control.
... Another similar case of an embodied explainer would be the challenge of self-explanatory machines, for instance, those with self-diagnosys capabilities. In the event of an incident, an autonomous car with diagnostic capacity would check whether it is your responsibility (leaving aside for the moment the supposition that the idea of a full autonomy obviating the need for human-machine collaboration is very arguable (Bradshaw et al., 2013)). For carrying out the task, the agent must work within a very complex system of responsibilities relationships, and role-taking modules (Kridalukmana et al., 2020). ...
Article
Full-text available
A widespread need to explain the behavior and outcomes of AI-based systems has emerged, due to their ubiquitous presence. Thus, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing control transference to this kind of system for decision making -or, at least, its use for assisting executive stakeholders- already affects many sensitive realms (as in Politics, Social Sciences, or Law). The decision-making power handover to opaque AI systems makes mandatory explaining those, primarily in application scenarios where the stakeholders are unaware of both the high technology applied and the basic principles governing the technological solutions. The issue should not be reduced to a merely technical problem; the explainer would be compelled to transmit richer knowledge about the system (including its role within the informational ecosystem where he/she works). To achieve such an aim, the explainer could exploit, if necessary, practices from other scientific and humanistic areas. The first aim of the paper is to emphasize and justify the need for a multidisciplinary approach that is beneficiated from part of the scientific and philosophical corpus on Explaining, underscoring the particular nuances of the issue within the field of Data Science. The second objective is to develop some arguments justifying the authors’ bet by a more relevant role of ideas inspired by, on the one hand, formal techniques from Knowledge Representation and Reasoning, and on the other hand, the modeling of human reasoning when facing the explanation. This way, explaining modeling practices would seek a sound balance between the pure technical justification and the explainer-explainee agreement.
... The insertion of domain knowledge is likewise hindered without a clear understanding of what is happening internally [180]. Indeed, many have hinted at the dangers of an autonomous system that cannot show stakeholders what they are doing, why they are doing it, whether an action can be averted, and how a human may avert such an action [35]. Arguably, if humans are not allowed to provide inputs to an AutoML system, the bare minimum of interactivity should be the provision of outputs, i.e. explanatory mechanisms that allow stakeholders to understand system behaviours and the rationale behind decision-making processes. ...
Preprint
Full-text available
As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the `how' and `why' of human-computer interaction (HCI) within these frameworks, both current and expected. Such a discussion is necessary for optimal system design, leveraging advanced data-processing capabilities to support decision-making involving humans, but it is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy. Within this context, we focus on the following questions: (i) How does HCI currently look like for state-of-the-art AutoML algorithms, especially during the stages of development, deployment, and maintenance? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, we project existing literature in HCI into the space of AutoML; this connection has, to date, largely been unexplored. In so doing, we review topics including user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, we contemplate how AutoML may manifest in effectively open-ended environments. This discussion necessarily reviews projected developmental pathways for AutoML, such as the incorporation of reasoning, although the focus remains on how and why HCI may occur in such a framework rather than on any implementational details. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.
... The need for a focus on such topics is frequently articulated. For example, Bradshaw et al. [4] have argued that "there is need for kinds of breakthroughs in human-machine teamwork that would enable systems to not merely do things for people, but with people and other systems". ...
Conference Paper
Full-text available
The increasing capabilities of Artificial Intelligence enable the support of users in a continuously growing number of applications. Current systems typically dictate that interaction between user input and AI output unfolds in discrete steps, as is the case with, for example, conversational agents. Novel scenarios require AI systems to adapt and respond to continuous user input, e.g., image-guided surgery and AI-supported text entry. In and across these applications, AI systems need to support more varied and dynamic interactions in which users and AI interact continuously and in parallel. Current methods and guidelines are often inadequate and sometimes even detrimental to user needs when considering continuous usage scenarios. Realizing a continuous interaction between users and AI requires a substantial change in perspective when designing Human-AI systems. In this SIG, we support the exchange of cutting-edge research contributing to a better understanding and improved methods and tools to design continuous Human-AI interaction.
... An analysis called ''the seven deadly myths of autonomous systems'' conducted by Bradshaw et al. 25 states that one myth is that ''once achieved, full autonomy obviates the need for human-machine collaboration.'' They add that most research on autonomy has been pursued in a technology-centric fashion and that the belief in this myth is not without consequences. ...
Automation and an increasing level of autonomy are not novelties in aircraft cockpits. One of the main goals of automation in aviation is to increase pilots' situation awareness (SA) and reduce their workload-both of high importance for a safe flight. Increasing automation has historically reduced aviation accident rates and improved efficiency. Yet, currently implemented systems have also contributed to accidents, when failed, or were insufficient to increase pilots' SA for improving their decision-making. This paper discusses the need for an enhanced decision-support system and its potential benefits for safer aviation. A model-based decision support system that leverages artificial intelligence, Integrated Flight Advisory System (IFAS), is presented. Further, the conceptual design of this system is described. The overview and roadmap provided in this paper consist of an effort toward a more holistic, pilot-centered, AI-based decision-support that can contribute to safer aviation.
... A system's autonomy is relative to a given environment, and changing that environment can have important consequences on the system's autonomy, on its ability to adapt. That is why Bradshaw et al. [20] insist that autonomy is not a discrete property of a system but "an idealized characterization of observed or anticipated interactions between the machine, the work to be accomplished, and the situation" (p. 4). ...
Article
Full-text available
Focusing on social robots, this article argues that the form of embodiment or presence in the world of agents, whether natural or artificial, is fundamental to their vulnerability and ability to learn. My goal is to compare two different types of artificial social agents, not on the basis of whether they are more or less “social” or “intelligent”, but on that of the different ways in which they are embodied, or made present in the world. One type may be called ‘true robots’. That is, machines that are three dimensional physical objects, with three required characteristics: individuality, environmental manipulation and mobility in physical space. The other type may be defined as "analytic agents", for example ‘bots’ and ‘apps’, which in social contexts can act in the world only when embedded in complex systems that include heterogeneous technologies. These two ways of being in the world are quite different from each other, and also from the way human persons are present. This difference in ways of embodiment, which is closely related to the agents’ vulnerability and ability to learn, conditions in part the way artificial agents can interact with humans, and therefore it has major consequences for the ethics (and politics) of these technologies.
... Although it is quite difficult to provide a governing definition of autonomy without the situational context of the application, Fisher et al. [20] defined autonomous systems as those systems that decide for themselves what to do and when to do it. In explaining this idea, Bradshaw et al. [21] emphasized that autonomy entails at least two dimensions: 1) self-directedness, which describes independence of an agent from its physical environment and social context; and 2) self-sufficiency, which describes self-generation of goals by the agent. ...
Article
Future deep-space crewed exploration plans include long duration missions (>1000 days) that will be constrained by lengthy transmission delays and potential occultations in communications, as well as infrequent resupply opportunities and likely periods of habitat unoccupancy. In order to meet the high level of autonomy needed for these missions, many essential capabilities and knowledge previously accomplished through ground support and human operators must now be designed into onboard systems to enable increasing self-reliance. Emergent technologies, including autonomous systems, have the potential to be mission enabling in deep space; however, as these technologies are often low-TRL and without defined mass, power, or volume, their net impact to the design must be assessed through alternative means, especially during the early planning phases. This paper proposes the concept of designing for self-reliant space habitats as the foundation for assessing potential contributions from the integration of emergent technologies. The term ‘self-reliance’ can be thought of as a combination of the spacecraft system and onboard crew's knowledge (self-awareness) and capabilities (self-sufficiency) independent of external intervention. In order to provide context for human spaceflight, these terms are first derived from related terrestrial applications. Subsequently, a methodology for characterizing the degree of self-awareness and self-sufficiency in a space habitat is outlined to provide designers with logic for assessing the contributions of emergent technologies to the overall self-reliance of the habitat as needed to allow future Earth-independence. The definitions and characterization logic provided in this work offer a systematic process for designing toward self-reliance in future deep space missions.
Chapter
Full-text available
This chapter discusses cognitive systems engineering. To build a cognitive description of a problem-solving world, it is necessary to understand how representations of the world interact with the different cognitive demands imposed by the application world in question and with the characteristics of the cognitive agents, for both existing and prospective changes in the world. Building a cognitive description is part of a problem-driven approach to the application of computational power. In tool-driven approaches, knowledge acquisition focuses on describing domain knowledge in terms of the syntax of computational mechanisms; that is, the language of implementation is used as a cognitive language. Semantic questions are displaced either to whoever selects the computational mechanisms or to the domain expert who enters knowledge. The alternative is to provide an umbrella structure of domain semantics that organizes and makes explicit what particular pieces of knowledge mean about problem solving in the domain. Acquiring and using domain semantics is essential for avoiding potential errors and specifying performance boundaries when building intelligent machines.
Conference Paper
Full-text available
Teamwork has become a widely accepted metaphor for describing the nature of multi-robot and multi-agent cooperation. By virtue of teamwork models, team members attempt to manage general responsibilities and commitments to each other in a coherent fashion that both enhances performance and facilitates recovery when unanticipated problems arise. Whereas early research on teamwork focused mainly on interaction within groups of autonomous agents or robots, there is a growing interest in leveraging human participation effectively. Unlike autonomous systems designed primarily to take humans out of the loop, many important applications require people, agents, and robots to work together in close and relatively continuous interaction. For software agents and robots to participate in teamwork alongside people in carrying out complex real-world tasks, they must have some of the capabilities that enable natural and effective teamwork among groups of people. Just as important, developers of such systems need tools and methodologies to assure that such systems will work together reliably and safely, even when they have been designed independently.
Article
Full-text available
This essay focuses on trust in the automation within macrocognitive work systems. The authors emphasize the dynamics of trust. They consider numerous meanings or kinds of trust, and the different modes of operation in which trust dynamics play a role. Their goal is to contribute to the development of a methodology for designing and analyzing collaborative human-centered work systems, one that might promote both trust "calibration" and appropriate reliance. The analysis suggests an ontology for what the authors call "active exploration for trusting" (AET).
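One common way to operationalize trust "calibration" is to compare expressed trust against observed reliability and flag over- or under-reliance. The toy sketch below illustrates that idea; the threshold, labels, and function name are assumptions made for illustration and are not taken from the essay.

```python
# Toy illustration of trust calibration: compare an operator's stated trust
# in an automated function against its observed reliability. Threshold and
# labels are invented for illustration.

def calibration(trust: float, observed_reliability: float,
                tolerance: float = 0.15) -> str:
    """Both inputs lie in [0, 1]; returns a coarse calibration label."""
    gap = trust - observed_reliability
    if gap > tolerance:
        return "over-trust (risk of misuse / over-reliance)"
    if gap < -tolerance:
        return "under-trust (risk of disuse / under-reliance)"
    return "roughly calibrated"


print(calibration(trust=0.9, observed_reliability=0.6))  # over-trust
print(calibration(trust=0.4, observed_reliability=0.7))  # under-trust
```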
Article
Full-text available
There is a common belief that making systems more autonomous will improve the system and is therefore a desirable goal. Though simple, small-scale tasks can often benefit from automation, this does not necessarily generalize to more complex joint activity. When designing today's more sophisticated systems to work closely with humans, it is important to consider not only the machine's ability to work independently through autonomy, but also its ability to support interdependence with those involved in the joint activity. We posit that to truly improve systems and have them reach their full potential, designing systems that support interdependent activity between participants is the key. Our claim is that increasing autonomy, even in a simple and benign environment, does not always result in an improved system. We show results from an experiment demonstrating this phenomenon and explain why increasing autonomy can sometimes negatively impact performance.
Article
Full-text available
The purpose of the HART workshop is to explore theories, methods, and tools in support of humans, agents, and robots working together in teams. Position papers that combine findings from fields such as computer science, artificial intelligence, cognitive science, anthropology, social and organizational psychology, and human-computer interaction to address the problem of HART are strongly encouraged. The workshop will formulate perspectives on the current state-of-the-art, identify key challenges and opportunities for future studies, and promote community-building among researchers and practitioners. The workshop will be structured around four two-hour sessions on themes relevant to HART. Each session will consist of presentations and questions on selected position papers, followed by a whole-group discussion of the current state-of-the-art and the key challenges and research opportunities relevant to the theme. During the final hour, the workshop organizers will facilitate a discussion to determine next steps. The workshop will be deemed a success when collaborative scientific projects for the coming year are defined, and publication venues are explored. For example, results from the most recent HART workshop (Lorentz Center, Leiden, The Netherlands, December 2010) will be reflected in a special issue of IEEE Intelligent Systems on HART that is slated to appear in January/February 2012.
Article
Full-text available
In this article, we describe how we augment human perception and cognition through Sol, an agent-based framework for distributed sensemaking. We describe how our visualization approach, based on IHMC's OZ flight display, has been leveraged and extended in our development of the Flow Capacitor, an analyst display for maintaining cyber situation awareness, and in the Parallel Coordinates 3D Observatory (PC3O or Observatory), a generalization of the Flow Capacitor that provides capabilities for developing and exploring lines of inquiry. We then introduce the primary implementation frameworks that provide the core capabilities of Sol: the Luna Software Agent Framework, the VIA Cross-Layer Communications Substrate, and the KAoS Policy Services Framework. We show how policy-governed agents can perform many of the tedious, high-tempo tasks of analysts and facilitate collaboration. Much of the power of Sol lies in the concept of coactive emergence, whereby a comprehension of complex situations is achieved through analysts and agents working in tandem. Not only can the approach embodied in Sol lead to a qualitative improvement in cyber situation awareness, but it is equally relevant to applications of distributed sensemaking for other kinds of complex high-tempo tasks.
Article
Full-text available
Trust is arguably the most crucial aspect of agent acceptability. Many aspects of trust can be addressed through policy. Policies are a means to dynamically regulate the behavior of system components without changing code or requiring the cooperation of the components being governed. By changing policies, a system can be continuously adjusted to accommodate variations in externally imposed constraints and environmental conditions. In this paper we describe some important dimensions relating to autonomy and give examples of how these dimensions might be adjusted in order to enhance performance of human-agent teams. We then offer a definition of mixed-initiative interaction and give examples of relevant policies. We introduce Kaa, the KAoS adjustable autonomy component. Finally, we provide a brief comparison with two other implementations of adjustable autonomy concepts.
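The general mechanism described here, in which externally supplied policies constrain an agent's actions without modifying its code, can be illustrated with a toy example. The sketch below is not the KAoS API; the policy structure, action names, and default-deny choice are invented for illustration.

```python
# Not the KAoS API: a toy illustration of externally supplied policies
# constraining an agent's actions without changing the agent's code.
# Policy structure and action names are invented.

policies = [
    {"action": "transmit_data", "allowed": True, "max_rate_mbps": 10},
    {"action": "change_orbit", "allowed": False},  # reserved for human initiative
]


def permitted(action: str, **params) -> bool:
    """Check a requested action against the current policy set."""
    for rule in policies:
        if rule["action"] == action:
            if not rule["allowed"]:
                return False
            limit = rule.get("max_rate_mbps")
            if limit is not None and params.get("rate_mbps", 0) > limit:
                return False
            return True
    return False  # default-deny for actions with no governing policy


print(permitted("transmit_data", rate_mbps=5))  # True
print(permitted("change_orbit"))                # False: deferred to humans
```

Because the policy list lives outside the agent, adjusting the agent's autonomy amounts to editing or swapping policies, which is the essence of the "dynamic regulation without changing code" idea described in the abstract.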