Theory or Review Paper

The Risks of Autonomy: Doyle’s Catch

David D. Woods, The Ohio State University

Journal of Cognitive Engineering and Decision Making, 2016, Volume 10, Number 2, pp. 131–133. DOI: 10.1177/1555343416653562. Copyright © 2016, Human Factors and Ergonomics Society.
Keywords: autonomy, complexity, human automation interaction, resilience engineering
As a participant in multiple recent national advisory activities, I have listened to many technology advocates present briefings that envisioned the future after deployments of increasingly autonomous technologies (e.g., Abbott, McKenney, & Railsback, 2013; Murphy & Shields, 2012; National Research Council, 2014). The briefings uniformly focused on the benefits that will flow from additional investments in autonomous technologies. The message is consistent: In the near future we will be able to delegate authority to networks of vehicles that will then carry out a widening range of activities autonomously. Even though these activities serve the purposes of various human stakeholders, the presenters take it for granted that humans’ involvement will decrease and, eventually, become unnecessary. These same beliefs about the impact of new technology have accompanied past advances even though the actual impacts have been quite different than those envisioned (Sarter, Woods, & Billings, 1997).
Envisioning the future is a precarious enterprise that is subject to biases. As past work has shown, claims about the effects of future technology change are underspecified, ungrounded, and overconfident, whereas new risks are missed, ignored, or downplayed (Woods & Dekker, 2000). The new capabilities trigger a much wider and more complex set of reverberations, including new forms of complexity and new risks. Failure to anticipate and design for the new challenges that are certain to arise following periods of technology change leads to automation surprises, when advocates are surprised by negative unintended consequences that offset apparent benefits (Woods, 1996).
Today’s common beliefs about
increasingly autonomous capabilities replay
what has been observed in previous cycles
of technology change. Risks associated
with autonomy are ignored and downplayed,
setting the stage for future automation
surprises.
A new risk of autonomy has arisen as a
result of the power of today’s technologies and
is captured in Doyle’s Catch (Alderson &
Doyle, 2010):
Computer-based simulation and rapid prototyping tools are now broadly
available and powerful enough that it is
relatively easy to demonstrate almost
anything, provided that conditions are
made sufficiently idealized. However,
the real world is typically far from
idealized, and thus a system must have
enough robustness in order to close the
gap between demonstration and the real
thing.
(J. Doyle / D. Alderson, personal
communication, January 4, 2013)
The technology advocates I witnessed fell
directly into Doyle’s Catch. They presumed that
because capabilities could be demonstrated
under some conditions, extending the prototypes
to handle the full range of complexities that
emerge and change over life cycles would be
straightforward. Across all the briefings, when
the listeners pointed to gaps, the response was
the same: “With investment, engineering developments on their own momentum will address
these concerns, but outside organizations can
slow progress and add costs.” When the listeners
identified issues that admittedly lack solutions
today, the response was “these solutions will
come with application of sufficient engineering
and innovation energy, but this energy can be
released only if organizational and regulatory
barriers are removed.”
Doyle’s Catch shows that this optimism is
insufficient. Emerging capabilities, because they
are powerful, produce new technical challenges,
which if not addressed will produce negative
unintended consequences. Doyle’s Catch poses
a new technical challenge: How can design and testing “close the gap between the demonstration and the real thing”? This challenge is not trivial and has not been addressed in the development of increasingly autonomous systems.
Doyle’s Catch contains three main technical
challenges: complexity, life cycle, and testing.
Complexity
Increasingly autonomous things such as
road or airborne vehicles are not “things” at
all but instead are complex networks of
multiple algorithms, control loops, sensors,
and human roles that interact over different
time scales and changing conditions. Some
parts of the network are onboard the vehicle
or inside the vehicle’s “skin,” whereas others
are offboard. For example, one briefing
described a vehicle entered in one of Defense
Advanced Research Projects Agency’s Grand
Challenges that, based on the presentation,
used about 18 sensor packages, 20 algorithms
(not counting basic sensor processing/actuator
controllers), and an undetermined number of
modes. Types of algorithms included temporal logic; sensor fusion; multiple path, traffic, and mission planners; conflict management; health monitoring; fault management; optimization; classifiers; models of the environment (maps); obstacle detection; road finding; vehicle finding; and sensor validation checks. Preparing the vehicle for the Grand Challenge involved 75 engineers from one of the top engineering universities in the world over 18
calendar months. Despite this effort, the
vehicle did not perform all that well in the
competition. Extending the performance
envelope of this vehicle will, according to the
presenters, be addressed by even more sensors,
more algorithms, and more computation.
There appears to be no limit to the
complexity of interacting and interdependent
computational elements in this program. Closing
the gap between the demonstration and the real
thing requires the development of new methods
to manage creeping complexity and the associated costs.
Life Cycle
Doyle’s Catch forces us to wrestle with how to design systems that will need to change continuously over their life cycles. The architecture needs to be “poised to change,” especially as the new systems provide valuable capabilities to stakeholders (Woods, 2015; Cook, 2016). The systems will need to be able to adapt, or be adapted, to handle new tasks in new contexts, participate in new relationships, and function under new pressures as stakeholders and problem holders adapt to take advantage of the new capabilities and to work around the new gaps that emerge. As software-intensive networks, increasingly autonomous systems, over their life cycles, will face:
• challenges to assumptions and boundary conditions,
• surprise events,
• changed conditions and contexts of use and reverberating effects,
• adaptive shortfalls that will require responsible people to step into the breach to innovate locally, and
• resource fluctuations that change organizational resilience and produce brittleness.
How are we to model/analyze the dynamic patterns arising as software-intensive, technology-
based systems operate at scale with changing
autonomous capabilities in changing contexts?
Handling life cycle dynamics will require an
architecture equipped with the capacity to adjust
performance over a wide dynamic range (Doyle
& Csete, 2011). This is, in part, the target of
extensible critical digital services (Allspaw,
2012) and closely related to resilience engineering (Woods, 2015). Closing the gap between the
demonstration and the real thing requires the
development of new ways to design systems to
be manageable and extensible over life cycles.
In particular, it will require reinventing certification and V&V to make them continuous activities, starting early in design and continuing as the system is implemented, rather than one-time acceptance hurdles.
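As one illustration of what continuous V&V might look like in software terms (a minimal sketch under my own assumptions, not a method from this paper), requirements can be encoded as automated checks that run on every change to the system, so that certification evidence accumulates continuously instead of at a single acceptance gate. All check names, measurements, and thresholds below are hypothetical.

    from typing import Callable, NamedTuple

    class Check(NamedTuple):
        # One certification requirement expressed as an executable check.
        name: str
        requirement: Callable[[float], bool]  # predicate over a measured value
        measure: Callable[[], float]          # obtains the latest measurement

    def obstacle_detection_rate() -> float:
        # Hypothetical stand-in for running a regression suite on the current build.
        return 0.996

    CHECKS = [
        Check("obstacle detection rate >= 99.5%",
              requirement=lambda rate: rate >= 0.995,
              measure=obstacle_detection_rate),
    ]

    def continuous_vv() -> bool:
        # Run the full suite; intended to gate every change, not just final acceptance.
        all_pass = True
        for check in CHECKS:
            value = check.measure()
            passed = check.requirement(value)
            print(f"{check.name}: {'PASS' if passed else 'FAIL'} (measured {value})")
            all_pass = all_pass and passed
        return all_pass

    if __name__ == "__main__":
        raise SystemExit(0 if continuous_vv() else 1)

The design point is that the checks, not a final review meeting, carry the certification argument, so evidence stays current as the system changes.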
Testing for Brittleness Rather Than Feasibility
Doyle’s Catch highlights how demonstrations mask what really works and what is vapor. To check for vapor, one can use the turnaround test: How much work does it take to get a system ready to handle the next mission/case/environment, when the next is not a simple parametric variation of the previous demonstration? Existing autonomous system prototypes, such as the vehicle described earlier, would likely score poorly on such a test. The maturation of a system as it moves from novel capability to specialized capability to generally available/routinized capability is marked by improved scores on turnaround tests.
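To make the test concrete, here is a minimal sketch of scoring a turnaround test as a ledger of the rework needed to reach the next case; this is my illustration, and the tasks and person-hour figures are invented, not drawn from the briefings.

    from dataclasses import dataclass

    @dataclass
    class Rework:
        # One unit of engineering work needed to ready the system for the next case.
        task: str
        person_hours: float

    def turnaround_score(items: list[Rework]) -> float:
        # Lower is better: a routinized capability needs little rework between
        # cases; a brittle prototype needs a great deal.
        return sum(item.person_hours for item in items)

    # Hypothetical rework to move a demo vehicle to an environment that is not
    # a simple parametric variation of the previous demonstration.
    next_case = [
        Rework("retrain obstacle classifier for snow-covered roads", 320.0),
        Rework("re-tune sensor-fusion thresholds for degraded GPS", 80.0),
        Rework("rebuild environment maps for the new region", 200.0),
    ]
    print(f"Turnaround score: {turnaround_score(next_case):.0f} person-hours")

Tracking how this score falls (or fails to fall) across successive cases is one way to watch a capability mature from novel to routinized.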
Doyle’s Catch highlights how demonstrations can be brittle in ways that are unappreciated. When a demonstration encounters the full complexity and scale of real-world deployments, these forms of brittleness undermine the viability of a system and require people in various roles to adapt to fill the gaps. As a result, there is a need to assess the brittleness of envisioned systems as they move from demonstration to deployment and across their life cycles.
Across multiple briefings, organizations developing autonomous vehicles showed little awareness or consideration of the brittleness problem. Instead, proponents assumed that conventional reliability engineering approaches would suffice, despite the proliferation of sensors, algorithms, computations, and interdependencies noted for the vehicle described earlier.
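As a rough illustration of what a brittleness assessment could involve (a sketch under my own assumptions, not a method proposed here), one can sweep a stress parameter beyond the demonstrated envelope and look for cliff-edge collapses rather than graceful degradation; the toy perception model below is hypothetical.

    from typing import Callable

    def brittleness_probe(performance: Callable[[float], float],
                          stress_levels: list[float]) -> tuple[float, float]:
        # Sweep a stressor beyond the demonstrated envelope and report the
        # largest single-step performance drop: a sharp cliff suggests
        # brittleness, while small gradual drops suggest graceful degradation.
        scores = [performance(level) for level in stress_levels]
        drops = [(stress_levels[i], scores[i - 1] - scores[i])
                 for i in range(1, len(scores))]
        return max(drops, key=lambda d: d[1])  # (stress level, size of drop)

    def toy_perception_accuracy(fog_density: float) -> float:
        # Hypothetical stand-in: solid inside the demonstrated envelope
        # (fog <= 0.4), then collapsing rather than degrading gracefully.
        if fog_density <= 0.4:
            return 0.95
        return max(0.0, 0.95 - 3.0 * (fog_density - 0.4))

    levels = [0.0, 0.2, 0.4, 0.5, 0.6, 0.8, 1.0]
    cliff_at, drop = brittleness_probe(toy_perception_accuracy, levels)
    print(f"Largest performance drop ({drop:.2f}) appears at stress level {cliff_at}")

A sharp cliff just beyond the demonstrated envelope is exactly the gap Doyle’s Catch warns about: the demonstration succeeds, yet the system collapses under conditions the demonstration never sampled.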
Closing the gap between the demonstration and the real thing requires the development of new methods to assess brittleness and to incorporate forms of resilience into design. Doyle’s Catch points out some of the new risks that emerge as people search for advantage by deploying increasingly autonomous technologies. Doyle’s Catch also points to new opportunities for innovations to tame and manage the growth in complexity that accompanies deploying autonomous technologies into today’s interconnected world.
References
Abbott, K., McKenney, D., & Railsback, P. (2013). Operational use of flight path management systems (Final report of the Flight Deck Automation Working Group, Performance-Based Operations Aviation Rulemaking Committee / Commercial Aviation Safety Team). Federal Aviation Administration. Retrieved from http://www.faa.gov/about/office_org/headquarters_offices/avs/offices/afs/afs400/parc/parc_reco/media/2013/130908_PARC_FltDAWG_Final_Report_Recommendations.pdf
Alderson, D. L., & Doyle, J. C. (2010). Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 40, 839–852.
Allspaw, J. (2012). Fault injection in production: Making the
case for resilience testing. ACM Queue, 10(8), 30–35. doi:
10.1145/2346916.2353017
Cook, R. I. (2016). Poised to deploy: The C-suite and adaptive capacity. Velocity DevOps & Web Performance Conference, Santa Clara, CA: O’Reilly Media, June 22, 2016. Presentation video available at http://conferences.oreilly.com/velocity/devops-web-performance-ca
Doyle, J. C., & Csete, M. E. (2011). Architecture, constraints, and
behavior. Proceedings of the National Academy of Sciences
USA, 108(Suppl. 3), 15624–15630.
Murphy, R. R., & Shields, J. (2012). The role of autonomy in DoD systems (Task force report). Washington, DC: Office of the Secretary of Defense. Retrieved from http://fas.org/irp/agency/dod/dsb/autonomy.pdf
National Research Council. (2014). Autonomy research for civil aviation: Toward a new era of flight. Washington, DC: National Academies Press. Retrieved from http://www.nap.edu/catalog.php?record_id=18815
Sarter, N., Woods, D. D., & Billings, C. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors/ergonomics (2nd ed., pp. 1926–1943). New York: Wiley.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation technology and human performance: Theory and applications (pp. 3–17). Hillsdale, NJ: Erlbaum.
Woods, D. D. (2015). Four concepts for resilience and their implications for systems safety in the face of complexity. Reliability Engineering and System Safety, 141, 5–9. doi: 10.1016/j.ress.2015.03.018
Woods, D. D., & Dekker, S. W. A. (2000). Anticipating the effects of technological change: A new era of dynamics for human factors. Theoretical Issues in Ergonomics Science, 1(3), 272–282.
David D. Woods is a professor in the Department of Integrated Systems Engineering at The Ohio State University and is past president of the Human Factors and Ergonomics Society and of the Resilience Engineering Association.
Address correspondence to David D. Woods, The Ohio State University, Institute for Ergonomics, 210 Baker Syst., 1971 Neil Ave., Columbus, OH 43210-1271, USA, woods.2@osu.edu.