Keywords: autonomy, complexity, human–automation interaction, resilience engineering
As a participant in multiple recent national advisory activities, I have listened to many technology advocates present briefings that envisioned the future after deployments of increasingly autonomous technologies (e.g., Abbott, McKenney, & Railsback, 2013; Murphy & Shields, 2012; National Research Council, 2014). The briefings uniformly focused on the benefits that will flow from additional investments in autonomous technologies. The message is consistent: In the near future we will be able to delegate authority to networks of vehicles that will then carry out a widening range of activities autonomously. Even though these activities serve the purposes of various human stakeholders, the presenters take it for granted that humans' involvement will decrease and, eventually, become unnecessary. These same beliefs about the impact of new technology have accompanied past advances even though the actual impacts have been quite different from those envisioned (Sarter, Woods, & Billings, 1997).
Envisioning the future is a precarious enterprise that is subject to biases. As past work has shown, claims about the effects of future technology change are underspecified, ungrounded, and overconfident, whereas new risks are missed, ignored, or downplayed (Woods & Dekker, 2000). The new capabilities trigger a much wider and more complex set of reverberations, including new forms of complexity and new risks. Failure to anticipate and design for the new challenges that are certain to arise following periods of technology change leads to automation surprises, when advocates are surprised by negative unintended consequences that offset apparent benefits (Woods, 1996).
Today's common beliefs about increasingly autonomous capabilities replay what has been observed in previous cycles of technology change. Risks associated with autonomy are ignored and downplayed, setting the stage for future automation surprises.
The Risks of Autonomy: Doyle's Catch
David D. Woods, The Ohio State University
Theory or Review Paper
Journal of Cognitive Engineering and Decision Making, 2016, Volume 10, Number 2, pp. 131–133
DOI: 10.1177/1555343416653562
Copyright © 2016, Human Factors and Ergonomics Society.
Address correspondence to David D. Woods, The Ohio State University, Institute for Ergonomics, 210 Baker Syst., 1971 Neil Ave., Columbus, OH 43210-1271, USA.
A new risk of autonomy has arisen as a result of the power of today's technologies and is captured in Doyle's Catch (Alderson & Doyle, 2010):

Computer-based simulation and rapid prototyping tools are now broadly available and powerful enough that it is relatively easy to demonstrate almost anything, provided that conditions are made sufficiently idealized. However, the real world is typically far from idealized, and thus a system must have enough robustness in order to close the gap between demonstration and the real thing. (J. Doyle/D. Alderson, personal communication, January 4, 2013)
The technology advocates I witnessed fell
directly into Doyle’s Catch. They presumed that
because capabilities could be demonstrated
under some conditions, extending the prototypes
to handle the full range of complexities that
emerge and change over life cycles would be
straightforward. Across all the briefings, when
the listeners pointed to gaps, the response was
the same: “With investment, engineering developments on their own momentum will address
these concerns, but outside organizations can
slow progress and add costs.” When the listeners
identified issues that admittedly lack solutions
today, the response was “these solutions will
come with application of sufficient engineering
and innovation energy, but this energy can be
released only if organizational and regulatory
barriers are removed.”
Doyle’s Catch shows that this optimism is
insufficient. Emerging capabilities, because they
are powerful, produce new technical challenges,
which if not addressed will produce negative
unintended consequences. Doyle’s Catch poses
a new technical challenge: How can design and
testing “close the gap between the demonstra-
tion and the real thing?” This challenge is not
trivial and has not been addressed in the development of increasingly autonomous systems.
Doyle's Catch contains three main technical challenges: complexity, life cycle, and testing.

Complexity

Increasingly autonomous things such as road or airborne vehicles are not "things" at all but instead are complex networks of multiple algorithms, control loops, sensors, and human roles that interact over different time scales and changing conditions. Some
parts of the network are onboard the vehicle
or inside the vehicle’s “skin,” whereas others
are offboard. For example, one briefing
described a vehicle entered in one of Defense
Advanced Research Projects Agency’s Grand
Challenges that, based on the presentation,
used about 18 sensor packages, 20 algorithms
(not counting basic sensor processing/actuator
controllers), and an undetermined number of
modes. Types of algorithm included temporal logic; sensor fusion; multiple path, traffic, and mission planners; conflict management; health monitoring; fault management; optimization; classifiers; models of the environment (maps); obstacle detection; road finding; vehicle finding; and sensor validation checks.
Preparing the vehicle for the Grand Challenge
involved 75 engineers from one of the top engineering universities in the world over 18 calendar months. Despite this effort, the
vehicle did not perform all that well in the
competition. Extending the performance
envelope of this vehicle will, according to the
presenters, be addressed by even more sensors,
more algorithms, and more computation.
There appears to be no limit to the
complexity of interacting and interdependent
computational elements in this program. Closing
the gap between the demonstration and the real
thing requires the development of new methods
to manage creeping complexity and the associated costs.
Life Cycle
Doyle’s Catch forces us to wrestle with how
to design systems that will need to change
continuously over their life cycles. The
architecture needs to be "poised to change," especially as the new systems provide valuable capabilities to stakeholders (Woods, 2015; Cook, 2016). The systems will need to be able to adapt, or be adapted, to handle new tasks in new contexts, participate in new relationships, and function under new pressures as stakeholders and problem holders adapt to take advantage of the new capabilities and to work around the new gaps that emerge. As software-intensive networks, increasingly autonomous systems will, over their life cycles, face:

- challenges to assumptions and boundary conditions,
- surprise events,
- changed conditions and contexts of use and reverberating effects,
- adaptive shortfalls that will require responsible people to step into the breach to innovate locally, and
- resource fluctuations that change organizational resilience and produce brittleness.
How are we to model/analyze the dynamic patterns arising as software-intensive, technology-based systems operate at scale with changing autonomous capabilities in changing contexts?
Handling life cycle dynamics will require an
architecture equipped with the capacity to adjust
performance over a wide dynamic range (Doyle
& Csete, 2011). This is, in part, the target of
extensible critical digital services (Allspaw, 2012) and is closely related to resilience engineering (Woods, 2015). Closing the gap between the
demonstration and the real thing requires the
development of new ways to design systems to
be manageable and extensible over life cycles.
In particular, it will also require reinventing certification and V&V to make them continuous activities, starting early in design and continuing as the system is implemented, rather than one-time acceptance testing.

Testing for Brittleness Rather Than Feasibility
Doyle's Catch highlights how demonstrations mask what really works and what is vapor. To check for vapor, one can use the turnaround test: How much work does it take to get a system ready to handle the next mission/case/environment, when the next is not a simple parametric variation of the previous demonstration? Existing autonomous system prototypes, such as the vehicle described earlier, would likely score poorly on such a test. The maturation of a system as it moves from novel capability to specialized capability to generally available/routinized capability is marked by improved scores on turnaround tests.
Doyle's Catch also highlights how demonstrations can be brittle in ways that are unappreciated. When a demonstration encounters the full complexity and scale of real-world deployments, these forms of brittleness undermine the viability of the system and require people in various roles to adapt to fill the gaps. As a result, there is a need to assess the brittleness of envisioned systems as they move from demonstration to deployment and across their life cycles. This finding was stark across multiple briefings: organizations developing autonomous vehicles showed little awareness or consideration of the brittleness problem. Instead, proponents assumed that conventional reliability engineering approaches would suffice, despite the proliferation of sensors, algorithms, computations, and interdependencies noted for the vehicle described earlier.
Closing the gap between the demonstration and the real thing requires the development of new methods to assess brittleness and to incorporate forms of resilience into design. Doyle's Catch points out some of the new risks that emerge as people search for advantage by deploying increasingly autonomous technologies. Doyle's Catch also points to new opportunities for innovations to tame and manage the growth in complexity that accompanies deploying autonomous technologies into today's interconnected world.
References

Abbott, K., McKenney, D., & Railsback, P. (2013). Operational use of flight path management systems (Final report of the Flight Deck Automation Working Group). Performance-Based Operations Aviation Rulemaking Committee/Commercial Aviation Safety Team/Federal Aviation Administration.
Alderson, D. L., & Doyle, J. C. (2010). Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Transactions on Systems, Man, and Cybernetics—Part A, 40, 839–852.
Allspaw, J. (2012). Fault injection in production: Making the case for resilience testing. ACM Queue, 10(8), 30–35.
Doyle, J. C., & Csete, M. E. (2011). Architecture, constraints, and
behavior. Proceedings of the National Academy of Sciences
USA, 108(Suppl. 3), 15624–15630.
Murphy, R. R., & Shields, J. (2012). The role of autonomy in DoD systems (Task force report). Washington, DC: Office of the Secretary of Defense.
National Research Council. (2014). Autonomy research for
civil aviation: Toward a new era of flight. Washington, DC:
National Academies Press.
Sarter, N., Woods, D. D., & Billings, C. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors/ergonomics (2nd ed., pp. 1926–1943). New York: Wiley.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation technology and human performance: Theory and applications (pp. 3–17). Hillsdale, NJ: Erlbaum.
Woods, D. D. (2015). Four concepts for resilience and their implications for systems safety in the face of complexity. Reliability Engineering and System Safety, 141, 5–9. doi: 10.1016/j.
Woods, D. D., & Dekker, S. W. A. (2000). Anticipating the effects of technological change: A new era of dynamics for human factors. Theoretical Issues in Ergonomics Science, 1(3), 272–282.
David D. Woods is a professor in the Department of Integrated Systems Engineering at The Ohio State University and is past president of the Human Factors and Ergonomics Society and of the Resilience Engineering Association.
Cook, R. I. (2016). Poised to deploy: The C-suite and adaptive capacity. Velocity DevOps & Web Performance Conference 2016, Santa Clara, CA: O'Reilly Media, June 22, 2016. Presentation video available at http://