Keywords: autonomy, complexity, human–automation interaction, resilience engineering
As a participant in multiple recent national advisory activities, I have listened to many technology advocates present briefings that envisioned the future after deployments of increasingly autonomous technologies (e.g., Abbott, McKenney, & Railsback, 2013; Murphy & Shields, 2012; National Research Council, 2014). The briefings uniformly focused on the benefits that will flow from additional investments in autonomous technologies. The message is consistent: In the near future we will be able to delegate authority to networks of vehicles that will then carry out a widening range of activities autonomously. Even though these activities serve the purposes of various human stakeholders, the presenters take it for granted that humans’ involvement will decrease and, eventually, become unnecessary. These same beliefs about the impact of new technology have accompanied past advances, even though the actual impacts have been quite different from those envisioned (Sarter, Woods, & Billings, 1997).
Envisioning the future is a precarious enterprise that is subject to biases. As past work has shown, claims about the effects of future technology change are underspecified, ungrounded, and overconfident, whereas new risks are missed, ignored, or downplayed (Woods & Dekker, 2000). The new capabilities trigger a much wider and more complex set of reverberations, including new forms of complexity and new risks. Failure to anticipate and design for the new challenges that are certain to arise following periods of technology change leads to automation surprises, when advocates are surprised by negative unintended consequences that offset apparent benefits (Woods, 1996).
Today’s common beliefs about increasingly autonomous capabilities replay what has been observed in previous cycles of technology change. Risks associated with autonomy are ignored and downplayed, setting the stage for future automation surprises.
Theory or Review Paper

The Risks of Autonomy: Doyle’s Catch

David D. Woods, The Ohio State University

Journal of Cognitive Engineering and Decision Making, 2016, Volume 10, Number 2, pp. 131–133. DOI: 10.1177/1555343416653562. Copyright © 2016, Human Factors and Ergonomics Society.

Address correspondence to David D. Woods, The Ohio State University, Institute for Ergonomics, 210 Baker Syst., 1971 Neil Ave, Columbus, OH 43210-1271, USA.
A new risk of autonomy has arisen as a
result of the power of today’s technologies and
is captured in Doyle’s Catch (Alderson &
Doyle, 2010):
Computer-based simulation and rapid prototyping tools are now broadly available and powerful enough that it is relatively easy to demonstrate almost anything, provided that conditions are made sufficiently idealized. However, the real world is typically far from idealized, and thus a system must have enough robustness in order to close the gap between demonstration and the real thing. (J. Doyle/D. Alderson, personal communication, January 4, 2013)
The technology advocates I witnessed fell
directly into Doyle’s Catch. They presumed that
because capabilities could be demonstrated
under some conditions, extending the prototypes
to handle the full range of complexities that
emerge and change over life cycles would be
straightforward. Across all the briefings, when
the listeners pointed to gaps, the response was
the same: “With investment, engineering developments on their own momentum will address
these concerns, but outside organizations can
slow progress and add costs.” When the listeners
identified issues that admittedly lack solutions
today, the response was “these solutions will
come with application of sufficient engineering
and innovation energy, but this energy can be
released only if organizational and regulatory
barriers are removed.”
Doyle’s Catch shows that this optimism is
insufficient. Emerging capabilities, because they
are powerful, produce new technical challenges,
which if not addressed will produce negative
unintended consequences. Doyle’s Catch poses
a new technical challenge: How can design and
testing “close the gap between the demonstration and the real thing?” This challenge is not trivial and has not been addressed in the development of increasingly autonomous systems.
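Doyle’s Catch can be made concrete with a toy simulation. Everything below is invented for illustration (the detector, the signal level, and the noise and dropout figures are not drawn from any briefing): a sensor-based obstacle detector is flawless under idealized, noise-free conditions, yet misses a large fraction of obstacles once realistic sensor noise and dropouts are introduced.

```python
import random

def detect_obstacle(readings, threshold=0.5):
    """Flag an obstacle when the mean sensor reading exceeds a threshold."""
    return sum(readings) / len(readings) > threshold

def sensor_readings(true_signal, noise=0.0, dropout=0.0, n=8):
    """Simulate n sensor readings: Gaussian noise plus occasional dropouts."""
    out = []
    for _ in range(n):
        if random.random() < dropout:
            out.append(0.0)  # sensor returned nothing useful
        else:
            out.append(true_signal + random.gauss(0, noise))
    return out

def miss_rate(noise, dropout, trials=2000):
    """Fraction of real obstacles (signal = 0.8) that go undetected."""
    random.seed(42)  # deterministic for illustration
    misses = sum(
        not detect_obstacle(sensor_readings(0.8, noise, dropout))
        for _ in range(trials)
    )
    return misses / trials

print(f"idealized demo (no noise, no dropout): miss rate = {miss_rate(0.0, 0.0):.3f}")
print(f"field-like conditions (noise+dropout): miss rate = {miss_rate(0.4, 0.3):.3f}")
```

The demonstration condition produces a perfect score; the same system under mildly degraded conditions misses roughly a third of obstacles. The gap between the two numbers is the gap Doyle’s Catch names.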
Doyle’s Catch contains three main technical challenges: complexity, life cycle, and testing.

Complexity

Increasingly autonomous things such as road or airborne vehicles are not “things” at all but instead are complex networks of multiple algorithms, control loops, sensors, and human roles that interact over different time scales and changing conditions. Some
parts of the network are onboard the vehicle
or inside the vehicle’s “skin,” whereas others
are offboard. For example, one briefing
described a vehicle entered in one of Defense
Advanced Research Projects Agency’s Grand
Challenges that, based on the presentation,
used about 18 sensor packages, 20 algorithms
(not counting basic sensor processing/actuator
controllers), and an undetermined number of
modes. Types of algorithms included temporal logic; sensor fusion; multiple path, traffic, and mission planners; conflict management; health monitoring; fault management; optimization; classifiers; models of the environment (maps); obstacle detection; road finding; vehicle finding; and sensor validation checks. Preparing the vehicle for the Grand Challenge involved 75 engineers from one of the top engineering universities in the world over 18
calendar months. Despite this effort, the
vehicle did not perform all that well in the
competition. Extending the performance
envelope of this vehicle will, according to the
presenters, be addressed by even more sensors,
more algorithms, and more computation.
There appears to be no limit to the
complexity of interacting and interdependent
computational elements in this program. Closing
the gap between the demonstration and the real
thing requires the development of new methods
to manage creeping complexity and the associated costs.
Life Cycle
Doyle’s Catch forces us to wrestle with how
to design systems that will need to change
continuously over their life cycles. The
architecture needs to be “poised to change,” especially as the new systems provide valuable capabilities to stakeholders (Woods, 2015; Cook, 2016). The systems will need to be able to adapt, or be adapted, to handle new tasks in new contexts, participate in new relationships, and function under new pressures as stakeholders and problem holders adapt to take advantage of the new capabilities and to work around the new gaps that emerge. As software-intensive networks, increasingly autonomous systems, over their life cycles, will face:

- challenges to assumptions and boundary conditions,
- surprise events,
- changed conditions and contexts of use and reverberating effects,
- adaptive shortfalls that will require responsible people to step into the breach to innovate locally, and
- resource fluctuations that change organizational resilience and produce brittleness.
How are we to model/analyze the dynamic patterns arising as software-intensive, technology-
based systems operate at scale with changing
autonomous capabilities in changing contexts?
Handling life cycle dynamics will require an
architecture equipped with the capacity to adjust
performance over a wide dynamic range (Doyle
& Csete, 2011). This is, in part, the target of
extensible critical digital services (Allspaw,
2012) and closely related to resilience engineering (Woods, 2015). Closing the gap between the
demonstration and the real thing requires the
development of new ways to design systems to
be manageable and extensible over life cycles.
In particular, it will also require reinventing certification and V&V to make them continuous activities, starting early in design and continuing as the system is implemented, rather than one-time acceptance testing.

Testing for Brittleness Rather Than Feasibility
Doyle’s Catch highlights how demonstrations mask what really works and what is vapor. To check for vapor, one can use the turnaround test: How much work does it take to get a system ready to handle the next mission/case/environment, when the next is not a simple parametric variation of the previous demonstration? Existing autonomous system prototypes, such as the vehicle described earlier, would likely score poorly on such a test. The maturation of a system as it moves from novel capability to specialized capability to generally available/routinized capability is marked by improved scores on turnaround tests.
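The turnaround test lends itself to a simple score. The sketch below is purely illustrative: the effort figures, the 21-working-days-per-month conversion, and the scoring formula are all invented assumptions, not taken from the article or from the Grand Challenge team.

```python
from dataclasses import dataclass

@dataclass
class Adaptation:
    """Effort spent readying the system for one new mission/case/environment."""
    mission: str
    engineer_days: float
    parametric_only: bool  # True if only a simple parameter tweak was needed

def turnaround_score(build_effort_days, adaptations):
    """Mean rework per non-parametric new mission, as a fraction of the
    original build effort. Lower scores indicate a more mature system."""
    hard = [a for a in adaptations if not a.parametric_only]
    if not hard:
        return None  # never exercised beyond parametric variation
    return sum(a.engineer_days for a in hard) / len(hard) / build_effort_days

# Invented figures loosely in the spirit of the Grand Challenge vehicle:
# roughly 75 engineers for 18 months to build (21 working days per month).
build_effort = 75 * 18 * 21
history = [
    Adaptation("same course, new waypoints", 40, parametric_only=True),
    Adaptation("urban driving", 900, parametric_only=False),
    Adaptation("night operations", 350, parametric_only=False),
]
score = turnaround_score(build_effort, history)
print(f"turnaround score: {score:.3f} of build effort per new context")
```

A routinized capability would drive this fraction toward zero; a brittle prototype stays expensive to re-aim at each genuinely new context.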
Doyle’s Catch also highlights how demonstrations can be brittle in ways that are unappreciated. When the demonstration encounters the full complexity and scale of real-world deployments, these forms of brittleness undermine the viability of a system and require people in various roles to adapt to fill the gaps. As a result, there is a need to assess the brittleness of envisioned systems as they move from demonstration to deployment and across their life cycles.
Across multiple briefings, organizations developing autonomous vehicles showed little awareness or consideration of the brittleness problem. Instead, proponents assumed that conventional reliability engineering approaches would suffice, despite the proliferation of sensors, algorithms, computations, and interdependencies noted for the vehicle described earlier.
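One way to make brittleness assessment concrete is to sweep a stressor past the demonstrated envelope and look for a performance cliff rather than graceful degradation. The performance function below is a made-up toy, and "largest single drop across the sweep" is only one candidate brittleness indicator, not an established method.

```python
def performance(condition_intensity):
    """Toy system: performs well inside its demonstrated envelope (<= 1.0),
    then collapses abruptly rather than degrading gracefully."""
    if condition_intensity <= 1.0:
        return 0.95 - 0.05 * condition_intensity
    return max(0.0, 0.9 - 2.0 * (condition_intensity - 1.0))

def brittleness_cliff(perf_fn, lo=0.0, hi=2.0, steps=200):
    """Locate the largest single drop in performance across the sweep.
    A large drop over a small interval signals brittleness rather than
    graceful degradation."""
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    drops = [
        (perf_fn(xs[i]) - perf_fn(xs[i + 1]), xs[i + 1])
        for i in range(steps)
    ]
    worst_drop, at = max(drops)
    return worst_drop, at

drop, where = brittleness_cliff(performance)
print(f"largest performance drop {drop:.3f} per step, near intensity {where:.2f}")
```

Acceptance testing inside the envelope (intensity below 1.0) would report uniformly good numbers; only a deliberate sweep past the demonstrated conditions exposes the cliff.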
Closing the gap between the demonstration and the real thing requires the development of new methods to assess brittleness and to incorporate forms of resilience into design. Doyle’s Catch points out some of the new risks that emerge as people search for advantage by deploying increasingly autonomous technologies. Doyle’s Catch also points to new opportunities for innovations to tame and manage the growth in complexity that accompanies deploying autonomous technologies into today’s interconnected world.
References

Abbott, K., McKenney, D., & Railsback, P. (2013). Operational use of flight path management systems (Final report of the Flight Deck Automation Working Group, Performance-Based Operations Aviation Rulemaking Committee/Commercial Aviation Safety Team/FAA). Retrieved from
Alderson, D. L., & Doyle, J. C. (2010). Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Transactions on Systems, Man, and Cybernetics—Part A, 40, 839–852.
Allspaw, J. (2012). Fault injection in production: Making the case for resilience testing. ACM Queue, 10(8), 30–35.
Doyle, J. C., & Csete, M. E. (2011). Architecture, constraints, and
behavior. Proceedings of the National Academy of Sciences
USA, 108(Suppl. 3), 15624–15630.
Murphy, R. R., & Shields, J. (2012). The role of autonomy in DoD systems (Task Force report). Office of the Secretary of Defense. Retrieved from
National Research Council. (2014). Autonomy research for
civil aviation: Toward a new era of flight. Washington, DC:
National Academies Press.
Sarter, N., Woods, D. D., & Billings, C. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors/ergonomics (2nd ed., pp. 1926–1943). New York: Wiley.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation technology and human performance: Theory and applications (pp. 3–17). Hillsdale, NJ: Erlbaum.
Woods, D. D. (2015). Four concepts for resilience and their implications for systems safety in the face of complexity. Reliability Engineering and System Safety, 141, 5–9. doi: 10.1016/j.
Woods, D. D., & Dekker, S. W. A. (2000). Anticipating the effects of technological change: A new era of dynamics for human factors. Theoretical Issues in Ergonomic Science, 1(3), 272–282.
David D. Woods is a professor in the Department of
Integrated Systems Engineering at The Ohio State
University and is past president of the Human Factors and Ergonomics Society and of the Resilience
Engineering Association.
Cook, R. I. (2016). Poised to deploy: The C-suite and adaptive capacity. Velocity DevOps & Web Performance Conference 2016, Santa Clara, CA: O’Reilly Media, June 22, 2016. Presentation video available at http://