Theory or Review Paper

The Risks of Autonomy: Doyle’s Catch

David D. Woods, The Ohio State University

Journal of Cognitive Engineering and Decision Making, 2016, Volume 10, Number 2, pp. 131–133. DOI: 10.1177/1555343416653562. Copyright © 2016, Human Factors and Ergonomics Society.

Address correspondence to David D. Woods, The Ohio State University, Institute for Ergonomics, 210 Baker Systems, 1971 Neil Ave., Columbus, OH 43210-1271, USA, woods.2@osu.edu.

Keywords: autonomy, complexity, human–automation interaction, resilience engineering
As a participant in multiple recent national advisory activities, I have listened to many technology advocates present briefings that envisioned the future after deployments of increasingly autonomous technologies (e.g., Abbott, McKenney, & Railsback, 2013; Murphy & Shields, 2012; National Research Council, 2014). The briefings uniformly focused on the benefits that will flow from additional investments in autonomous technologies. The message is consistent: In the near future we will be able to delegate authority to networks of vehicles that will then carry out a widening range of activities autonomously. Even though these activities serve the purposes of various human stakeholders, the presenters take it for granted that humans’ involvement will decrease and, eventually, become unnecessary. These same beliefs about the impact of new technology have accompanied past advances even though the actual impacts have been quite different from those envisioned (Sarter, Woods, & Billings, 1997).
Envisioning the future is a precarious enterprise that is subject to biases. As past work has shown, claims about the effects of future technology change are underspecified, ungrounded, and overconfident, whereas new risks are missed, ignored, or downplayed (Woods & Dekker, 2000). The new capabilities trigger a much wider and more complex set of reverberations, including new forms of complexity and new risks. Failure to anticipate and design for the new challenges that are certain to arise following periods of technology change leads to automation surprises when advocates are surprised by negative unintended consequences that offset apparent benefits (Woods, 1996).
Today’s common beliefs about
increasingly autonomous capabilities replay
what has been observed in previous cycles
of technology change. Risks associated
with autonomy are ignored and downplayed,
setting the stage for future automation
surprises.
A new risk of autonomy has arisen as a result of the power of today’s technologies and is captured in Doyle’s Catch (Alderson & Doyle, 2010):

Computer-based simulation and rapid prototyping tools are now broadly available and powerful enough that it is relatively easy to demonstrate almost anything, provided that conditions are made sufficiently idealized. However, the real world is typically far from idealized, and thus a system must have enough robustness in order to close the gap between demonstration and the real thing. (J. Doyle/D. Alderson, personal communication, January 4, 2013)
The technology advocates I witnessed fell directly into Doyle’s Catch. They presumed that because capabilities could be demonstrated under some conditions, extending the prototypes to handle the full range of complexities that emerge and change over life cycles would be straightforward. Across all the briefings, when the listeners pointed to gaps, the response was the same: “With investment, engineering developments on their own momentum will address these concerns, but outside organizations can slow progress and add costs.” When the listeners identified issues that admittedly lack solutions today, the response was “these solutions will come with application of sufficient engineering and innovation energy, but this energy can be released only if organizational and regulatory barriers are removed.”
Doyle’s Catch shows that this optimism is insufficient. Emerging capabilities, because they are powerful, produce new technical challenges, which if not addressed will produce negative unintended consequences. Doyle’s Catch poses a new technical challenge: How can design and testing “close the gap between the demonstration and the real thing?” This challenge is not trivial and has not been addressed in the development of increasingly autonomous systems. Doyle’s Catch contains three main technical challenges: complexity, life cycle, and testing.
Complexity
Increasingly autonomous things such as road or airborne vehicles are not “things” at all but instead are complex networks of multiple algorithms, control loops, sensors, and human roles that interact over different time scales and changing conditions. Some parts of the network are onboard the vehicle or inside the vehicle’s “skin,” whereas others are offboard. For example, one briefing described a vehicle entered in one of the Defense Advanced Research Projects Agency’s Grand Challenges that, based on the presentation, used about 18 sensor packages, 20 algorithms (not counting basic sensor processing/actuator controllers), and an undetermined number of modes. Types of algorithms included temporal logic; sensor fusion; multiple path, traffic, and mission planners; conflict management; health monitoring; fault management; optimization; classifiers; models of the environment (maps); obstacle detection; road finding; vehicle finding; and sensor validation checks. Preparing the vehicle for the Grand Challenge involved 75 engineers from one of the top engineering universities in the world over 18 calendar months. Despite this effort, the vehicle did not perform all that well in the competition. Extending the performance envelope of this vehicle will, according to the presenters, be addressed by even more sensors, more algorithms, and more computation.

There appears to be no limit to the complexity of interacting and interdependent computational elements in this program. Closing the gap between the demonstration and the real thing requires the development of new methods to manage creeping complexity and the associated costs.
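One rough way to see why complexity creeps is to count potential interactions rather than components. The short sketch below is a back-of-the-envelope illustration, not from the briefing or this article: it treats each sensor package and algorithm in the vehicle just described as a node and counts unordered pairs as a crude proxy for the integration and test burden; the component counts are the approximate figures quoted above.

```python
from math import comb

def pairwise_interactions(n_components: int) -> int:
    """Crude lower bound on interactions: every unordered pair of components."""
    return comb(n_components, 2)

# Approximate counts from the Grand Challenge vehicle described above.
sensors, algorithms = 18, 20
baseline = pairwise_interactions(sensors + algorithms)       # 38 components -> 703 pairs
extended = pairwise_interactions(sensors + algorithms + 10)  # add 10 more elements -> 1,128 pairs

print(f"baseline components: {sensors + algorithms}, potential pairwise interactions: {baseline}")
print(f"after adding 10 elements: {extended} pairs (+{extended - baseline})")
```

Even this crude count ignores modes, timing, and human roles, yet it grows faster than linearly with each added element, which is the cost curve that “more sensors, more algorithms, and more computation” runs into.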
Life Cycle
Doyle’s Catch forces us to wrestle with how to design systems that will need to change continuously over their life cycles. The architecture needs to be “poised to change,” especially as the new systems provide valuable capabilities to stakeholders (Woods, 2015; Cook, 2016). The systems will need to be able to adapt, or be adapted, to handle new tasks in new contexts, participate in new relationships, and function under new pressures as stakeholders and problem holders adapt to take advantage of the new capabilities and to work around the new gaps that emerge. As software-intensive networks, increasingly autonomous systems, over their life cycles, will face:

• challenges to assumptions and boundary conditions,
• surprise events,
• changed conditions and contexts of use and reverberating effects,
• adaptive shortfalls that will require responsible people to step into the breach to innovate locally, and
• resource fluctuations that change organizational resilience and produce brittleness.
How are we to model/analyze the dynamic patterns arising as software-intensive, technology-based systems operate at scale with changing autonomous capabilities in changing contexts? Handling life cycle dynamics will require an architecture equipped with the capacity to adjust performance over a wide dynamic range (Doyle & Csete, 2011). This is, in part, the target of extensible critical digital services (Allspaw, 2012) and is closely related to resilience engineering (Woods, 2015). Closing the gap between the demonstration and the real thing requires the development of new ways to design systems to be manageable and extensible over life cycles. In particular, it will also require reinventing certification and V&V to make them continuous activities, starting early in design and continuing as the system is implemented, rather than one-time acceptance hurdles.
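One minimal way to read “continuous V&V” is as a gate that re-runs the system’s scenario suite and boundary-condition checks on every change rather than once at acceptance. The sketch below is illustrative only; the Scenario structure, the scenario names, and the thresholds are hypothetical placeholders, not a method from this article.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    run: Callable[[], float]   # returns a performance score in [0, 1]
    minimum: float             # acceptance threshold for this scenario

def continuous_vv(scenarios: List[Scenario]) -> bool:
    """Re-run the full scenario suite on every change; any regression blocks release."""
    ok = True
    for s in scenarios:
        score = s.run()
        if score < s.minimum:
            print(f"REGRESSION: {s.name} scored {score:.2f}, below the {s.minimum:.2f} threshold")
            ok = False
    return ok

# Hypothetical suite: scenarios accumulate as assumptions and contexts change over the life cycle.
suite = [
    Scenario("clear-weather lane keeping", lambda: 0.97, minimum=0.95),
    Scenario("degraded GPS in urban canyon", lambda: 0.72, minimum=0.90),  # boundary-condition case
]
print("release OK" if continuous_vv(suite) else "release blocked")
```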
Testing for Brittleness Rather Than Feasibility
Doyle’s Catch highlights how demonstrations mask what really works and what is vapor. To check for vapor, one can use the turnaround test: How much work does it take to get a system ready to handle the next mission/case/environment, when the next is not a simple parametric variation of the previous demonstration? Existing autonomous system prototypes would likely score poorly on such a test; consider the vehicle described earlier. The maturation of a system as it moves from novel capability to specialized capability to generally available/routinized capability is marked by improved scores on turnaround tests.
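The turnaround test could be operationalized, very roughly, as the effort logged to re-target a system to a next case that is not a parametric variation of the last one. The sketch below is a hypothetical bookkeeping scheme, not a method from this article; the field names and the effort unit (person-days) are assumptions.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class Turnaround:
    mission: str          # the next mission/case/environment attempted
    parametric: bool      # True if it was only a parametric variation of the prior demo
    person_days: float    # engineering effort needed to get the system ready

def turnaround_score(log: List[Turnaround]) -> float:
    """Mean effort for genuinely new cases; lower scores indicate a more routinized capability."""
    novel = [t.person_days for t in log if not t.parametric]
    return mean(novel) if novel else 0.0

# Hypothetical log entries for an autonomous-vehicle prototype.
log = [
    Turnaround("same course, new start point", parametric=True, person_days=2),
    Turnaround("unpaved desert course", parametric=False, person_days=140),
    Turnaround("urban course with traffic", parametric=False, person_days=400),
]
print(f"turnaround score (person-days per novel case): {turnaround_score(log):.0f}")
```

Tracking such a score over successive cases would make maturation, or the lack of it, visible rather than asserted.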
Doyle’s Catch highlights how demonstrations can be brittle in ways that are unappreciated, but when the demonstration encounters the full complexity and scale of real-world deployments, these forms of brittleness undermine the viability of a system and require people in various roles to adapt to fill the gaps. As a result, there is a need to assess the brittleness of envisioned systems as they move from demonstration to deployment and across their life cycles. This finding was stark across multiple briefings, as organizations developing autonomous vehicles showed little awareness or consideration of the brittleness problem. The standard assumption was that normal reliability engineering approaches will be sufficient (think of all of the redundant sensors and modules in the vehicle described earlier).
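One way to make brittleness assessable rather than assumed away is to probe the demonstrated envelope: systematically push conditions beyond the demonstration and record where performance collapses or a human has to step in. The sketch below is a toy harness under that framing; the function names, the perturbation labels, and the synthetic scoring model are all invented for illustration.

```python
from typing import Callable, Dict

def brittleness_probe(system: Callable[[float], float],
                      perturbations: Dict[str, float],
                      floor: float = 0.5) -> Dict[str, str]:
    """Run the system under increasing departures from the demo conditions.

    `system` maps a departure magnitude (0 = demo conditions) to a performance
    score in [0, 1]; scores below `floor` are treated as needing human takeover.
    """
    results = {}
    for name, magnitude in perturbations.items():
        score = system(magnitude)
        results[name] = "ok" if score >= floor else "brittle: human must fill the gap"
    return results

# Toy performance model: graceful near demo conditions, sharp cliff beyond them.
demo_system = lambda departure: max(0.0, 0.95 - 0.4 * departure - (0.5 if departure > 1.0 else 0.0))

probes = {"demo rerun": 0.0, "light rain": 0.4, "sensor dropout": 0.9, "unmapped detour": 1.5}
for condition, verdict in brittleness_probe(demo_system, probes).items():
    print(f"{condition}: {verdict}")
```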
Closing the gap between the demonstration and the real thing requires the development of new methods to assess brittleness and to incorporate forms of resilience into design. Doyle’s Catch points out some of the new risks that emerge as people search for advantage by deploying increasingly autonomous technologies. Doyle’s Catch also points to new opportunities for innovations to tame and manage the growth in complexity that accompanies deploying autonomous technologies into today’s interconnected world.
References
Abbott, K., McKenney, D., & Railsback, P. (2013). Operational use of flight path management systems (Final report of the Flight Deck Automation Working Group, Performance-Based Operations Aviation Rulemaking Committee/Commercial Aviation Safety Team/FAA). Retrieved from http://www.faa.gov/about/office_org/headquarters_offices/avs/offices/afs/afs400/parc/parc_reco/media/2013/130908_PARC_FltDAWG_Final_Report_Recommendations.pdf
Alderson, D. L., & Doyle, J. C. (2010). Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Transactions on Systems, Man, and Cybernetics—Part A, 40, 839–852.
Allspaw, J. (2012). Fault injection in production: Making the case for resilience testing. ACM Queue, 10(8), 30–35. doi: 10.1145/2346916.2353017
Cook, R. I. (2016). Poised to deploy: The C-suite and adaptive capacity. Velocity DevOps & Web Performance Conference 2016, Santa Clara, CA: O’Reilly Media, June 22, 2016. Presentation video available at http://conferences.oreilly.com/velocity/devops-web-performance-ca
Doyle, J. C., & Csete, M. E. (2011). Architecture, constraints, and behavior. Proceedings of the National Academy of Sciences USA, 108(Suppl. 3), 15624–15630.
Murphy, R. R., & Shields, J. (2012). The role of autonomy in DoD systems (Task force report, July 2012). Office of the Secretary of Defense. Retrieved from http://fas.org/irp/agency/dod/dsb/autonomy.pdf
National Research Council. (2014). Autonomy research for civil aviation: Toward a new era of flight. Washington, DC: National Academies Press. Retrieved from http://www.nap.edu/catalog.php?record_id=18815
Sarter, N., Woods, D. D., & Billings, C. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of human factors/ergonomics (2nd ed., pp. 1926–1943). New York: Wiley.
Woods, D. D. (1996). Decomposing automation: Apparent simplicity, real complexity. In R. Parasuraman & M. Mouloua (Eds.), Automation technology and human performance: Theory and applications (pp. 3–17). Hillsdale, NJ: Erlbaum.
Woods, D. D. (2015). Four concepts for resilience and their implications for systems safety in the face of complexity. Reliability Engineering and System Safety, 141, 5–9. doi: 10.1016/j.ress.2015.03.018
Woods, D. D., & Dekker, S. W. A. (2000). Anticipating the effects of technological change: A new era of dynamics for human factors. Theoretical Issues in Ergonomics Science, 1(3), 272–282.
David D. Woods is a professor in the Department of Integrated Systems Engineering at The Ohio State University and is past president of the Human Factors and Ergonomics Society and of the Resilience Engineering Association.