Invited Talk
The Law of Stretched Systems in Action: Exploiting Robots
David D. Woods
Ohio State University
Columbus, OH USA
woods.2@osu.edu
Abstract
Robotic systems represent new capabilities that justifiably excite
technologists and problem holders in many areas. But what
affordances do the new capabilities represent and how will
problem holders and practitioners exploit these capabilities as they
struggle to meet performance demands and resource pressures?
Discussions of the impact of new robotic technology typically
mistake new capabilities for affordances in use. The dominant
note is that robots as autonomous agents will revolutionize human
activity. This is a fundamental oversimplification (see Feltovich
et al., 2004), as past research has shown that advances in autonomy
(an intrinsic capability) have turned out to demand advances in
support for coordinated activity (extrinsic affordances).
The Law of Stretched Systems captures the co-adaptive dynamic
that human leaders under pressure for higher and more efficient
levels of performance will exploit new capabilities to demand
more complex forms of work (Woods and Dekker, 2000; Woods
and Hollnagel, 2006). This law provides a guide to use past
findings on the reverberations of technology change to project
how effective leaders and operators will exploit the capabilities of
future robotic systems. When one applies the Law of Stretched
Systems to new robotic capabilities for demanding work settings,
one begins to see new stories about how problem holders work
with and through robotic systems to accomplish goals. These are
not stories about machine autonomy and the substitution myth.
Rather, the new capabilities trigger the exploration of new story
lines about future operations that concern:
- how to coordinate activities over wider ranges,
- how to expand our perception and action over larger spans through remote devices, and
- how to project our intent into distant situations to achieve our goals.
Research on these story lines provides new results on awareness of
remote environments through robotic systems and on
brittleness/resilience in coordinating people and robots, results that define
promising directions with high potential return for supporting
work through robotic systems (Woods et al., 2004). These results
also help us identify new candidates for challenge cases in HRI
and new classes of metrics (e.g., the fractal path scores developed
by Phillips and Voshell).
General Terms: Human Factors, Design, Measurement, Reliability
Bio
David D. Woods is Professor in the Institute for Ergonomics at The Ohio State
University. Dr. Woods has been President of the Human Factors
and Ergonomics Society. He is a Fellow of that society as well as
of the American Psychological Society and the American
Psychological Association. He has shared the Ely Award for best
paper in the journal Human Factors (1994), a Laurels Award from
Aviation Week and Space Technology (1995) for research on the
human factors of highly automated cockpits, the Jack Kraft
Innovators Award from the Human Factors and Ergonomics
Society (2002), and an IBM Faculty Award (2005). Dr. Woods has
served on National Academy of Sciences and other advisory
committees, including recently Engineering the Delivery of Health
Care (2005) and Dependable Software (2006). He has testified
before the U.S. Congress on safety at NASA and on election reform. He
was one of the founding board members of the National Patient
Safety Foundation, Associate Director of the Midwest Center for
Inquiry on Patient Safety of the Veterans Health Administration,
and an advisor to the Columbia Accident Investigation Board.
Multimedia overviews of his research developing the foundations
and practice of Cognitive Systems Engineering are available at
http://csel.eng.ohio-state.edu/woods/, and he is co-author of
the monographs Behind Human Error (1994), A Tale of Two
Stories: Contrasting Views of Patient Safety (1998), Joint
Cognitive Systems (2005; 2006), and Resilience Engineering
(2006).
References
Feltovich, P.J., Hoffman, R.R., Woods, D.D., & Roesler, A.
(2004). Keeping it too simple: How the reductive tendency affects
cognitive engineering. IEEE Intelligent Systems, 19(3), 90-94.
Voshell, M. G., Woods, D. D. & Phillips, F. (2005). Human-
Robot Interaction: From Fieldwork to Simulation to Design.
Proceedings of the Human Factors and Ergonomics Society 49th
Annual Meeting. 26-28 September, Orlando FL.
Woods, D.D. & Dekker, S.W.A. (2000). Anticipating the effects
of technological change: A new era of dynamics for Human
Factors. Theoretical Issues in Ergonomics Science, 1(3), 272-282.
Woods, D.D. & Hollnagel, E. (2006). Joint Cognitive Systems:
Patterns in Cognitive Systems Engineering. Taylor & Francis.
Woods, D. D., Tittle, J., Feil, M. & Roesler, A. (2004).
Envisioning Human-Robot Coordination for Future Operations.
IEEE Transactions on Systems, Man, and Cybernetics, Part C, 34(2), 210-218.
Copyright is held by the author/owner(s).
HRI’06, March 2–3, 2006, Salt Lake City, Utah, USA.
ACM 1-59593-294-1/06/0003.
... Following this, we can assert that soft interdependence typically consists of nuanced and context-dependent cognitive work. Expert human practitioners are highly skilled at this type of adaptive work, performing it almost instinctively (Branlat & Woods, 2010; Klein et al., 2004; Woods, 2006). Notably, there is often a lack of specificity and organizational awareness surrounding the depth and breadth of the adaptive work humans perform. ...
... This method of analysis also affords the ability to identify broader patterns within work and assess their implications for joint activity in HMTs (Johansson & Lundberg, 2017; Patriarca et al., 2018; Woods, 2006). ...
Thesis
Robotic technologies have been documented to often fall short of anticipated performance levels when deployed in complex field settings (Harbers et al., 2017). While robots are intended to work safely and efficiently, operators often describe them as slow, difficult, and error-prone (Murphy, 2017). As a result, the performance of robots in the field often relies on the ability of the human supervisor/controller to observe, predict, and direct robot actions (Johnson et al., 2020), introducing substantial overhead for humans to manage, interact with, and coordinate robotic and/or automated system(s) (McGuirl et al., 2009). As robots become increasingly integrated into complex environments, their ability to team effectively with humans will be paramount to reap the intended benefits of this technology without placing significant coordinative costs upon human operators. This thesis explores the coordination and performance of two human-robot team designs participating in joint activity in a constrained environment. Temporal dynamics of human-robot teamwork are assessed, identifying trends in human-robot task delegation and role rigidity. Findings indicate that human operators employ multiple coordination strategies over time, dynamically changing human-robot teamwork approaches based on scenario-driven factors and environmental pressures. The results suggest a need to explicitly design robotic agents with diverse teamwork competencies to support a human operator’s ability to employ adaptive and effective teamwork strategies.
... Optimality can be understood as the favourability of a system's configuration with respect to variations, constraints, or disturbances within the environment (and is heavily context dependent; in AFSS, a salient example of optimality is the desirability of removing infrastructure to reduce range costs). This idea is alluded to in the Law of Stretched Systems, which states that "every system is stretched to operate at its capacity …as soon as there is some improvement, some new technology, we exploit it to achieve a new intensity and a new tempo of activity" [16]. Pursuit of optimality in the form of new intensity or tempo of operations often comes at the expense of resilience within sociotechnical systems, and optimality-resilience trade-offs have been noted as a common tension inherent in sociotechnical systems that must be actively managed [9]. ...
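To make the quoted dynamic concrete, here is a minimal toy sketch in Python. It is not drawn from any of the cited works; the function, its parameters, and the 0.8 absorption rate are illustrative assumptions. The point is only that after a capability improvement, demand adapts cycle by cycle to absorb most of the new slack, so the system returns to operating at its (now larger) capacity rather than keeping the margin.

    # Toy model of the Law of Stretched Systems (illustrative assumptions only):
    # each adaptive cycle, leaders exploit a fixed fraction of the available
    # slack to achieve a new intensity and tempo of activity.
    def stretch(capacity, demand, improvement, absorb=0.8, cycles=10):
        """Return (capacity, demand) after an improvement and adaptive cycles."""
        capacity += improvement            # new technology raises raw capability
        for _ in range(cycles):
            slack = capacity - demand      # margin the improvement created
            demand += absorb * slack       # margin is consumed by new demands
        return capacity, demand

    cap, dem = stretch(capacity=100.0, demand=95.0, improvement=20.0)
    print(f"capacity={cap:.1f}, demand={dem:.1f}")
    # -> capacity=120.0, demand=120.0: stretched back to the capacity boundary.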
... Such measures are undoubtedly important to ensure the base-level functionality of the system but are somewhat incomplete in terms of measuring the holistic reliability of a complex, multi-agent, and multi-layered decision-making task. For these reasons, novel verification and validation (V&V) techniques are necessary to examine the complexities and pitfalls that emerge when automating key safety tasks [16]. Such methods could prove valuable for the identification and subsequent mitigation of outlying risks. ...
Conference Paper
Flight safety systems (FSS) act as a method to terminate off-nominal rocket launches which threaten public safety. Traditional FSS delegate decision authority to an experienced Mission Flight Control Officer (MFCO) tasked with flight termination decisions, who observes multiple points of telemetry data in real time to ensure nominal flight status. This study examines the engineering trade-offs, complexities, and pitfalls introduced by automating this key safety task through autonomous flight safety systems (AFSS). We approach this problem from a cognitive systems engineering perspective, connecting aspects of AFSS to existing literature in human-machine teaming and resilience engineering. Based on information gathered from a series of semi-structured interviews performed with various subject matter experts (mission controllers, regulators, engineers, amongst others) and existing literature, we outline four assumptions underlying AFSS operations: [1] The system is fully autonomous, [2] An exhaustive flight safety analysis has been performed, [3] The system will be able to respond appropriately to the world, and [4] MFCO expertise can be captured in (or translated to) software. Our findings highlight that, while the benefits of AFSS hold great promise for increasing the viability of commercial space operations, the automation of an irreversible, instantaneous, and complex decision-making task brings with it significant challenges and risks. We propose directions for further research to minimize the likelihood of errant, expensive, and dangerous flight terminations by an automated agent.
... There is also the issue of complexity at the higher, sociotechnical system level. We have long known that sociotechnical systems will stretch when novel technology is introduced, altering the types of work performed (Woods, 2006; Sheridan, 2008). For example, introducing AI into the domain of intelligence analysis raises questions about the nature of analysts' work changing to focus on inspecting AI outputs vs. conducting analysis, as well as other changes to the nature of workforce collaboration and management (Vogel et al., 2021). ...
Article
There is a growing expectation that artificial intelligence (AI) developers foresee and mitigate harms that might result from their creations; however, this is exceptionally difficult given the prevalence of emergent behaviors that occur when integrating AI into complex sociotechnical systems. We argue that Naturalistic Decision Making (NDM) principles, models, and tools are well-suited to tackling this challenge. Already applied in high-consequence domains, NDM tools such as the premortem, among others, have been shown to uncover a reasonable set of risks and underlying factors that would lead to ethical harms. Such NDM tools have already been used to develop AI that is more trustworthy and resilient, and can help avoid unintended consequences of AI built with noble intentions. We present predictive policing algorithms as a use case, highlighting various factors that led to ethical harms and how NDM tools could help foresee and mitigate such harms.
... Optimality can be understood as the favourability of a system's configuration with respect to variations, constraints, or disturbances within the environment (and is heavily context dependent: in AFSS, a salient example of optimality is the desirability of removing infrastructure to reduce range costs). These concepts are alluded to in the Law of Stretched Systems, which states that "every system is stretched to operate at its capacity …as soon as there is some improvement, some new technology, we exploit it to achieve a new intensity and a new tempo of activity" [25]. Pursuit of optimality in the form of new intensity or tempo of operations often comes at the expense of resilience within sociotechnical systems [15, 16]. ...
Article
Flight safety systems (FSS) act as a method to terminate off-nominal rocket launches which threaten public safety. Traditional FSS delegate decision authority to an experienced Mission Flight Control Officer (MFCO) tasked with flight termination decisions, who observes multiple points of telemetry data in real time to ensure nominal flight status. This study examines the engineering trade-offs, complexities, and pitfalls introduced by automating flight termination decision-making through autonomous flight safety systems (AFSS). We approach this problem from a cognitive systems engineering perspective, connecting aspects of AFSS to existing literature in human-machine teaming and resilience engineering. Based on information gathered from a series of semi-structured interviews with various subject matter experts (mission controllers, regulators, engineers, amongst others) and existing literature, we outline four assumptions underlying AFSS operations: (1) The system is fully autonomous, (2) An exhaustive flight safety analysis has been performed, (3) The system will be able to respond appropriately to the world, and (4) MFCO expertise can be captured in (or translated to) software. Our findings highlight that, while the benefits of AFSS hold great promise for increasing the viability of commercial space operations, the automation of an irreversible, instantaneous, and complex decision-making task introduces significant challenges and risks. We propose directions for further research to minimize the likelihood of errant, expensive, and dangerous automated flight terminations.
... The law of stretched systems is an especially relevant construct when developing a completely novel system with a high degree of interdependency and complexity. In essence, the law of stretched systems states that new capabilities (such as autonomy) do not simply reduce the workload of operators but instead demand more complex forms of work, such as coordinating activities, expanding perception over larger ranges, and projecting intent or goals [13]. The law of stretched systems is often associated with resilience engineering. ...
Article
There are inherent difficulties in designing an effective Human–Machine Interface (HMI) for a first-of-its-kind system. Many leading cognitive research methods rely upon experts with prior experience using the system and/or some type of existing mockup or working prototype of the HMI, and neither of these resources is available for such a new system. Further, these methods are time-consuming and incompatible with more rapid and iterative systems development models (e.g., Agile/Scrum). To address these challenges, we developed a Wargame-Augmented Knowledge Elicitation (WAKE) method to identify information requirements and underlying assumptions in operator decision making concurrently with operational concepts. The developed WAKE method incorporates naturalistic observations of operator decision making in a wargaming scenario with freeze-probe queries and structured analytic techniques to identify and prioritize information requirements for a novel HMI. An overview of the method, required apparatus, and associated analytical techniques is provided. Outcomes, lessons learned, and topics for future research resulting from two different applications of the WAKE method are also discussed.
... Connectivity alone will not serendipitously result in effective coordination. Our previous work looked specifically at the coordination challenges of introducing new robotic platforms and capabilities in future operations (Woods, Tittle, Feil, & Roesler, 2004; Voshell & Oomes, 2006; Woods, Voshell, Roesler, Phillips, Feil, & Tittle, 2006), and we continue to observe that developers and designers in such technology-driven fields tend to underestimate these new role creations and neglect to acknowledge that new forms of autonomy and connectivity change the underlying system. Technologists typically respond by increasing isolated autonomy in an attempt to create more coordinative agents; however, we feel that this is fundamentally flawed. ...
Article
Designing effective coordination into domains of distributed decision making and decentralized control is a daunting joint cognitive systems challenge. In order to support such coordination, it is necessary to model these systems from the perspective of coordination requirements. We propose "coordination loops" as a model that enables us to expand upon measures and constructs for specifying the requirements of distributed work, in order to cultivate effective decision making across multiple domains.
Article
Effective, appropriate improvisation has the potential to enhance system resilience, yet the phenomenon is currently not well understood. This research tests the notion that improvisation is a systems phenomenon and examines the appropriateness of Rasmussen’s (1997) Risk Management Framework and Accimap methodology for examining the factors influencing improvisation in safety-critical situations. Impromaps (improvisation Accimaps) were used to determine whether the factors identified as influencing improvisation in two case studies met the predictions made by Rasmussen’s Risk Management Framework. The findings indicate improvisation is a systems phenomenon and support the use of the Framework and Impromaps as an analysis methodology for the examination of improvisation incidents. The methodology allowed the identification of factors across all levels of both systems, and was able to describe the relationships between factors both within and across the system levels. It is concluded that Impromaps are applicable to improvisations occurring in different domains and resulting in positive as well as negative outcomes.
Book
Our fascination with new technologies is based on the assumption that more powerful automation will overcome human limitations and make our systems 'faster, better, cheaper,' resulting in simple, easy tasks for people. But how do new technology and more powerful automation change our work? Research in Cognitive Systems Engineering (CSE) looks at the intersection of people, technology, and work. What it has found is not stories of simplification through more automation, but stories of complexity and adaptation. When work changed through new technology, practitioners had to cope with new complexities and tighter constraints. They adapted their strategies and the artifacts to work around difficulties and accomplish their goals as responsible agents. The surprise was that new powers had transformed work, creating new roles, new decisions, and new vulnerabilities. Ironically, more autonomous machines have created the requirement for more sophisticated forms of coordination across people, and across people and machines, to adapt to new demands and pressures. This book synthesizes these emergent patterns through stories about coordination and mis-coordination, resilience and brittleness, affordance and clumsiness in a variety of settings, from a hospital intensive care unit, to a nuclear power control room, to a space shuttle control center. The stories reveal how new demands make work difficult, how people at work adapt but get trapped by complexity, and how people at a distance from work oversimplify their perceptions of the complexities, squeezing practitioners. The authors explore how CSE observes at the intersection of people, technology, and work, how CSE abstracts patterns behind the surface details and wide variations, and how CSE discovers promising new directions to help people cope with complexities. The stories of CSE show that one key to well-adapted work is the ability to be prepared to be surprised. Are you ready?
Article
Human factors studies the intersection between people, technology, and work, with the major aim of finding areas where design and working conditions produce human error. It relies on the knowledge base and research results of multiple fields of inquiry (ranging from computer science to anthropology) to do so. Technological change at this intersection (1) redefines the relationship between various players (both humans and machines), (2) transforms practice and shifts sources of error and excellence, and (3) often drives up operational requirements and pressures on operators. Human factors needs to predict these reverberations of technological change before a mature system has been built in order to steer design in the direction of cooperative human-machine architectures. The quickening tempo of technological change and the expansion of technological possibilities have largely converted the traditional shortcuts for access to a design process (task analysis, guidelines, verification and validation studies, etc.) into oversimplification fallacies that retard understanding, innovation, and, ultimately, human factors' credibility. There is an enormous need for the development of techniques that gain empirical access to the future, that is, techniques that generate human performance data about systems which have yet to be built.
Article
Certain features of tasks make them especially difficult for humans. These constitute leverage points for applying intelligent technologies, but there's a flip side: designing complex cognitive systems (CCS) is itself a tough task. Cognitive engineers face the same challenges in designing systems that users confront in working the tasks that the systems are intended to aid. We discuss these issues. We assume that cognitive engineers will invoke one or more knowledge shields when they are confronted with evidence that their understanding and planning rest on a reductive understanding. The knowledge shield phenomenon suggests that it will take effort to change the reductive mindset that people might bring to the design of a CCS.
Article
Developers of autonomous capabilities underestimate the need for coordination with human team members when their automata are deployed into complex operational settings. Automata are brittle as literal-minded agents, and there is a basic asymmetry in coordinative competencies between people and automata. The new capabilities of robotic systems raise new questions about how to support coordination. This paper presents a series of issues that demand innovation to achieve human-robot coordination (HRC). These include supporting people in their roles as problem holder and as robotic handler, overcoming ambiguities in remote perception, avoiding coordination surprises by better tools to see into future robotic activities and contingencies, and responsibility in human-robot teams.