Chapter
PDF Available

Abstract

Generalizing the concepts of joint activity developed by Clark (1996), we describe key aspects of team coordination. Joint activity depends on interpredictability of the participants' attitudes and actions. Such interpredictability is based on common ground: pertinent knowledge, beliefs, and assumptions that are shared among the involved parties. Joint activity assumes a Basic Compact, an agreement (often tacit) to facilitate coordination and prevent its breakdown. One aspect of the Basic Compact is the commitment to align multiple goals to some degree. A second aspect is that all parties are expected to bear their portion of the responsibility to establish and sustain common ground and to repair it as needed. We apply our understanding of these features of joint activity to account for issues in the design of automation. Research in software and robotic agents seeks to understand and satisfy the requirements of these basic aspects of joint activity. Given the widespread demand for more effective team play between people and the complex systems that work closely and collaboratively with them, the shortfalls observed in current research efforts are ripe for further exploration and study.
... Coordination can be defined as the synchronization of interdependent activity. Coordination requires multiple parties to align their goals, beliefs, and activity to achieve common goals (Klein et al. 2005). Participants in joint, coordinated activity need to continuously monitor and modify their representation of what is going on in order to maintain an accurate mental model of the world and of the other participants in the joint activity (Klein et al. 2005). Effective coordination between humans and automated systems requires the alignment of three perspectives on ongoing streams of activity: the world in which the cognitive work is situated (the ground truth), the autonomous system's activities (the model of the world embedded in its algorithmic logic, computations, and inputs), and the human supervisor's model of the world (Klein et al. 2005). ...
... Effective coordination between humans and automated systems requires the alignment of three perspectives on ongoing streams of activity: the ground truth of the world, the autonomous system's activities, and the human supervisor's model of the world (Klein et al. 2005). We present this graphically as a triangle with vertices denoting the relationships between the three (Figure 1). ...
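The triangle of perspectives described in these excerpts can be caricatured in a few lines of Python: represent each perspective as an explicit "view" of the process state and flag the variables on which the views disagree. This is an illustrative sketch only, not code from any of the cited works; all names and the autopilot example are invented.

```python
# Illustrative sketch (not from the cited works): the three perspectives of
# the triangle (ground truth, automation, human) as explicit views whose
# disagreements mark candidate automation surprises. All names are invented.
from dataclasses import dataclass, field

@dataclass
class View:
    name: str
    beliefs: dict = field(default_factory=dict)

def misalignments(ground_truth, automation, human):
    """Return the variables on which any two of the three views disagree."""
    views = [ground_truth, automation, human]
    keys = set().union(*(v.beliefs.keys() for v in views))
    diffs = {}
    for k in keys:
        vals = {v.name: v.beliefs.get(k) for v in views}
        if len(set(vals.values())) > 1:  # at least two views disagree
            diffs[k] = vals
    return diffs

world = View("world", {"autopilot_mode": "OPEN DESCENT"})
auto = View("automation", {"autopilot_mode": "OPEN DESCENT"})
pilot = View("human", {"autopilot_mode": "FLIGHT PATH ANGLE"})
print(misalignments(world, auto, pilot))
```

When the human's view diverges from the world and the automation's activity, as in this toy run, the condition for an automation surprise is present.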
... This alignment is shown in Figure 1 with respect to the events in the environment and the agents' activities. In studies of human-automation interaction (Wiener 1989; Endsley 2015) and, in a more general sense, in theories of joint activity (Clark 1996; Klein et al. 2005), successful coordination is ...
Preprint
Full-text available
Supporting coordination between humans and their machine counterparts is essential for realizing the benefits of an automated system and for maintaining system safety. In supervising the automation, the ability to answer the question "what will happen next?" given the system design is necessary for continuous coordination. If the human's view of the world, the autonomous system's activities, and the world itself are misaligned, automation surprises occur. We introduce the What's Next diagram, which can be used to visualize the ability of the human to coordinate with automated systems over time in both a retrospective and a future-oriented manner. By analyzing the interplay between projection, retrojection, and events as they occur temporally, gaps in design can be recognized and design recommendations can be formulated. Two case studies, supported by a computational analysis, show how to use and generate insights from this diagram in both manners (retrospective and future-oriented).
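The interplay this abstract describes, between what an operator projects and what actually occurs, can be reduced to a toy comparison; the event names below are invented for illustration and do not come from the cited preprint.

```python
# Toy sketch (event names invented): compare an operator's projection of
# "what will happen next" against the events that actually occurred, and
# return the two kinds of gap that produce automation surprise.
def surprise_gaps(projected, actual):
    """Return (expected but absent, occurred but unexpected)."""
    p, a = set(projected), set(actual)
    return sorted(p - a), sorted(a - p)

projected = ["capture altitude", "level off"]
actual = ["capture altitude", "open descent"]
print(surprise_gaps(projected, actual))
# (['level off'], ['open descent'])
```

Both kinds of gap matter: an expected event that never happens and an unexpected one that does are each a demand on the operator's sensemaking.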
... Keeping pace in distributed systems requires not only keeping one's activities synchronized with the tempo of operations but also maintaining synchronicity with the activities of other agents (Klein et al., 2005). As disruptions require escalations in cognitive activity, they also increase the demands for coordination (Woods & Patterson, 2000). ...
Article
Full-text available
Envisioning new kinds of operations requires systematically developing architectures, work procedures, and artifacts to support human and machine agents in coordinating within dynamic environments. Accurately predicting how envisioned operations will unfold is challenging as (1) early design-phase descriptions of architectures, work procedures, and artifacts are often underspecified, and (2) key outcomes of interest emerge from interactions between cognitive work and environmental dynamics. This paper discusses how computational simulation of work can serve as a discovery tool for envisioning future operations. We introduce a three-phase approach using the Work Models that Compute (WMC) framework, which involves converting paper-based representations of work into computational models, developing scenarios and test conditions, and simulating work dynamics to analyze emergent behaviors. We illustrate this approach through a case study on developing contingency management procedures for envisioned air transport operations, specifically Urban Air Mobility (UAM). The case study demonstrates how computational simulation can (1) reveal the need for clearer design specifications, (2) uncover interactions and emergent behavior that may lead to undesirable outcomes, such as coordination surprises, and (3) identify trade-offs between multiple design options. Insight from simulation can complement other cognitive systems engineering methods to refine and enhance the feasibility and robustness of envisioned operations.
... A variety of efforts in disciplines ranging from psychology and sociology to engineering have attempted to measure relationships between human teammates using constructs such as common ground [25], rapport [26,27], affinity [28,29], cohesion [30], trust [31], fluency [32] and anxiety (especially if one's teammate is a robot) [33]. Other measures attempt to evaluate the teammate more holistically, rating them on whether they contribute to the team, deliver quality work, or have relevant skills [34]. ...
... Further studies by Sarter and Woods (1995) identified automation surprise and bumpy transfers of control as key issues when automated systems are strong and silent, that is, when they hold significant control authority but provide limited means to answer Wiener's key questions. Clark (1996) and Klein et al. (2005) described requirements for coordination that map to Wiener's basic questions. For two agents (human or automated) to effectively coordinate their activities, they need to be mutually observable, predictable, and directable. ...
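One way to make the three coordination requirements named in this excerpt concrete is as the minimal interface a teammate agent, human or automated, would have to expose. This is a hypothetical sketch; the class and method names are invented and do not come from any of the cited works.

```python
# Hypothetical sketch: observability, predictability, and directability as
# the minimal interface of a coordinating agent. All names are invented.
from abc import ABC, abstractmethod

class CoordinatingAgent(ABC):
    @abstractmethod
    def report_state(self) -> dict:
        """Observable: expose current activity and status to teammates."""

    @abstractmethod
    def project(self, horizon_s: float) -> list:
        """Predictable: answer 'what will happen next?' over a horizon."""

    @abstractmethod
    def redirect(self, instruction: str) -> bool:
        """Directable: accept a teammate's input; report whether it took effect."""

class SimpleAutopilot(CoordinatingAgent):
    """A trivial concrete agent used only to exercise the interface."""
    def __init__(self):
        self.mode = "ALTITUDE HOLD"

    def report_state(self):
        return {"mode": self.mode}

    def project(self, horizon_s):
        return [f"maintain {self.mode} for the next {horizon_s:.0f}s"]

    def redirect(self, instruction):
        self.mode = instruction
        return True

ap = SimpleAutopilot()
ap.redirect("OPEN DESCENT")
print(ap.report_state(), ap.project(30.0))
```

A "strong and silent" system, in these terms, is one that implements `redirect` poorly and `report_state` and `project` barely at all.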
Conference Paper
As advanced automated systems become more complex, there is a greater need to analyze the difficulty of human-automation coordination. We developed a novel visual analytics tool to evaluate human-automation coordination by examining the relationship between projection, explanation (retrojection), and the understanding of current events through time. The What's Next Diagram visualizes demands on human sensemaking as operators answer Wiener's key automation-surprise question: what will the automation do next? The tool can assess whether operators can answer this question directly and quickly given the design and architecture of a human-automation system. It can be used as a model of multi-agent coordination, as a tool for post-mortem analysis, or as an aid for design. We illustrate how the tool can be used to identify system requirements for anomaly response in the aviation domain.
... This is one of the reasons that formal teams, like military units, typically adopt conventionalized terminology and standardized patterns of communication (Salas et al., 2007). Such concise communication becomes possible when the team has more common ground and shares mental models of the task and of team interaction (Klein, Feltovich, Bradshaw, & Woods, 2005). The communication density measure used in the current research was first introduced by Gorman et al. (2003) in team communication analysis to measure the extent to which a team conveys information in a concise manner. ...
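A density-style measure in the spirit described here scores task-relevant content conveyed per word of talk. The following is a minimal sketch under that assumption; the whitespace tokenization, the content-word heuristic, and the sample utterances are all illustrative choices, not Gorman et al.'s exact formulation.

```python
# Illustrative sketch (not Gorman et al.'s exact method): communication
# density as task-relevant content words conveyed per word spoken.
def communication_density(utterances, content_words):
    total = relevant = 0
    for utt in utterances:
        tokens = utt.lower().split()
        total += len(tokens)
        relevant += sum(t in content_words for t in tokens)
    return relevant / total if total else 0.0

# Invented example: terse, conventionalized talk scores higher than
# talk padded with fillers.
vocab = {"descend", "descending", "flight", "level", "two", "zero", "roger"}
terse = ["descend flight level two zero zero"]
padded = ["um okay so yeah we should maybe start descending now i guess"]
print(communication_density(terse, vocab))   # 1.0
print(communication_density(padded, vocab))  # ~0.08
```

The contrast between the two transcripts is the point of the measure: more common ground permits the terse form without loss of information.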
Preprint
Roles are one of the most important concepts in understanding human sociocognitive behavior. During group interactions, members take on different roles within the discussion. Roles have distinct patterns of behavioral engagement (i.e., active or passive, leading or following), contribution characteristics (i.e., providing new information or echoing given material), and social orientation (i.e., individual or group). Different combinations of these roles can produce characteristically different group outcomes, being either less or more productive towards collective goals. In online collaborative learning environments, this can lead to better or worse learning outcomes for the individual participants. In this study, we propose and validate a novel approach for detecting emergent roles from the participants' contributions and patterns of interaction. Specifically, we developed a group communication analysis (GCA) by combining automated computational linguistic techniques with analyses of the sequential interactions of online group communication. The GCA was applied to three large collaborative interaction datasets (participant N = 2,429; group N = 3,598). Cluster analyses and linear mixed-effects modeling were used to assess the validity of the GCA approach and the influence of learner roles on student and group performance. The results indicate that participants' patterns in linguistic coordination and cohesion are representative of the roles that individuals play in collaborative discussions. More broadly, GCA provides a framework for researchers to explore the micro intra- and interpersonal patterns associated with the participants' roles and the sociocognitive processes related to successful collaboration.
Article
Full-text available
This article describes the main contributions made by the late Paul J. Feltovich to the fields of cognitive engineering and decision making.
Article
The potential to create autonomous teammates that work alongside humans has increased with continued advancements in AI and autonomous technology. Research in human–AI teams and human–autonomy teams (HATs) has seen an influx of new and diverse researchers from human factors, computing, and teamwork, yielding one of the most interdisciplinary domains in modern research. However, the HAT domain’s interdisciplinary nature can make the design of research, especially experiments, more complex, and new researchers may not fully grasp the numerous decisions required to perform high-impact HAT research. To aid researchers in designing high-impact experiments, this article itemizes four initial decision points needed to form a HAT experiment: deciding on a research question, deciding on a team composition, deciding on a research environment, and deciding on data collection. For each decision point, this article discusses these decisions in practice, providing related works to guide researchers toward different options available to them. These decision points are then synthesized through actionable recommendations to guide future researchers. The contribution of this article will increase the impact and knowledge of HAT experiments.
Chapter
Full-text available
Because ever more powerful intelligent agents will interact with people in increasingly sophisticated and important ways, greater attention must be given to the technical and social aspects of how to make agents acceptable to people [16.72]. From a technical perspective, we want to help ensure the protection of agent states, the viability of agent communities, and the reliability of the resources on which they depend. To accomplish this, we must guarantee, insofar as is possible, that the autonomy of agents can always be bounded by an explicit enforceable policy that can be continually adjusted to maximize the agents’ effectiveness and safety for both human beings and computational environments. From a social perspective, we want agents to be designed to fit well with how people actually work together. Explicit policies governing human-agent interaction, based on careful observation of work practice and an understanding of current research in the social sciences and cognitive engineering, can help assure that effective and natural coordination, appropriate levels and modalities of feedback, and adequate predictability and responsiveness to human control are maintained. These factors are key to providing the reassurance and trust that are the prerequisites to the widespread acceptance of agent technology for non-trivial applications.
Article
There has been a transition in many supervisory control domains from continuous monitoring to minimizing staffing until a problem arises. The key to making this “on-call” model effective is to understand how to bring practitioners up to speed quickly when they are called in. A field study was conducted to investigate what it means to update a supervisory controller on the status of a continuous, anomaly-driven process in a complex, distributed environment. Sixteen shift changes, or handovers, were observed during an anomalous space shuttle mission. Handover updates included descriptions of events that had occurred, ongoing activities, results of data analyses, and changes to mission plans. The controllers engaged in intense, interactive briefings that highlighted what the incoming controller needed to review more deeply following the update. Interrogation strategies were employed by the incoming controllers. Implications for organizational investments and the design of tools to support updates are discussed.
Article
This paper describes and illustrates the use of a general methodology for knowledge elicitation to enable better prediction of the human factors implications of future system designs. Specifically, this approach involves the following steps:
1. Identifying critical factors that could influence performance in the future system.
2. Using this list of factors to predict incidents that could plausibly arise in the future system.
3. Designing realistic, detailed incident reports based on these predicted incidents.
4. Asking a group of experienced practitioners representing different perspectives in the current system to act as a review team by evaluating a reported incident and identifying the important issues and implications it raises.
5. Using the insights generated by the discussions of the review team to provide guidance in making decisions about the implementation of the future system.
To illustrate the use of this methodology, a scenario was developed. This scenario was based on experiences with the expanded National Route Program involving high-altitude crossing traffic (overflights) over departure and arrival lanes at a major airport. This already occurring situation was modified to fit the conditions of a future system in which en route Free Flight is allowed, and a hypothetical incident is predicted and described in detail in an incident report. A group including a pilot, two controllers, and an airline dispatcher was asked to act as a review team and evaluate this incident. The insights provided by the resultant discussion are presented. Many of these insights relate to issues concerning the roles and responsibilities of flight crews, controllers, traffic managers, and dispatchers. Others are concerned with the definition of procedures. Still others deal with issues of workload, training, maintenance of skills, and the communication of intent.
Article
Operators in complex event-driven domains must coordinate competing attentional demands in the form of multiple tasks and interactions. This study examined the extent to which this requirement can be supported more effectively through informative interruption cueing (in this case, partial information about the nature of pending tasks). The 48 participants performed a visually demanding air traffic control (ATC) task. They were randomly assigned to 1 of 3 experimental groups that differed in the availability of information (not available, available upon request, available automatically) about the urgency and modality of pending interruption tasks. Within-subject variables included ATC-related workload and the modality, frequency, and priority of interruption tasks. The results show that advance knowledge about the nature of pending tasks led participants to delay visual interruption tasks the longest, which allowed them to avoid intramodal interference and scanning costs associated with performing these tasks concurrently with ATC tasks. The 3 experimental groups did not differ significantly in terms of their interruption task performance; however, the group that automatically received task-related information showed better ATC performance, thus experiencing a net performance gain. Actual or potential applications of this research include the design of interfaces in support of attention and interruption management in a wide range of event-driven environments.
Article
High Reliability Organizations (HROs) have been treated as exotic outliers in mainstream organizational theory because of their unique potentials for catastrophic consequences and interactively complex technology. We argue that HROs are more central to the mainstream because they provide a unique window into organizational effectiveness under trying conditions. HROs enact a distinctive though not unique set of cognitive processes directed at proxies for failure, tendencies to simplify, sensitivity to operations, capabilities for resilience, and temptations to overstructure the system. Taken together these processes induce a state of collective mindfulness that creates a rich awareness of discriminatory detail and facilitates the discovery and correction of errors capable of escalation into catastrophe. Though distinctive, these processes are not unique since they are a dormant infrastructure for process improvement in all organizations. Analysis of HROs suggests that inertia is not indigenous to organizing, that routines are effective because of their variation, that learning may be a byproduct of mindfulness, and that garbage cans may be safer than hierarchies.
Article
The construction of computer systems that are intelligent, collaborative problem-solving partners is an important goal for both the science of AI and its application. From the scientific perspective, the development of theories and mechanisms to enable building collaborative systems presents exciting research challenges across AI subfields. From the applications perspective, the capability to collaborate with users and other systems is essential if large-scale information systems of the future are to assist users in finding the information they need and solving the problems they have. In this address, it is argued that collaboration must be designed into systems from the start; it cannot be patched on. Key features of collaborative activity are described, the scientific base provided by recent AI research is discussed, and several of the research challenges posed by collaboration are presented. It is further argued that research on, and the development of, collaborative systems should itself be a collaborative endeavor - within AI, across subfields of computer science, and with researchers in other fields.