Why Human-Autonomy Teaming?
R. Jay Shively1, Joel Lachter1, Summer L. Brandt2, Michael Matessa3,
Vernol Battiste2, and Walter W. Johnson1
1NASA Ames Research Center, Moffett Field, CA 94035
2San José State University, NASA Ames Research Center, Moffett Field, CA 94035
3Rockwell Collins, 350 Collins Rd NE, Cedar Rapids, IA 52498
{Robert.J.Shively, Joel.Lachter, Summer.L.Brandt, Vernol.Battiste}
Abstract. Automation has entered nearly every aspect of our lives, but it often remains hard to understand. Why is this? Automation is often brittle, requiring constant human oversight to assure it operates as intended, and this oversight has become harder as automation has become more complicated. To resolve this problem, Human-Autonomy Teaming (HAT) has been proposed. HAT is based on advances in automation transparency, a method for giving insight into the reasoning behind automated recommendations and actions, along with advances in human-automation communication (e.g., voice). These, in turn, permit more trust in the automation when appropriate, and less when not, allowing more targeted supervision of automated functions. This paper proposes a framework for HAT incorporating three key tenets: transparency, bi-directional communication, and operator-directed authority. These tenets, along with more capable automation, represent a shift in human-automation relations.
Keywords: Human-Autonomy Teaming · Automation · Human Factors
1 Introduction
Where is my flying car? For years, we have been told that automation will change our lives dramatically with both
positive and negative consequences. On the positive side, we would have flying cars, self-driving cars, personal robots
and much more. On the negative side, some jobs might be lost to automation, and visionaries such as Bill Gates, Elon
Musk and Stephen Hawking [1] caution us as we approach the singularity¹.
However, for better or worse, the promise of automation has not been realized. Great strides have been made in some areas. Self-driving cars are not just on the horizon but on our streets (with mixed results) [2]; however, most of the promises remain the domain of science fiction. The Roomba® leaves a great deal to be desired when compared to Rosie, the Jetsons' robotic maid. Why is this the case? Technology development has proceeded at a rapid pace and our desire for increasingly automated systems remains robust. Among the many things probably responsible for our dissatisfaction with and slow acceptance of automation, one looms large: how automation interacts with humans. In a word, badly. Why is this? Hints and outright answers are there in the literature.
First and foremost, there is the notion that automation replaces and/or relieves the human, making the human's role less critical. This is not the case. For example, Lee [3], in his review of Parasuraman and Riley's [4] seminal paper on human-automation interaction, pointed out that the amount of training needed by humans goes up, not down, when automation is introduced, and the design of automation interfaces becomes more challenging and important. Also,
and maybe more critically, automation usually does not replace the human; rather, it changes the nature of the human’s
work [5]. For example, with increasing automation on the flight deck, commercial pilots have moved away from their
traditional role directly managing the aircraft to a more supervisory role managing the automation.
Of course there are other factors as well. Parasuraman and Riley [4] identified a number of these. Prominent among
them were trust and reliance. People need to know when and how they can rely on automation, and often they do not. The resulting pattern is well documented: initial over-reliance on the automation until the inevitable failure, followed by under-reliance. Moreover, misunderstanding of the automation can lead to misuse and abuse, with potentially disastrous consequences.
So despite the fact that human-automation interaction has been recognized as a critical part of automation design for
some time, and has seen development of guidelines for specific domains (e.g., Aerospace [6], Robots and Unmanned
Systems [7]), many issues that have plagued this area have remained. Why? Perhaps first and foremost among the unresolved problems is that, in order for us to realize the promise of automation, it must be able to work with us, not just for us. It must be a teammate, able to do things that do not require us to supervise it all the time and in all contexts. It needs a greater degree of autonomy. Along with this it must have our trust: earned and appropriate trust.

¹ The singularity is the concept that artificial intelligence will eventually think beyond human capacity, which according to some could negatively affect civilization.
The concepts behind a relatively new area, called Human-Autonomy Teaming (HAT), are increasingly being seen as
one important way in which to realize the capabilities of new powerful automation while gaining its acceptance. This
role for HAT is due to many factors such as the recent increases in the speed and quality of automation technology (e.g.,
Watson [8]), the advances in voice interaction including increasingly natural language understanding (e.g., Siri®, Alexa®, Google Home®), and the ability of automation to work in more collaborative ways (e.g., advanced recommender
systems). These have all contributed to automation being thought of and used as a team member. Understanding and
effectively designing for HAT may be the key to finally realizing the promise of automation.
2 Human-Autonomy Teaming: Critical Factors
As automation advances to the point of exhibiting greater autonomy and teaming with humans, so does its promise. But there are still many hurdles to overcome. Full system autonomy continues to be quite
difficult (some may claim impossible) to achieve, so, for at least the foreseeable future, most systems will exist in a
semi-autonomous state. Given this projection, humans and semi-autonomous systems will continue to need to interact in
teams, and the development of autonomous systems that can support this teaming should be based on a foundation of
research on human–automation interaction. Here we discuss several commonly cited issues with current automation.
2.1 Brittle Automation
Most automation to date suffers from brittleness: it operates well across the range of situations it was designed for but needs human intervention to manage situations outside that envelope [9], a limitation that will persist to some degree for the foreseeable future. Automation is designed for a certain environment. True, that environment might be fairly general.
An automated car, for instance, might be designed for a wide variety of driving environments. However, there are
boundaries. When placed in an environment which the designers did not anticipate, maybe a white panel truck the same
color as the sky behind it, the results can be catastrophic [2]. Similarly, software bugs may appear in unusual (and thus
poorly tested) situations.
2.2 Lack of Transparency
Many semi-autonomous systems badly lack understandability and predictability. This is referred to as a problem with
transparency. This lack of transparency often makes it unclear why the automation is taking the actions that it is taking
or not taking an expected action. Perhaps the classic example of this was given by the purported perplexity of pilots
when faced with the behavior of the flight management system, “What is it doing now, why is it doing that, and what
will it do next?” [10]. This opaqueness often makes it impossible for the operator to analyze whether the actions taken
by the automation are appropriate.
2.3 Lack of Shared Awareness
Related to the lack of transparency is a lack of shared awareness: operators often do not know what information the
automation is using to perform the task. What are the consequences of an operator not knowing the accuracy or
reliability of information used by any automated tool (e.g., weather information)? Errors, over- and under-trust and
reduced usage are all possible consequences. What are the consequences of the automation not knowing the accuracy or
reliability of information used by the operator (e.g., the weather seen out the window)? Poor or confusing
recommendations at the very least. Onken [11] suggests a number of factors which will be required for shared situation
awareness. He suggests that the key specifications for developing the next generation of cockpit automation should
include comprehensive machine knowledge of the actual flight situation and efficient communication between crew and
machine. In his model of the “Knowledge Required for Situation Assessment” he identified factors related to the aircraft
(e.g., performance data, system status, etc.), crew (resources, standard behavior, individual behavior, etc.), mission
(goals and constraints), air traffic control (clearances, instructions, etc.) and environmental factors (navigation data,
weather, and traffic). He suggested that the machine cannot assist in an efficient way without a clear understanding of
the situation.
2.4 Miscalibrated Trust
One important implication of poor transparency and a lack of shared awareness is miscalibrated trust. Lyons and
colleagues [12, 13] demonstrated the importance of transparency in a recent study that assessed different levels of
transparency while interacting with an automated route planner. The study showed that as transparency increased, user
ability to understand and appropriately trust automation also increased. Here trust was defined using the definition of
Lee and See [14, p.54] as “the attitude that an agent will help achieve an individual’s goals in a situation characterized
by uncertainty and vulnerability.” Inappropriate trust can lead either to underuse of automation due to mistrust or to overuse due to over-trust. To truly and effectively team with automation, humans must be able to trust those systems, or
else they cannot rely on them. However, that trust cannot be blind or operators will use the automation under conditions
it was not designed for. Miscalibrated trust is the enemy here. If the operators do not understand why automation is
taking the actions it is taking, they may not be able to determine when they can rely on the automation. They have no
basis upon which to calibrate their trust. If operators do not trust the automation when it is supplying correct
information, they may not use it, undermining the reason for having the automation. Conversely, if operators trust the
automation even under conditions where the automation lacks information necessary to perform adequately, severe
problems may result. Lees and Lee [15] suggested that there are at least three dimensions of trust (utility, predictability,
and intent) that need to be considered when designing autonomous systems. Their research suggests that operators' reliance on automation was more appropriate when transparent information was present. Additionally, research suggests
that with semi-autonomous systems, the degree to which people monitor automation decreases with increased trust in
the automation [16], making the importance of appropriately calibrated trust paramount. Lee and See [14] suggested
that to improve reliance, the automation should not only present options but possible solutions. Calibrated trust, an
integral part of HAT, is fundamental to making semi-autonomous systems robust and acceptable.
2.5 The Challenge of Monitoring
Brittleness, lack of transparency and miscalibrated trust are not independent issues of course, and as Endsley [17] points
out, they combine in ways that are particularly dangerous. Because systems are brittle, they appear to be operating well,
until suddenly things go wrong. It often falls to the human operator to monitor for such failures. But monitoring a
system that appears to be operating correctly is a job humans are particularly poor at. Unless the operator can see
potential problems on the horizon (which they cannot because of poor transparency), there is a strong pull to over-trust
the automation. When the system does fail, the operator is in a state Endsley refers to as “reduced situation awareness after being out-of-the-loop.” The automation has been performing the task, so the operator is unaware of the system state. The system has poor transparency, so the operator cannot discover the system state quickly, an issue exacerbated by the fact that, if the situation were simple, the automation would probably have been able to handle it.
3 A Conceptual Model for HAT
In our work at the Human-Autonomy Teaming Laboratory at NASA Ames Research Center, we have been working on a
conceptual model for HAT. There are three major tenets of the HAT model as presented here.
3.1 Bi-Directional Communication
Teammates often discuss options, brainstorm on solutions and openly discuss courses of action. For automation to be on
a team, this bi-directional communication needs to exist. We see bi-directional communication as key to solving a
number of the issues typically found in highly automated systems. Bi-directional communication can make systems less
brittle. History has shown that humans, generally, are better able to adapt to unfamiliar situations, poorly structured
problems, or situations with incomplete information. With HAT, the human can provide the missing information and
context, apply judgment from experience in similar situations that would not be recognized as relevant by the
automation (Christoffersen and Woods [18] have shown how automation does not generalize well from one domain to
another) and if necessary, override the decisions made by automation in these situations.
Bi-directional communication can also help to build shared awareness. Alberts and colleagues [19] define shared
awareness as a cognitive capability that builds on shared information. Shared awareness of an event can be developed
through four mechanisms: 1) directly, 2) through independent sensors, 3) through information that is passed from one
agent to another, and 4) through information that is shared and the fused results presented to two (or more) agents. The
last two, sharing and fusing of information, require a communication channel through which information can pass in either direction. And, while the same information could be gathered independently (mechanisms 1 and 2), to assure shared awareness this information needs to be cross-validated, or eventually it will fall out of sync.
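As a rough sketch (our own illustration, not from the paper), mechanism 4, fusing shared information and then presenting the same fused result to every agent, might look like this in Python:

```python
def fuse(reports):
    """Average numeric estimates from several agents into one shared value.
    Presenting the same fused estimate to every teammate (mechanism 4)
    keeps their awareness from drifting out of sync."""
    return sum(reports.values()) / len(reports)

# Human and automation each report an estimate, e.g., crosswind in knots.
reports = {"pilot": 18.0, "automation": 22.0}
shared_estimate = fuse(reports)  # both teammates are shown 20.0
```

Cross-validation here is trivial (an average), but the point is structural: the fused value travels back to both parties over the shared channel rather than living in only one agent's head.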
Bi-directional communication requires some ability to have a common cognitive understanding and for both the
human and automation to communicate their intentions [20]. True bi-directional communication requires a shared language and a shared communication channel allowing both implicit and explicit communication between the human and the agent. Understanding will be facilitated by explicit discussion of goals (as opposed to intent inferencing), as well as of confidence and rationale.
3.2 Transparency
Lyons [21] argues that the system and its operators should be able to recognize what the human/automated teammate is
doing and why. He defines automation transparency as the enabler of such recognition. Transparency is more than
simply making the information available to the human operator. Often the calculations done by automation do not
correspond directly to those a human would do when performing the same task. To be transparent, the automation must
bridge this gap by presenting information in ways that fit the operator’s mental model.
Transparency is important because it enables the operator to determine whether the automation is likely to have the
correct answer in a particular situation (trust calibration). Without this information operators are likely to trust (and thus
rely on) the automation under conditions where it is likely to fail (over-trust) or ignore the automation in conditions
where it could be helpful (under-trust).
3.3 Operator Directed Interface
Previous work has focused on task allocation between humans and automation. However, this static relationship can lead to non-optimal performance. At the same time, if the automation reallocates tasks on its own, the operator can become disoriented. A well-planned and well-understood allocation strategy, coupled with operator-directed dynamic allocation of tasks based on workload, time pressure, and other important factors, allows a much more agile, flexible system. One
potential enabler of such a dynamic system of task allocation is the concept of a play [22]. A play encapsulates goals,
tasks, and a task allocation strategy appropriate for a particular situation. Operators can call a play to quickly adapt to a
new situation.
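As a hypothetical illustration of the play concept [22] (the class, field, and task names below are our own, not from the paper), a play might bundle goals, tasks, and an allocation strategy like this:

```python
from dataclasses import dataclass, field

@dataclass
class Play:
    """Hypothetical encoding of a 'play': goals, tasks, and a
    task-allocation strategy bundled for a particular situation."""
    name: str
    goals: list[str]
    # Maps each task to the teammate responsible for it by default.
    allocation: dict[str, str] = field(default_factory=dict)

    def call(self) -> dict[str, str]:
        # Calling the play returns its allocation so the operator can
        # review (and override) who does what before execution.
        return dict(self.allocation)

# Example: a divert play that delegates candidate generation and
# ranking to automation but keeps the final choice with the pilot.
divert = Play(
    name="divert",
    goals=["land safely at a suitable airport"],
    allocation={
        "generate candidate airports": "automation",
        "rank candidates": "automation",
        "select airport": "pilot",
    },
)
```

The operator calls the play by name to switch the whole team to this pre-briefed allocation in one step, rather than renegotiating each task under time pressure.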
3.4 A HAT Agent
One approach to creating systems that instantiate these HAT tenets is to develop a HAT agent that intercedes between
the automation and the human operator. Fig. 1 provides a conceptual illustration for such an agent. The interface
between the HAT agent and the operator instantiates a bi-directional communication channel. Through it the operator
can input requests and queries that set the goals for the system. The agent tracks these goals and formulates requests to
the automation to meet them. These requests might take the form of status checks to make sure that the system is on
track to meet the goals (at least as far as the automation knows), or, if it is not, requests for options to meet the goals.
The automation returns possible courses of action, along with their predicted results, and information about the rationale
for their selection, and confidence in their success. The agent filters these results based on the current context so that contextually relevant results are not buried in excess information.
Fig. 1. HAT Agent Architecture.
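The mediation loop of Fig. 1 can be sketched in code. Everything here (the class names, the options_for interface, the confidence field) is an assumption for illustration, not the paper's implementation:

```python
class StubAutomation:
    """Stand-in for the underlying automation (hypothetical interface)."""
    def options_for(self, goals):
        # Return candidate courses of action with predicted result,
        # rationale, and confidence, as described in the text.
        return [
            {"action": "divert to airport B", "confidence": 0.6,
             "rationale": "closest field"},
            {"action": "divert to airport A", "confidence": 0.9,
             "rationale": "longest runway, good weather"},
        ]

class HATAgent:
    """Intercedes between operator and automation: tracks operator
    goals, queries the automation, and filters the returned options
    by current context (e.g., time pressure)."""
    def __init__(self, automation):
        self.automation = automation
        self.goals = []

    def set_goal(self, goal):
        # Operator requests and queries set the goals the agent tracks.
        self.goals.append(goal)

    def advise(self, context):
        options = self.automation.options_for(self.goals)
        options.sort(key=lambda o: o["confidence"], reverse=True)
        # Under high time pressure, surface only the top-rated option
        # instead of overwhelming the operator with alternatives.
        if context.get("time_pressure") == "high":
            return options[:1]
        return options

agent = HATAgent(StubAutomation())
agent.set_goal("land safely at a suitable airport")
```

With ample time, `agent.advise({})` returns all ranked options with their rationale; under `{"time_pressure": "high"}` the same call surfaces only the top recommendation, mirroring the context-dependent filtering described above.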
An example may illustrate these concepts. The example steps through a pilot teaming with aircraft automation
including a recommender system advising which airport to divert to, if needed. In this example an automated alerting
system detects an anomaly, such as a wheel well fire. This information is then communicated to the agent, which
forwards it to the pilot while the automation simultaneously initiates a search for candidate airports. Depending on
context, the agent determines which candidates are presented on the dynamic interface, taking into consideration such
things as time pressure. If the pilot is under extreme time pressure, it is not appropriate to present multiple options with
a great deal of transparency detail on how the automation arrived at its recommendation. This dynamic interface would
allow flexible, agile allocation of tasks to humans and automation, rather than a fixed static allocation. In our example, if
the pilot does have adequate time, the agent would present multiple options generated by the automation with ratings as
to the adequacy of the alternative. This provides transparency to the pilot. This is very important in building the pilot’s
trust in the system. But what if the pilot has questions or additional information that the automation does not know about? This highlights a key tenet: bi-directional communication. The pilot needs to be able to ask questions of the automation or provide additional information. S/he should be able to ask: How confident are you in the recommendations? How did you determine the scores? And, where appropriate, provide additional information: there is a trauma center close to airport X (for a medical emergency). In addition, the pilot should be able to propose solutions and have the automation critique and evaluate them, with the same question asking, perhaps from the automation as well as from the pilot. HAT-enabled collaborative problem solving is a back-and-forth process. This kind of bi-directional communication transforms automation from a tool into a teammate and has the potential to help automation achieve its promise.
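The question-and-answer exchange above might be sketched as a toy query handler; the query strings and the recommendation's fields are invented for illustration, not a real interface:

```python
def answer(query, recommendation):
    """Toy handler for the pilot-to-automation queries in the example.
    The 'confidence' and 'scoring' fields are hypothetical."""
    if query == "how confident are you?":
        return f"confidence: {recommendation['confidence']:.0%}"
    if query == "how did you determine the scores?":
        return "weighted factors: " + ", ".join(recommendation["scoring"])
    return "query not understood"

# One candidate divert recommendation, as the agent might hold it.
recommendation = {"confidence": 0.85,
                  "scoring": ["runway length", "weather", "fuel required"]}
```

Even this trivial sketch shows the shape of the channel: the pilot's queries travel toward the automation, and the automation's confidence and rationale travel back, instead of a one-way stream of recommendations.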
4 Conclusion
HAT holds great potential to help automation truly become a partner and for us to realize the promise of automation.
Many researchers are hard at work exploring and investigating HAT principles and techniques. We hope that the initial
design principles and model for HAT outlined in this brief paper can help add to a foundation on which to base research.
Other papers in this session will look at empirically evaluating these concepts and generalizing the results to other domains.
Acknowledgments. We would like to acknowledge NASA’s Safe and Autonomous System Operations Project, which
funded this research.
References

1. The Observer,
2. PBS,
3. Lee, J.D.: Review of a Pivotal Human Factors Article: “Humans and Automation: Use, Misuse, Disuse, Abuse.” Hum. Factors. 50, 404--410 (2008)
4. Parasuraman, R., Riley, V.: Humans and Automation: Use, Misuse, Disuse, Abuse. Hum. Factors. 39, 230--253 (1997)
5. Parasuraman, R., Manzey, D.H.: Complacency and Bias in Human Use of Automation: An Attentional Integration. Hum. Factors.
52, 381--410 (2010)
6. Billings, C.E.: Human-Centered Aviation Automation: Principles and Guidelines. NASA-TM-110381. NASA, Washington, DC (1996)
7. Chen, J.Y.C., Barnes, M.J.: Human-Agent Teaming for Multi-Robot Control: A Literature Review. Army Research Lab Technical
Report, ARL-TR-6328 (2013)
8. The Atlantic,
9. Christoffersen, K., Woods, D.D.: How to Make Automated Systems Team Players. In: Advances in Human Performance and
Cognitive Engineering Research, vol. 2, pp. 1--12. Emerald Group Publishing Limited (2002)
10. Wiener, E.L.: Cockpit Automation. In: E.L. Wiener, D.C. Nagel (eds.) Human Factors in Aviation, pp. 433-461. Academic Press,
Inc., New York (1989)
11. Onken, R.: The Cockpit Assistant System CASSY as an On-Board Player in the ATM Environment. In: Proceedings of the First Air Traffic Management Research and Development Seminar, pp. 1--26 (1997)
12. Lyons, J.B., Sadler, G.G., Koltai, K., Battiste, H., Ho, N.T., Hoffmann, L.C., Smith, D., Johnson, W., Shively, R.: Shaping Trust through Transparent Design: Theoretical and Experimental Guidelines. Advances in Human Factors in Robotics and Unmanned Systems. 499, 127--136 (2017)
13. Sadler, G., Battiste, H., Ho, N., Hoffmann, L., Johnson, W., Shively, R., Lyons, J., Smith, D.: Effects of Transparency on Pilot
Trust and Agreement in the Autonomous Constrained Flight Planner. In: Digital Avionics Systems Conference (DASC)
IEEE/AIAA 35th, pp. 1--9. IEEE (2016)
14. Lee, J.D., See, K.A.: Trust in Automation: Designing for Appropriate Reliance. Hum. Factors. 46, 50--80 (2004)
15. Lees, M.N., Lee, J.D.: The Influence of Distraction and Driving Context on Driver Response to Imperfect Collision Warning
Systems. Ergonomics, 50, 1264--1286 (2007)
16. Hergeth, S., Lorenz, L., Vilimek, R., Krems, J.F.: Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving. Hum. Factors. 58, 509--519 (2016)
17. Endsley, M.R.: From Here to Autonomy: Lessons Learned from Human-Automation Research. Hum. Factors. 59, 5--27 (2017)
18. Christoffersen, K., Woods, D.D.: How to Make Automated Systems Team Players. Advances in Human Performance and
Cognitive Engineering Research. Emerald Group Publishing Limited, 1--12 (2002)
19. Alberts, D.S., Garstka, J.J., Hayes, R.E., Signori, D.A.: Understanding Information Age Warfare. Command Control Research
Program, Washington, DC (2001)
20. Chen, J.Y.C., Barnes, M.J., Harper-Sciarini, M.: Supervisory Control of Multiple Robots: Human Performance Issues and User
Interface Design. IEEE Transactions on Systems, Man, and Cybernetics–Part C: Applications and Reviews, 41, 435--454 (2011)
21. Lyons, J.B.: Being Transparent about Transparency: A Model for Human-Robot Interaction. In: AAAI Spring Symposium Series (2013)
22. Miller, C.A., Parasuraman, R.: Designing for Flexible Interaction Between Humans and Automation: Delegation Interfaces for
Supervisory Control. Hum. Factors, 49, 57--75 (2007)
... Triad teams are also widespread in other human-AI teaming research as they represent the lowest number of participants necessary to achieve those complex group interactions (Demir et Participants were not allowed to communicate verbally or textually during this experiment. The choice to disallow communication was made to isolate the effects of implicit coordination (Entin & Serfaty, 1999;Hanna & Richards, 2014Shively et al., 2017) in human-human teams vs. human-agent teams and to ensure participants believed the agent was a true AI. Additionally, a trend towards focusing on implicit communication has received attention in applied human-machine research (Aubert et al., 2018), where implicit communication is defined as any action utilized to convey an agent's planned intentions (does not verbalize or use written language). ...
... Participants completed a simulated team task called "IIHAT" (Implicit Interaction for Human-Autonomy Teams), which has been used in past research . This task was developed explicitly for human-agent teaming research and did not allow communication between players to isolate the effect of implicit communication on team cognition and human-agent teams (Entin & Serfaty, 1999;Hanna & Richards, 2014Shively et al., 2017). This control on communication also ensured that the participants did not suspect the AI teammate as a human given the difficulty that AI has with natural language processing (Young et al., 2018). ...
... In reality, all were humans akin to the participants in HHH condition as the confederate researcher portraying the AI teammate followed a script that mimicked human task performance (in place of expert-level AI agent performance (Foerster et al., 2018)). Furthermore, we did not allow communication between players to isolate the effect of implicit communication on team cognition and human-agent teams, and ensure participants, especially those within the HHA condition, did not suspect the AI teammate as a human (Entin & Serfaty, 1999;Hanna & Richards, 2014Shively et al., 2017;Young et al., 2018). ...
Teammates powered by artificial intelligence (AI) are becoming more prevalent and capable in their abilities as a teammate. While these teammates have great potential in improving team performance, empirical work that explores the impacts of these teammates on the humans they work with is still in its infancy. Thus, this study explores how the inclusion of AI teammates impacts both the performative abilities of human-AI teams in addition to the perceptions those humans form. The current study found that participants perceiving their third teammate as artificial performed worse than those perceiving them as human. Furthermore, these performance differences were significantly moderated by the task’s difficulty, with participants in the AI teammate condition significantly outperforming participants perceiving a human teammate in the highest difficulty task, which diverges from previous human-AI teaming literature. Alternatively, no significant effect of perceived teammate artificiality was found on shared mental model similarity. However, it did significantly affect participants’ levels of perceived team cognition. Individual performance on medium difficulty maps also mediated the effect of perceived teammate artificiality on perceived team cognition. These results further build on the current understanding of how AI teammates impact perceptions of individual human teammates and how those perceptions subsequently impact their objective performance, which is critical in building more effective AI teammates to incorporate alongside humans.
... With AI technology a machine can evolve from an assistive tool that primarily supports human operations to a potential collaborative teammate of a team with a human operator, playing the dual roles of "assistive tool þ collaborative teammate" (e.g., Brill et al., 2018;Lyons et al., 2018;O'Neill et al., 2020). Thus, the human-machine relationship in the AI era has added a new type of human-AI collaboration, often called "Human-Machine Teaming" (e.g., Brandt et al., 2018;Brill et al., 2018;Shively et al., 2017). ...
... To study the human-AI collaboration, researchers have leveraged the frameworks of other disciplines, such as psychological human-human team theory (e.g., de Visser, 2018; Mou & Xu, 2017). For example, the human-human team theory helps formulate basic principles: two-way/shared communication, trust, goals, situation awareness, language, intentions, and decision-making between humans and AI systems (e.g., Demir et al., 2017;Shively et al., 2017), instead of a one-way approach as we currently do in the conventional HCI context. ...
... Future HCI work also needs to explore innovative design approaches (e.g., collaborative interaction, adaptive control strategy, human directed authority) from the perspective of human-AI interaction. For instance, we need to model under what conditions (e.g., based on a threshold of mutual trust between humans and AI agents) an AI agent will take over or hand off the control of system to a human in specific domains, such as autonomous vehicles (Kistan et al., 2018;Shively et al., 2017). In the context of distributed AI and multi-agent systems, we need to figure out the collaborative relationship between at least one human operator and the collective system of agents, where and how multiple agents communicate and interact with primitives, such as common goals, shared beliefs, joint intentions, and joint commitments, as well as conventions to manage any changes to plans and actions (Hurts & de Greef, 1994;Wooldridge, 2009). ...
Full-text available
While AI has benefited humans, it may also harm humans if not appropriately developed. The priority of current HCI work should focus on transiting from conventional human interaction with non-AI computing systems to interaction with AI systems. We conducted a high-level literature review and a holistic analysis of current work in developing AI systems from an HCI perspective. Our review and analysis highlight the new changes introduced by AI technology and the new challenges that HCI professionals face when applying the human-centered AI (HCAI) approach in the development of AI systems. We also identified seven main issues in human interaction with AI systems, which HCI professionals did not encounter when developing non-AI computing systems. To further enable the implementation of the HCAI approach, we identified new HCI opportunities tied to specific HCAI-driven design goals to guide HCI professionals addressing these new issues. Finally, our assessment of current HCI methods shows the limitations of these methods in support of developing HCAI systems. We propose the alternative methods that can help overcome these limitations and effectively help HCI professionals apply the HCAI approach to the development of AI systems. We also offer strategic recommendation for HCI professionals to effectively influence the development of AI systems with the HCAI approach, eventually developing HCAI systems.
... This teamwork extends beyond simple human-automation teamwork, which is traditionally rigid, by creating flexible and powerful teammates that will both adapt to humans and require that humans adapt to them. This transition will also require attention to interactions that team members have, such as communication and authority [2]. As many modern teams begin to transition from human-human teams to human-agent teams, it is important to equip human leaders with the tools to lead their teams. ...
... When these deficiencies in communication are overcome, human-agent teams are able to significantly increase their effectiveness [1]; however, since this is not possible with current technology, it is vital that leaders actively mediate this communication. This mediation involves multiple tasks, including: (1) gathering information for their team to utilize, (2) determining what information should be given to humans or agents, and (3) mediating the sharing of information between humans and agents. This is a prime example of why humans still need to be leaders: they need to be the ones mediating conversation between humans, agents, and other teams, which would be difficult for an artificial leader with communication deficiencies. ...
... The novel human factors challenges of teaming with intelligent and autonomous machines raise concerns about trust optimization (Lyons and Havig, 2014; Shively et al., 2017; de Visser et al., 2018). ...
Full-text available
Effective human–robot teaming (HRT) increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems such as social agency and incomprehensibility may influence the human’s trust in the machine. The human operator’s mental model for machine functioning is critical for trust. People may consider an intelligent machine partner as either an advanced tool or as a human-like teammate. This article reports a study that explored the role of individual differences in the mental model in a simulated environment. Multiple dispositional factors that may influence the dominant mental model were assessed. These included the Robot Threat Assessment (RoTA), which measures the person’s propensity to apply tool and teammate models in security contexts. Participants (N = 118) were paired with an intelligent robot tasked with making threat assessments in an urban setting. A transparency manipulation was used to influence the dominant mental model. For half of the participants, threat assessment was described as physics-based (e.g., weapons sensed by sensors); the remainder received transparency information that described psychological cues (e.g., facial expression). We expected that the physics-based transparency messages would guide the participant toward treating the robot as an advanced machine (advanced tool mental model activation), while psychological messaging would encourage perceptions of the robot as acting like a human partner (teammate mental model). We also manipulated situational danger cues present in the simulated environment. Participants rated their trust in the robot’s decision as well as threat and anxiety, for each of 24 urban scenes. They also completed the RoTA and additional individual-difference measures.
Findings showed that trust assessments reflected the degree of congruence between the robot’s decision and situational danger cues, consistent with participants acting as Bayesian decision makers. Several scales, including the RoTA, were more predictive of trust when the robot was making psychology-based decisions, implying that trust reflected individual differences in the mental model of the robot as a teammate. These findings suggest scope for designing training that uncovers and mitigates the individual’s biases toward intelligent machines.
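The finding that participants acted as Bayesian decision makers, with trust tracking the congruence between the robot's decision and situational danger cues, can be illustrated with a toy Bayesian update of the belief that the robot is reliable. The likelihood values are illustrative assumptions, not parameters from the study:

```python
def update_trust(prior: float, congruent: bool,
                 p_congruent_if_reliable: float = 0.9,
                 p_congruent_if_unreliable: float = 0.5) -> float:
    """One Bayesian update of P(robot is reliable) after observing whether
    its decision was congruent with the scene's danger cues.

    The likelihood values are illustrative assumptions.
    """
    if congruent:
        num = p_congruent_if_reliable * prior
        den = num + p_congruent_if_unreliable * (1 - prior)
    else:
        num = (1 - p_congruent_if_reliable) * prior
        den = num + (1 - p_congruent_if_unreliable) * (1 - prior)
    return num / den

# Trust rises with congruent decisions and falls after an incongruent one.
trust = 0.5
for congruent in [True, True, False, True]:
    trust = update_trust(trust, congruent)
```

Under this sketch, each congruent decision raises the posterior and each incongruent one lowers it, which matches the qualitative pattern the abstract reports.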
... Based on the CONOPS, specified requirements and consideration of the relevant literature (see Chapter 6 References), identify more general design guidelines to support human-human interactions (supported by automation) and human-automation interaction (Dorneich et al., 2017; Draper et al., 2013; Johnson et al., 2014; Hollnagel and Woods, 2005; Parasuraman and Riley, 1997; Roth et al., 2019; Sheridan, 1992; Shively et al., 2017; Smith and Hoffman, 2018) in such a distributed work system (Chapter 4). Below we present a CONOPS that was developed based on the results of Study 1, which focused on a logistics convoy that was ambushed in an urban setting. ...
Full-text available
This research focused on design concepts for the development of automation to support shared control of multiple unmanned aerial vehicles (UAVs) and the associated sensors used for surveillance in a distributed work system. Scenarios and storyboards were developed to study these design concepts in the context of troops in contact arising during convoy and search and rescue missions where multiple UAVs could be controlled by soldiers locally (at the site of an ambush) or remotely (from a Tactical Operations Center for a battalion or brigade), or by automation. Storyboards were developed for three such scenarios and were used to conduct two cognitive walkthroughs with a total of 9 experienced soldiers. The results were used to complete cognitive task analyses defining roles and responsibilities and associated data and information requirements and to develop requirements for the design of such a distributed system. Generalizations regarding effective design for human-automation interaction in such a distributed work system were identified, emphasizing the need to provide benefit without being intrusive by supporting control and information display at different levels of abstraction through the use of pre-defined, mission-specific plays, by allowing fluid shifts in roles and responsibilities in order to redistribute the work, and by transitioning among different control paradigms (Management by Directive, Management by Permission, Management by Collaboration and Management by Exception).
... As autonomy increases, so do the tasks that can be performed [28], and more emphasis is being placed on how agents can form teams, alongside humans, to achieve a common goal, a.k.a. human-autonomy teaming (HAT) [32,19]. ...
Human-autonomy teaming (HAT) scenarios feature humans and autonomous agents collaborating to meet a shared goal. For effective collaboration, the agents must be transparent and able to share important information about their operation with human teammates. We address the challenge of transparency for Belief-Desire-Intention agents defined in the Conceptual Agent Notation (CAN) language. We extend the semantics to model agents that are observable (i.e. the internal state of tasks is available), and attention-directing (i.e. specific states can be flagged to users), and provide an executable semantics via an encoding in Milner's bigraphs. Using an example of unmanned aerial vehicles, the BigraphER tool, and PRISM, we show and verify how the extensions work in practice.
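The observability and attention-directing extensions described in this abstract can be sketched informally: a task exposes its internal state to human teammates and flags designated states for attention. The class and state names below are illustrative assumptions, not the CAN/bigraph semantics of the cited work:

```python
from enum import Enum

class TaskState(Enum):
    PENDING = "pending"
    ACTIVE = "active"
    DONE = "done"
    FAILED = "failed"

class ObservableTask:
    """A task whose internal state is observable by human teammates and
    which can direct attention to designated states.

    Names and structure are illustrative assumptions, not the formal
    semantics of the cited work.
    """
    def __init__(self, name: str, attention_states=(TaskState.FAILED,)):
        self.name = name
        self.state = TaskState.PENDING          # observable internal state
        self.attention_states = set(attention_states)

    def transition(self, new_state: TaskState) -> bool:
        """Update the state; return True if it should be flagged to users."""
        self.state = new_state
        return new_state in self.attention_states

task = ObservableTask("uav-1 survey leg")
task.transition(TaskState.ACTIVE)               # routine, no flag
needs_attention = task.transition(TaskState.FAILED)  # flagged to operator
```

The cited work verifies such properties formally (via bigraphs and PRISM); the sketch only shows the two interface ideas, observability and attention direction, in executable form.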
Full-text available
Due to the technological progress, increasingly sophisticated and highly automated systems have replaced human roles in the cockpit of commercial aircraft. Consequently, the crew size has been reduced from initially five to two cockpit crew members over the past decades. Nowadays, a captain and a first officer share the tasks throughout the flight by assuming the roles of pilot flying (PF) and pilot monitoring (PM). However, in light of the ongoing technological advancements, the logical next step seems to be a further de-crewing from two-crew operations (TCO) to single-pilot operations (SPO). To provide adequate support for the single pilot, a redesign of the cockpit is required. The present study contributes to this research area by adopting a human-centered perspective and investigating how the PF is affected by the absence of the PM during commercial SPO. A study was conducted in a fixed-base Airbus A320 flight simulator. Fourteen professional pilots participated. Their task was to fly short approach and landing scenarios at Frankfurt Airport both with and without a PM. A 2x3 factorial within-subject design was used with the factors crew (TCO and SPO) and scenario (baseline, turbulence, and abnormal). A combination of quantitative and qualitative data was collected in the form of subjective workload ratings, eye-tracking data, simulator parameters, video recordings, and debriefing interviews. The results showed that workload was not generally higher during SPO but particularly the temporal demand increased significantly. Additionally, checklist usage was less consistent and pilots handled the abnormal scenario differently when the PM was absent. The pilots’ scanning behavior was also significantly affected by the absence of the PM. Pilots had to spend considerably more time scanning secondary instruments at the expense of primary instruments. 
Moreover, transition behavior between the cockpit instruments and the external view was less efficient in SPO and was interpreted in terms of an overload on the pilots’ visual modality. This research will help inform the design of commercial SPO flight decks providing adequate support for the single pilot. Several implications for the design of SPO cockpits are discussed, such as head-up displays, multisensory interfaces, augmented reality glasses, advanced automation, and additional support from ground operators.
Conference Paper
As the aviation industry is actively working on adopting AI for air traffic, stakeholders agree on the need for a human-centered approach. However, automation design is often driven by user-centered intentions, while the development is actually technology-centered. This can be attributed to a discrepancy between the system designers’ perspective and complexities in real-world use. The same can be currently observed with AI applications where most design efforts focus on the interface between humans and AI, while the overall system design is built on preconceived assumptions. To understand potential usability issues of AI-driven cockpit assistant systems from the users’ perspective, we conducted interviews with four experienced pilots. While our participants did discuss interface issues, they were much more concerned about how autonomous systems could be a burden if the operational complexity exceeds their capabilities. Besides commonly addressed human-AI interface issues, our results thus point to the need for more consideration of operational complexities on a system-design level.
Full-text available
The current research discusses transparency as a means to enable trust of automated systems. Commercial pilots (N = 13) interacted with an automated aid for emergency landings. The automated aid provided decision support during a complex task where pilots were instructed to land several aircraft simultaneously. Three transparency conditions were used to examine the impact of transparency on pilots’ trust in the tool. The conditions were: baseline (i.e., the existing tool interface), value (where the tool provided a numeric value for the likely success of a particular airport for that aircraft), and logic (where the tool provided the rationale for the recommendation). Trust was highest in the logic condition, which is consistent with prior studies in this area. Implications for design are discussed in terms of promoting understanding of the rationale for automated recommendations.
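The three transparency conditions (baseline, value, logic) can be mirrored in a small sketch that renders the same recommendation at each level. The message wording, parameter names, and airport/rationale are assumptions for illustration only:

```python
def recommendation_message(airport: str, p_success: float, rationale: str,
                           level: str = "baseline") -> str:
    """Render an emergency-landing recommendation at one of three
    transparency levels, mirroring the study's baseline/value/logic
    conditions. Wording and parameters are illustrative assumptions.
    """
    msg = f"Recommend diverting to {airport}."
    if level == "value":
        # Value condition: add a numeric likelihood of success.
        msg += f" Estimated landing success: {p_success:.0%}."
    elif level == "logic":
        # Logic condition: add the rationale behind the recommendation.
        msg += f" Rationale: {rationale}"
    return msg

print(recommendation_message(
    "KSFO", 0.87,
    "longest runway within glide range, weather above minimums",
    level="logic"))
```

The study's result, that trust was highest in the logic condition, corresponds to the third branch: exposing the rationale, not just a score, is what most supports calibrated trust.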
Full-text available
Armed with a general understanding of the concepts of Information Superiority and Network Centric Warfare, enterprising individuals and organizations are developing new ways of accomplishing their missions by leveraging the power of information and applying network centric concepts. Visions are being created and significant progress is being made. But to date we have been only scratching the surface of what is possible.
Full-text available
The purpose of this paper is to review research pertaining to the limitations and advantages of supervisory control for unmanned systems. We identify and discuss results showing technologies that mitigate the observed problems, such as specialized interfaces and adaptive systems. In the report, we first present an overview of definitions and important terms of supervisory control and human-agent teaming. We then discuss human performance issues in supervisory control of multiple robots with regard to operator multitasking performance, trust in automation, situation awareness, and operator workload. In the following sections, we review research findings for specific areas of supervisory control of multiple ground robots, aerial robots, and heterogeneous robots (using different types of robots in the same mission). In the last section, we review innovative techniques and technologies designed to enhance operator performance and reduce potential performance degradations identified in the literature.
Full-text available
Our aim was to review empirical studies of complacency and bias in human interaction with automated and decision support systems and provide an integrated theoretical model for their explanation. Automation-related complacency and automation bias have typically been considered separately and independently. Studies on complacency and automation bias were analyzed with respect to the cognitive processes involved. Automation complacency occurs under conditions of multiple-task load, when manual tasks compete with the automated task for the operator's attention. Automation complacency is found in both naive and expert participants and cannot be overcome with simple practice. Automation bias results in making both omission and commission errors when decision aids are imperfect. Automation bias occurs in both naive and expert participants, cannot be prevented by training or instructions, and can affect decision making in individuals as well as in teams. While automation bias has been conceived of as a special case of decision bias, our analysis suggests that it also depends on attentional processes similar to those involved in automation-related complacency. Complacency and automation bias represent different manifestations of overlapping automation-induced phenomena, with attention playing a central role. An integrated model of complacency and automation bias shows that they result from the dynamic interaction of personal, situational, and automation-related characteristics. The integrated model and attentional synthesis provide a heuristic framework for further research on complacency and automation bias and design options for mitigating such effects in automated and decision support systems.
As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists: as more autonomy is added to a system and its reliability and robustness increase, operators' situation awareness declines and they become less likely to be able to take over manual control when needed. The human-autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, which are all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated in the model, including human-automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity of control approaches. Recommendations for the design of human-autonomy interfaces are presented and directions for future research discussed.
Objective: The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Background: Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. Method: The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Results: Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. Conclusion: We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Application: Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving.
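The monitoring-frequency measure underlying this study can be sketched as a simple proportion over labeled gaze samples; lower values would correspond to higher trust under the paper's hypothesis. The region labels and the bare proportion metric are illustrative assumptions, not the study's actual eye-tracking pipeline:

```python
def monitoring_frequency(gaze_regions, region: str = "automation_display") -> float:
    """Fraction of gaze samples that fall on the automation display.

    Region names and the simple proportion metric are illustrative
    assumptions; real studies use fixation detection and areas of interest.
    """
    if not gaze_regions:
        return 0.0
    return sum(1 for r in gaze_regions if r == region) / len(gaze_regions)

# Hypothetical sequence of gaze-sample labels during an NDRT episode.
samples = ["ndrt", "ndrt", "automation_display",
           "ndrt", "automation_display", "ndrt"]
freq = monitoring_frequency(samples)  # 2/6, about 0.33
```

A declining value of this proportion across a session would correspond to the reported increase in trust over time.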
The current paper discusses the concept of human-robot interaction through the lens of a model depicting the key elements of robot-to-human and robot-of-human transparency. Robot-to-human factors represent information that the system (which includes the robot but is broader than just the robot) needs to present to users before, during, or after interactions. Robot-of-human variables are factors relating to the human (or to interactions with the human, i.e., teamwork) of which the system needs to communicate an awareness to the users. The paper closes with some potential design implications for the various transparency domains, including training and the human-robot interface (social design, feedback, and display design). © 2013, Association for the Advancement of Artificial Intelligence.
This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.
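The base-rate point about false alarms can be made concrete: even a sensitive detector produces mostly false alarms when the condition it detects is rare. A minimal sketch of the positive predictive value of an alarm, with illustrative rates:

```python
def alarm_ppv(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """Probability that a given alarm is genuine (positive predictive value).

    Shows why ignoring the base rate when setting the false-alarm/omission
    trade-off leads to alarms that 'cry wolf'. Rates are illustrative.
    """
    true_alarms = hit_rate * base_rate
    false_alarms = false_alarm_rate * (1 - base_rate)
    return true_alarms / (true_alarms + false_alarms)

# A 1-in-1000 condition with a 99%-hit, 5%-false-alarm detector:
ppv = alarm_ppv(base_rate=0.001, hit_rate=0.99, false_alarm_rate=0.05)
# ppv is roughly 0.019: about 98% of alarms are false,
# which is exactly the disuse-inducing pattern the abstract describes.
```

Only by lowering the false-alarm rate dramatically (or raising the base rate of genuine events that trigger the alarm) does the alarm become credible enough for operators to act on it.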