Article

Supporting Dynamic Re-Planning in Multiple UAV Control: A Comparison of 3 Levels of Automation


Abstract

Unmanned Aerial Vehicle (UAV) control currently requires multiple operators to supervise the mission of a single vehicle. The goal is to improve this ratio so that a single operator can supervise up to 10 UAVs. Achieving this goal requires the introduction of automated systems that support multitasking and decision-making. However, there is uncertainty about the appropriate level of automation (LOA). The present study compared the re-planning performance of 30 participants, each supervising 9 UAVs, at three LOAs (manual, intermediate, and full automation). Full automation resulted in the best re-planning performance and matched intermediate automation in terms of target detection. The manual condition showed significantly poorer performance on these tasks, especially under high workload, but suffered the smallest loss of UAVs. Subjectively, most participants preferred intermediate automation, which they trusted more than full automation. The findings from this research help inform UAV system design and add to the knowledge base in human-automation collaboration.
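The abstract does not say how the three LOA conditions were implemented. As a rough sketch of what these levels typically mean in the supervisory-control literature, the following hypothetical routing of a re-planning event contrasts the three conditions (all object and method names are invented for illustration, not taken from the study):

```python
from enum import Enum

class LOA(Enum):
    MANUAL = 1        # operator constructs the new route unaided
    INTERMEDIATE = 2  # automation proposes; operator reviews and consents
    FULL = 3          # automation re-plans and executes on its own

def handle_replan_event(loa, planner, operator, uav):
    """Route a re-planning event according to the level of automation.

    `planner`, `operator`, and `uav` are hypothetical stand-ins for the
    automation, the human interface, and the vehicle.
    """
    if loa is LOA.MANUAL:
        route = operator.build_route(uav)       # human does all the work
    elif loa is LOA.INTERMEDIATE:
        proposal = planner.propose_route(uav)   # automation suggests
        route = operator.review(proposal)       # human approves or edits
    else:  # LOA.FULL
        route = planner.propose_route(uav)      # automation decides alone
    uav.execute(route)
```

Under this framing, the consent step in the intermediate condition is the natural place where the higher subjective trust reported in the abstract would arise.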


... This natural mapping (Norman, 1988) was used to minimize the need for training and ensure proper attention allocation by exploiting crossmodal spatial links between vision and touch (Driver & Spence, 1998; Ferris & Sarter, 2008). Three pairs of tactors were placed in the central column to avoid vibrations directly on the spine (see Panjabi, Andersson, Jorneus, Hult, & Mattsson, 1986) and to ensure even and consistent contact on the participant (based on Prinet, Terhune, & Sarter, 2012). Pilot testing ensured that vibrations from single- and dual-tactor stimulation were not perceived differently and that accuracy in detecting and responding to changes in each sector did not differ. ...
... As with the visual indications, the times when tactile targets could appear in the 8.5 s trial were randomized, and when a change occurred it was presented at this higher intensity for the remainder of the trial. Note that the background of the respective UAV feed was highlighted to ensure that participants used the appropriate buttons to respond to tactile trials (Lu et al., 2012; Prinet et al., 2012). Pilot testing of tactile change trials with visual highlighting showed that it did not result in any performance differences (i.e., accuracy) for tactile cues compared with when there was no visual highlighting present. ...
Article
Objective: The present study examined whether tactile change blindness and crossmodal visual-tactile change blindness occur in the presence of two transient types and whether their incidence is affected by the addition of a concurrent task. Background: Multimodal and tactile displays have been proposed as a promising means to overcome data overload and support attention management. To ensure the effectiveness of these displays, researchers must examine possible limitations of human information processing, such as tactile and crossmodal change blindness. Method: Twenty participants performed an unmanned aerial vehicle (UAV) monitoring task that included visual and tactile cues. They completed four blocks of 70 trials each, involving either visual or tactile transients. A search task was added to determine whether increased workload leads to a higher risk of change blindness. Results: The findings confirm that tactile change detection suffers in terms of response accuracy, sensitivity, and response bias in the presence of a tactile transient. Crossmodal visual-tactile change blindness was not observed. Also, the addition of the search task did not impair change detection and in fact helped reduce response bias. Conclusion: Tactile displays can help support multitasking and attention management, but their design needs to account for tactile change blindness. Simultaneous presentation of multiple tactile indications should be avoided as it adversely affects change detection. Application: The findings from this research will help inform the design of multimodal and tactile interfaces in data-rich domains, such as military operations, aviation, and healthcare.
... It is important for system designers to understand how to balance the degree of autonomy with the degree of desired control, keeping in mind human control and cognitive limitations. The design space for such systems includes understanding the role of mental workload, automation reliability, and the system's expected costs and benefits from possible choices of decision/action implementations [14,17]. ...
Article
Full-text available
Unmanned Aerial Vehicles (UAVs), also known as drones, have extensive applications in civilian rescue and military surveillance realms. A common drone control scheme among such applications is human supervisory control, in which human operators remotely navigate drones and direct them to conduct high-level tasks. However, different levels of autonomy in the control system and different operator training processes may affect operators’ performance in task success rate and efficiency. An experiment was designed and conducted to investigate such potential impacts. The results showed that a dedicated supervisory drone control interface tended to increase operators’ successful task completion as compared to an enhanced teleoperation control interface, although this difference was not statistically significant. In addition, using Hidden Markov Models, operator behavior models were developed to further study the impact of operators’ drone control strategies as a function of differing levels of autonomy. These models revealed that people trained in both supervisory and enhanced teleoperation control were not able to determine the right control action at the right time to the same degree as people trained only in the supervisory control mode. Future work is needed to determine how trust plays a role in such settings.
... Designing for a human-oriented semi-autonomy fulfills best the requirements of keeping the user in the decision loop while automating non-critical tasks. Furthermore, trust in intermediate automation is higher than trust in fully automated systems [11]. Naturally, higher automation means less interaction required [12]. ...
Conference Paper
Full-text available
The military benefit of unmanned reconnaissance for infantry as the most exposed military branch is obvious. Furthermore, unmanned systems can also provide support by transporting heavy equipment, including sensor payloads usually not fielded by infantry units. While larger assets are typically controlled from afar, smaller assets can be controlled directly by nearby troops and satisfy immediate reconnaissance needs. In this work, the design, implementation and evaluation of a user interface (UI) for integrating unmanned platforms into the German army’s infantry platoons is presented. More specifically, two unmanned aerial vehicles and two unmanned ground vehicles were to be integrated into a platoon. This work highlights the user interface aspects, training effort and organizational changes. German paratroopers and mountain infantry assisted with the requirements analysis and UI evaluation. In addition, the German Army Concepts and Capabilities Development Center supported the evaluation. The effort to bring unmanned systems into infantry units is motivated, related work concerning the control of unmanned systems is presented, the results of the requirements elicitation for this undertaking are reported, the design and implementation as well as the instruction strategy are outlined, and the results of a test campaign are reported. The paper concludes by summing up the current state and outlining future work regarding UI development for soldier-multi-robot teams.
... However, cognitive aids and warning systems in many operational settings become underutilised, either entirely or within specific scenarios. Prior research has shown that devices designed to support performance are not necessarily beneficial to operators, and their success may be contingent upon a number of factors including display complexity, level of automation (Cummings and Mitchell 2005; Prinet et al. 2012), automation transparency (Nunes 2003; Nunes and St-Cyr 2003), attentional engagement (Szafir and Mutlu 2012; Iani and Wickens 2007), nature of work domain (Vicente 1999), and operator trust (Hoffman et al. 2013), to name a few. Contextual factors can influence the effectiveness of cognitive aids. ...
Article
Full-text available
Task performance in uncertain and complex environments depends on human effectiveness in dynamic prioritisation and attention allocation. Non-routine critical events place a high demand on operators' cognitive resources and require them to dynamically re-prioritise sub-tasks, creating a need for predictive aids. Predictive aids provide real-time prediction of system variables and can potentially facilitate the detection of non-routine critical events and dynamic re-prioritisation during such events. However, prior research has reported mixed findings on their effectiveness, and no prior research has tested predictive aids during non-routine critical events that require operators to dynamically prioritise tasks. Our experimental study with 77 participants examined the effect of a predictive aid on prioritisation of non-routine critical events in cyber security event monitoring. The predictive aid resulted in decrements in sustained attention, seen in delayed detection of some non-routine critical events and errors in prioritising them. This sustained attention decrement is a result of miscalibration in the importance of looking ahead with the predictive aid. This miscalibration was possibly associated with the look-ahead time, the complexity of the predictive aid, and the lack of automation transparency (i.e. the mechanism underlying the prediction) together affecting its perceived usefulness. The experimental results have the following implications for the design and testing of predictive aids: (i) in a task requiring dynamic prioritisation and detection of non-routine critical events, predictive aids with insufficient look-ahead times can result in decrements in sustained attention; (ii) predictive aids should be evaluated with subjective measures of perceived usefulness and workload; (iii) predictive aids should be designed to have sufficient look-ahead times and transparency about their underlying prediction mechanism, as these affect visual scanning effort and perceived usefulness; and (iv) experimental tests of their effectiveness should involve scenarios long enough to take sustained attention effects and any potential learning effects into consideration.
... The majority of UAV swarming research is focused on improving or increasing the degree of automation [5]. This includes investigating different Levels of Automation [9][10][11] or different forms of collaboration between operators and the automation [12]. While these studies show good results, they mostly overlook the human-machine interface and therefore underestimate the positive influence visualizations can have on human information processing and system understanding [13]. ...
Article
Full-text available
Advances in miniaturized computer technology have made it possible for a single Unmanned Aerial Vehicle (UAV) to complete its mission autonomously. This has also sparked interest in having swarms of UAVs cooperate as a team on a single mission. The level of automation involved in the control of UAV swarms will also change the role of the human operator. That is, instead of manually controlling the movements of the individual UAVs, the system operator will need to perform higher-level mission management tasks. However, most ground control stations are still tailored to the control of single UAVs by portraying raw flight status data on cockpit-like instruments. In this paper, the ecological interface design paradigm is used to enhance the human-machine interface of a ground control station to support mission management for UAV swarms. As a case study, a generic ground-surveillance mission with four UAVs is envisioned. A preliminary evaluation study with 10 participants showed that the enhanced interface successfully enables operators to control a swarm of four UAVs and to resolve failures during mission execution. The results of the evaluation study showed that the interface enhancements promoted creative problem solving in scenarios that could not have been solved by following a fixed procedure. However, the results also showed that the current interface still required control actions to be performed per single UAV, making it labor-intensive to change mission parameters for swarms consisting of more than four UAVs.
Article
Objective: The goal of the present study was to develop and empirically evaluate three countermeasures to tactile change blindness (where a tactile signal is missed in the presence of a tactile transient). Each of these countermeasures relates to a different cognitive step involved in successful change detection. Background: To date, change blindness has been studied primarily in vision, but there is limited empirical evidence that the tactile modality may also be subject to this phenomenon. Change blindness raises concerns regarding the robustness of tactile and multimodal interfaces. Method: Three countermeasures to tactile change blindness were evaluated in the context of a highly demanding monitoring task. One countermeasure was proactive (alerting the participant to a possible change before it occurred) whereas the other two were adaptive (triggered after the change upon an observed miss). Performance and subjective data were collected. Results: Compared to the baseline condition, all countermeasures improved intramodal tactile change detection. Adaptive measures resulted in the highest detection rates, specifically when signal gradation was employed (i.e., when the intensity of the tactile signal was increased after a miss was observed). Conclusion: Adaptive displays can be used to counter the effects of change blindness and ensure that tactile information is reliably detected. Increasing the tactile intensity after a missed change appears most promising and was the preferred countermeasure. Application: The findings from this study can inform the design of interfaces employing the tactile modality to support monitoring and attention management in data-rich domains.
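The preferred countermeasure, signal gradation, amounts to a simple feedback rule: re-present the missed tactile change at a higher intensity until it is detected. A minimal sketch, with all numeric values assumed for illustration rather than taken from the study:

```python
def graded_intensity(base, step, max_level, misses_observed):
    """Adaptive signal gradation: raise the tactile drive level each time
    a change is missed, capped at the display's maximum. The base level,
    step size, and cap are illustrative assumptions."""
    return min(base + step * misses_observed, max_level)

# Re-present the missed change at increasing intensity until detected.
for misses in range(4):
    level = graded_intensity(base=0.4, step=0.2, max_level=1.0,
                             misses_observed=misses)
    print(f"misses={misses} -> intensity={level:.1f}")
```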
Article
Full-text available
In this chapter, I review research involving remote human supervision of multiple unmanned vehicles (UVs) using command complexity as an organizing construct. Multi-UV tasks range from foraging, requiring little coordination among UVs, to formation following, in which UVs must function as a cohesive unit. Command complexity, the degree to which operator effort increases with the number of supervised UVs, is used to categorize human interaction with multiple UVs. For systems in which each UV requires the same form of attention (O(n)), effort increases linearly with the number of UVs. For systems in which the control of one UV is dependent upon another (O(>n)), additional UVs impose greater-than-linear increases in effort due to the expense of coordination. For other systems, an operator interacts with an autonomously coordinating group, and effort is unaffected by group size (O(1)). Studies of human/multi-UV interaction can be roughly grouped into O(n) supervision, involving one-to-one control of individual UVs, or O(1) commanding, in which higher-level commands are directed to a group. Research in O(n) command has centered on round-robin control, neglect tolerance, and attention switching. Approaches to O(1) command are divided into systems using autonomous path planning only, plan libraries, human-steered planners, and swarms. Each type of system has its advantages. Less complete work on scalable displays for multiple UVs is also reviewed. Mixing levels of command is probably necessary to supervise multiple UVs performing realistic tasks. Research in O(n) control is mature and can provide quantitative and qualitative guidance for design. Interaction with planners and swarms is less mature but more critical to developing effective multi-UV systems capable of performing complex tasks.
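The command-complexity classes can be made concrete with a toy effort model. In the sketch below, the unit cost and the coordination exponent are illustrative assumptions, not figures from the chapter:

```python
def operator_effort(n_uvs, complexity_class, unit_cost=1.0, coord_exp=1.5):
    """Toy model of operator effort versus team size.

    O(1):  effort is constant regardless of group size (group-level commands).
    O(n):  effort grows linearly (round-robin attention to each UV).
    O(>n): coordination between UVs makes effort grow faster than linearly;
           the exponent 1.5 is an arbitrary illustrative choice.
    """
    if complexity_class == "O(1)":
        return unit_cost
    if complexity_class == "O(n)":
        return unit_cost * n_uvs
    if complexity_class == "O(>n)":
        return unit_cost * n_uvs ** coord_exp
    raise ValueError(f"unknown complexity class: {complexity_class}")

for cls in ("O(1)", "O(n)", "O(>n)"):
    print(cls, [round(operator_effort(n, cls), 1) for n in (1, 4, 9)])
```

The divergence between these curves as the team grows is why the chapter argues that O(1) commanding via planners and swarms becomes critical for larger teams.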
Article
Full-text available
Recent technological advances have made viable the implementation of intelligent automation in advanced tactical aircraft. The use of this technology has given rise to new human factors issues and concerns. Errors in highly automated aircraft have been linked to the adverse effects of automation on the pilot's system awareness, monitoring workload, and ability to revert to manual control. However, adaptive automation, or automation that is implemented dynamically in response to changing task demands on the pilot, has been proposed to be superior to systems with fixed, or static, automation. This report examines several issues concerning the theory and design of adaptive automation in aviation systems, particularly as applied to advanced tactical aircraft. An analysis of the relative costs and benefits of conventional (static) aviation automation provides the starting point for the development of a theory of adaptive automation. This analysis includes a review of the empirical studies investigating effects of automation on pilot performance. The main concepts of adaptive automation are then introduced, and four major methods for implementing adaptive automation in the advanced cockpit are described and discussed. Keywords: aircraft automation, pilot situational awareness, aviation human factors, pilot workload.
Article
Full-text available
This study examined the effectiveness of using informative peripheral visual and tactile cues to support task switching and interruption management. Effective support for the allocation of limited attentional resources is needed for operators who must cope with numerous competing task demands and frequent interruptions in data-rich, event-driven domains. One prerequisite for meeting this need is to provide information that allows them to make informed decisions about, and before, (re)orienting their attentional focus. Thirty participants performed a continuous visual task. Occasionally, they were presented with a peripheral visual or tactile cue that indicated the need to attend to a separate visual task. The location, frequency, and duration parameters of these cues represented the domain, importance, and expected completion time, respectively, of the interrupting task. The findings show that the informative cues were detected and interpreted reliably. Information about the importance (rather than duration) of the task was used by participants to decide whether to switch attention to the interruption, indicating adherence to experimenter instructions. Erroneous task-switching behavior (nonadherence to experimenter instructions) was mostly caused by misinterpretation of cues. The effectiveness of informative peripheral visual and tactile cues for supporting interruption management was validated in this study. However, the specific implementation of these cues requires further work and needs to be tailored to specific domain requirements. The findings from this research can inform the design of more effective notification systems for a variety of complex event-driven domains, such as aviation, medicine, or process control.
Article
Full-text available
The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.
Article
Full-text available
Technological developments have made it possible to automate more and more functions on the commercial aviation flight deck and in other dynamic high-consequence domains. This increase in the degrees of freedom in design has shifted questions away from narrow technological feasibility. Many concerned groups, from designers and operators to regulators and researchers, have begun to ask questions about how we should use the possibilities afforded by technology skillfully to support and expand human performance. In this article, we report on an experimental study that addressed these questions by examining pilot interaction with the current generation of flight deck automation. Previous results on pilot-automation interaction derived from pilot surveys, incident reports, and training observations have produced a corpus of features and contexts in which human-machine coordination is likely to break down (e.g., automation surprises). We used these data to design a simulated flight scenario that contained a variety of probes designed to reveal pilots' mental model of one major component of flight deck automation: the Flight Management System (FMS). The events within the scenario were also designed to probe pilots' ability to apply their knowledge and understanding in specific flight contexts and to examine their ability to track the status and behavior of the automated system (mode awareness). Although pilots were able to 'make the system work' in standard situations, the results reveal a variety of latent problems in pilot-FMS interaction that can affect pilot performance in nonnormal time critical situations.
Article
Full-text available
The authors discuss empirical studies of human-automation interaction and their implications for automation design. Automation is prevalent in safety-critical systems and increasingly in everyday life. Many studies of human performance in automated systems have been conducted over the past 30 years. Developments in three areas are examined: levels and stages of automation, reliance on and compliance with automation, and adaptive automation. Automation applied to information analysis or decision-making functions leads to differential system performance benefits and costs that must be considered in choosing appropriate levels and stages of automation. Human user dependence on automated alerts and advisories reflects two components of operator trust, reliance and compliance, which are in turn determined by the threshold designers use to balance automation misses and false alarms. Finally, adaptive automation can provide additional benefits in balancing workload and maintaining the user's situation awareness, although more research is required to identify when adaptation should be user controlled or system driven. The past three decades of empirical research on humans and automation has provided a strong science base that can be used to guide the design of automated systems. This research can be applied to most current and future automated systems.
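The threshold that designers use to balance automation misses and false alarms can be illustrated with a textbook equal-variance Gaussian signal detection model; this sketch is standard theory, not material from the article:

```python
from statistics import NormalDist

def alert_rates(threshold, d_prime=1.5):
    """Miss and false-alarm rates for an alerting threshold, assuming
    unit-variance Gaussian noise centered at 0 and signal at d_prime
    (both values are illustrative)."""
    noise, signal = NormalDist(0, 1), NormalDist(d_prime, 1)
    false_alarm = 1 - noise.cdf(threshold)  # noise alone exceeds threshold
    miss = signal.cdf(threshold)            # true signal fails to exceed it
    return miss, false_alarm

# Raising the threshold trades false alarms (eroding compliance) for
# misses (eroding reliance).
for t in (0.25, 0.75, 1.25):
    m, fa = alert_rates(t)
    print(f"threshold={t:.2f}  miss={m:.2f}  false_alarm={fa:.2f}")
```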
Article
Full-text available
Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
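The model's four function classes pair naturally with a per-stage level setting, and a single system can sit at a different level for each stage. A minimal data-structure illustration, where the 10-point scale echoes the commonly used Sheridan-style continuum and the profile values are invented:

```python
# The four function classes from the model, each automatable on a
# low-to-high continuum (here 1 = fully manual, 10 = fully automatic).
STAGES = (
    "information acquisition",
    "information analysis",
    "decision and action selection",
    "action implementation",
)

# A hypothetical automation profile: heavily automated sensing and
# analysis, moderately automated decision selection, mostly manual action.
profile = {
    "information acquisition": 8,
    "information analysis": 7,
    "decision and action selection": 4,
    "action implementation": 2,
}

assert set(profile) == set(STAGES)
assert all(1 <= level <= 10 for level in profile.values())
```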
Article
Full-text available
We tested the hypothesis that automation errors on tasks easily performed by humans undermine trust in automation. Research has revealed that the reliability of imperfect automation is frequently misperceived. We examined the manner in which the easiness and type of imperfect automation errors affect trust and dependence. Participants performed a target detection task utilizing an automated aid. In Study 1, the aid missed targets either on easy trials (easy miss group) or on difficult trials (difficult miss group). In Study 2, we manipulated both easiness and type of error (miss vs. false alarm). The aid erred on either difficult trials alone (difficult errors group) or on difficult and easy trials (easy miss group; easy false alarm group). In both experiments, easy errors led to participants mistrusting and disagreeing more with the aid on difficult trials, as compared with those using aids that generated only difficult errors. This resulted in a downward shift in decision criterion for the former, leading to poorer overall performance. Misses and false alarms led to similar effects. Automation errors on tasks that appear "easy" to the operator severely degrade trust and reliance. Potential applications include the implementation of system design solutions that circumvent the negative effects of easy automation errors.
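The "downward shift in decision criterion" reported here is a standard signal detection quantity. A minimal computation of sensitivity d' and criterion c from hit and false-alarm rates; the rates below are invented for illustration:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard-normal quantile function

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection theory: sensitivity d' and
    criterion c. A more negative c means a more liberal (lower)
    decision criterion."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented rates: after "easy" automation errors, participants disagree
# with the aid more and respond more liberally, pushing c downward.
print(sdt_measures(0.80, 0.20))  # baseline: d' ~ 1.68, c = 0.00
print(sdt_measures(0.85, 0.35))  # shifted:  d' ~ 1.42, c ~ -0.33
```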
Article
Automation is often problematic because people fail to rely upon it appropriately. Because people respond to technology socially, trust influences reliance on automation. In particular, trust guides reliance when complexity and unanticipated situations make a complete understanding of the automation impractical. This review considers trust from the organizational, sociological, interpersonal, psychological, and neurological perspectives. It considers how the context, automation characteristics, and cognitive processes affect the appropriateness of trust. The context in which the automation is used influences automation performance and provides a goal-oriented perspective to assess automation characteristics along a dimension of attributional abstraction. These characteristics can influence trust through analytic, analogical, and affective processes. The challenges of extrapolating the concept of trust in people to trust in automation are discussed. A conceptual model integrates research regarding trust in automation and describes the dynamics of trust, the role of context, and the influence of display characteristics. Actual or potential applications of this research include improved designs of systems that require people to manage imperfect automation.
Article
As complex automated aids proliferate in transportation and manufacturing domains, examining human users' trust in such systems gains importance. We review some of the growing literature on trust in automated systems, and outline a program for future studies and theoretical developments. Trust is an intervening variable between automation reliability and use, among other factors. Consistent reliable machine performance can increase trust, and discrete errors can decrease trust. Trust tends to resist change over time. The association between trust and subsequent usage is positive but not clear-cut, and may be mediated by risk and self-confidence. The place of trust in the overall picture of human-automation system performance must be established. The suggested research program accomplishes this by investigating training issues and individual differences, employing new measures, and examining dynamics of trust and usage in automation possessing different reliability in different situations, automation with multiple modes, and adaptive automation.
Article
This paper addresses theoretical, empirical, and analytical studies pertaining to human use, misuse, disuse, and abuse of automation technology. Use refers to the voluntary activation or disengagement of automation by human operators. Trust, mental workload, and risk can influence automation use, but interactions between factors and large individual differences make prediction of automation use difficult. Misuse refers to overreliance on automation, which can result in failures of monitoring or decision biases. Factors affecting the monitoring of automation include workload, automation reliability and consistency, and the saliency of automation state indicators. Disuse, or the neglect or underutilization of automation, is commonly caused by alarms that activate falsely. This often occurs because the base rate of the condition to be detected is not considered in setting the trade-off between false alarms and omissions. Automation abuse, or the automation of functions by designers and implementation by managers without due regard for the consequences for human performance, tends to define the operator's roles as by-products of the automation. Automation abuse can also promote misuse and disuse of automation by human operators. Understanding the factors associated with each of these aspects of human use of automation can lead to improved system design, effective training methods, and judicious policies and procedures involving automation use.
Article
Remotely operated vehicles (ROVs) are vehicular robotic systems that are teleoperated by a geographically separated user. Advances in computing technology have enabled ROV operators to manage multiple ROVs by means of supervisory control techniques. The challenge of incorporating telepresence in any one vehicle is replaced by the need to keep the human "in the loop" of the activities of all vehicles. An evaluation was conducted to compare the effects of automation level and decision-aid fidelity on the number of simulated remotely operated vehicles that could be successfully controlled by a single operator during a target acquisition task. The specific ROVs instantiated for the study were unmanned air vehicles (UAVs). Levels of automation (LOAs) included manual control, management-by-consent, and management-by-exception. Levels of decision-aid fidelity (100% correct and 95% correct) were achieved by intentionally injecting error into the decision-aiding capabilities of the simulation. Additionally, the number of UAVs to be controlled varied (one, two, and four vehicles). Twelve participants acted as UAV operators. A mixed-subject design was utilized (with decision-aid fidelity as the between-subjects factor), and participants were not informed of decision-aid fidelity prior to data collection. Dependent variables included mission efficiency, percentage correct detection of incorrect decision aids, workload and situation awareness ratings, and trust in automation ratings. Results indicate that an automation level incorporating management-by-consent had some clear performance advantages over the more autonomous (management-by-exception) and less autonomous (manual control) levels of automation. However, automation level interacted with the other factors for subjective measures of workload, situation awareness, and trust. Additionally, although a 3D perspective view of the mission scene was always available, it was used only during low-workload periods and did not appear to improve the operator's sense of presence. The implications for ROV interface design are discussed, and future research directions are proposed.