Article · Literature Review

A meta-analysis of human-system interfaces in unmanned aerial vehicle (UAV) swarm management

Authors:
Amy Hocraffer · Chang S. Nam

Abstract

A meta-analysis was conducted to systematically evaluate the current state of research on human-system interfaces for users controlling semi-autonomous swarms composed of groups of drones or unmanned aerial vehicles (UAVs). UAV swarms pose several human factors challenges, such as high cognitive demands, non-intuitive behavior, and serious consequences for errors. This article presents findings from a meta-analysis of 27 UAV swarm management papers focused on the human-system interface and human factors concerns, providing an overview of the advantages, challenges, and limitations of current UAV management interfaces, as well as information on how these interfaces are currently evaluated. In general, allowing user- and mission-specific customization of user interfaces and raising the swarm’s level of autonomy to reduce operator cognitive workload are beneficial and improve situation awareness (SA). It is clear that more research is needed in this rapidly evolving field.

... • change mission objectives (Kolling et al., 2016) Human-swarm interaction has recently started to receive considerable attention; see Kolling et al. (2016); Mi and Yang (2013); Hocraffer and Nam (2017) for surveys. Several control strategies were proposed by previous studies to allow the human to take different roles in swarm operations (Adams, 2009;Pendleton & Goodrich, 2013;Kolling et al., 2012;Nam, Walker, Lewis, & Sycara, 2017). ...
... This level of autonomy will require the swarm to be able to deal with unstructured, dynamic, and uncertain environments, as is the case in most real environments. As most swarm experiments have been conducted in lab or controlled environments, fully autonomous swarms are still far from reach (Ferrer, 2017;Hocraffer & Nam, 2017;Mi & Yang, 2013). The next two autonomy levels imply shared task performance. ...
... Given the current capabilities of robot swarms, human-oriented semi-autonomous settings could be a prudent choice as higher autonomy levels exceed swarm capabilities while the lower level dismisses swarm abilities to perform low-level actions. This is also consistent with the results of previous HSI research which showed that in some swarm tasks humans perform better when they act as supervisors than operators (Hocraffer & Nam, 2017). ...
Thesis
Full-text available
Human-swarm interaction (HSI) is a research area that studies how human and swarm capabilities can be combined to successfully perform tasks that exceed the performance limits of single robot systems. The main objective of this thesis is to improve the success of HSI by improving the effectiveness of three interdependent elements: the swarm decision-making algorithm in performing its tasks, human interventions in swarm operation, and the interface between the human and the swarm. Swarm decision-making warrants investigation as the state-of-the-art algorithms perform very poorly under some conditions. Analysing the root causes of such failures reveals that a key performance inhibitor is the unreliable estimation of swarm members’ confidence in their judgements. Two different approaches are proposed to circumvent the identified issues. Performance evaluation under different conditions demonstrates the merits of the proposed approaches and shows that profound improvements to the effectiveness and efficiency of swarm decision-making are possible through the reliable estimation of confidence. Improving swarm effectiveness begets significant benefits to mission performance, but it can negatively affect the effectiveness of human interventions. Previous research has shown that when interacting with a highly reliable machine, humans tend to over-rely on the machine and exhibit notable complacency that limits their ability to detect and fix machine errors. Although over-trust in automation is widely blamed for such complacency, this attribution is yet to be empirically confirmed. This gap is addressed through an empirical investigation of trust in HSI. The results confirm the significant role of trust as a predictor of human reliance on the swarm, which suggests that designing trust-aware HSI systems may reduce the negative impacts of human reliance. Utilising a highly reliable swarm while maintaining human vigilance is an objective that might not be possible without an effective human-swarm interface. As automation transparency has proven useful for boosting human understanding of machine operations, it could facilitate human awareness of machine limitations and possible failures. Thus, the thesis empirically examines the efficacy of swarm transparency as a potential intervention for minimising human complacency. The results assert the benefits of transparency in ensuring continued human contributions to the mission even when a highly reliable swarm is used.
... Brambilla et al., (2013) propose that swarm robotics is based on the following principles: robots are autonomous, are situated in the environment and can adapt their behaviour to modify it, have local sensing and communication capabilities, do not have access to centralised control or global knowledge and can cooperate to fulfil a mission. However, there are other branches of swarm robotics that will preserve the role of the human operator (both in terms of operation and supervision) (e.g., Hocraffer & Nam, 2017). In such contexts, the role of the human operator will remain integral to the success of such systems (Dousse et al., 2016). ...
... However, the level of human control is mediated by the role in which they assume within the system itself (i.e., there are obvious differences between supervising a swarm of UAVs and manually controlling them). Given that human agents will continue to be involved in swarm management (Hocraffer & Nam, 2017), they must be properly supported in maintaining meaningful human control (Boardman & Butcher, 2019). Typically, the concept of meaningful control is used within ethical, legal and political debates (Santoni de Sio & Van den Hoven, 2018). ...
Conference Paper
It is widely recognised that multiple autonomous agents operating together as part of a team, or swarm, could be used to assist in a variety of situations including search and rescue missions, warehouse operations and a number of military scenarios. From a sociotechnical perspective, these scenarios depict situations in which non-human and human agents are likely to work together in order to achieve a common goal. Unmanned Aerial Vehicles (UAVs) are often viewed as a convenient and cost-effective way to gather information that is not easily accessible by any other means, and we are beginning to see increasing efforts to scale up the autonomy of single-UAV systems to create aerial swarms. Compared to a single robot, a swarm can provide a more efficient means to cover large areas and is scalable (i.e., individual robots can easily be added or removed without significantly impacting the performance of the remaining group). Despite this, there has been some concern that Human Factors research into human-swarm partnerships is lacking. Thus, in order to understand the current ‘state of the art’, a systematic literature review was conducted to explore what Human Factors research is being conducted within the area of human-swarm partnerships and what design guidance exists to support the development of efficient and effective relationships. The initial search returned 143 articles. Duplicates were first removed and then the screening process involved filtering articles by title, then by abstract, and finally by full text. This approach led to 55 articles being retained. Inductive coding was used to identify themes within the text. This provided greater insight into the current focus of research within the context of human-swarm partnerships. A total of five themes were identified: interaction strategies, user interface design, management, operator monitoring and trust. However, the review also found that when it comes to design guidance, very little is available. One potential avenue for future research centres on the concepts of Meaningful Human Control (MHC) and Effective Human Control (EHC). These concepts have been recognised as providing the foundation on which the design of human-swarm partnerships may be developed. This is because human agents are still likely to play a pivotal role in overall mission success and as such should retain full decisional awareness and possess a comprehensive understanding of the context of action in order for control to be meaningful. This implicates four of the research themes identified as part of this review: interaction strategies, user interface design, management and trust. Operator monitoring, the final theme identified as part of this review, is indirectly linked to MHC and EHC because it acts as the mechanism by which operator engagement can be augmented. Arguably then, the building blocks to achieve MHC and EHC are beginning to take shape. However, more research is needed to bring this all together in the quest for efficient and effective relationships between human agents and robot counterparts.
... Regarding detailed control systems, for example, [19] have provided a review of human-system interface (HSI) solutions for the management of swarms of drones. Their main conclusion from this review was that allowing user and mission-specific customization to user interfaces and raising the swarm's level of autonomy to reduce the cognitive workload shouldered by the operator are beneficial and improve operators' situation awareness [19]. ...
Chapter
It is widely recognised that swarms are the likely next step for Unmanned Aerial Vehicle (UAV) or drone technology. Although substantially increased autonomy for navigation, data collection and decision-making is very much part of the “collective artificial intelligence” vision, this expected development raises questions about the most productive form of interaction between the swarm and its human operator(s). On the one hand, low-level “micro-management” of every unit clearly nullifies many of the advantages of using swarms. On the other, retaining an ability to exercise some control over the swarm’s objectives and real-time behaviour is obviously paramount. We present two families of control methods, direct and indirect, that we believe could be used to design suitable, i.e. simultaneously intuitive, easy to use, powerful and flexible, Graphical User Interfaces (GUI) that would allow a single operator to choreograph a swarm’s actions. Simulation results are used to illustrate the concept and perform a quantitative performance analysis of both control methods in different scenarios. Human factors aspects related to drone swarm control are identified and both control methods are discussed from the human operator’s usage point of view. We conclude that the direct approach is more suitable over short time-scales (“tactical” level), whilst indirect methods allow to specify more abstract long-term objectives (“operational” level), making them naturally complementary.
... Linked publications are frequently interested in the development of semi-autonomous drones [37], [38], machine learning techniques [39], and human-computer interaction. Often focusing on civilian environments, some scholars pay special attention to disruptive situations [40] or warfare technology [41]. Still, many engineering studies are interested in optimizing automatic or autonomous processes and robotics, disregarding ethical questions or highlighting the potential of LAWS [42]- [45]. ...
... It is therefore critical to avoid automatization of firing when identification is seen as successfully completed. Options of multi-channel communication between operator and system are already under research [41]. Yet, with respect to assuring MHC, it is a) important to ensure the option of intervention, while b) active confirmation or denial of an attack might reduce the risk of technology-biased behavior. ...
Article
Full-text available
The debate on the development and deployment of lethal autonomous weapon systems (LAWS) as an emerging technology is of increasing importance, with discussions stalling and technological development progressing. Monitoring the progress of increasingly autonomous weapons systems in civilian and military use as well as regulating possible autonomous systems early on is demanded by civil society actors, like the Campaign to Stop Killer Robots and the International Committee of the Red Cross (ICRC), while nation states follow a variety of interests and strategies, showing little room for consensus on central terms and questions [2], [3]. This article therefore sheds light on the work of the Group of Governmental Experts (GGE) of the UN Convention of Certain Conventional Weapons (CCW). The CCW, offering an arena for international cooperation, has dedicated itself to the purpose of finding common ground with respect to an understanding of LAWS, as well as to the necessary degree of human control. From an ethical perspective, the concept of Meaningful Human Control (MHC) supports a human-centric approach. Several IEEE projects, series and publications are dedicated to this prioritization, especially regarding civilian use. As autonomous technology is increasingly at the center of contemporary military innovations, questions of (human) agency and responsibility in warfare have become even more pressing. As stressed by the United Nations Institute for Disarmament Research (UNIDIR), the concept of MHC may prove useful in the context of development and use of (semi-) autonomous weaponry.
... This makes it easy for operators to ignore key information, causing human error, resulting in the loss of the UAV [4][5][6]. Thus, in a complex environment, a reasonable interface is one of the main ways to effectively improve the comfort, safety, and efficiency of an HCI in the process of a UAV confrontation [7,8]. In recent years, researchers have carried out in-depth research on optimization methods and intelligent algorithms for solving the problems of the HCI interface. ...
Article
Full-text available
In modern warfare, it is often necessary for the operator to control the UAV cluster from a ground control station to perform an attack task. However, the absence of an effective method for optimizing the human–computer interface in ground control stations for UAV clusters leads to usability difficulties and heightens the probability of human errors. Hence, we propose an optimization framework for human–computer interaction interfaces within UAV ground control stations, rooted in interface-essential elements. Specifically, the interface evaluation model was formulated by combining the Salience, Effort, Expectancy, and Value (SEEV) framework with the essential factor mutation cost of the quantified interface. We employed the SEEV–ant colony algorithm to address the challenge of optimizing the interface design within this context. For a typical UAV cluster attack mission, we optimized the human–computer interaction interfaces of the three mission stages based on the proposed SEEV-AC model. We conducted extensive simulation experiments in these optimized interfaces, and used eye-movement indicators to evaluate the effectiveness of the interface optimization model. Based on the experimental results, divergence is reduced by 11.59%, and the fitness of the optimized interface is increased from 1.34 to 3.42. The results show that the proposed intelligent interface optimization method can effectively improve the interface design and reduce the operator’s workload.
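To make the SEEV idea above concrete, here is a minimal sketch of how a SEEV-style attention score could be computed for interface areas of interest; the coefficient weights and AOI attribute values are illustrative assumptions, not figures from the cited study.

```python
# Illustrative SEEV-style attention-allocation score for interface areas of
# interest (AOIs). Coefficients and AOI attributes are made-up placeholders,
# not values from the cited study.

def seev_score(salience, effort, expectancy, value,
               w_s=1.0, w_ef=1.0, w_ex=1.0, w_v=1.0):
    """Higher score ~ more likely to attract operator attention.
    Effort enters negatively: hard-to-reach AOIs are sampled less often."""
    return w_s * salience - w_ef * effort + w_ex * expectancy + w_v * value

# Hypothetical AOIs on a UAV ground-control display (attributes scaled 0-1).
aois = {
    "map":         dict(salience=0.6, effort=0.2, expectancy=0.8, value=0.9),
    "alert_panel": dict(salience=0.9, effort=0.4, expectancy=0.3, value=0.8),
    "telemetry":   dict(salience=0.3, effort=0.6, expectancy=0.5, value=0.4),
}

scores = {name: seev_score(**attrs) for name, attrs in aois.items()}
total = sum(scores.values())
attention_share = {name: s / total for name, s in scores.items()}
print(attention_share)
```

An interface optimiser in this spirit would then search over layouts so that high-value AOIs receive a larger predicted attention share at lower effort.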
... The integration of UAVs into commercial industries has led to the replacement of manned aircraft in several tasks, resulting in cost savings and improved performance (Gupta et al., 2021). Unlike manned aircraft, UAVs are not affected by human factors such as fatigue and stress, allowing them to perform repetitive or prolonged tasks efficiently and at a lower cost (Hocraffer and Nam, 2017). UAVs have been widely adopted in several sectors such as agriculture, mining, public works investigations, forest fires, filming, and mapping (Gupta et al., 2013). ...
Article
Commercial aerial drone systems have gained significance in various industries, such as agriculture, infrastructure monitoring, cargo transport, security, and filming. This paper comprehensively reviews the trends and innovative research areas in commercial drone technology. The top five usage areas of drones are surveyed, and a master list of drone usage areas in the commercial sectors is provided. Furthermore, the current bottlenecks of the drone industry, such as endurance problems, noise issues, mid-air collision risks, navigation problems, insurance policies, and a lack of technical education opportunities, are evaluated. Finally, a list of innovative and trending research areas for drone technology is presented, which includes alternative energy sources, wireless charging methods, hybrid VTOL-drones, bio-inspired designs, and more. This paper serves as a valuable resource for entrepreneurs and researchers interested in drone-related technologies, and it has the potential to guide the future industry development.
... In an accident, we experienced sudden and erratic altitude swings during takeoff, which required the operator to perform an impossibly complex series of actions in order to gain manual control before the sUAS plunged to the ground. DRV must support human interaction testing by allowing users to connect their interactive devices, such as Radio Controllers, to the simulation environment [54], [55]. − Sensor and Hardware Issues: While the primary aim of DRV is to test the safe operation of sUAS software applications and deployments, hardware failures, sometimes confounded by environmental factors, are often the primary cause, or a clear contributor, to an accident [21]. ...
Preprint
Full-text available
Flight-time failures of small Uncrewed Aerial Systems (sUAS) can have a severe impact on people or the environment. Therefore, sUAS applications must be thoroughly evaluated and tested to ensure their adherence to specified requirements, and safe behavior under real-world conditions, such as poor weather, wireless interference, and satellite failure. However, current simulation environments for autonomous vehicles, including sUAS, provide limited support for validating their behavior in diverse environmental contexts and, moreover, lack a test harness to facilitate structured testing based on system-level requirements. We address these shortcomings by eliciting and specifying requirements for an sUAS testing and simulation platform, and developing and deploying it. The constructed platform, DroneReqValidator (DRV), allows sUAS developers to define the operating context, configure multi-sUAS mission requirements, specify safety properties, and deploy their own custom sUAS applications in a high-fidelity 3D environment. The DRV Monitoring system collects runtime data from sUAS and the environment, analyzes compliance with safety properties, and captures violations. We report on two case studies in which we used our platform prior to real-world sUAS deployments, in order to evaluate sUAS mission behavior in various environmental contexts. Furthermore, we conducted a study with developers and found that DRV simplifies the process of specifying requirements-driven test scenarios and analyzing acceptance test results.
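As an illustration of the kind of runtime safety-property monitoring described above, the sketch below checks a stream of simulated telemetry against simple properties. The property names, thresholds, and telemetry schema are hypothetical; this is not the DroneReqValidator API.

```python
# Generic runtime safety monitor over a stream of sUAS telemetry records.
# Property names, thresholds, and the telemetry schema are hypothetical.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Telemetry:
    t: float            # seconds since mission start
    altitude_m: float
    wind_mps: float
    gps_satellites: int

@dataclass
class SafetyProperty:
    name: str
    check: Callable[[Telemetry], bool]   # True means the property holds

def monitor(stream: Iterable[Telemetry],
            properties: List[SafetyProperty]) -> List[str]:
    violations = []
    for sample in stream:
        for prop in properties:
            if not prop.check(sample):
                violations.append(f"t={sample.t:.1f}s: violated {prop.name}")
    return violations

properties = [
    SafetyProperty("min_altitude", lambda s: s.altitude_m >= 10.0),
    SafetyProperty("max_wind",     lambda s: s.wind_mps <= 12.0),
    SafetyProperty("gps_fix",      lambda s: s.gps_satellites >= 6),
]

stream = [Telemetry(0.0, 15.0, 5.0, 9), Telemetry(1.0, 8.0, 5.0, 9)]
print(monitor(stream, properties))   # reports the low-altitude sample
```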
... Human performance is improved when humans function as supervisors rather than operators, according to previous HSI studies [8,45]. On the one hand, when acting as an operator to control low-level swarm actions, a human can exhibit inferior performance due to exhaustion, task-switching-related distractions, and decreased situational awareness [46]. ...
Article
Full-text available
Robot swarms are becoming popular in domains that require spatial coordination. Effective human control over swarm members is pivotal for ensuring swarm behaviours align with the dynamic needs of the system. Several techniques have been proposed for scalable human–swarm interaction. However, these techniques were mostly developed in simple simulation environments without guidance on how to scale them up to the real world. This paper addresses this research gap by proposing a metaverse for scalable control of robot swarms and an adaptive framework for different levels of autonomy. In the metaverse, the physical/real world of a swarm symbiotically blends with a virtual world formed from digital twins representing each swarm member and logical control agents. The proposed metaverse drastically decreases swarm control complexity due to human reliance on only a few virtual agents, with each agent dynamically actuating on a sub-swarm. The utility of the metaverse is demonstrated by a case study where humans controlled a swarm of uncrewed ground vehicles (UGVs) using gestural communication, and via a single virtual uncrewed aerial vehicle (UAV). The results show that humans could successfully control the swarm under two different levels of autonomy, while task performance increases as autonomy increases.
... Current research has explored ways to exercise control over a swarm without resorting to low-level motor commands. Swarm behavior often takes inspiration from animals such as bees, birds, and fish (Hocraffer and Nam 2017). Research has also started to explore the possibility of incorporating a leader among the drones (Kerman et al. 2012;Kolling et al. 2016). ...
Chapter
Education is one of the predominant applications that is foreseen by researchers in social robotics. In this context, social robots are often designed to interact with one or several learners and with teachers. While educational scenarios for social robots have been studied widely, with experiments being conducted in several countries for nearly 20 years, the cultural impact of accepting social robots in classrooms is still unclear. In this paper, we review the literature on social robots for education with the lens of cultural sensitivity and adaptation. We discuss culture theories and their application in social robotics and highlight research gaps in terms of culture-sensitive design and cultural adaptation in social robots assisting learners in terms of (1) the robot’s role, (2) envisioned tasks, and (3) interaction types. We also present guidelines for designing cross-cultural robots and culturally adaptive systems.
... Current research has explored ways to exercise control over a swarm without resorting to low-level motor commands. Swarm behavior often takes inspiration from animals such as bees, birds, and fish (Hocraffer and Nam 2017). Research has also started to explore the possibility of incorporating a leader among the drones (Kerman et al. 2012;Kolling et al. 2016). ...
Chapter
Full-text available
Emergency services organizations are committed to the challenging task of saving people in distress and minimizing harm across a wide range of events, including accidents, natural disasters, and search and rescue. The teams responsible for these operations use advanced equipment to support their missions. Given the risks and the time pressure of these missions, however, adopting new technologies requires careful testing and preparation. Drones have become a valuable technology in recent years for emergency services teams employed to locate people across vast and difficult to traverse terrains. These unmanned aerial vehicles are faster and cheaper to deploy than traditional crewed aircraft. While an individual drone can be helpful to personnel by quickly offering a bird’s eye view, future scenarios may allow multiple drones working together as a swarm to reduce the time required to locate a person. Given these potentially high payoffs, we explored the challenges and opportunities of drone swarms in search and rescue operations. We conducted interviews as well as initial user studies with relevant stakeholders to understand the challenges and opportunities for drone swarms in the context of search and rescue. Through this, we gained insights to inform the development of prototypes for drone swarm control interfaces, including both technical and human interaction concerns. While drone swarms can likely benefit search and rescue operations, the significant shift from single drones to swarms may necessitate reimagining how rescue missions are conducted. We distill our findings into five key research challenges: visualization, situational awareness, technical issues, team culture, and public perception. We discuss initial steps to investigate these further.
... Human-swarm interaction specifically and human-robot interaction generally is an active area of research that includes interface design, control, communications, autonomy, and human factors such as situation awareness and cognition load, among others (Drew, 2021;Chen & Barnes, 2021;Kolling, Walker, Chakraborty, Sycara, & Lewis, 2015;Hocraffer & Nam, 2017). However, in this subsection we focus on work that enables a one-tomany operator control over an offensive swarm, beginning with command complexity and ending with user interface customization. ...
Preprint
Full-text available
Swarm robotics systems have the potential to transform warfighting in urban environments, but until now have not seen large-scale field testing. We present the Rapid Integration Swarming Ecosystem (RISE), a platform for future multi-agent research and deployment. RISE enables rapid integration of third-party swarm tactics and behaviors, which was demonstrated using both physical and simulated swarms. Our physical testbed is composed of more than 250 networked heterogeneous agents and has been extensively tested in mock warfare scenarios at five urban combat training ranges. RISE implements live, virtual, constructive simulation capabilities to allow the use of both virtual and physical agents simultaneously, while our "fluid fidelity" simulation enables adaptive scaling between low and high fidelity simulation levels based on dynamic runtime requirements. Both virtual and physical agents are controlled with a unified gesture-based interface that enables a greater than 150:1 agent-to-operator ratio. Through this interface, we enable efficient swarm-based mission execution. RISE translates mission needs to robot actuation with rapid tactic integration, a reliable testbed, and efficient operation.
... Furthermore, under conditions in which concurrent task demands have a high information processing load, there could be increased potential for operators provided high transparency to further rely on automated advice and thus not allocate the attentional capacity required to scrutinise the high transparency information provided in order to verify automated advice. A real-world instance of where this could be problematic is in robotic swarm management, where operators are controlling or supervising multiple UVs and switching between many tasks (Hocraffer and Nam, 2017;Hussein et al., 2020). A potential design solution may be to adapt the presentation of automation transparency information based on the ongoing types and complexities of concurrent task demands. ...
Article
Automated decision aids typically improve decision-making, but incorrect advice risks automation misuse or disuse. We examined the novel question of whether increased automation transparency improves the accuracy of automation use under conditions with/without concurrent (non-automated assisted) task demands. Participants completed an uninhabited vehicle (UV) management task whereby they assigned the best UV to complete missions. Automation advised the best UV but was not always correct. Concurrent non-automated task demands decreased the accuracy of automation use, and increased decision time and perceived workload. With no concurrent task demands, increased transparency which provided more information on how the automation made decisions, improved the accuracy of automation use. With concurrent task demands, increased transparency led to higher trust ratings, faster decisions, and a bias towards agreeing with automation. These outcomes indicate increased reliance on highly transparent automation under conditions with concurrent task demands and have potential implications for human-automation teaming design.
... Human-swarm interaction specifically and human-robot interaction generally is an active area of research that includes interface design, control, communications, autonomy, and human factors such as situation awareness and cognition load, among others (Drew, 2021;Chen & Barnes, 2021;Kolling, Walker, Chakraborty, Sycara, & Lewis, 2015;Hocraffer & Nam, 2017). However, in this section, we focus on work that enables a one-to-many operator control over an offensive swarm, beginning with command complexity and ending with user interface customization. ...
Article
Full-text available
Swarm robotics systems have the potential to transform warfighting in urban environments but until now have not seen large-scale field testing. We present the Rapid Integration Swarming Ecosystem (RISE), a platform for future multi-agent research and deployment. RISE enables rapid integration of third-party swarm tactics and behaviors, which was demonstrated using both physical and simulated swarms. Our physical testbed is composed of more than 250 networked heterogeneous agents and has been extensively tested in mock warfare scenarios at five urban combat training ranges. RISE implements live, virtual, constructive simulation capabilities to allow the use of both virtual and physical agents simultaneously, while our “fluid fidelity” simulation enables adaptive scaling between low and high fidelity simulation levels based on dynamic runtime requirements. Both virtual and physical agents are controlled with a unified gesture-based interface that enables a greater than 150:1 agent-to-operator ratio. Through this interface, we enable efficient swarm-based mission execution. RISE translates mission needs to robot actuation with rapid tactic integration, a reliable testbed, and efficient operation.
... Swarming algorithms have been developed to maximize the probability of accomplishing an objective upon which cooperation amongst the vehicles is not prioritized, or random paths are determined for a single trajectory solution [2]. SUAS have also been used in MUM-T operations to accomplish a mission with the SUAS while a human interface is maintained in the loop [3]. Finally, Xue performed research for flight path trajectory planning, approaching real-time computational speed, while avoiding constraints in the urban environment [4]. ...
Preprint
Full-text available
This work develops feasible path trajectories for a coordinated strike with multiple aircraft in a constrained environment. Using direct orthogonal collocation methods, the two-point boundary value optimal control problem is transcribed into a nonlinear programming problem. A coordinate transformation is performed on the state variables to leverage the benefits of a simplex discretization of the search domain. Applying these techniques allows each path constraint to be removed from the feasible search space, eliminating computationally expensive, nonlinear constraint equations and problem specific parameters from the optimal control formulation. Heuristic search techniques are used to determine a Dubins path solution through the space to seed the optimal control solver. In the scenario, three aircraft are initiated in separate directions and are required to avoid all constrained regions while simultaneously arriving at the target location, each with a different viewing angle. A focus of this work is to reduce computation times for optimal control solvers such that real-time solutions can be implemented onboard small unmanned aircraft systems. Analysis of the problem examines optimal flight paths through simplex corridors, velocity and heading vectors, control vectors of acceleration and heading rate, and objective times for minimum time flight.
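As a rough illustration of the transcription step described above, the sketch below turns a minimum-time path problem for a single Dubins-like vehicle into a nonlinear program. It uses plain trapezoidal collocation rather than the paper's simplex-based orthogonal collocation, and all vehicle parameters, boundary conditions, and bounds are made-up values.

```python
# Minimal direct-collocation transcription of a minimum-time path problem into
# a nonlinear program, using trapezoidal (not orthogonal/simplex) collocation
# and a single Dubins-like vehicle. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

N, V = 20, 30.0                          # collocation nodes, constant speed (m/s)
x0 = np.array([0.0, 0.0, 0.0])           # start: position (m) and heading (rad)
xf = np.array([1000.0, 500.0, np.pi/2])  # target position and approach angle

def dyn(x, u):                           # state = [px, py, psi], control = psi_dot
    return np.array([V*np.cos(x[2]), V*np.sin(x[2]), u])

def unpack(z):
    tf = z[0]
    X = z[1:1+3*N].reshape(N, 3)
    U = z[1+3*N:]
    return tf, X, U

def eq_constraints(z):                   # collocation defects + boundary conditions
    tf, X, U = unpack(z)
    h = tf / (N - 1)
    defects = [X[k+1] - X[k] - 0.5*h*(dyn(X[k], U[k]) + dyn(X[k+1], U[k+1]))
               for k in range(N - 1)]
    return np.concatenate(defects + [X[0] - x0, X[-1] - xf])

# Initial guess: straight-line interpolation of the states, zero turn rate.
Xg = np.linspace(x0, xf, N)
z0 = np.concatenate([[60.0], Xg.ravel(), np.zeros(N)])
bounds = [(5.0, 300.0)] + [(None, None)]*(3*N) + [(-0.2, 0.2)]*N
res = minimize(lambda z: z[0], z0, method="SLSQP", bounds=bounds,
               constraints={"type": "eq", "fun": eq_constraints})
print(res.fun)                           # minimised flight time (s)
```

A multi-aircraft simultaneous-arrival version would stack one such state/control block per vehicle and add a shared final-time variable, which is where seeding from a heuristic Dubins path becomes valuable.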
... To this end, several works have investigated how to minimise this cognitive load. For example, Hocraffer and Nam (2017) state in their meta-analysis that increasing the autonomy of the swarm in a human-system interface scenario reduces the cognitive load put on the user, thus improving their situational awareness. Additionally, Podevijn et al. (2016) successfully demonstrated that increasing the number of robots does not influence the cognitive load required from a user if the control is performed on the swarm as a whole. ...
Article
Full-text available
Many people are fascinated by biological swarms, but understanding the behavior and inherent task objectives of a bird flock or ant colony requires training. Whereas several swarm intelligence works focus on mimicking natural swarm behaviors, we argue that this may not be the most intuitive approach to facilitate communication with the operators. Instead, we focus on the legibility of swarm expressive motions to communicate mission-specific messages to the operator. To do so, we leverage swarm intelligence algorithms on chain formation for resilient exploration and mapping combined with acyclic graph formation (AGF) into a novel swarm-oriented programming strategy. We then explore how expressive motions of robot swarms could be designed and test the legibility of nine different expressive motions in an online user study with 98 participants. We found several differences between the motions in communicating messages to the users. These findings represent a promising starting point for the design of legible expressive motions for implementation in decentralized robot swarms.
... Similarly, the use of a single operator to control multiple UAVs is being investigated as the automated systems become more advanced leaving operators to supervise the UAVs (Jessee et al. 2017;Lim et al. 2021;Silva et al. 2017). Currently, the use of UAV swarms monitored by a single operator is being explored (Hocraffer and Nam 2017). ...
Article
Full-text available
Interest in Maritime Autonomous Surface Ships (MASS) is increasing as it is predicted that they can bring improved safety, performance and operational capabilities. However, their introduction is associated with a number of enduring Human Factors challenges (e.g. difficulties monitoring automated systems) for human operators, with their ‘remoteness’ in shore-side control centres exacerbating issues. This paper aims to investigate underlying decision-making processes of operators of uncrewed vehicles using the theoretical foundation of the Perceptual Cycle Model (PCM). A case study of an Unmanned Aerial Vehicle (UAV) accident has been chosen as it bears similarities to the operation of MASS through means of a ground-based control centre. Two PCMs were developed; one to demonstrate what actually happened and one to demonstrate what should have happened. Comparing the models demonstrates the importance of operator situational awareness, clearly defined operator roles, training and interface design in making decisions when operating from remote control centres. Practitioner Summary: To investigate underlying decision-making processes of operators of uncrewed vehicles using the Perceptual Cycle Model, by using a UAV accident case study. The findings showed the importance of operator situational awareness, clearly defined operator roles, training and interface design in making decisions when monitoring uncrewed systems from remote control centres.
... This research explained the features and principles of these algorithms and analyzed different algorithm combinations and task assignments for multiple UAVs. Hocraffer and Nam [44] performed a meta-examination of the human-system interface concerning human factors. The analysis provided a basis to start research, enhanced situation awareness (SA), and yielded efficient results. ...
Chapter
Full-text available
The unmanned aerial vehicle (UAV) swarm is gaining massive interest from researchers as it has huge significance over a single UAV. Many studies focus only on a few challenges of this complex multidisciplinary group. Most of them have certain limitations. This paper aims to recognize and arrange relevant research for evaluating motion planning techniques and models for a swarm from the viewpoint of control, path planning, architecture, communication, monitoring and tracking, and safety issues. Then, a state-of-the-art understanding of the UAV swarm and an overview of swarm intelligence (SI) are provided in this research. Multiple challenges are considered, and some approaches are presented. Findings show that swarm intelligence is leading in this era and is the most significant approach for UAV swarms, offering distinct contributions in different environments. This integration of studies will serve as a basis for knowledge concerning swarms, create guidelines for motion planning issues, and strengthen support for existing methods. Moreover, this paper has the capacity to engender new strategies that can serve as the grounds for future work.
... Extended Reality (XR) applications in the military exist in many fields, including aviation, wargaming, weapons training, and human agent teaming applications (Hocraffer and Nam, 2017). XR environments allow soldiers to visualize information in ways not practical in traditional training (Pallavicini et al., 2016;Gawlik-Kobylinska et al., 2020;Kaplan et al., 2021). ...
Article
Full-text available
This study identifies that increasing the fidelity of terrain representation does not necessarily increase overall understanding of the terrain in a simulated mission planning environment using the Battlefield Visualization and Interaction software (BVI; formerly known as ARES (M. W. Boyce et al., International Conference on Augmented Cognition, 2017, 411–422). Prior research by M. Boyce et al. (Military Psychology, 2019, 31(1), 45–59) compared human performance on a flat surface (tablet) versus topographically-shaped surface (BVI on a sand table integrated with top-down projection). Their results demonstrated that the topographically-shaped surface increased the perceived usability of the interface and reduced cognitive load relative to the flat interface, but did not affect overall task performance (i.e., accuracy and response time). The present study extends this work by adding BVI onto a Microsoft HoloLens™. A sample of 72 United States Military Academy cadets used BVI on three different technologies: a tablet, a sand table (a projection-based display onto a military sand table), and on the HoloLens™ in a within-subjects design. Participants answered questions regarding military tactics in the context of conducting an attack in complex terrain. While prior research (Dixon et al., Display Technologies and Applications for Defense, Security, and Avionics III, 2009, 7327) suggested that the full 3D visualization used by the Hololens™ should improve performance relative to the sand table and tablet, our results demonstrated that the HoloLens™ performed relatively worse than the other modalities in accuracy, response time, cognitive load, and usability. Implications and limitations of this work will be discussed.
... In AI, the term "intelligent agent" refers to an autonomous entity having goal-directed behavior in an environment using observation through sensors and executing actions through actuators (Russell & Norvig, 2022). Examples of the application of agents can be seen in the automotive industry (Society of Automotive Engineers, 2021), healthcare (Coronato et al., 2020;Loftus et al., 2020), unmanned aerial vehicles (UAV) (Hocraffer & Nam, 2017), manufacturing (Elghoneimy & Gruver, 2012), and recent development towards maritime autonomous surface ships (IMO, 2018). Even though agents can be very sophisticated and can perform certain tasks with a high degree of independence, they often require some form of human supervision in case of failures or unforeseen situations. ...
Article
Full-text available
Objective: In this review, we investigate the relationship between agent transparency, Situation Awareness, mental workload, and operator performance for safety critical domains. Background: The advancement of highly sophisticated automation across safety critical domains poses a challenge for effective human oversight. Automation transparency is a design principle that could support humans by making the automation's inner workings observable (i.e., "seeing-into"). However, experimental support for this has not been systematically documented to date. Method: Based on the PRISMA method, a broad and systematic search of the literature was performed focusing on identifying empirical research investigating the effect of transparency on central Human Factors variables. Results: Our final sample consisted of 17 experimental studies that investigated transparency in a controlled setting. The studies typically employed three human-automation interaction types: responding to agent-generated proposals, supervisory control of agents, and monitoring only. There is an overall trend in the data pointing towards a beneficial effect of transparency. However, the data reveals variations in Situation Awareness, mental workload, and operator performance for specific tasks, agent types, and level of integration of transparency information in primary task displays. Conclusion: Our data suggests a promising effect of automation transparency on Situation Awareness and operator performance, without the cost of added mental workload, for instances where humans respond to agent-generated proposals and where humans have a supervisory role. Application: Strategies to improve human performance when interacting with intelligent agents should focus on allowing humans to see into the agent's information processing stages, considering the integration of information in existing Human Machine Interface solutions.
... First, a survey report (Chung et al. 2018) suggests that the missing component of modern automated SAR systems is a measure of learning to provide better autonomy and better flexibility. Furthermore, a meta-analysis review (Hocraffer and Nam 2017) outlines several human factors challenges, including high cognitive demands for the operator and non-intuitive behavior. This review also states that these challenges could be mitigated by raising the swarm's level of autonomy to reduce operators' cognitive workload and thereby improve their situation awareness. ...
Thesis
Full-text available
In an attempt to solve search-and-rescue problems such as rescue time and difficulty in accessing certain search areas, a cognitive swarm-of-drones system is proposed, using artificial intelligence techniques interacting with cognitive components. The system’s various elements (drones’ cognition, pathfinding, policies, but also human-swarm interactions) are elaborated, implemented and evaluated using a simulator custom-built for this dissertation. Evaluation outcomes show that cognitive functions can be beneficial to non-cognitive drone components, and vice versa. Possible improvements are discussed.
... The use of swarm of small aerial vehicles allows for extended mission area, flexible mission capability, robustness to single point failure, and cost effectiveness. Research areas related to swarm drones have been very diverse, including development of small-scale aerial vehicles [3][4][5], ad hoc communication backbone tailored to swarm operation [6][7][8], path generation to ensure collision avoidance [9][10][11], mission-level planning and scheduling to achieve high-level autonomy [12][13][14][15], and interaction/interface between the human operator and the swarm drones [16][17][18][19]. It should be noted that while earlier literature focused on technologies to enhance performance and capabilities, recent work has been looking into more safe, secure, and reliable operations of such swarm systems [20][21][22][23]. ...
Article
Full-text available
This paper addresses anomaly detection and monitoring for swarm drone flights. While the current practice of swarm flight typically relies on the operator's naked eyes to monitor health of the multiple vehicles, this work proposes a machine learning-based framework to enable detection of abnormal behavior of a large number of flying drones on the fly. The method works in two steps: a sequence of two unsupervised learning procedures reduces the dimensionality of the real flight test data and labels them as normal and abnormal cases; then, a deep neural network classifier with one-dimensional convolution layers followed by fully connected multi-layer perceptron extracts the associated features and distinguishes the anomaly from normal conditions. The proposed anomaly detection scheme is validated on the real flight test data, highlighting its capability of online implementation.
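To make the two-stage pipeline above concrete, here is a small sketch: unsupervised dimensionality reduction and clustering produce pseudo-labels for windowed telemetry, and a 1-D convolutional network with an MLP head is then trained on them. The data are synthetic and the layer sizes are arbitrary; this is not the authors' exact architecture.

```python
# Sketch of the two-stage idea: unsupervised dimensionality reduction and
# labelling of flight-log windows, then a 1-D convolutional classifier.
# Shapes, layer sizes, and the synthetic data are placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for windowed telemetry: (n_windows, n_channels, n_steps)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4, 64)).astype(np.float32)
X[:50] += 3.0                                   # pretend these are anomalies

# Stage 1: unsupervised reduction + clustering produces pseudo-labels.
flat = X.reshape(len(X), -1)
emb = PCA(n_components=8).fit_transform(flat)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)

# Stage 2: 1-D conv feature extractor followed by an MLP head.
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(),
    nn.Linear(16 * 8, 32), nn.ReLU(), nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb, yb = torch.from_numpy(X), torch.from_numpy(labels).long()
for _ in range(20):                              # tiny training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(xb), yb)
    loss.backward()
    opt.step()
print(float(loss))
```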
... Personal service robots are expected to exceed 22.1 million units in 2019 and 61.1 million units in 2022, while the sales for agricultural robots are projected to grow by 50% each year. Different techniques have been proposed for engineering the various aspects of robotic behavior [25][26][27]32,37,39,92,108], such as interoperability at the human-robot (or humanswarm) level [43,53] and at the software-component level in middlewares [78], or multi-robot target detection and tracking [92]. ...
Article
Full-text available
Mobile robots are becoming increasingly important in society. Fulfilling complex missions in different contexts and environments, robots are promising instruments to support our everyday life. As such, the task of defining the robot’s mission is moving from professional developers and roboticists to the end-users. However, with the current state-of-the-art, defining missions is non-trivial and typically requires dedicated programming skills. Since end-users usually lack such skills, many commercial robots are nowadays equipped with environments and domain-specific languages tailored for end-users. As such, the software support for defining missions is becoming an increasingly relevant criterion when buying or choosing robots. Improving these environments and languages for specifying missions toward simplicity and flexibility is crucial. To this end, we need to improve our empirical understanding of the current state-of-the-art of such languages and their environments. In this paper, we contribute in this direction. We present a survey of 30 mission specification environments for mobile robots that come with a visual and end-user-oriented language. We explore the design space of these languages and their environments, identify their concepts, and organize them as features in a feature model. We believe that our results are valuable to practitioners and researchers designing the next generation of mission specification languages in the vibrant domain of mobile robots.
... The potential benefits of swarm technology have sparked discussion and research regarding how UAV swarms can be used. Area exploration and surveillance, search and rescue, military point defense, and relaying communications are recurring use case examples listed in the literature [7,10]. These are examples of general or strategic level swarm applications. ...
Chapter
Unmanned aircraft systems (UAS) are a rapidly emerging sector of aviation; however, loss of control in-flight (LOC-I) is the largest category reported by the Air Accident Investigations Branch (2021). When UAS unexpectedly revert from global positioning system (GPS) to attitude mode (ATTI), automatic safety features may degrade. This creates a pre-condition for LOC-I. Under visual line of sight (VLOS), remote pilots (RPs) must maintain a constant aircraft watch, but current user interfaces may not offer appropriate alerting for visual tasks. A repeated measures experiment compared RP reaction times against two different ATTI alerts: verbal and passive. Participants with General VLOS Certificates were recruited to fly a maneuver sequence in June 2022 (n = 5). Quantitative data was supported by a qualitative questionnaire. Four research questions (RQs) were asked. RQ1: Is there a significant difference in RP reaction times to unexpected mode changes between a passive system and a verbal ATTI warning system? RQ2: Is reaction time to reversion significantly affected by maneuver? RQ3: Do RPs have a preference of warning system? RQ4: What suggestions do RPs have for warning system design? Results indicate a significant improvement using a verbal system (p = 0.048), with a large effect (ηp² = 0.66). Participants unanimously agreed that verbal alerts enhanced awareness of unexpected reversion. A combined haptic/verbal system was suggested by participants. The theoretical concept “Alert System Assessment Tool” has been introduced, alongside other RP human factors research areas. This study expands limited research within VLOS alerts and is believed to be primary research into verbal ATTI warnings.
Keywords: UAS, VLOS, Alerting, Remote Pilot, ATTI Mode
Chapter
The relationship of humans to robots has evolved from using tools, to controlling platforms, to supervising automated systems, to collaborating with autonomous agents. This chapter discusses the human element of the evolving man-machine symbiosis including both traditional and emerging human factors issues related to designing systems that emulate human teams operating in complex decision spaces. It reviews common human-robot interaction (HRI) techniques for robots with various degrees of autonomy, including conventional, multimodal, and advanced techniques such as augmented-reality-based systems. The chapter also reviews the state-of-the-art of HRI applications and techniques. Detailed discussion of human-robot communications reviews the state-of-the-art and promising techniques currently under development. Particularly, communications-related issues such as agent transparency are examined in detail. Finally, the chapter provides a thorough examination of human performance issues (situation awareness, trust, workload, training, and individual differences).
Chapter
Wilderness Search and Rescue (WiSAR) operations require navigating large unknown environments and locating missing victims with high precision and in a timely manner. Several studies used deep reinforcement learning (DRL) to allow for the autonomous navigation of Unmanned Aerial Vehicles (UAVs) in unknown search and rescue environments. However, these studies focused on indoor environments and used fixed altitude navigation which is a significantly less complex setting than realistic WiSAR operations. This paper uses a DRL-powered approach for WiSAR in an unknown mountain landscape environment. To manage the complexity of the problem, the proposed approach breaks up the problem into five modules: Information Map, DRL-based Navigation, DRL-based Exploration Planner (waypoint generator), Obstacle Detection, and Human Detection. Curriculum learning has been used to enable the Navigation module to learn 3D navigation. The proposed approach was evaluated both under semi-autonomous operations where waypoints are externally provided by a human and under full autonomy. The results demonstrate the ability of the system to detect all humans when waypoints are generated randomly or by a human, whereas DRL-based waypoint generation led to a lower recall of 75%.
Chapter
Dynamical networks are a framework commonly used to model large networks of interacting time-varying components such as power grids and epidemic disease networks. The connectivity structure of dynamical networks plays a key role in enabling many interesting behaviours such as synchronisation and chimeras. However, dynamical networks can also be vulnerable to network attack, where the connectivity structure is externally altered. This can cause sudden failure and loss of stability in the network. The ability to detect these network attacks is useful in troubleshooting and preventing system failure. Recently, a backpropagation regression method inspired by RNN training algorithms was proposed to infer both local node dynamics and connectivity structure from measured node signals. This paper explores the application of backpropagation regression for fault detection in dynamical networks. We construct separate models for local dynamics and coupling structure to perform short-term free-run predictions. Due to the separation of models, abnormal increases in prediction error can be attributed to changes in the network structure. Automatic detection is achieved by comparing prediction error statistics across two windows that span a period before and after a network attack. This method is tested on a simulated dynamical network of chaotic Lorenz oscillators undergoing gradual edge corruption via three different processes: edge swapping, moving and deletion. We demonstrate that the correlation between increased prediction error and the occurrence of edge corruption can be used to reliably detect both the onset and approximate location of the attack within the network.
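As a toy illustration of the windowed comparison described above, the sketch below flags a change point when post-window prediction errors are significantly larger than pre-window errors; the error trace here is synthetic rather than produced by a fitted network model.

```python
# Sketch of the detection step: compare free-run prediction-error statistics in
# a window before and after a candidate change point. The error trace is
# synthetic; in the paper's setting it would come from the fitted models.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
err = np.concatenate([rng.normal(1.0, 0.2, 500),    # before edge corruption
                      rng.normal(1.8, 0.4, 500)])   # after edge corruption

def detect_change(err, split, alpha=1e-3):
    before, after = err[:split], err[split:]
    stat, p = ttest_ind(after, before, equal_var=False, alternative="greater")
    return p < alpha, p

flag, p = detect_change(err, split=500)
print(flag, p)   # True when post-window errors are significantly larger
```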
Article
The use of robotic swarms has become increasingly common in research, industrial, and military domains for tasks such as collective exploration, coordinated movement, and collective localization. Despite the expanded use of robotic swarms, little is known about how swarms are perceived by human operators. To characterize human-swarm interactions, we evaluate how operators perceive swarm characteristics, including movement patterns, control schemes, and occlusion. In a series of experiments manipulating movement patterns and control schemes, participants tracked swarms on a computer screen until they were occluded from view, at which point participants were instructed to estimate the spatiotemporal dynamics of the occluded swarm by mouse click. In addition to capturing mouse click responses, eye tracking was used to capture participants' eye movements while visually tracking swarms. We observed that manipulating control schemes had minimal impact on the perception of swarms, and that swarms are easier to track when they are visible compared to when they are occluded. Regarding swarm movements, a complex pattern of data emerged. For example, eye tracking indicates that participants more closely track a swarm in an arc pattern compared to sinusoid and linear movement patterns. When evaluating behavioral click-responses, data show that time is underestimated, and that spatial accuracy is reduced in complex patterns. Results suggest that measures of performance may capture different patterns of behavior, underscoring the need for multiple measures to accurately characterize performance. In addition, the lack of generalizable data across different movement patterns highlights the complexity involved in the perception of swarms of objects.
Article
This study aimed to model the trust decision-making of Indonesian small and medium-sized enterprise (SME) groups in the adoption of Industry 4.0, namely ergonomic, machinery, and e-commerce technology. The data on trust and its constraints were collected through a questionnaire and formulated in a Kansei fitness function. Trust was modeled by Swarm Modeling (SM) to extract critical constraints. Traveling-salesman-problem-based Ant Colony Optimization (ACO) was used to determine the optimum decision-making path. The simulation indicated that the perception of technology benefit limited the adoption of Industry 4.0. Three optimal trust decision-making paths were generated for the Java, Sumatera and Nusa Tenggara groups.
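For readers unfamiliar with the optimisation step mentioned above, here is a minimal ant-colony search over a small TSP-style decision graph; the distance matrix and all parameter values are illustrative placeholders rather than the Kansei-derived costs used in the study.

```python
# Minimal ant-colony search over a TSP-style decision graph. The distance
# matrix stands in for decision/constraint costs; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                    # decision nodes / constraints
dist = rng.uniform(1, 10, (n, n)); np.fill_diagonal(dist, np.inf)
tau = np.ones((n, n))                    # pheromone
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 1.0

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, np.inf
for _ in range(100):                     # colony iterations
    tours = []
    for _ant in range(10):
        tour = [int(rng.integers(n))]
        while len(tour) < n:
            i = tour[-1]
            mask = np.array([j not in tour for j in range(n)])
            w = (tau[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
            tour.append(int(rng.choice(n, p=w / w.sum())))
        tours.append(tour)
    tau *= (1 - rho)                     # evaporation
    for tour in tours:
        L = tour_length(tour)
        if L < best_len:
            best_tour, best_len = tour, L
        for i in range(n):
            tau[tour[i], tour[(i + 1) % n]] += Q / L   # pheromone deposit
print(best_tour, best_len)
```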
Article
Full-text available
Human Factors play a significant role in the development and integration of avionic systems to ensure that they are trusted and can be used effectively. As Unoccupied Aerial Vehicle (UAV) technology becomes increasingly important to the aviation domain this holds true. This study aims to gain an understanding of UAV operators’ trust requirements when piloting UAVs by utilising a popular aviation interview methodology (Schema World Action Research Method), in combination with key questions on trust identified from the literature. Interviews were conducted with six UAV operators, with a range of experience. This identified the importance of past experience to trust and the expectations that operators hold. Recommendations are made that target training to inform experience, in addition to the equipment, procedures and organisational standards that can aid in developing trustworthy systems. The methodology that was developed shows promise for capturing trust within human-automation interactions.
Article
In the context of the application of higher automation and complex system configurations in nuclear power plants (NPPs), ensuring nuclear safety by maintaining the reliability of operators in the main control room has become more crucial than ever before. As an essential physiological indicator, human mental workload is widely used in evaluating human performance and reliability. This paper presents an approach to evaluating the mental workload of operators using the multiple-resource Visual, Auditory, Cognitive, and Psychomotor (VACP) model, taking time pressure (TP) as a critical index. We applied the approach to a real case study and analyzed the relationship between operator mental workload, TP, task type, and error probability during a Loss of Coolant Accident (LOCA) under full-power operation. Workload was analyzed across two operating phases. Moreover, the relationship between human-error probability (HEP) and TP was studied via volatility gain analysis to distinguish which step of a task is most important. The results show that the fault reminder ("informing LOCA") is the first important fluctuation in the phase that ignores TP and HEP, whereas the information validity step ("ensuring the control rods have been inserted to the bottom") is the most important in the case that considers TP. Because high task load is positively correlated with human error, the research provides an important basis for preventing human errors in the NPP design phase, which will be especially useful in safety-critical tasks.
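A heavily simplified sketch of a VACP-style workload index with a time-pressure multiplier is shown below; the channel scales, the equal channel weighting, and the TP index are illustrative assumptions, not the paper's exact model.

```python
from dataclasses import dataclass

@dataclass
class TaskDemand:
    """Per-task demand ratings on the four VACP channels (e.g. 0-7 scales)."""
    name: str
    visual: float
    auditory: float
    cognitive: float
    psychomotor: float

def vacp_workload(concurrent_tasks, required_time, available_time):
    """Illustrative VACP-style workload index scaled by a time-pressure ratio.

    Channel demands of concurrently performed tasks are summed, then scaled
    by required / available time, so the same task set scores higher when
    the operator has less time to complete it.
    """
    channel_sum = sum(
        t.visual + t.auditory + t.cognitive + t.psychomotor
        for t in concurrent_tasks
    )
    time_pressure = required_time / max(available_time, 1e-6)
    return channel_sum * time_pressure
```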
Chapter
In this paper, an improved pigeon-inspired optimization (IPIO) algorithm based on natural selection and Gauss-Cauchy mutation is proposed for unmanned aerial vehicle (UAV) swarms to rapidly realize cooperative dynamic target search and full coverage of the target area in uncertain environments. Firstly, an environment awareness map is established, which includes a coverage distribution map, a target probability map (TPM), a digital pheromone map, and their updating mechanisms. Meanwhile, to improve the likelihood of discovering targets, the target probability map is integrated into the attraction pheromone updating mechanism. Next, with the help of this environment awareness map, a collaborative search task optimization model is designed. Furthermore, based on the classical PIO algorithm, an integer encoding method, a discrete compass operator, and a discrete landmark operator are designed in detail. Gaussian and Cauchy mutation operators are introduced to help the evolution escape local optima, and natural selection is applied to accelerate convergence. Finally, the simulation results show the effectiveness and superiority of the proposed target search strategy.
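For orientation, a simplified, continuous-valued pigeon-inspired optimization loop with Gauss-Cauchy mutation and greedy selection is sketched below. The paper's discrete operators (integer encoding, discrete compass and landmark operators) and its map-updating mechanisms are not reproduced, and all parameters are placeholders.

```python
import numpy as np

def pio_minimize(f, dim, bounds, n=30, t1=60, t2=10, R=0.2, p_mut=0.1, seed=0):
    """Continuous PIO sketch with Gauss-Cauchy mutation and greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds                                  # scalar bounds applied to every dimension
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    fit = np.apply_along_axis(f, 1, x)

    for t in range(1, t1 + 1):                       # map-and-compass phase
        g = x[np.argmin(fit)]
        v = v * np.exp(-R * t) + rng.random((n, dim)) * (g - x)
        cand = np.clip(x + v, lo, hi)
        # Gauss-Cauchy mutation: small Gaussian steps mixed with heavy-tailed Cauchy jumps
        m = rng.random((n, dim)) < p_mut
        noise = np.where(rng.random((n, dim)) < 0.5,
                         rng.normal(0.0, 0.1, (n, dim)),
                         0.1 * rng.standard_cauchy((n, dim)))
        cand = np.clip(cand + m * noise, lo, hi)
        cand_fit = np.apply_along_axis(f, 1, cand)
        better = cand_fit < fit                      # greedy "natural selection"
        x = np.where(better[:, None], cand, x)
        fit = np.where(better, cand_fit, fit)

    for _ in range(t2):                              # landmark phase
        order = np.argsort(fit)
        keep = order[: max(2, len(x) // 2)]          # discard the worse half each round
        x, fit = x[keep], fit[keep]
        w = 1.0 / (1.0 + fit - fit.min())            # better pigeons weigh more
        center = (x * w[:, None]).sum(0) / w.sum()
        x = np.clip(x + rng.random((len(x), 1)) * (center - x), lo, hi)
        fit = np.apply_along_axis(f, 1, x)

    best = np.argmin(fit)
    return x[best], fit[best]
```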
Thesis
Intralogistics operations in automotive OEMs increasingly confront problems of overcomplexity caused by customer-centred production, which requires customisation and thus high product variability, short-notice changes in orders, and the handling of an overwhelming number of parts. To alleviate the pressure on intralogistics without sacrificing performance objectives, the speed and flexibility of logistical operations have to be increased. One approach to this is to utilise three-dimensional space through drone technology. This doctoral thesis aims at establishing a framework for implementing aerial drones in automotive OEM logistics operations. As of yet, there is no research on implementing drones in automotive OEM logistics operations. To contribute to filling this gap, this thesis develops a framework for Drone Implementation in Automotive Logistics Operations (DIALOOP) that allows for a close interaction between the strategic and the operative level and can lead automotive companies through a decision and selection process regarding drone technology. A preliminary version of the framework was developed on a theoretical basis and was then revised using qualitative-empirical data from semi-structured interviews with two groups of experts, i.e. drone experts and automotive experts. The drone expert interviews contributed a current overview of drone capabilities. The automotive expert interviews were used to identify intralogistics operations in which drones can be implemented, along with the performance measures that can be improved by drone usage. Furthermore, all interviews explored developments and changes with a foreseeable influence on drone implementation. The revised framework was then validated using participant validation interviews with automotive experts. The finalised framework defines a step-by-step process leading from strategic decisions and considerations, through the identification of logistics processes suitable for drone implementation and the relevant performance measures, to the choice of appropriate drone types based on a drone classification specifically developed in this thesis for an automotive context.
Conference Paper
Full-text available
European Union (EU) regulations require pilots of Unmanned Aircraft Systems (UAS) to register and pass a basic theory exam from 2021. It is therefore essential to systematically gather information that helps pilots utilize UAS in a safe and meaningful manner, especially in Nordic weather conditions. This book chapter describes the Nordic challenges for UAS operations, which can be categorized into two main categories: technological and operational. Based on an extensive literature review and the practical experience of the authors, both categories of challenges, especially those relevant to severe Arctic weather conditions, are presented. The relevant weather conditions include changing air density, clouds of dust and solid particles, extreme light conditions, freezing rain, heavy and gusty wind, heavy clouds, ice fog, rain, mist and fog combinations, rapid temperature changes, snow, storms and hailstorms, temperatures crossing 0 °C several times within 24 hours, wind shear, and whirlwinds. Many technical and operational challenges (e.g., weather-related phenomena) overlap both categories and can be mitigated partly by technological advances and partly by operational preparedness. Finally, future challenges and needs in UAS research are also discussed.
Chapter
The field of swarm guidance and control can rely on intrinsic strategies, such as a rule-based system within each member of the swarm, or extrinsic strategies, whereby an external agent guides the swarm. In the shepherding problem, sheepdogs drive and collect a flock (swarm) of sheep, guiding them to a goal location. In the case of multiple dogs guiding the swarm, we examine how shared contextual awareness among the sheepdog agents improves performance when solving the shepherding problem. Specifically, consensus around the dynamic centre of mass of the flock is shown to improve shepherding performance.
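The shared-awareness idea can be illustrated with a generic average-consensus sketch, in which each sheepdog repeatedly blends its own estimate of the flock's centre of mass with its neighbours' estimates. This is a textbook consensus update, not necessarily the chapter's exact protocol.

```python
import numpy as np

def consensus_on_flock_centre(local_estimates, adjacency, n_rounds=10, eps=0.2):
    """Average consensus among sheepdog agents on the flock's centre of mass.

    local_estimates : (n_dogs, 2) array, each dog's noisy estimate of the
                      flock centre from its own sensing.
    adjacency       : (n_dogs, n_dogs) 0/1 matrix of which dogs communicate.
    Each round, every dog nudges its estimate toward its neighbours' values;
    eps should be below 1 / (maximum degree) for the update to converge.
    """
    x = np.asarray(local_estimates, dtype=float).copy()
    A = np.asarray(adjacency, dtype=float)
    for _ in range(n_rounds):
        # sum_j a_ij * (x_j - x_i) for every dog i, computed in matrix form
        neighbour_pull = A @ x - A.sum(axis=1, keepdims=True) * x
        x = x + eps * neighbour_pull
    return x   # after enough rounds, all rows are approximately equal
```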
Chapter
WeChat has become an essential social media platform in China. This research investigates the importance of the WeChat Red Packet as a motivator for achieving user satisfaction and loyalty in China. To investigate its impact and the factors that contribute to its popularity and acceptability, we extended the technology acceptance model (TAM): in addition to the main model factors, "perceived usefulness" and "perceived ease of use", our proposed model includes perceived trust, perceived security, and perceived entertainment. A questionnaire was designed, and SPSS was used for the analysis. The research results provide insight into how the WeChat Red Packet can motivate users and build their satisfaction to improve their loyalty, which in turn increases WeChat use. These results have implications for future research and practice.
Article
This research empirically evaluates the introduction of speech to existing keyboard and mouse input modalities in an application used to control aircraft in a simulated, complex, and dynamic environment. Task performance and task performance degradation are assessed for three levels of workload. Previous studies have evaluated task performance using these modalities; however, only a couple have evaluated task performance under varying workload. Even though speech is a common addition to modern control interfaces, the effect of varying workload on this combination of control modalities has not yet been reported. Thirty-six participants commanded simulated aircraft through generated obstacle courses to reach a Combat Air Patrol (CAP) point while also responding to a secondary task. Nine conditions varied the control modality (Keyboard and Mouse (KM), Voice (V), and Keyboard, Mouse and Voice (KMV)) and the workload level (low, medium, and high), the latter by varying the number of aircraft being controlled. Results showed that KM outperformed KMV and V for the low and medium workload levels. However, task performance with KMV was found to degrade the least as workload increased. KMV and KM were found to enable significantly more correct responses to the secondary task, which was delivered aurally. Participants reported a preference for the combined modalities (KMV), self-assessing that KMV most reduced their workload. This research suggests that the addition of a speech interface to existing keyboard and mouse modalities, for control of aircraft in a simulation, may help manage cognitive load and may assist in controlling more aircraft under higher workloads.
Article
Full-text available
This paper presents user needs and preferences gathered prior to the development of an indoor remotely piloted air system. A literature review was carried out to analyse previous studies on the involvement of users in the design of indoor unmanned aerial vehicles. Subsequently, the results on user needs obtained from three focus groups held in European countries (Belgium, Spain and the United Kingdom) are presented. Through a content analysis of the information obtained in the focus groups, 40 codes and 4 variables were defined and used to examine the differences between types of users and their previous experience with drones. The literature review supported the results obtained through user involvement regarding the features to be included in a new unmanned aerial vehicle. Non-parametric tests and qualitative comparative analysis were used to analyse the information gathered in the focus groups. The results revealed few differences between artists working in creative industries and drone operators working for the creative industries. These differences concerned features such as detecting and avoiding obstacles, which requires the inclusion of sensors. In addition, previous experience with drones was found to be a sufficient condition to explain greater concerns over safety, ethical and security issues in indoor environments.
Conference Paper
Full-text available
The creation of a multimodal, natural user interface to facilitate multi-agent interaction is essential to establishing trust among human and machine teammates in multi-agent systems. Trust is being researched, along with trustworthiness, as a path to certification of autonomous systems by the Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR) project at NASA. The Autonomous Mission Experimental Logistics Interactive Assistant (AMELIA) is a natural user interface that enables multimodal interaction and is designed for rapid mission planning. AMELIA is an intelligent system that considers the user’s preferred communication strategies, as well as the time-critical aspect of the multi-agent system decision-making process. Twenty-four participants planned a multi-agent search and rescue mission, with the aid of an intelligent assistant. The results show that while the combined use of touch and speech was faster than speech alone, the single modality, touch, was still the most efficient. Future research should investigate additional input technologies.
Article
Full-text available
The use of unmanned aerial vehicles (UAVs) in military and civilian areas is increasing day by day. This increased use poses risks related to accidents and incidents. Human factors are among the most important causes of accidents and incidents in aviation. Understanding the impact of these factors on unmanned aerial vehicles is vital to preventing accidents and incidents. In this study, the literature on human factors in unmanned aerial vehicles is systematically reviewed and classified. The classification aims to reveal which subjects are missing or inadequately covered, and suggestions for future studies are made on this basis.
Article
Full-text available
Recent advances in technology are delivering robots of reduced size and cost. A natural outgrowth of these advances are systems comprised of large numbers of robots that collaborate autonomously in diverse applications. Research on effective autonomous control of such systems, commonly called swarms, has increased dramatically in recent years and received attention from many domains, such as bioinspired robotics and control theory. These kinds of distributed systems present novel challenges for the effective integration of human supervisors, operators, and teammates that are only beginning to be addressed. This paper is the first survey of human–swarm interaction (HSI) and identifies the core concepts needed to design a human–swarm system. We first present the basics of swarm robotics. Then, we introduce HSI from the perspective of a human operator by discussing the cognitive complexity of solving tasks with swarm systems. Next, we introduce the interface between swarm and operator and identify challenges and solutions relating to human–swarm communication, state estimation and visualization, and human control of swarms. For the latter, we develop a taxonomy of control methods that enable operators to control swarms effectively. Finally, we synthesize the results to highlight remaining challenges, unanswered questions, and open problems for HSI, as well as how to address them in future works.
Article
Full-text available
Operators currently controlling Unmanned Aerial Vehicles report significant boredom, and such systems will likely become more automated in the future. Similar problems are found in process control, commercial aviation, and medical settings. To examine the effect of boredom in such settings, a long-duration, low task load experiment was conducted. Three low task load levels requiring operator input every 10, 20, or 30 minutes were tested in a four-hour study using a multiple unmanned vehicle simulation environment that leverages decentralized algorithms for sometimes imperfect vehicle scheduling. Reaction times to system-generated events generally decreased across the four hours, as did participants' ability to maintain directed attention. Overall, participants spent almost half of the time in a distracted state. The top performer spent the majority of time in directed and divided attention states. Unexpectedly, the second-best participant, only 1% worse than the top performer, was distracted almost one third of the experiment, but exhibited a periodic switching strategy, allowing him to pay just enough attention to assist the automation when needed. Indeed, four of the five top performers were distracted more than one-third of the time. These findings suggest that distraction due to boring, low task load environments can be effectively managed through efficient attention switching. Future work is needed to determine the optimal frequency and duration of attention state switches given various exogenous attributes, as well as individual variability. These findings have implications for the design of, and personnel selection for, supervisory control systems where operators monitor highly automated systems for long durations with only occasional or rare input.
Article
Full-text available
In this chapter, I review research involving remote human supervision of multiple unmanned vehicles (UVs) using command complexity as an organizing construct. Multi-UV tasks range from foraging, requiring little coordination among UVs, to formation following, in which UVs must function as a cohesive unit. Command complexity, the degree to which operator effort increases with the number of supervised UVs, is used to categorize human interaction with multiple UVs. For systems in which each UV requires the same form of attention (O(n)), effort increases linearly with the number of UVs. For systems in which the control of one UV is dependent upon another (O(>n)), additional UVs impose greater than linear increases due to the expense of coordination. For other systems, an operator interacts with an autonomously coordinating group, and effort is unaffected by group size (O(1)). Studies of human/multi-UV interaction can be roughly grouped into O(n) supervision, involving one-to-one control of individual UVs, or O(1) commanding, in which higher-level commands are directed to a group. Research in O(n) command has centered on round-robin control, neglect tolerance, and attention switching. Approaches to O(1) command are divided into systems using autonomous path planning only, plan libraries, human-steered planners, and swarms. Each type of system has its advantages. Less mature work on scalable displays for multiple UVs is also reviewed. Mixing levels of command is probably necessary to supervise multiple UVs performing realistic tasks. Research in O(n) control is mature and can provide quantitative and qualitative guidance for design. Interaction with planners and swarms is less mature but more critical to developing effective multi-UV systems capable of performing complex tasks.
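As a rough formalisation of these three classes (an illustrative reading of the taxonomy, not an equation taken from the chapter), operator effort E as a function of the number of supervised UVs n might be written as:

```latex
E_{O(1)}(n) \approx c, \qquad
E_{O(n)}(n) \approx c\,n, \qquad
E_{O(>n)}(n) \approx c\,n + d\binom{n}{2}
```

where c is a per-UV attention cost and the binomial term is a stand-in for pairwise coordination overhead; any superlinear term would serve the same illustrative purpose.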
Article
Full-text available
In this paper we present the first study of human-swarm interaction comparing two fundamental types of interaction, coined intermittent and environmental. These types are exemplified by two control methods, selection and beacon control, made available to a human operator to control a foraging swarm of robots. Selection and beacon control differ with respect to their temporal and spatial influence on the swarm and enable an operator to generate different strategies from the basic behaviors of the swarm. Selection control requires an active selection of groups of robots, while beacon control exerts an influence on nearby robots within a set range. Both control methods are implemented in a testbed in which operators solve an information foraging problem by utilizing a set of swarm behaviors. The robotic swarm has only local communication and sensing capabilities. The number of robots in the swarm ranges from 50 to 200. Operator performance for each control method is compared in a series of missions in environments ranging from no obstacles to cluttered and structured obstacles. In addition, performance is compared to simple and advanced autonomous swarms. Thirty-two participants were recruited for the study. Autonomous swarm algorithms were tested in repeated simulations. Our results showed that selection control scales better to larger swarms and generally outperforms beacon control. Operators utilized different swarm behaviors with different frequency across control methods, suggesting an adaptation to different strategies induced by the choice of control method. Simple autonomous swarms outperformed human operators in open environments, but operators adapted better to complex environments with obstacles. Human-controlled swarms fell short of task-specific benchmarks under all conditions. Our results reinforce the importance of understanding and choosing appropriate types of human-swarm interaction when designing swarm systems, in addition to choosing appropriate swarm behaviors.
Article
Full-text available
Advances in miniaturized computer technology have made it possible for a single Unmanned Aerial Vehicle (UAV) to complete its mission autonomously. This has also sparked interest in swarms of UAVs that cooperate as a team on a single mission. The level of automation involved in the control of UAV swarms will also change the role of the human operator: instead of manually controlling the movements of individual UAVs, the system operator will need to perform higher-level mission management tasks. However, most ground control stations are still tailored to the control of single UAVs by portraying raw flight status data on cockpit-like instruments. In this paper, the ecological interface design paradigm is used to enhance the human-machine interface of a ground control station to support mission management for UAV swarms. As a case study, a generic ground-surveillance mission with four UAVs is envisioned. A preliminary evaluation study with 10 participants showed that the enhanced interface successfully enables operators to control a swarm of four UAVs and to resolve failures during mission execution. The results of the evaluation study showed that the interface enhancements promoted creative problem-solving activities in scenarios that could not have been solved by following a fixed procedure. However, the results also showed that the current interface still required control actions to be performed per single UAV, making it labor intensive to change mission parameters for swarms consisting of more than four UAVs.
Conference Paper
Full-text available
In this paper we propose a bio-inspired model for a decentralized swarm of robots, similar to the model proposed by Couzin [5], that allows for dynamic task assignment and is robust to limited communication from a human. We provide evidence that the model has two fundamental attractors: a torus attractor and a flock attractor. Through simulation and mathematical analysis we investigate the stability of these attractors and show that a control input can be used to force the system to change from one attractor to the other. Finally, we generalize another of Couzin's ideas [4] and present the idea of a stakeholder agent. We show how a human operator can use stakeholders to responsively influence group behavior while maintaining group structure.
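A minimal sketch of the per-agent heading computation in a Couzin-style zonal model is given below; the zone radii and the two-dimensional setting are placeholders, and the stakeholder-agent extension is not shown. Shifting the orientation-zone width is what typically moves the group between the torus (milling) and flock (parallel motion) attractors.

```python
import numpy as np

def couzin_desired_direction(i, pos, vel, r_rep=1.0, r_ori=6.0, r_att=14.0):
    """Desired unit heading for agent i in a Couzin-style zonal model.

    pos, vel : (n, 2) arrays of agent positions and (non-zero) velocities.
    Agents inside the repulsion radius are avoided; otherwise the agent
    aligns with neighbours in the orientation zone and is attracted to
    neighbours in the attraction zone.
    """
    offsets = pos - pos[i]
    dists = np.linalg.norm(offsets, axis=1)
    dists[i] = np.inf                                  # ignore self

    rep = dists < r_rep
    if rep.any():                                      # repulsion overrides everything
        d = -(offsets[rep] / dists[rep][:, None]).sum(axis=0)
    else:
        ori = dists < r_ori
        att = (dists >= r_ori) & (dists < r_att)
        d = np.zeros(2)
        if ori.any():                                  # align with neighbours' headings
            d += (vel[ori] / np.linalg.norm(vel[ori], axis=1, keepdims=True)).sum(axis=0)
        if att.any():                                  # move toward distant neighbours
            d += (offsets[att] / dists[att][:, None]).sum(axis=0)
        if not (ori.any() or att.any()):
            d = vel[i]                                 # no neighbours: keep current heading
    n = np.linalg.norm(d)
    return d / n if n > 0 else vel[i] / np.linalg.norm(vel[i])
```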
Conference Paper
Full-text available
This study shows that appropriate human interaction can help a swarm of robots achieve goals more efficiently. A set of desirable features for human-swarm interaction is identified based on the principles of swarm robotics. A human-swarm interaction architecture is then proposed that has all of the desirable features. A swarm simulation environment is created that allows simulating swarm behavior in an indoor environment. The swarm behavior and the results of user interaction are studied by considering a radiation source search and localization application. The particle swarm optimization algorithm is slightly modified to enable the swarm to autonomously explore the indoor environment for radiation source search and localization. The emergence of intelligence is observed, enabling the swarm to locate the radiation source completely on its own. The proposed human-swarm interaction is then integrated into the simulation environment and user evaluation experiments are conducted. Participants are introduced to the interaction tool and asked to deploy the swarm to complete the missions. The performance comparison of the user-guided swarm to that of the autonomous swarm shows that the interaction interface is fairly easy to learn and that the user-guided swarm is more efficient in achieving the goals. The results clearly indicate that the proposed interaction helped the swarm achieve emergence.
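Below is a generic particle swarm optimization loop for source localization, where the fitness is simply the sensed intensity at a robot's position. This is the textbook algorithm, not the paper's modified variant or its human-interaction layer; the `measure` function and parameters are illustrative.

```python
import numpy as np

def pso_source_search(measure, bounds, n_robots=20, n_steps=200,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO loop: the swarm climbs toward the strongest sensed intensity.

    measure(x) : returns the sensed radiation intensity at 2-D position x
                 (higher means closer to the source), so it is maximised.
    bounds     : ((xmin, ymin), (xmax, ymax)) limits of the indoor area.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_robots, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([measure(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()

    for _ in range(n_steps):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                    # stay inside the search area
        val = np.array([measure(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest                                       # best estimate of the source location
```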
Conference Paper
Full-text available
In this paper we present a consistent approach that uses Hierarchical Task Analysis together with model checking to identify pilot errors during interaction with cockpit automation systems in aircraft. Task analysis is used to model flight procedures, which describe how to operate a specific system in a particular situation. Afterwards, model checking is used to identify deviations from these procedures in empirical simulator data. We envision applying this method to automatically detect pilot errors during flight tests or pilot training.
Article
Full-text available
The main objective of the Synthetic Teammate project is to develop language- and task-enabled synthetic agents capable of being integrated into team training simulations. To achieve this goal without detriment to team training, the synthetic agents must be capable of closely matching human behavior across several cognitive capacities, such as situation assessment, task behavior, and language comprehension and generation. The initial application of the Synthetic Teammate research is the creation of an agent capable of performing the functions of a pilot for an Unmanned Aerial Vehicle (UAV) simulation within a synthetic task environment (STE), as part of a three-person team.
Article
Full-text available
For future systems that require one or a small team of operators to supervise a network of automated agents, automated planners are critical since they are faster than humans for path planning and resource allocation in multivariate, dynamic, time-pressured environments. However, such planners can be brittle and unable to respond to emergent events. Human operators can aid such systems by bringing their knowledge-based reasoning and experience to bear. Given a decentralized task planner and a goal-based operator interface for a network of unmanned vehicles in a search, track, and neutralize mission, we demonstrate with a human-on-the-loop experiment that humans guiding these decentralized planners improved system performance by up to 50%. However, those tasks that required precise and rapid calculations were not significantly improved with human aid. Thus, there is a shared space in such complex missions for human–automation collaboration.
Article
Full-text available
As the use of unmanned aerial vehicles expands to near-earth applications and force-multiplying scenarios, current methods of operating UAVs and evaluating pilot performance need to expand as well. Many human factors studies on UAV operations rely on self-report surveys to assess the situational awareness and cognitive workload of an operator during a particular task, which can make objective evaluations difficult. Functional Near-Infrared Spectroscopy (fNIR) is an emerging optical brain imaging technology that monitors brain activity in response to sensory, motor, or cognitive activation. fNIR systems developed during the last decade allow for a rapid, non-invasive method of measuring the brain activity of a subject while conducting tasks in realistic environments. This paper investigates the deployment of fNIR for monitoring UAV operators' cognitive workload and situational awareness during simulated missions. The experimental setup and procedures are presented along with some early results supporting the use of fNIR for enhancing UAV operator training, evaluation, and interface development.
Article
Full-text available
This study examined the impact of increasing automation replanning rates on operator performance and workload when supervising a decentralized network of heterogeneous unmanned vehicles. Futuristic unmanned vehicle systems will invert the operator-to-vehicle ratio so that one operator can control multiple dissimilar vehicles connected through a decentralized network. Significant human-automation collaboration will be needed because of automation brittleness, but such collaboration could cause high workload. Three increasing levels of replanning were tested on an existing multiple unmanned vehicle simulation environment that leverages decentralized algorithms for vehicle routing and task allocation in conjunction with human supervision. Rapid replanning can cause high operator workload, ultimately resulting in poorer overall system performance. Poor performance was associated with a lack of operator consensus on when to accept the automation's suggested prompts for new plan consideration, as well as negative attitudes toward unmanned aerial vehicles in general. Participants with video game experience tended to collaborate more with the automation, which resulted in better performance. In decentralized unmanned vehicle networks, operators who ignore the automation's requests for new plan consideration and impose rapid replans both increase their own workload and reduce the ability of the vehicle network to operate at its maximum capacity. These findings have implications for personnel selection and training for futuristic systems involving human collaboration with decentralized algorithms embedded in networks of autonomous systems.
Article
Full-text available
Technical developments in computer hardware and software now make it possible to introduce automation into virtually all aspects of human-machine systems. Given these technical capabilities, which system functions should be automated and to what extent? We outline a model for types and levels of automation that provides a framework and an objective basis for making such choices. Appropriate selection is important because automation does not merely supplant but changes human activity and can impose new coordination demands on the human operator. We propose that automation can be applied to four broad classes of functions: 1) information acquisition; 2) information analysis; 3) decision and action selection; and 4) action implementation. Within each of these types, automation can be applied across a continuum of levels from low to high, i.e., from fully manual to fully automatic. A particular system can involve automation of all four types at different levels. The human performance consequences of particular types and levels of automation constitute primary evaluative criteria for automation design using our model. Secondary evaluative criteria include automation reliability and the costs of decision/action consequences, among others. Examples of recommended types and levels of automation are provided to illustrate the application of the model to automation design.
Conference Paper
Full-text available
In a variety of emergency settings robot assistance has been identified as highly valuable, providing remote, and thus safe, access and operation. There are many different forms of human-robot interaction, allowing a team of humans and robots to take advantage of the skills of each team member. A relatively new area of research considers interactions between humans and a team of robots performing as a swarm. This work is concerned with the interactive use of autonomous robots in fire emergency settings. In particular, we consider a swarm of robots that are capable of supporting and enhancing fire fighting operations co-operatively, and we investigate how firefighters in the field work with such a swarm. This paper outlines some of the key characteristics of this emergency setting. It discusses possible forms of interaction with swarm robotics being examined in the GUARDIANS project. The paper addresses the use of assistive swarm robotics to support firefighters with navigation and search operations. It reports on existing firefighter operations and how human-swarm interactions are to be used during such operations. The design approaches for human-swarm interaction are described and preliminary work in the area is outlined. The paper ends by linking current expertise with common features of emergency-related interaction design.
Article
Full-text available
In the future vision of allowing a single operator to remotely control multiple unmanned vehicles, it is not well understood what cognitive constraints limit the number of vehicles and related tasks that a single operator can manage. This paper illustrates that, when predicting the number of unmanned aerial vehicles (UAVs) that a single operator can control, it is important to model the sources of wait times (WTs) caused by human-vehicle interaction, particularly since these times could potentially lead to a system failure. Specifically, these sources of vehicle WTs include cognitive reorientation and interaction WT (WTI), queues for multiple-vehicle interactions, and loss of situation awareness (SA) WTs. When WTs were included, predictions using a multiple homogeneous and independent UAV simulation dropped by up to 67%, with a loss of SA as the primary source of WT delays. Moreover, this paper demonstrated that even in a highly automated management-by-exception system, which should alleviate queuing and WTIs, operator capacity is still affected by the SA WT, causing a 36% decrease over the capacity model with no WT included.
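For context, the widely cited fan-out approximation and an illustrative wait-time-adjusted variant in the same spirit (not necessarily the exact model used in this paper) can be written as:

```latex
\mathrm{FO} \approx \frac{NT}{IT} + 1
\qquad\longrightarrow\qquad
\mathrm{FO}_{WT} \approx \frac{NT}{IT + WT_I + WT_Q + WT_{SA}} + 1
```

where NT is neglect time, IT is interaction time, and WT_I, WT_Q, and WT_SA are the interaction, queuing, and situation-awareness wait times; because the wait times enter the denominator, even moderate SA-related delays sharply reduce the predicted number of controllable vehicles.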
Article
In all situations in which a wide area has to be monitored, a practice emerging in recent years consists of using Unmanned Aerial Vehicles (UAVs), and in particular multirotors. Even though many steps forward have been taken towards the fully autonomous control of UAVs, a human pilot is usually in charge of controlling the robots. However, teleoperating UAVs can become a hard task whenever it is necessary to deploy a swarm of robots instead of a single unit in order to increase the area under observation. In this case, the organization of robots in a structured formation may reduce the operator's effort to control the swarm. When controlling a team of robots, the type of visual feedback is crucial. It is known that, while overall awareness and pattern recognition are optimized by exocentric views, i.e., with cameras above the swarm, the immediate environment is often better viewed egocentrically, i.e., with cameras on board the robots. In this article we present the implementation of a human-robot interface for the control of a swarm of UAVs, with a focus on the analysis of the effects of different visual feedbacks on the performance of human operators.
Article
This paper presents a proposed framework of research experiments to explore human-in-the-loop supervisory control of unmanned aerial vehicle swarms. Current research tends to focus either on swarm intelligence without human control or on human-controlled small-pack swarms. The proposed research will examine potential hybrid methods of controlling swarms such that a human operator can control both large swarms and small packs.
Decision aiding systems are becoming an important part of command and control. Selecting the best type of decision aiding information remains an important design decision. The research reported in this paper assesses whether a decision aid in an aircraft identification task should provide a recommendation for action or status information about the identity of the aircraft. Thirty-two subjects were equally divided into four groups: a control group where no decision aiding information was provided; a group who received only status information; a third group who received only recommendation information; and a fourth group who received both status and recommendation information. Results indicated that, in general, providing decision aiding information reduced the time required to identify the aircraft. Differences among the three types of decision aiding information occurred under those conditions when the decision aid was incorrect. When the decision aid provided inaccurate information, the group receiving only status information was least affected by the decision aid and was best able to correctly identify the aircraft. Recommendations for selecting the type of decision aiding information are discussed.
Conference Paper
To address the lack of motion feedback to a UAV pilot, a system was developed that integrates a motion simulator into UAV operations. The system is designed such that during flight, the angular rate of a UAV is captured by an onboard inertial measurement unit (IMU) and is relayed to a pilot controlling the vehicle from inside the motion simulator. Efforts to further increase pilot SA led to the development of a mixed reality chase view piloting interface. Chase view is similar to a view of being towed behind the aircraft. It combines real world onboard camera images with a virtual representation of the vehicle and the surrounding operating environment.
Article
Unmanned aerial vehicles (UAVs) have become increasingly valuable military assets, and reliance upon them will continue to increase. Despite lacking an onboard pilot, UAVs require crews of up to three human operators. These crews are already experiencing high workload levels, which is a problem that will be likely compounded as the military envisions a future where a single operator controls multiple UAVs. To accomplish this goal, effective scheduling of UAVs and human operators is crucial to future mission success. We present a mathematical model for simultaneously routing UAVs and scheduling human operators, subject to operator workload considerations. This model is thought to be the first of its kind. Numerical examples demonstrate the dangers of ignoring the human element in UAV routing and scheduling.
Article
This paper surveys the human-machine interaction technologies supporting the Mission Specialist role in unmanned aerial systems (UASs). The Mission Specialist role is one of three formal human team member roles extracted from the UAS-related literature (the others are Flight Director and Pilot), but unlike the Pilot role, its interface needs have not been established. The interfaces used by 17 micro, small, medium altitude long endurance (MALE) and high altitude long endurance (HALE) platforms are examined to determine (1) what types of user interface technologies are present and/or available; (2) how the Mission Specialist currently interacts, or could interact, with the user interface technology; and (3) the perceived positive and negative aspects of this user interface technology in the context of the UAS human-robot team roles. Micro and small UAVs pose significant user interface limitations for the Mission Specialist role and may produce unintentional interaction conflicts between the Mission Specialist and the Pilot, potentially resulting in suboptimal performance and loss of robustness. The survey is expected to serve as a reference for the future design and refinement of user interfaces for UAS and a foundation for better understanding human-robot interaction in UAS.
Conference Paper
Maintaining Situation Awareness (SA) during supervision of a swarm of highly autonomous Unmanned Aerial Vehicles (UAVs) is a complex and visually demanding process. One solution for supporting UAV operators is Intelligent SA-Adaptive Interfaces (ISAAI), which dynamically adapt to the individual SA needs of operators. However, the successful deployment of ISAAIs depends on a suitable tool for assessing operator SA during mission execution. In this paper we present the tool SA-Tracer. SA-Tracer implements a formal situation model and analyses the scanning behaviour of UAV swarm operators in order to infer and assess SA on a formal basis. We present results of a first experiment performed to evaluate the suitability of SA-Tracer for the intended context of use and the validity of the SA assessments produced by the tool.
Article
In this paper we investigate principles of swarm control that enable a human operator to exert influence on and control large swarms of robots. We present two principles, coined selection and beacon control, that differ with respect to their temporal and spatial persistence. The former requires active selection of groups of robots while the latter exerts a passive influence on nearby robots. Both principles are implemented in a testbed in which operators exert influence on a robot swarm by switching between a set of behaviors ranging from trivial behaviors up to distributed autonomous algorithms. Performance is tested in a series of complex foraging tasks in environments with different obstacles ranging from open to cluttered and structured. The robotic swarm has only local communication and sensing capabilities with the number of robots ranging from 50 to 200. Experiments with human operators utilizing either selection or beacon control are compared with each other and to a simple autonomous swarm with regard to performance, adaptation to complex environments, and scalability to larger swarms. Our results show superior performance of autonomous swarms in open environments, of selection control in complex environments, and indicate a potential for scaling beacon control to larger swarms.
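The difference between the two principles can be illustrated with a toy dispatch function: selection control switches the behaviour of an explicitly chosen set of robots, while beacon control switches whichever robots currently lie within the beacon's radius. The command format and behaviour names are illustrative, not the paper's actual API.

```python
import numpy as np

def apply_operator_influence(pos, behaviours, command):
    """Toy illustration of selection versus beacon control.

    pos        : (n, 2) array of robot positions.
    behaviours : list of n behaviour labels, one per robot.
    command    : ("selection", ids, new_behaviour)  -> only the actively
                 selected robots switch (temporally persistent influence), or
                 ("beacon", xy, radius, new_behaviour) -> every robot
                 currently within `radius` of the beacon switches
                 (passive, spatially local influence).
    """
    out = list(behaviours)
    if command[0] == "selection":
        _, ids, new_b = command
        for i in ids:
            out[i] = new_b
    elif command[0] == "beacon":
        _, xy, radius, new_b = command
        d = np.linalg.norm(pos - np.asarray(xy, float), axis=1)
        for i in np.where(d <= radius)[0]:
            out[i] = new_b
    return out
```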
Article
In theory, autonomous robotic swarms can be used for critical Army tasks, including accompanying vehicle convoys to provide security and enhance situational awareness. However, the Soldier providing swarm supervisory control must be able to correct swarm actions, especially in disrupted or degraded conditions. Dynamic map displays are visual interfaces that can be useful for swarm supervisory control tasks, because they can show the spatial positions of objects of interest (e.g., people, robots, swarm members, and vehicles) at different locations (e.g., on roads and intersections), while allowing user commands as well as world changes, often in real time. In this study, multimodal speech and touch controls were designed for a U.S. Army Research Laboratory dynamic map display to allow users to provide supervisory control of a simulated robotic swarm. This experiment explored the use of sequential multimodal touch and speech commands for placement of swarm-related map objects at different map locations. The criterion variable was temporal binding, the time between the onset of each command in the sequence, relative to the system's ability to fuse the two sequential commands into a unitary response. User preference of modality for the first command was also measured. These concepts were tested in a laboratory study using 12 male Marine volunteers with a mean age of 19 years. Results indicated significant differences in temporal binding for different map objects and map locations. Additionally, nine out of 12 Marines used speech commands first approximately 75% or more of the time, while the remaining three Marines used touch commands first approximately 75% or more of the time. Temporal binding was significantly shorter for touch-first than for speech-first commands. Suggestions for future research and future applications to robotic command and control systems are described.
Article
APL has been engaged in a number of independent research and development projects over the past 5 years intended to demonstrate the cooperative behaviors of swarms of small, autonomous unmanned aerial vehicles (UAVs). Swarm members cooperate to accomplish complex mission goals with no human in the loop. These projects represent a variety of approaches to UAV swarming, including teaming, consensus variables, and stigmergic potential fields. A series of experiments was conducted from 2001 through 2005 to demonstrate these concepts, and research in this area is ongoing. As a result of these efforts, APL has developed autonomy frameworks, hardware architectures, and communications concepts that are applicable across a broad range of small, autonomous aerial vehicles.
Article
Despite advances in autonomy, there will always be a need for human involvement in vehicle teleoperation. In particular, tasks such as exploration, reconnaissance and surveillance will continue to require human supervision, if not guidance and direct control. Thus, it is critical that the operator interface be as efficient and as capable as possible. In this paper, we provide an overview of vehicle teleoperation and present a summary of interfaces currently in use.
Conference Paper
Robot swarms are capable of performing tasks with robustness and flexibility using only local interactions between the agents. Such a system can lead to emergent behavior that is often desirable, but difficult to control and manipulate post-design. These properties make the real-time control of swarms by a human operator challenging, a problem that has not been adequately addressed in the literature. In this paper we present preliminary work on two possible forms of control: top-down control of global swarm characteristics and bottom-up control by influencing a subset of the swarm members. We present learning methods to address each of these. The first method uses instance-based learning to produce a generalized model from a sampling of the parameter space and global characteristics for specific situations. The second method uses evolutionary learning to learn the placement and parameterization of virtual agents that can influence the robots in the swarm. Finally we show how these methods generalize and can be used by a human operator to dynamically control a swarm in real time.
Conference Paper
In this paper, we describe the development processes adopted for effective human-centred design in the context of developing a human-robot interface. The human-robot interaction context is that of working with a swarm of autonomous robots being developed to assist the process of search and rescue as carried out by firefighters. The paper illustrates an approach to early design evaluation motivated by user-centred design objectives. The conclusion from the study illustrates the value of early experiential feedback. In particular we show that the complex nature of professional practice in high-risk settings has significant influence upon fitness for purpose.
Article
Micro Unmanned Aerial Vehicles (UAVs) such as quadrocopters have gained great popularity over the last years, both as a research platform and in various application fields. However, some complex application scenarios call for the formation of swarms consisting of multiple drones. In this paper a platform for the creation of such swarms is presented. It is based on commercially available quadrocopters enhanced with on-board processing and communication units enabling full autonomy of individual drones. Furthermore, a generic ground control station is presented that serves as integration platform. It allows the seamless coordination of different kinds of sensor platforms.
Conference Paper
In this paper, we propose to examine the practices of leadership defined in human relationships and model their use in maximizing performance for human-robot interaction scenarios. This process involves first defining the human-robot space of interaction and mapping the situational context in which human leadership styles are most fitting. We then determine which behavior, for both the human and robot, is most appropriate in order to understand the proper roles for human-robot integration. From there, we model the necessary robot behavior for increasing efficiency in human-robot interaction schemes. We conclude by discussing experimental results derived from allocating roles in representative human-robot navigation scenarios.
Arquilla, J., Ronfeldt, D., 2000. Swarming & the Future of Conflict. RAND, Santa Monica, CA.
Haas, E., et al., 2011. Multimodal controls for soldier/swarm interaction. In: 20th IEEE International Symposium on Robot and Human Interactive Communication, pp. 223–228.
Kolling, A., Nunnally, S., Lewis, M., 2012. Towards human control of robot swarms. In: Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM.
National Transportation Systems Center, 2013. Unmanned Aircraft System (UAS) Service Demand 2015–2035, pp. 1–151.