Project

Long-Term Human-Robot Teaming for Disaster Response (TRADR)

Updates: 0
Recommendations: 0
Followers: 25
Reads: 232

Project log

Vladimír Kubelka
added a research item
Mobile tracked robots are suitable for traversing rough terrain. However, standard exteroceptive localization methods (visual or laser SLAM) may be unreliable due to smoke, dust, fog or insufficient lighting in the harsh conditions of urban search and rescue missions. During extensive end-user evaluations under the real-world conditions of such scenarios, we have observed that the accuracy of dead-reckoning localization suffers while traversing vertical obstacles. We propose to combine explicit modeling of the robot kinematics with a data-driven approach based on machine learning. The proposed method is experimentally verified indoors and outdoors while traversing various obstacles. Indoors, a reference position was also recorded to assess the accuracy of our solution. The experimental dataset is released to the public to help the robotics community.
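As a rough illustration of the combination described above, the sketch below pairs a simple skid-steer kinematic model with a learned correction of the travelled distance; the feature set, regressor choice (scikit-learn's GradientBoostingRegressor) and track width are illustrative assumptions, not the paper's actual design.
```python
# Sketch: explicit kinematic dead reckoning plus a learned distance correction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

TRACK_WIDTH = 0.6  # metres; robot-specific assumption

def kinematic_increment(v_left, v_right, dt):
    """Planar skid-steer dead-reckoning increment (distance, heading change)."""
    v = 0.5 * (v_left + v_right)
    omega = (v_right - v_left) / TRACK_WIDTH
    return v * dt, omega * dt

# Regressor mapping proprioceptive features (e.g. body pitch, flipper angles,
# track currents) to the error of the kinematic distance estimate; it must be
# fitted beforehand on logged runs that include a position reference.
correction_model = GradientBoostingRegressor()

def corrected_increment(v_left, v_right, dt, features):
    dx, dtheta = kinematic_increment(v_left, v_right, dt)
    dx += correction_model.predict(np.atleast_2d(features))[0]
    return dx, dtheta
```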
Vladimír Kubelka
added 3 research items
If we aim for autonomous navigation of a mobile robot, proper state estimation of its position and orientation is crucial and essential. We have already designed a multi-modal data fusion algorithm that combines visual, laser-based, inertial, and odometric modalities in order to achieve a robust solution to the general localization problem in challenging Urban Search and Rescue environments. Since the individual sensory modalities are prone to errors of different nature, and their reliability varies vastly as the environment changes dynamically, we investigated further means of improving the localization. A common practice in EKF-based solutions such as ours is a standard statistical test of the observations—or of the corresponding filter residuals—performed to reject anomalous data that deteriorate the filter performance. In this paper we show how important it is to properly treat anomalous visual and laser residuals, especially in multi-modal data fusion systems where the frequency of incoming observations varies significantly across the modalities. In practice, the most complicated part is to correctly identify the actual anomalies that are to be rejected, and this is where our major contribution lies. We go beyond the standard statistical tests by exploring different state-of-the-art machine learning approaches and exploiting our rich dataset, which we share with the robotics community. We demonstrate the implications of our research both indoors (with a precise reference from a Vicon system) and in a challenging outdoor environment. Finally, we show that monitoring the health of the observations in Kalman filtering is often overlooked, but it definitely should not be.
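For context, the sketch below shows the standard statistical test referred to above, assuming a chi-square gate on the Mahalanobis distance of the innovation; the learned approaches explored in the paper would replace or augment this step, and the confidence level is an assumed parameter.
```python
# Sketch: standard chi-square gating of an EKF observation.
import numpy as np
from scipy.stats import chi2

def gate_observation(z, z_pred, H, P, R, alpha=0.99):
    """Return (accept, innovation, innovation covariance) for one EKF update."""
    y = z - z_pred                              # innovation (residual)
    S = H @ P @ H.T + R                         # innovation covariance
    d2 = float(y.T @ np.linalg.solve(S, y))     # squared Mahalanobis distance
    threshold = chi2.ppf(alpha, df=z.shape[0])  # gate at the chosen confidence
    return d2 <= threshold, y, S
```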
This paper presents an evaluation of four different state estimation architectures exploiting the extended Kalman filter (EKF) for 6-DOF dead reckoning of a mobile robot. The EKF is a well proven and commonly used technique for fusion of inertial data and the robot's odometry. However, different approaches to designing the architecture of the state estimator lead to different performance and computational demands. While seeking the best possible solution for the mobile robot, both the nonlinear model and the error model are addressed, each with and without a complementary filter for attitude estimation. The performance is determined experimentally by the precision of both indoor and outdoor navigation, including complex-structured environments such as stairs and rough terrain. According to the evaluation, the nonlinear model combined with the complementary filter is selected as the best candidate (reaching 0.8 m RMSE and an average return position error (RPE) of 4% of the distance driven) and implemented for real-time onboard processing during a rescue mission deployment.
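As an illustration of the attitude part, a minimal complementary filter of the kind compared in the paper might look like the sketch below; the blending gain and axis conventions are assumptions, not values from the paper.
```python
# Sketch: complementary filter blending gyro integration with the gravity
# direction measured by the accelerometer.
import numpy as np

ALPHA = 0.98  # trust placed in the integrated gyro estimate (assumption)

def complementary_update(roll, pitch, gyro, accel, dt):
    """One roll/pitch update from gyro rates [rad/s] and accelerations [m/s^2]."""
    # Propagate the previous estimate with the gyroscope rates.
    roll_g = roll + gyro[0] * dt
    pitch_g = pitch + gyro[1] * dt
    # Drift-free (but noisy) estimate from the gravity vector.
    roll_a = np.arctan2(accel[1], accel[2])
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Blend: gyro for dynamics, accelerometer to bound the drift.
    return (ALPHA * roll_g + (1 - ALPHA) * roll_a,
            ALPHA * pitch_g + (1 - ALPHA) * pitch_a)
```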
Vladimír Kubelka
added a research item
Precise motion estimation is vital for any mobile robot to correctly control its actuators and thus navigate through terrain. Basic approaches to motion estimation (e.g. wheel odometry) that can be considered reliable in laboratory conditions tend to fail in real-world search and rescue scenarios because of the uneven and slippery surfaces the robot has to cross. In this article, we pick some of the current localization and motion estimation techniques and discuss their prerequisites in contrast with the experience gathered during end-user evaluations and a real-world deployment of our robotic platform in a town struck by an earthquake (Mirandola, Italy). The robotic platform is equipped with a set of sensors allowing us to combine various approaches to robot localization and motion estimation in order to increase the redundancy in the system and thus its overall reliability. We present our approach to fusing the selected sensor modalities, developed with emphasis on possible sensor failures, which has been subsequently tested experimentally.
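The sketch below illustrates, under assumed thresholds, the redundancy idea mentioned above: each modality is fused only while simple health checks (freshness and plausibility) pass, so a failing sensor degrades rather than breaks the motion estimate. It is not the paper's implementation.
```python
# Sketch: per-modality health checks that gate what enters the fusion.
class ModalityMonitor:
    def __init__(self, name, max_age_s, max_speed_mps):
        self.name = name
        self.max_age_s = max_age_s          # staleness limit (assumption)
        self.max_speed_mps = max_speed_mps  # plausibility limit (assumption)
        self.last_stamp = None

    def healthy(self, stamp, implied_speed):
        """True if the measurement is recent enough and physically plausible."""
        fresh = self.last_stamp is None or (stamp - self.last_stamp) < self.max_age_s
        plausible = abs(implied_speed) < self.max_speed_mps
        self.last_stamp = stamp
        return fresh and plausible

monitors = {
    "odometry": ModalityMonitor("odometry", max_age_s=0.2, max_speed_mps=2.0),
    "visual":   ModalityMonitor("visual", max_age_s=0.5, max_speed_mps=2.0),
}
```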
Vladimír Kubelka
added 2 research items
Localization of mobile robots remains an important topic, especially in dynamically changing, complex environments such as those in Urban Search & Rescue (USAR). In this paper we aim to improve the reliability and precision of localization of our multimodal data fusion algorithm. Multimodal data fusion requires resolving several issues, such as the significantly different sampling frequencies of the individual modalities. We compare our proposed solution with the well-proven and popular Rauch-Tung-Striebel smoother for the Extended Kalman filter. Furthermore, we improve the precision of our data fusion by incorporating scale estimation for the visual modality.
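For reference, the Rauch-Tung-Striebel backward pass used here as the baseline can be sketched as below, assuming the filtered and predicted states/covariances and the transition Jacobians have been stored during a forward EKF run.
```python
# Sketch: RTS smoother backward pass over a stored forward-filter trajectory.
import numpy as np

def rts_smoother(x_filt, P_filt, x_pred, P_pred, F):
    """x_filt[k], P_filt[k]: posterior at step k; x_pred[k+1], P_pred[k+1]:
    prior for step k+1; F[k]: state-transition Jacobian used at step k."""
    n = len(x_filt)
    x_smooth, P_smooth = [None] * n, [None] * n
    x_smooth[-1], P_smooth[-1] = x_filt[-1], P_filt[-1]
    for k in range(n - 2, -1, -1):
        C = P_filt[k] @ F[k].T @ np.linalg.inv(P_pred[k + 1])   # smoother gain
        x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k + 1])
        P_smooth[k] = P_filt[k] + C @ (P_smooth[k + 1] - P_pred[k + 1]) @ C.T
    return x_smooth, P_smooth
```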
Urban search and rescue (USAR) missions for mobile robots require reliable state estimation systems resilient to the conditions of a dynamically changing environment. We design and evaluate a data fusion system for localization of a mobile skid-steer robot intended for USAR missions. We exploit a rich sensor suite including both proprioceptive (inertial measurement unit and track odometry) and exteroceptive sensors (omnidirectional camera and rotating laser rangefinder). To cope with the specificities of each sensing modality (such as significantly differing sampling frequencies), we introduce a novel fusion scheme based on an extended Kalman filter for six-degree-of-freedom orientation and position estimation. We demonstrate the performance on field tests of more than 4.4 km driven under standard USAR conditions. Part of our datasets includes ground truth positioning, indoors with a Vicon motion capture system and outdoors with a Leica theodolite tracker. The overall median accuracy of localization—achieved by combining all four modalities—was 1.2% and 1.4% of the total distance traveled for indoor and outdoor environments, respectively. To identify the true limits of the proposed data fusion, we propose and employ a novel experimental evaluation procedure based on failure case scenarios. In this way, we address common issues such as slippage, reduced camera field of view, and limited laser rangefinder range, together with moving obstacles spoiling the metric map. We believe such a characterization of the failure cases is a first step toward identifying the behavior of state estimation under such conditions. We release all our datasets to the robotics community for possible benchmarking.
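A figure such as the reported 1.2%/1.4% median accuracy can be computed roughly as in the sketch below; the time alignment and the minimum-distance cutoff are simplifying assumptions.
```python
# Sketch: median localization error expressed as a percentage of distance traveled.
import numpy as np

def median_relative_error(estimate, ground_truth):
    """estimate, ground_truth: (N, 3) arrays of time-aligned positions."""
    errors = np.linalg.norm(estimate - ground_truth, axis=1)
    distance = np.cumsum(
        np.r_[0.0, np.linalg.norm(np.diff(ground_truth, axis=0), axis=1)])
    valid = distance > 1.0          # skip the start to avoid near-zero divisors
    return np.median(errors[valid] / distance[valid]) * 100.0  # percent
```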
Koen V. Hindriks
added 3 research items
Exploration games are games where agents (or robots) need to search for resources and retrieve them. In principle, performance in such games can be improved either by adding more agents or by exchanging more messages. However, neither measure is free of cost, and it is important to be able to assess the trade-off between these costs and the potential performance gain. The focus of this paper is on improving our understanding of the performance gain that can be achieved either by adding more agents or by increasing the communication load. The performance gain is moreover studied by taking several other important factors into account, such as environment topology and size, resource redundancy, and task size. Our results suggest that there does not exist a decision function that dominates all other decision functions, i.e. one that is optimal under all conditions. Instead we find that (i) for different team sizes and communication strategies, different agent decision functions perform optimally, and that (ii) the optimality of decision functions also depends on environment and task parameters. We also find that it pays off to optimize for environment topologies.
Koen V. Hindriks
added a research item
Task allocation and management is crucial for human-robot collaboration in Urban Search And Rescue response efforts. The job of a mission team leader in managing tasks becomes more complicated when multiple and different types of robots are added to the team. Therefore, to effectively accomplish mission objectives, shared situation awareness and task management support are essential. In this paper, we design and evaluate an ontology which provides a common vocabulary between team members, both humans and robots. The ontology is used to facilitate data sharing and mission execution, and to provide the required automated task management support. Relevant domain entities, tasks, and their relationships are modeled in an ontology based on vocabulary commonly used by firemen, and a user interface is designed to provide task tracking and monitoring. The ontology design and interface are deployed in a search and rescue system, and their use is evaluated by firemen in a task allocation and management scenario. The results provide support that the proposed ontology (1) facilitates information sharing during missions; (2) assists the team leader in task allocation and management; and (3) provides automated support for managing an Urban Search and Rescue mission.
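Purely as an illustration (not the ontology from the paper), the snippet below shows how mission tasks, actors, and assignments could be modeled and shared as RDF triples with rdflib; all names and the namespace are placeholders.
```python
# Toy sketch: tasks, actors, and an assignment shared as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

TRADR = Namespace("http://example.org/tradr#")   # placeholder namespace
g = Graph()
g.bind("tradr", TRADR)

# Minimal vocabulary: tasks, actors, and who a task is assigned to.
for cls in (TRADR.Task, TRADR.Actor):
    g.add((cls, RDF.type, RDFS.Class))
g.add((TRADR.Robot, RDFS.subClassOf, TRADR.Actor))
g.add((TRADR.assignedTo, RDF.type, RDF.Property))

# One task instance, assigned to a UGV, with a status the team leader can track.
g.add((TRADR.exploreBuildingA, RDF.type, TRADR.Task))
g.add((TRADR.ugv1, RDF.type, TRADR.Robot))
g.add((TRADR.exploreBuildingA, TRADR.assignedTo, TRADR.ugv1))
g.add((TRADR.exploreBuildingA, TRADR.status, Literal("in-progress")))

print(g.serialize(format="turtle"))
```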
Mark Neerincx
added a research item
As robots that share working and living environments with humans proliferate, human-robot teamwork (HRT) is becoming more relevant every day. By necessity, these HRT dynamics develop over time, as teaming can hardly happen only in the moment. What theories, algorithms, tools, computational models and design methodologies enable effective and safe longitudinal human-robot teaming? To address this question, we propose a half-day workshop on longitudinal human-robot teaming. This workshop seeks to bring together researchers from a wide array of disciplines with the aim of enabling humans and robots to work together better in real-life settings and over the long term. Sessions will consist of a mix of plenary talks by invited speakers and contributed papers/posters, and will encourage discussion and exchange of ideas amongst participants through breakout groups and a panel discussion.
Mark Neerincx
added a research item
Artificial agents, such as robots, are increasingly deployed for teamwork in dynamic, high-demand environments. This paper presents a framework which applies context information to establish task (re)allocations that improve a human-robot team's performance. Based on the framework, a model for adaptive automation was designed that takes into account the cognitive task load (CTL) of a human team member and the coordination costs of switching to a new task allocation. Based on these two context factors, it tries to optimize the robot's level of autonomy for each task. The model was instantiated for a single human agent cooperating with a single robot in the urban search and rescue domain. A first experiment provided encouraging results: the cognitive task load of participants mostly responded to the model as intended. Recommendations for improving the model are provided, such as adding more context information.
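A minimal sketch of the kind of trade-off the model makes is shown below: the robot's level of autonomy (LOA) for a task is raised when the human's cognitive task load is high, unless the cost of switching the allocation outweighs the expected relief. The thresholds and the three-level LOA scale are assumptions.
```python
# Sketch: adapt the robot's level of autonomy from CTL and coordination cost.
def choose_loa(current_loa, cognitive_task_load, switch_cost,
               ctl_high=0.7, ctl_low=0.3, max_cost=0.5):
    """All inputs normalised to [0, 1]; LOA in {1 (manual) .. 3 (autonomous)}."""
    if cognitive_task_load > ctl_high and switch_cost < max_cost:
        return min(current_loa + 1, 3)   # offload work to the robot
    if cognitive_task_load < ctl_low and switch_cost < max_cost:
        return max(current_loa - 1, 1)   # hand the task back to the human
    return current_loa                   # re-allocating is not worth the cost
```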
Mark Neerincx
added a research item
The design of cognitive agents involves a knowledge representation (KR) to formally represent and manipulate information relevant to that agent. In practice, agent programming frameworks are dedicated to a specific KR, limiting the use of other possible ones. In this paper we address the issue of creating a flexible choice for agent programmers regarding the technology they want to use. We propose a generic interface that provides an easy choice of KR for cognitive agents. Our proposal is governed by a number of design principles, an analysis of the functional requirements that cognitive agents pose towards a KR, and the identification of various features provided by KR technologies that the interface should capture. We provide two use cases of the interface by describing its implementation for Prolog and OWL with rules.
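As a sketch of what such a generic interface could look like (the method names and the Python rendering are assumptions; the paper targets agent programming frameworks rather than Python):
```python
# Sketch: a KR-technology-agnostic interface for a cognitive agent's beliefs.
from abc import ABC, abstractmethod
from typing import Iterable

class KnowledgeRepresentation(ABC):
    """Minimal contract a cognitive agent needs from a KR technology."""

    @abstractmethod
    def insert(self, formula: str) -> None:
        """Add a belief/knowledge item expressed in the KR's own language."""

    @abstractmethod
    def delete(self, formula: str) -> None:
        """Retract a previously inserted item."""

    @abstractmethod
    def query(self, query: str) -> Iterable[dict]:
        """Return variable bindings (possibly none) that satisfy the query."""

# Concrete classes, e.g. a Prolog- or OWL-backed store, would implement these
# operations so the agent program never depends on a single KR technology.
```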
Ivana Kruijff-Korbayova
added a research item
We provide key facts about the TRADR project deployment of ground and aerial robots in Amatrice, Italy, after the major earthquake in August 2016. The robots were used to collect data for 3D textured models of the interior and exterior of two badly damaged churches of high national heritage value.
Mark Neerincx
added a research item
We report on the latest large-scale disaster-response exercise conducted by our project, which involves a robotic system with both ground robots (UGVs) and aerial robots (UAVs). In particular, we focus on aspects related to human-robot teaming and the uptake of new technology by end-users.
Mark Neerincx
added a research item
Due to advances in technology, the world around us contains an increasing number of robots, virtual agents, and other intelligent systems. These systems all have a certain degree of autonomy. For the people who interact with an intelligent system it is important to obtain a good understanding of its degree of autonomy: what tasks can the system perform autonomously and to what extent? In this paper we therefore present a study on how a system’s characteristics affect people’s perception of its autonomy. This was investigated by asking fire-fighters to rate the autonomy of a number of search and rescue robots in different shapes and situations. In this paper, we identify the following seven aspects of perceived autonomy: time interval of interaction, obedience, informativeness, task complexity, task implication, physical appearance, and physical distance to human operator. The study showed that increased disobedience, task complexity and physical distance of a robot can increase perceived autonomy.
Mark Neerincx
added a research item
As robots are increasingly used in Search and Rescue (SAR) missions, it becomes highly relevant to study how SAR robots can be developed and deployed in a responsible way. In contrast to some other robot application domains, e.g. military and healthcare, the ethics of robot-assisted SAR are relatively underexamined. This paper aims to fill this gap by assessing and analyzing important values and value tensions of stakeholders of SAR robots. The paper describes the outcomes of several Value Assessment workshops that were conducted with rescue workers in the context of a European research project on robot-assisted SAR (the TRADR project). The workshop outcomes are analyzed, and key ethical concerns and dilemmas are identified and discussed. Several recommendations are provided for future ethics research leading to responsible development and deployment of SAR robots.
Mark Neerincx
added a research item
Concurrent telecontrol of the chassis and camera of an Unmanned Ground Vehicle (UGV) is a demanding task for Urban Search and Rescue (USAR) teams. The standard way of controlling UGVs is called Tank Control (TC), but there is reason to believe that Free Look Control (FLC), a control mode used in games, could reduce this load substantially by decoupling, and providing separate controls for, camera translation and rotation. The general hypothesis is that FLC (1) reduces robot operators' workload and (2) enhances their performance in dynamic and time-critical USAR scenarios. A game-based environment was set up to systematically compare FLC with TC in two typical search and rescue tasks: navigation and exploration. The results show that FLC improves mission performance in both exploration (search) and path-following (navigation) scenarios. In the former, more objects were found, and in the latter, shorter navigation times were achieved. FLC also caused lower workload and stress levels in both scenarios, without inducing a significant difference in the number of collisions. Finally, FLC was preferred by 75% of the subjects for exploration, and 56% for path following.
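The difference between the two control modes can be sketched as below; the stick-to-command mappings are simplified assumptions rather than the ones used in the study.
```python
# Sketch: Tank Control couples the camera to the chassis, Free Look Control
# maps one stick to chassis motion and the other to camera rotation.
def tank_control(stick):
    """Single stick: forward/back drives, left/right turns chassis and camera."""
    return {"chassis_v": stick["y"], "chassis_w": stick["x"],
            "cam_pan_rate": 0.0, "cam_tilt_rate": 0.0}   # camera fixed to hull

def free_look_control(move_stick, look_stick):
    """Left stick translates the chassis, right stick rotates the camera."""
    return {"chassis_v": move_stick["y"], "chassis_w": move_stick["x"],
            "cam_pan_rate": look_stick["x"], "cam_tilt_rate": look_stick["y"]}
```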
Mark Neerincx
added a research item
Integrating cognitive agents and robots into teams that operate in high-demand situations involves mutual and context-dependent behaviors of the human and agent/robot team-members. We propose a cognitive engineering method that includes the development of Interaction Design patterns for such systems as re-usable, theoretically and empirically founded, design solutions. This paper presents an overview of the background, the method and three example patterns.