Conference Paper

MR-Drone: Mixed Reality-Based Drone-Assisted Search and Rescue System


Abstract

Natural disasters are becoming more frequent as the climate crisis intensifies. In such situations, search and rescue missions are risky, time-consuming, and resource-constrained. A hybrid drone-assisted, mixed reality-based system (termed "MR-Drone") is proposed for conducting search and rescue operations quickly and efficiently in disaster situations such as earthquakes, floods, and fire outbreaks. In our proposed system, we use the drone's real-time video stream to generate a 3D map of the target area, and this map can be displayed on a mixed reality device such as the HoloLens for monitoring the rescue mission. Multiple users can monitor the emergency together in a mixed reality environment using MR-Drone, thus reducing rescue time.
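
The pipeline described in the abstract (drone video stream, incremental 3D reconstruction of the target area, shared viewing on HoloLens devices) can be sketched as follows. This is a minimal illustration only: the paper's implementation is not available, so every class and method name here (Frame, Map3D, MRSession) is hypothetical.

```python
# Illustrative sketch of the MR-Drone data flow described in the abstract.
# All names are hypothetical; the paper does not publish its implementation.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Frame:
    """One video frame from the drone, with the drone pose at capture time."""
    image: bytes
    pose: Tuple[float, ...]  # (x, y, z, roll, pitch, yaw)


@dataclass
class Map3D:
    """Incrementally built 3D map of the target area."""
    points: List[Tuple[float, ...]] = field(default_factory=list)

    def update(self, frame: Frame) -> None:
        # Placeholder for SLAM / photogrammetric reconstruction: a real system
        # would refine a point cloud or mesh from each incoming frame.
        self.points.append(frame.pose[:3])


class MRSession:
    """Shares one reconstructed map with multiple mixed reality viewers."""

    def __init__(self) -> None:
        self.map = Map3D()
        self.viewers: List[str] = []  # IDs of connected HoloLens devices

    def on_drone_frame(self, frame: Frame) -> None:
        self.map.update(frame)
        self.broadcast()

    def broadcast(self) -> None:
        # A real system would stream the updated map to every connected
        # device so several rescuers can monitor the scene together.
        for viewer_id in self.viewers:
            pass


if __name__ == "__main__":
    session = MRSession()
    session.viewers.append("hololens-01")
    session.on_drone_frame(Frame(image=b"", pose=(0.0, 0.0, 10.0, 0.0, 0.0, 0.0)))
    print(len(session.map.points), "map points")
```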

References

Conference Paper
Full-text available
Most robots in urban search and rescue (USAR) perform tasks teleoperated by human operators, such as controlling robot movement to avoid obstacles and explore unknown environments. The operator has to know the location of the robot and find the position of the target (victim). This paper presents an augmented reality system using a Kinect sensor on a custom-designed rescue robot. First, Simultaneous Localization and Mapping (SLAM) using RGB-D cameras is run to obtain the position and posture of the robot. Second, a deep learning method is adopted to obtain the location of the target. Finally, we place an AR marker for the target in the global coordinate frame and display it on the operator's screen to indicate the target even when it is out of the camera's field of view. The experimental results show that the proposed system can be applied to help humans interact with robots.
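
The key step described above, anchoring an AR marker at the target's global coordinates and keeping an indicator on the operator's screen even when the target leaves the camera's field of view, amounts to a projection-and-clamp operation. The sketch below assumes a simple pinhole camera model and a SLAM-estimated camera pose; the intrinsics, pose format, and function names are illustrative, not the authors' implementation.

```python
# Hedged sketch: project a target's world position into the operator's view
# and clamp the indicator to the screen border when the target is outside the
# camera's field of view. Camera intrinsics and pose format are assumptions.

import numpy as np


def world_to_camera(p_world, R_wc, t_wc):
    """Transform a 3D point from the world frame to the camera frame.

    R_wc, t_wc: rotation and translation of the camera in the world,
    as estimated by RGB-D SLAM.
    """
    return R_wc.T @ (np.asarray(p_world, dtype=float) - t_wc)


def project_with_clamp(p_cam, fx, fy, cx, cy, width, height):
    """Return (u, v, visible) for the AR marker on the operator's screen."""
    x, y, z = p_cam
    if z <= 1e-6:
        # Target is behind the camera: flip the projection direction so the
        # indicator still points toward it.
        x, y, z = -x, -y, max(abs(z), 1e-6)
    u = fx * x / z + cx
    v = fy * y / z + cy
    visible = 0 <= u < width and 0 <= v < height
    if not visible:
        # Clamp to the screen edge so the operator keeps a direction cue.
        u = min(max(u, 0.0), width - 1.0)
        v = min(max(v, 0.0), height - 1.0)
    return u, v, visible


if __name__ == "__main__":
    R = np.eye(3)                 # camera aligned with the world axes
    t = np.zeros(3)               # camera at the origin
    target = [2.5, 0.3, 4.0]      # victim position in world coordinates (m)
    p_cam = world_to_camera(target, R, t)
    print(project_with_clamp(p_cam, 525.0, 525.0, 320.0, 240.0, 640, 480))
```
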
Conference Paper
Full-text available
The fast-paced development of Unmanned Aerial Vehicles (UAVs) and their use in different domains open a new paradigm for their use in natural disaster management. In UAV-assisted disaster management applications, UAVs not only survey the affected area but also assist in establishing the communication network between disaster survivors, rescue teams, and the nearest available cellular infrastructure. This paper identifies the main disaster management applications of UAV networks and discusses open research issues related to UAV-assisted disaster management.
Conference Paper
Full-text available
Wilderness Search and Rescue can benefit from aerial imagery of the search area. Mini Unmanned Aerial Vehicles can potentially provide such imagery, provided that the autonomy, search algorithms, and operator control unit are designed to support coordinated human-robot search teams. Using results from formal analyses of the WiSAR problem domain, we summarize and discuss information flow requirements for WiSAR with an eye toward the efficient use of mUAVs to support search. We then identify and discuss three different operational paradigms for performing field searches, and identify influences that affect which human-robot team paradigm is best. Since the likely location of a missing person is key in determining the best paradigm given the circumstances, we report on preliminary efforts to model the behavior of missing persons in a given situation. Throughout the paper, we use information obtained from subject matter experts from Utah County Search and Rescue, and report experiences and "lessons learned" from a series of trials using human-robot teams to perform mock searches.
Conference Paper
There are many applications for outdoor navigation, but far fewer are available for indoor navigation. In complex, unknown indoor environments such as public buildings, railway stations, airports, and hospitals, it is difficult and time-consuming to find a destination. In this paper, we design a mixed reality-based indoor navigation system to assist people in navigating an unknown indoor environment. We use the HoloLens mixed reality head-mounted display for this prototype. The HoloLens scans the environment, determines the user's position within it, and shows the shortest path to the target area. Users can select their destination through a graphical user interface displayed in the HoloLens or via voice command, and the system then shows a visually augmented path to the user. We use the A* pathfinding algorithm to find the shortest path from the user to the destination.
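
The shortest-path step mentioned above can be illustrated with a minimal grid-based A* implementation. The 2D occupancy grid and 4-connected movement are assumptions made for this sketch, since the abstract does not describe how the HoloLens prototype represents the scanned space.

```python
# Minimal A* sketch on a 2D occupancy grid: 0 = free cell, 1 = blocked cell.

import heapq


def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or [] if no path."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible heuristic on a 4-connected grid.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    came_from = {}
    g = {start: 0}

    while open_set:
        _, cost, current = heapq.heappop(open_set)
        if current == goal:
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if cost > g.get(current, float("inf")):
            continue  # stale queue entry
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g[current] + 1
                if new_g < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = new_g
                    came_from[(nr, nc)] = current
                    heapq.heappush(open_set, (new_g + h((nr, nc)), new_g, (nr, nc)))
    return []


if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))
```
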
Conference Paper
Mixed Reality (MR) applications are becoming very popular in sectors such as communication, education, and entertainment because of their extensive application areas and the wide adoption of mobile and especially wearable devices. Due to the weak computational efficiency and short battery life of these devices, MR application performance can be hampered. Offloading the MR application's computational burden to a cloud server can be a solution, but this approach also incurs high communication latency. Mobile Edge Computing (MEC) is an emerging technology that brings the cloud close to the user at the base station and utilizes the radio access network to maintain communication with users. This paper presents a MEC-based MR application for assisting blind and visually impaired people. In the proposed scheme, the computational task is offloaded to the nearest MEC server in order to prolong the battery life of the MR devices. Finally, experimental results on latency and energy consumption are discussed to demonstrate the feasibility of the proposed scheme.
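
The offloading trade-off described above can be illustrated with a simple cost model that compares local execution on the MR device against offloading to a nearby MEC server in terms of latency and device energy. The model and the numbers below are illustrative assumptions, not measurements from the paper.

```python
# Illustrative offloading decision model, not the paper's measured results.

from dataclasses import dataclass


@dataclass
class Task:
    cycles: float        # CPU cycles required by the MR workload
    input_bits: float    # data to upload if the task is offloaded


@dataclass
class Device:
    cpu_hz: float        # local CPU frequency of the MR device
    power_compute_w: float
    power_tx_w: float


@dataclass
class Edge:
    cpu_hz: float        # MEC server CPU frequency
    uplink_bps: float    # radio access network uplink rate


def local_cost(task: Task, dev: Device):
    latency = task.cycles / dev.cpu_hz
    energy = dev.power_compute_w * latency
    return latency, energy


def offload_cost(task: Task, dev: Device, edge: Edge):
    tx_time = task.input_bits / edge.uplink_bps
    latency = tx_time + task.cycles / edge.cpu_hz
    energy = dev.power_tx_w * tx_time   # device only pays for transmission
    return latency, energy


if __name__ == "__main__":
    task = Task(cycles=2e9, input_bits=8e6)   # e.g. processing one camera frame
    dev = Device(cpu_hz=1.5e9, power_compute_w=2.0, power_tx_w=1.0)
    edge = Edge(cpu_hz=20e9, uplink_bps=50e6)
    print("local   (s, J):", local_cost(task, dev))
    print("offload (s, J):", offload_cost(task, dev, edge))
```

In this toy setting, offloading wins on both latency and device energy whenever the uplink is fast relative to the task's input size, which is the situation MEC aims to create by placing compute at the base station.
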
Conference Paper
Natural or man-made disasters often result in victims trapped under rubble piles. In such emergency response situations, Urban Search and Rescue (USaR) teams have to make quick decisions to determine the location of possibly trapped humans. The fast 3D modelling of collapsed buildings using images from Unmanned Aerial Vehicles (UAVs) can significantly help USaR operations and improve disaster response. The a priori establishment of a proper workflow for fast and reliable image-based 3D modelling, and careful parameterization at every step of the photogrammetric process, are crucial to ensuring readiness in an emergency situation. This paper evaluates powerful commercial and open-source software for the creation of 3D models of disaster scenes from UAV imagery in rapid-response situations and conducts a thorough analysis of the parameters of the various modelling steps that lead to the desired results for USaR operations. The main result of our analysis is the establishment of optimized photogrammetric procedures aimed at fast 3D modelling of disaster scenes, to assist USaR teams and increase survival rates.
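
As a concrete illustration of such a scripted workflow, a sparse reconstruction can be run end to end with an open-source structure-from-motion tool. COLMAP is used here only as an example of a fast, parameterizable pipeline; it is not necessarily one of the packages evaluated in the paper, and the paths are placeholders.

```python
# Hedged example: scripting a sparse reconstruction with the open-source
# COLMAP pipeline (feature extraction, matching, incremental mapping).

import subprocess
from pathlib import Path


def sparse_reconstruction(image_dir: str, work_dir: str) -> None:
    work = Path(work_dir)
    work.mkdir(parents=True, exist_ok=True)
    database = work / "database.db"
    sparse = work / "sparse"
    sparse.mkdir(exist_ok=True)

    # 1) Detect and describe local features in every UAV image.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(database),
                    "--image_path", image_dir], check=True)

    # 2) Match features between image pairs (for ordered UAV flight strips,
    #    a sequential matching strategy is usually faster than exhaustive).
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(database)], check=True)

    # 3) Incremental structure-from-motion: camera poses + sparse point cloud.
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(database),
                    "--image_path", image_dir,
                    "--output_path", str(sparse)], check=True)


if __name__ == "__main__":
    # Placeholder paths for a rapid-response dataset of UAV images.
    sparse_reconstruction("uav_images/", "reconstruction/")
```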