Chapter

Virtual Environment Mapping Module to Manage Intelligent Flight in an Indoor Drone


Abstract

This paper presents a Virtual Environment Mapping (VEM) module mounted on an indoor drone for use by the creative industries. The module allows users to capture the environment where the final recording will take place. Having a virtual representation of the environment allows both the director of photography and the director to test different camera trajectories, points of view, speeds and camera configurations without needing to be physically present on the recording set. The digitization of the scene is performed with a 3D camera on board the drone. This paper discusses the overall VEM architecture, taking into account the requirements it has to fulfil. It also presents a working demo of the system, with the communication infrastructure in place and a proof of concept of the main components of the system.
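The abstract gives no implementation detail, but the core of such a digitization step, back-projecting frames from the on-board 3D camera into a common world frame to accumulate a virtual copy of the set, can be sketched as below. The function names, camera intrinsics and pose convention are illustrative assumptions, not the authors' API.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-frame 3D points.
    The intrinsics fx, fy, cx, cy are assumed known from calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop invalid (zero-depth) pixels

def accumulate(cloud, pts_cam, R, t):
    """Transform camera-frame points by the drone pose (R, t) and append
    them to the growing world-frame point cloud of the recording set."""
    return np.vstack([cloud, pts_cam @ R.T + t])

# Usage sketch: one synthetic frame (a flat wall 2 m away) at the origin pose.
depth = np.full((480, 640), 2.0)
cloud = np.empty((0, 3))
cloud = accumulate(cloud, depth_to_points(depth, 525.0, 525.0, 320.0, 240.0),
                   np.eye(3), np.zeros(3))
print(cloud.shape)                       # (307200, 3)
```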


References
Conference Paper
Full-text available
Many researchers from academia and industry are investigating closely how to control an autonomous mobile robot, especially Unmanned Aerial Vehicles (UAVs). This paper shows the ability of a low-cost quadcopter, the Parrot AR-Drone 2.0, to navigate a predetermined route. The path is obtained by an optimized path-planning algorithm: a generic Simulated Annealing (SA) optimization algorithm is implemented to generate the obstacle-free path. This path is divided into several waypoints, which are navigated by the drone in various experiments. The position and orientation of the quadcopter are estimated with an incremental motion estimation approach using an Inertial Measurement Unit (IMU) mounted on the drone. The quadcopter is controlled via a Simulink model with a PID controller, which manipulates the drone's internal controller for pitch, roll, yaw and vertical speed. Four different experiments were carried out to evaluate the performance of the proposed algorithm, and the obtained results indicate the high performance of the quadcopter and its applicability in various navigation applications.
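As a rough illustration of the approach described above, the following is a minimal, generic SA path planner over 2D waypoints; the obstacle model, cost terms and neighborhood operator are assumptions, since the abstract does not specify them.

```python
import math
import random

# Assumed scene: circular obstacles as (x, y, radius). The abstract does not
# specify the cost terms or the neighborhood operator, so these are generic.
OBSTACLES = [(3.0, 3.0, 1.0), (6.0, 2.0, 1.2)]

def cost(path):
    """Path length plus a heavy penalty for waypoints inside an obstacle."""
    length = sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))
    penalty = sum(100.0 for p in path for (ox, oy, r) in OBSTACLES
                  if math.dist(p, (ox, oy)) < r)
    return length + penalty

def anneal(start, goal, n_wp=8, iters=5000, t0=1.0, cooling=0.999):
    """Generic SA: perturb one interior waypoint per iteration and accept
    worse candidates with probability exp(-delta / T)."""
    path = [tuple(s + (g - s) * i / (n_wp + 1) for s, g in zip(start, goal))
            for i in range(n_wp + 2)]    # straight-line initialization
    cur, temp = cost(path), t0
    for _ in range(iters):
        cand = list(path)
        k = random.randrange(1, n_wp + 1)            # keep start/goal fixed
        cand[k] = (cand[k][0] + random.gauss(0, 0.3),
                   cand[k][1] + random.gauss(0, 0.3))
        c = cost(cand)
        if c < cur or random.random() < math.exp((cur - c) / temp):
            path, cur = cand, c
        temp *= cooling
    return path

waypoints = anneal((0.0, 0.0), (9.0, 5.0))           # fly these in sequence
```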
Article
Full-text available
The inclusion of embedded sensors in a networked system provides useful information for many applications. A Distributed Control System (DCS) is one of the clearest examples, where processing and communications are constrained by the clients' requirements and the capacity of the system. An embedded sensor with advanced processing and communication capabilities supplies high-level information, abstracting away the data acquisition process and object-recognition mechanisms. Implementing an embedded sensor/actuator as a Smart Resource permits clients to access sensor information through distributed network services. Smart Resources can offer sensor services as well as computing, communications and peripheral access by implementing a self-awareness-based adaptation mechanism that adapts the execution profile to the context. On the other hand, information integrity must be ensured when computing processes are dynamically adapted: the processing must be adapted to complete tasks within a given lapse of time while always ensuring a minimum process quality. In the same way, communications must try to reduce data traffic without excluding relevant information. The main objective of this paper is to present a dynamic configuration mechanism that adapts sensor processing and communication to the clients' requirements in the DCS. The paper describes an implementation of a Smart Resource based on a Red, Green, Blue, and Depth (RGBD) sensor in order to test the dynamic configuration mechanism presented.
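A minimal sketch of what such a dynamic configuration mechanism could look like for an RGBD smart resource, assuming a ladder of execution profiles and simple load/deadline triggers; all names and thresholds here are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    width: int
    height: int
    fps: int

# Illustrative execution profiles for the RGBD smart resource, ordered from
# cheapest to most demanding; names and values are assumptions.
PROFILES = [Profile(160, 120, 5), Profile(320, 240, 15), Profile(640, 480, 30)]

def adapt(level: int, cpu_load: float, deadline_missed: bool,
          min_quality: int = 0) -> int:
    """One self-aware adaptation step: drop to a cheaper profile when
    deadlines are missed or load is high, climb back when there is headroom,
    but never fall below the client's minimum acceptable quality."""
    if (deadline_missed or cpu_load > 0.85) and level > min_quality:
        return level - 1
    if cpu_load < 0.50 and level < len(PROFILES) - 1:
        return level + 1
    return level

level = 2                                # start at full quality
level = adapt(level, cpu_load=0.92, deadline_missed=True, min_quality=1)
print(PROFILES[level])                   # falls back to 320x240 @ 15 fps
```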
Article
Full-text available
Capturing real-world objects with laser-scanning technology has become an everyday task. Recently, the acquisition of dynamic scenes at interactive frame rates has become feasible. A high-quality visualization of the resulting point cloud stream would require a per-frame reconstruction of object surfaces. Unfortunately, reconstruction computations are still too time-consuming to be applied interactively. In this paper we present a local surface reconstruction and visualization technique that provides interactive feedback for reasonably sized point clouds, while achieving high image quality. Our method is performed entirely on the GPU and in screen space, exploiting the efficiency of the common rasterization pipeline. The approach is very general, as no assumption is made about point connectivity or sampling density. This naturally allows combining the outputs of multiple scanners in a single visualization, which is useful for many virtual and augmented reality applications.
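The paper's method runs on the GPU inside the rasterization pipeline; as a rough CPU stand-in, the sketch below shows the underlying screen-space idea of z-buffered point splatting (hole filling and normal estimation, which the actual technique also handles, are omitted).

```python
import numpy as np

def splat(points, colors, K, img_size, radius=1):
    """Project camera-frame 3D points and splat them into a z-buffered image:
    the nearest depth wins per pixel and each point covers a small square
    footprint. Hole filling and normal estimation are omitted here."""
    h, w = img_size
    depth = np.full((h, w), np.inf)
    image = np.zeros((h, w, 3))
    uvz = points @ K.T                               # pinhole projection
    u = (uvz[:, 0] / uvz[:, 2]).astype(int)
    v = (uvz[:, 1] / uvz[:, 2]).astype(int)
    for ui, vi, zi, ci in zip(u, v, points[:, 2], colors):
        for du in range(-radius, radius + 1):
            for dv in range(-radius, radius + 1):
                x, y = ui + du, vi + dv
                if 0 <= x < w and 0 <= y < h and zi < depth[y, x]:
                    depth[y, x] = zi
                    image[y, x] = ci
    return image, depth

K = np.array([[500.0, 0.0, 160.0], [0.0, 500.0, 120.0], [0.0, 0.0, 1.0]])
pts = np.random.rand(1000, 3) + [0.0, 0.0, 2.0]      # points 2-3 m ahead
img, _ = splat(pts, np.ones((1000, 3)), K, (240, 320))
```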
Conference Paper
Full-text available
We present an algorithm for real-time level-of-detail reduction and display of high-complexity polygonal surface data. The algorithm uses a compact and efficient regular grid representation, and employs a variable screen-space threshold to bound the maximum error of the projected image. A coarse level of simplification is performed to select discrete levels of detail for blocks of the surface mesh, followed by further simplification through repolygonalization, in which individual mesh vertices are considered for removal. These steps compute and generate the appropriate level of detail dynamically in real time, minimizing the number of rendered polygons and allowing for smooth changes in resolution across areas of the surface. The algorithm has been implemented for approximating and rendering digital terrain models and other height fields, and consistently performs at interactive frame rates with high image quality.
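The key mechanism, bounding the projected error by a screen-space threshold when choosing a block's level of detail, can be illustrated as follows; the per-level error values and camera parameters are made-up examples.

```python
import math

def projected_error(geom_error: float, distance: float,
                    fov_y: float, viewport_h: int) -> float:
    """Convert a world-space geometric error (meters) at a given viewing
    distance into a screen-space error in pixels (perspective camera)."""
    return geom_error * viewport_h / (2.0 * distance * math.tan(fov_y / 2.0))

def select_lod(block_errors, distance, tau=1.0,
               fov_y=math.radians(60), viewport_h=1080):
    """Pick the coarsest level whose projected error stays below tau pixels.
    block_errors[i] is the precomputed geometric error of level i, with
    level 0 being the full-resolution mesh block."""
    for level in range(len(block_errors) - 1, -1, -1):   # coarsest first
        if projected_error(block_errors[level], distance,
                           fov_y, viewport_h) <= tau:
            return level
    return 0                                             # fall back to full res

errors = [0.0, 0.05, 0.2, 0.8]            # assumed per-level errors in meters
print(select_lod(errors, distance=50.0))  # a distant block selects level 1
```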
Article
Full-text available
This work presents a nonlinear control strategy for the stabilization of a four-rotor helicopter. The control algorithm is based on Lyapunov analysis and on the nested-saturations technique. The control strategy is very easy to implement and is independent of the particular four-rotor helicopter to be controlled. Tuning the gains of the control algorithm is very simple compared with other algorithms proposed in the literature. The proposed control strategy has been implemented on a real-time system to control a four-rotor mini-helicopter. The results obtained, compared with a linear LQR algorithm, show that the performance of the proposed algorithm is far superior to that obtained with the linear LQR algorithm.
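The nested-saturations idea can be illustrated on a single translational axis modeled as a double integrator, using the classic Teel-style law; this is a textbook sketch, not the authors' exact controller or gains.

```python
import numpy as np

def sat(x, limit):
    """Standard saturation: clip x to [-limit, limit]."""
    return np.clip(x, -limit, limit)

def nested_saturation_control(x, v, l1=0.4, l2=1.0):
    """Teel-style nested-saturation law for a double integrator
    (one translational axis), u = -sat_l2(v + sat_l1(x + v)),
    with levels chosen so that l1 < l2 / 2 and the inner loop
    saturates first."""
    return -sat(v + sat(x + v, l1), l2)

# Simulate one axis with dynamics x'' = u, starting 3 m from the target.
x, v, dt = 3.0, 0.0, 0.01
for _ in range(3000):
    u = nested_saturation_control(x, v)
    v += u * dt
    x += v * dt
print(round(x, 3), round(v, 3))          # both settle near zero
```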
Article
Full-text available
Surface elements (surfels) are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. Unlike classical surface discretizations, i.e., triangles or quadrilateral meshes, surfels are point primitives without explicit connectivity. Surfel attributes comprise depth, texture color, normal, and others. As a pre-process, an octree-based surfel representation of a geometric object is computed. During sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel. During rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer. A novel method called visibility splatting determines visible surfels and holes in the z-buffer. Visible surfels are shaded using texture filtering, Phong illumination, and environment mapping using per-surfel normals. Several methods of image reconstruction, including supersampling, offer flexible speed-quality tradeoffs. Due ...
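As a sketch of the surfel concept, the snippet below stores the attributes mentioned above in a connectivity-free array and applies per-surfel Phong shading with the stored normals; the data layout and constants are assumptions for illustration.

```python
import numpy as np

# A surfel carries position, normal, color and radius, but no connectivity.
surfel_dtype = np.dtype([('pos', 'f4', 3), ('normal', 'f4', 3),
                         ('color', 'f4', 3), ('radius', 'f4')])

def shade(surfels, light_dir, view_dir, ka=0.1, kd=0.7, ks=0.2, shin=32):
    """Per-surfel Phong illumination using the stored normals, applied after
    visibility splatting has decided which surfels are actually visible."""
    n = surfels['normal']
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    ndotl = np.clip(n @ l, 0.0, None)
    r = 2.0 * ndotl[:, None] * n - l                 # reflected light vector
    spec = np.clip(r @ v, 0.0, None) ** shin
    return surfels['color'] * (ka + kd * ndotl[:, None]) + ks * spec[:, None]

s = np.zeros(2, dtype=surfel_dtype)
s['normal'] = [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
s['color'] = 0.8
print(shade(s, np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))
```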
Conference Paper
In this paper, a complete framework is proposed to deal with trajectory tracking and positioning of a Parrot AR.Drone quadrotor flying in indoor environments. The system runs in a centralized way, on a computer installed in a ground station, and is based on two main structures, namely a Kalman Filter (KF) to track the 3D position of the vehicle and a nonlinear controller to guide it in the accomplishment of its flight missions. The KF is designed to estimate the states of the vehicle by fusing inertial and visual data. The nonlinear controller is designed on the basis of a dynamic model of the AR.Drone, with closed-loop stability proven using Lyapunov theory. Finally, experimental results are presented which demonstrate the effectiveness of the proposed framework.
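A minimal sketch of the kind of inertial/visual fusion described: a constant-velocity Kalman filter for one axis, predicted with IMU acceleration and corrected with a visual position fix. Matrices and noise levels are illustrative, not the authors' design.

```python
import numpy as np

class PositionKF:
    """Constant-velocity Kalman filter for one axis: prediction is driven by
    inertial acceleration, correction by a visual position fix. Matrices and
    noise levels are illustrative, not the paper's actual design."""
    def __init__(self, dt, q=0.05, r=0.02):
        self.x = np.zeros(2)                         # [position, velocity]
        self.P = np.eye(2)
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
        self.B = np.array([0.5 * dt**2, dt])         # acceleration input
        self.H = np.array([[1.0, 0.0]])              # we observe position
        self.Q = q * np.eye(2)
        self.R = np.array([[r]])

    def predict(self, accel):
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P

kf = PositionKF(dt=0.02)
kf.predict(accel=0.3)                    # IMU-driven prediction
kf.update(z=np.array([0.01]))            # visual position measurement
print(kf.x)
```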
Article
Cameras are a natural fit for micro aerial vehicles (MAVs) due to their low weight, low power consumption, and two-dimensional field of view. However, computationally intensive algorithms are required to infer the 3D structure of the environment from 2D image data. This requirement is made more difficult by the MAV's limited payload, which allows for only one CPU board. Hence, we have to design efficient algorithms for state estimation, mapping, planning, and exploration. We implement a set of algorithms on two different vision-based MAV systems that enable the MAVs to map and explore unknown environments. By using both self-built and off-the-shelf systems, we show that our algorithms can be used on different platforms. All algorithms necessary for autonomous mapping and exploration run on board the MAV. Using a front-looking stereo camera as the main sensor, we maintain a tiled octree-based 3D occupancy map. The MAV uses this map for local navigation and frontier-based exploration. In addition, we use a wall-following algorithm as an alternative exploration algorithm in open areas where frontier-based exploration under-performs. During exploration, data is transmitted to the ground station, which runs large-scale visual SLAM. We estimate the MAV's state with inertial data from an IMU together with metric velocity measurements from a custom-built optical-flow sensor and pose estimates from visual odometry. We verify our approaches with experimental results which, to the best of our knowledge, demonstrate our MAVs to be the first vision-based MAVs to autonomously explore both indoor and outdoor environments.
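The frontier criterion used by this style of exploration is standard and easy to sketch: a frontier cell is a free cell adjacent to unknown space. The grid encoding below is an assumption; the paper's map is an octree, not a dense 2D grid.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def find_frontiers(grid):
    """Frontier cells are free cells with at least one unknown 4-neighbor;
    the exploration planner steers the MAV toward them. This is the generic
    frontier criterion on a dense 2D grid, not the paper's octree code."""
    frontiers = []
    h, w = grid.shape
    for y in range(h):
        for x in range(w):
            if grid[y, x] != FREE:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and grid[ny, nx] == UNKNOWN:
                    frontiers.append((y, x))
                    break
    return frontiers

grid = np.full((5, 5), UNKNOWN)
grid[:, :3] = FREE                       # explored corridor on the left
grid[2, 2] = OCCUPIED
print(find_frontiers(grid))              # free cells bordering unknown space
```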
Conference Paper
Simulators have played a critical role in robotics research as tools for quick and efficient testing of new concepts, strategies, and algorithms. To date, most simulators have been restricted to 2D worlds, and few have matured to the point where they are both highly capable and easily adaptable. Gazebo is designed to fill this niche by creating a 3D dynamic multi-robot environment capable of recreating the complex worlds that would be encountered by the next generation of mobile robots. Its open-source status, fine-grained control, and high fidelity place Gazebo in a unique position to become more than just a stepping stone between the drawing board and real hardware: data visualization, simulation of remote environments, and even reverse engineering of black-box systems are all possible applications. Gazebo is developed in cooperation with the Player and Stage projects (Gerkey, B. P., et al., July 2003), (Gerkey, B. P., et al., May 2001), (Vaughan, R. T., et al., Oct. 2003), and is available from http://playerstage.sourceforge.net/gazebo/gazebo.html.
Martínez, S. E., & Tomas-Rodriguez, M. (2014). Three-dimensional trajectory tracking of a quadrotor through PVA control. Revista Iberoamericana de Automática e Informática Industrial RIAI, 11(1), 54-67.