Article

Vision Based UAV Attitude Estimation: Progress and Insights.

Journal of Intelligent and Robotic Systems (Impact Factor: 0.81). 01/2012; 65:295-308. DOI: 10.1007/s10846-011-9588-y
Source: DBLP

ABSTRACT Unmanned aerial vehicles (UAVs) are increasingly replacing manned systems in situations that are dangerous, remote, or difficult
for manned aircraft to access. Their control tasks are increasingly supported by computer vision: visual sensors are now used
routinely for stabilization, as primary or at least secondary sensors. Hence, UAV stabilization by attitude estimation from visual
sensors is a very active research area, and vision-based techniques have proved effective and robust in handling
this problem. This work presents a comprehensive review of vision-based UAV attitude estimation approaches, starting
from horizon-based methods and passing through vanishing-point, optical-flow, and stereoscopic techniques. A novel segmentation
approach for UAV attitude estimation based on polarization is proposed. Our future insights into attitude estimation from
uncalibrated catadioptric sensors are also discussed.
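
As a flavour of the horizon-based family the review covers, the sketch below estimates roll from the tilt of the dominant Hough line and pitch from its offset to the principal point. It is a minimal sketch only: it assumes OpenCV, a visible straight horizon, and a known focal length in pixels; the function name, thresholds, and sign conventions are illustrative and not taken from the paper.

```python
import cv2
import numpy as np

def horizon_attitude(frame_bgr, focal_px):
    """Estimate roll and pitch (radians) from the dominant horizon line."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Fit the strongest straight line with a standard Hough transform.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None                    # no horizon candidate found
    rho, theta = lines[0][0]           # strongest line in (rho, theta) form
    roll = theta - np.pi / 2           # tilt of the line w.r.t. horizontal
    # Signed distance from the principal point to the line
    # rho = x*cos(theta) + y*sin(theta) gives the pitch cue.
    h, w = gray.shape
    cx, cy = w / 2.0, h / 2.0
    offset = rho - (cx * np.cos(theta) + cy * np.sin(theta))
    pitch = np.arctan2(offset, focal_px)
    return roll, pitch
```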

Related publications:
ABSTRACT: This paper introduces a novel algorithm for obtaining attitude estimates from low-cost inertial measurement units comprising a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. The nonlinear attitude estimator is derived from Lyapunov theory and formulated on the special orthogonal group SO(3). The impact of gyroscope bias is also assessed and an online bias estimator is provided. The performance of the proposed estimator is validated and compared to commonly used methods, namely the classical extended Kalman filter and two other nonlinear estimators on SO(3). Realistic simulations consider a quadcopter unmanned aerial vehicle subject to wind disturbances, with sensor parameters identified from flight-test data.
Journal of Intelligent and Robotic Systems, 12/2014.
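
The paper's estimator is not reproduced here, but the following is a minimal sketch of the same family of methods: a Mahony-style nonlinear complementary filter on SO(3) with online gyro-bias estimation. The gains kP and kI, the reference directions, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def expm_so3(W):
    """Closed-form (Rodrigues) exponential of a skew-symmetric 3x3 matrix."""
    w = np.array([W[2, 1], W[0, 2], W[1, 0]])
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + W
    return (np.eye(3) + np.sin(th) / th * W +
            (1 - np.cos(th)) / th**2 * (W @ W))

def so3_filter_step(R, b, gyro, acc, mag, dt, kP=1.0, kI=0.1):
    """One update of attitude R (3x3, body-to-inertial) and gyro bias b (3,)."""
    # Inertial-frame references: accelerometer reading at rest (z-up frame)
    # and magnetic north; both are illustrative conventions.
    g_ref = np.array([0.0, 0.0, 1.0])
    m_ref = np.array([1.0, 0.0, 0.0])
    # Innovation: mismatch between measured and predicted body-frame vectors.
    acc_n = acc / np.linalg.norm(acc)
    mag_n = mag / np.linalg.norm(mag)
    err = (np.cross(acc_n, R.T @ g_ref) +
           np.cross(mag_n, R.T @ m_ref))
    # Bias integrator and corrected angular rate.
    b = b - kI * err * dt
    omega = gyro - b + kP * err
    # Integrate on SO(3) via the exponential map.
    R = R @ expm_so3(skew(omega) * dt)
    return R, b
```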
ABSTRACT: The aim of this paper is to present a method for integrating measurements from inertial sensors (gyroscopes and accelerometers), GPS, and a video system to estimate the position and attitude of a UAV (Unmanned Aerial Vehicle). Inertial sensors are widely used for aircraft navigation because they are a low-cost, compact solution, but their measurements suffer from several errors that cause position and attitude estimates to diverge rapidly. To avoid divergence, inertial sensors are usually coupled with other systems such as GNSS (Global Navigation Satellite System). This paper examines the possibility of also coupling the inertial sensors with a camera. A camera is generally installed on board UAVs for surveillance purposes, and it offers several advantages over GNSS, such as greater accuracy and a higher data rate. Moreover, it can be used in urban areas or, more generally, wherever multipath effects rule out GNSS. A camera coupled with a video processing system can provide attitude and position (up to a scale factor), but it has a lower data rate than inertial sensors, and its measurements have latencies that can compromise the performance and effectiveness of the flight control system. Integrating inertial sensors with a camera exploits the best features of both systems, providing better position and attitude estimation.
15th International Conference on Information Fusion (FUSION 2012); 01/2012.
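
As a minimal sketch of the loosely coupled scheme this abstract describes, the filter below dead-reckons position and velocity from the IMU at high rate and corrects with slower camera position fixes. The one-axis state, noise values, and class name are illustrative assumptions; the paper's actual filter, including its handling of camera latency and scale, is not reproduced here.

```python
import numpy as np

class InsCameraFusion1D:
    """Toy 1-axis Kalman filter: fast IMU prediction, slow camera updates."""

    def __init__(self, sigma_acc=0.5, sigma_cam=0.2):
        self.x = np.zeros(2)          # state: [position, velocity]
        self.P = np.eye(2)            # state covariance
        self.q = sigma_acc ** 2       # accelerometer noise power
        self.r = sigma_cam ** 2       # camera measurement variance

    def predict(self, acc, dt):
        """High-rate IMU propagation (constant acceleration over dt)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * acc
        self.P = F @ self.P @ F.T + self.q * np.outer(B, B)

    def update_camera(self, z_pos):
        """Low-rate camera position fix (already resolved to metres)."""
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.r
        K = (self.P @ H.T) / S
        self.x = self.x + (K * (z_pos - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```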
ABSTRACT: In computer vision systems, unpredictable image corruption can significantly affect usability. Image recovery methods for partial image damage, particularly in moving scenarios, can be crucial for restoring corrupted images. In these situations, image fusion techniques can successfully aggregate information captured at different instants and from different points of view to recover damaged regions. In this article we propose a technique for temporal and spatial image fusion, based on fuzzy classification, which allows partial image recovery after unexpected defects without user intervention. The method uses image alignment techniques and duplicated information from previous images to create fuzzy confidence maps, which are then used to detect damaged pixels and recover them from previous frames.
2013 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2013); 07/2013.
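
A minimal sketch of the confidence-map idea, assuming the previous frame has already been aligned to the current one: a trapezoidal fuzzy membership on the inter-frame difference yields a per-pixel confidence, and low-confidence pixels are filled from the previous frame. Thresholds and names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def recover_damaged(current, prev_aligned, lo=10.0, hi=60.0):
    """Replace low-confidence pixels of `current` with `prev_aligned`.

    current, prev_aligned: float grayscale images of the same shape.
    lo, hi: bounds of a trapezoidal fuzzy membership on the absolute
    inter-frame difference (small difference -> high confidence).
    """
    diff = np.abs(current - prev_aligned)
    # Fuzzy confidence: 1 below `lo`, 0 above `hi`, linear in between.
    conf = np.clip((hi - diff) / (hi - lo), 0.0, 1.0)
    damaged = conf < 0.5                  # defuzzify with a 0.5 alpha-cut
    restored = current.copy()
    restored[damaged] = prev_aligned[damaged]
    return restored, conf
```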
