Vision Based UAV Attitude Estimation: Progress and Insights

Journal of Intelligent and Robotic Systems (Impact Factor: 1.18). 01/2012; 65(1-4):295-308. DOI: 10.1007/s10846-011-9588-y
Source: DBLP


Unmanned aerial vehicles (UAVs) are increasingly replacing manned systems in situations that are dangerous, remote, or difficult
for manned aircraft to access. Their control tasks are increasingly supported by computer vision: visual sensors are now routinely
used for stabilization, either as primary or as secondary sensors. Hence, UAV stabilization by attitude estimation from visual
sensors is a very active research area, and vision-based techniques have proven effective and robust for this problem.
This work provides a comprehensive review of vision-based UAV attitude estimation approaches, starting
from horizon-based methods and continuing through techniques based on vanishing points, optical flow, and stereoscopy. A novel segmentation
approach for UAV attitude estimation based on polarization is proposed. Our future insights into attitude estimation from
uncalibrated catadioptric sensors are also discussed.
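
Horizon-based methods, the first family covered by the review, recover the bank angle from the slope of the detected horizon line and approximate pitch from its vertical offset. The sketch below illustrates that geometry under a pinhole, small-angle model; the function name, parameters, and sign conventions are illustrative assumptions, not the formulation of any specific surveyed paper.

```python
import math

def attitude_from_horizon(x1, y1, x2, y2, cx, cy, focal_px):
    """Estimate roll and pitch (radians) from a detected horizon segment.

    (x1, y1)-(x2, y2): horizon endpoints in pixels
    (cx, cy): principal point; focal_px: focal length in pixels.
    Pinhole small-angle approximation; sign conventions are assumptions.
    """
    # Roll: the bank angle appears directly as the slope of the horizon.
    roll = math.atan2(y2 - y1, x2 - x1)
    # Pitch: vertical offset of the horizon midpoint from the principal
    # point, projected through the focal length.
    y_mid = 0.5 * (y1 + y2)
    pitch = math.atan2(y_mid - cy, focal_px)
    return roll, pitch

# Level flight: a horizontal horizon through the image centre
# yields zero roll and zero pitch.
r, p = attitude_from_horizon(0, 240, 640, 240, 320, 240, 500)
print(r, p)  # 0.0 0.0
```

In practice the horizon segment would come from a sky/ground segmentation step (e.g. a Hough transform on the class boundary), which is where the robustness of the methods surveyed here is decided.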

  • Source
    • "Such is the case of vision-aided estimation using on-board cameras, either fish-eye [14], perspective [15], or even both [16]. Shabayek et al. recently published a survey of vision-aided estimation methods [14]. When outdoors, it is also possible to use complementary GPS information [10], [17], or other sensors such as Doppler and laser radar [18], whereas indoor solutions may include laser range-finding capabilities coupled with Simultaneous Localization and Mapping (SLAM) algorithms [19]. "
    ABSTRACT: This paper introduces a novel algorithm to obtain attitude estimates from low-cost 9-degree-of-freedom Inertial Measurement Units. The nonlinear attitude estimator is formulated on the Special Orthogonal Group SO(3) and based on Lyapunov theory. The performance of the proposed estimator is compared to commonly used methods, namely the Extended Kalman Filter and two other nonlinear estimators on SO(3), in computer simulations for a quadrotor Unmanned Aerial Vehicle.
    ICARSC 2014, IEEE International Conference on Autonomous Robot Systems and Competitions, Espinho, Portugal; 05/2014
  • Source
    • "In image processing scenarios where real-time information extraction is required and unpredictable image corruption can occur, image fusion techniques can be of significant importance, especially for unmanned vehicle guidance, such as aerial vehicles [1], [2], mobile robots [3], [4], or even planetary landing [5], [6], [7]. For example, in the image corruption scenario shown in Figure 2, image fusion techniques can be applied by gathering information from previous instants to recover information for currently damaged pixels. "
    ABSTRACT: In computer vision systems, unpredictable image corruption can have a significant impact on usability. Image recovery methods for partial image damage, particularly in moving scenarios, can be crucial for restoring corrupted images. In these situations, image fusion techniques can be successfully applied to combine information taken at different instants and from different points of view to recover damaged parts. In this article we propose a technique for temporal and spatial image fusion, based on fuzzy classification, which allows partial image recovery after unexpected defects without user intervention. The method uses image alignment techniques and duplicated information from previous images to create fuzzy confidence maps. These maps are then used to detect damaged pixels and recover them using information from previous frames.
    2013 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2013); 07/2013
  • Source
    • "A good overview of the methods applied for attitude estimation using a video system is provided in [20]. These methods can be generally distinguished in: a) methods based on horizon detection, b) methods based on the analysis of the optical flow, c) methods based on the detection of salient features in the field of view. "
    ABSTRACT: This paper deals with the integration of measurements provided by inertial sensors (gyroscopes and accelerometers), GPS (Global Positioning System), and a video system in order to estimate the position and attitude of a high-altitude UAV (Unmanned Aerial Vehicle). In such a case, the vision algorithms present ambiguities due to plane degeneracy. This ambiguity can be avoided by fusing the video information with inertial sensor measurements. On the other hand, inertial sensors are widely used for aircraft navigation because they represent a low-cost and compact solution, but their measurements suffer from several errors which cause a rapid divergence of position and attitude estimates. To avoid divergence, inertial sensors are usually coupled with other systems, such as GPS. A camera presents several advantages with respect to GPS, such as greater accuracy and a higher data rate. Moreover, it can be used in urban areas or, more generally, where no useful GPS signal is present. On the other hand, it has a lower data rate than inertial sensors, and its measurements have latencies which can degrade the performance and effectiveness of the flight control system. Integrating inertial sensors with a camera exploits the better features of both systems, providing better performance in position and attitude estimation. The data fusion is performed via a multirate Unscented Kalman Filter (UKF) because of the nonlinear dynamic system equations. Experimental results show the effectiveness of the proposed method.
    Information Fusion (FUSION), 2013 16th International Conference on; 01/2013
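
The citing works above share one principle: a fast but drifting inertial propagation step is corrected by a slower, drift-free vision (or accelerometer) reference. As a toy one-axis illustration of that idea (a scalar complementary filter; it is not the SO(3) estimator or the multirate UKF of the cited papers, and all names and gains are assumptions):

```python
def complementary_filter(gyro_rates, ref_angles, dt, alpha=0.98):
    """Fuse a drifting rate sensor with a noisy absolute reference.

    gyro_rates: angular-rate samples (rad/s, possibly biased)
    ref_angles: absolute angle samples (rad), e.g. from a horizon
                or accelerometer measurement
    alpha: trust placed in the integrated gyro vs. the reference
    """
    angle = ref_angles[0]
    estimates = []
    for rate, ref in zip(gyro_rates, ref_angles):
        # High-pass the gyro integral, low-pass the absolute reference.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * ref
        estimates.append(angle)
    return estimates

# True angle is a constant 0.5 rad; the gyro reads a pure bias of
# 0.1 rad/s. Unaided integration would drift without bound, but the
# reference holds the estimate to a small, bounded offset.
est = complementary_filter([0.1] * 2000, [0.5] * 2000, dt=0.01)
print(round(est[-1], 2))  # 0.55
```

The bounded steady-state offset (0.049 rad here) is exactly the kind of residual error that the Lyapunov-based SO(3) estimator and the multirate UKF above are designed to handle more rigorously, e.g. by estimating the gyro bias explicitly.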