# UAV altitude estimation by mixed stereoscopic vision

**ABSTRACT** Altitude is one of the most important parameters to be known for an Unmanned Aerial Vehicle (UAV), especially during critical maneuvers such as landing or steady flight. In this paper, we present a mixed stereoscopic vision system made of a fish-eye camera and a perspective camera for altitude estimation. Contrary to classical stereoscopic systems based on feature matching, we propose a plane sweeping approach in order to estimate the altitude and consequently to detect the ground plane. Since there exists a homography between the two views, and since the sensor is calibrated and the attitude is estimated from the fish-eye camera, the algorithm consists in searching for the altitude which verifies this homography. We show that this approach is robust and accurate, and that a CPU implementation allows real-time estimation. Experimental results on real sequences of a small UAV demonstrate the effectiveness of the approach.



Damien Eynard, Pascal Vasseur, Cédric Demonceaux and Vincent Frémont


I. INTRODUCTION

Unmanned Aerial Vehicles (UAVs) have received a lot of attention in the last decade, in order to increase their autonomy. This autonomy includes the capacity of performing maneuvers such as landing, takeoff or steady flight. Thus, a fast and accurate estimation of parameters such as altitude, attitude and velocities is required by the control loop. In this paper, we propose a new mixed fish-eye/perspective stereoscopic vision system which is able to estimate the altitude of the UAV autonomously, while also providing its attitude and the free ground plane areas.

Altitude can be obtained by different techniques using the Global Positioning System (GPS), altimeters (laser or pressure), radar or computer vision. However, standard GPS has a vertical precision between 25 and 50 meters and is sensitive to transmission interruptions, in urban environments for example. For pressure altimeters, the main drawback is the dependence on pressure variations, which implies an accuracy error between 6% and 7%. Laser altimeters are very accurate but require specific conditions on the reflection surface. Finally, radar sensors provide altitude and a relief map simultaneously, but they are active systems, possibly detectable and energy consuming.

Computer vision techniques have also been increasingly used during the last decade to estimate UAV parameters, and many systems have been proposed to measure the altitude. Altitude estimation by vision systems presents several advantages. First, the same cameras can also be used for other visual tasks like obstacle avoidance, navigation or localization.

This work is supported by Région Picardie Project ALTO.
D. Eynard, P. Vasseur and C. Demonceaux are with MIS Lab of University of Picardie Jules Verne, Amiens, France. firstname.name@u-picardie.fr
V. Frémont and P. Vasseur are with Heudiasyc Lab of University of Technology of Compiègne, France. vincent.fremont@hds.utc.fr

Next, cameras are passive systems with low energy consumption which can provide a great amount of information per second. The main difficulty in vision-based systems consists in selecting an appropriate reference to estimate the altitude. In [14], [28], [29], the authors propose to use a downward-looking perspective camera in order to estimate the altitude relative to a predefined pattern fixed on the ground. This kind of approach is interesting since it requires a single camera, provides a complete pose and can be used in real time. Nevertheless, it is limited to specific environments equipped with artificial landmarks. A single perspective camera has also been used in many systems based on optical flow measurement [1], [4], [6], [16]. These systems are inspired by bees and consist in deducing the altitude from the optical flow, knowing the speed of the camera. Contrary to the others, [4] proposes to also estimate the pitch in order to correct the optical flow, without which the system may become unstable. An original work using a single perspective camera has also been proposed in [7]. The authors use a technique based on learning the mapping between the texture information contained in a top-down aerial image and a possible altitude value. This learning is performed for different kinds of ground, combined with a spatio-temporal Markov Random Field. In [27], a multi-view algorithm based on the sequence obtained with a single camera is proposed in order to compute a digital map of the ground. Rather than using a single camera, which may lead to an insufficient amount of features between successive images, some authors propose to use stereoscopic sensors [20], [26]. The proposed approaches are based on the matching of interest points in order to deduce elevation maps of the ground.

In this paper, we also propose an original stereoscopic sensor, slightly different from the previously mentioned systems since it is constituted of two different cameras, respectively fisheye and perspective. The benefit of large field of view cameras for UAVs has already been demonstrated in different works, such as [19] for navigation or [12] for attitude estimation. The use of a mixed sensor allows us to obtain a large field of view with the fisheye lens while the perspective camera provides a better accuracy in the image. We only assume that the ground plane is dominant in the perspective image and that the stereovision sensor is calibrated. In this way, there exists a homography between the two images of the ground plane. Since we are able to estimate the attitude of the UAV from the omnidirectional image as in [12] or [11], we can deduce the normal of the ground plane and finally find the altitude which verifies this homography (fig. 2, fig. 4). We then propose a plane sweeping algorithm in order to solve this task.

Author manuscript, published in "2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), Taiwan, Province Of China (2010)" - hal-00522124, version 1 - 29 Sep 2010


Fig. 1. Mixed system on a quadri-rotor

Briefly, our approach presents several contributions. First, the system is able to estimate the altitude autonomously, without any other sensor, and also provides the attitude and the ground plane area. Next, we propose a correspondence-free approach which allows us to treat images with different geometries (spherical and planar) and is particularly more robust than classical matching-based stereoscopic approaches. Finally, a CPU implementation allows real-time altitude estimation on a small UAV.

The rest of the paper is organized as follows. Part II deals with the general principle, giving a global overview of the hybrid system, its modeling and then the plane-sweeping. In part III, we propose the plane-sweeping of mixed views to estimate the altitude and segment the ground plane. We finally present in part IV experimental results on real sequences, with a quantitative evaluation of the error and a real-time implementation on a small quadri-rotor UAV.

II. GENERAL PRINCIPLE

A. Hybrid sensor

1) Global Overview: We propose a mixed perspective/omnidirectional stereovision system (fig. 1) which is able to estimate the altitude in real time, as well as the ground plane and the attitude.

The advantage of the omnidirectional sensor is its wide field of view, while its drawbacks are the poor resolution (particularly near the borders), the non-linear resolution of the image and the distortions. The advantages of fisheye lenses in comparison with catadioptric cameras are their reduced sensitivity to vibrations and the absence of the blind spot at the center of the image. On the other side, perspective cameras possess a good and constant resolution and low distortions, but a limited field of view.

By combining such a mixed stereo rig, we can add the advantages of each sensor. The fisheye provides attitude information (fig. 5) whereas the perspective view can provide motion information more precisely than the fisheye view. The main problem in our system then consists in matching between the omnidirectional and the perspective images because of the distortions. Different approaches are possible:

Fig. 2. Global overview of altitude estimation

• First, by knowing the intrinsic parameters of the fisheye camera, a rectified equivalent perspective image could be recovered in order to perform, for example, a feature matching. However, this approach requires different processing steps such as warping and interpolation which decrease real-time performance.
• Some recent works propose the unitary sphere as a unified space for central image processing and feature matching. However, as previously, this solution cannot be implemented in real time and is not adapted to mixed views.
• Finally, we propose a real-time correspondence-less approach which consists in comparing the images directly, without any feature extraction.

Since the altitude is estimated according to the ground plane, we can use this plane as reference. In this way, we will demonstrate that there exists a homography between the omnidirectional and perspective images. The general equation of a homography is H = R − Tn^T/d, where R and T define the rigid transformation between the two views, n is the normal of the plane in the first image and d is the distance from the plane to the first camera. In our case, d corresponds to the altitude (see fig. 4).

Consequently, if we are able to find this homography between the two images, we can deduce d, since R and T can be known by calibration and n can be computed using attitude estimation methods based on omnidirectional vision such as [3], [10], [11], or by an IMU (see fig. 2).

2) Camera Models and Calibration: Although fisheye lenses cannot be classified as single viewpoint sensors [2], we use the unitary sphere in order to model our camera [31]. Mei and Rives [25] have proposed a calibration method based on this spherical model. This model is particularly accurate and allows modeling the radial and tangential distortions of the fisheye lens.

With the spherical model of [25], the projection is divided in two steps. First, a world point x_m is projected onto the unit sphere at x_s through its center S. Then, this point x_s is projected to the image plane at x_i through O. The parameter ξ defines the distance between O and S and is estimated during the calibration. Mixed stereo calibration is obtained by an adaptation of [5].
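As an illustration, the two-step spherical projection can be sketched as follows. This is a hedged reading of the unified sphere model with parameter ξ; the function name and the intrinsic matrix K are our own illustrative choices, not the exact formulation of [25]:

```python
import numpy as np

def unified_sphere_project(X, xi, K):
    """Two-step projection of a 3D point with the unified sphere model.

    Step 1: project the world point X onto the unit sphere centered at S.
    Step 2: project the sphere point to the image plane through O,
    located at distance xi from the sphere center S.
    K is an assumed 3x3 matrix of intrinsic parameters.
    """
    Xs = X / np.linalg.norm(X)                        # x_s: point on the unit sphere
    x, y, z = Xs
    m = np.array([x / (z + xi), y / (z + xi), 1.0])   # perspective step through O
    return K @ m                                      # x_i in homogeneous pixel coords
```

Note that with ξ = 0 the model degenerates to an ordinary pinhole projection, which is a quick sanity check for an implementation.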

B. Plane-sweeping

Plane-sweeping was introduced by Collins [8]. First, a reference view has to be defined. Then, for each normal and each distance to a 3D plane, the warped image is compared by homography to the reference image (eq. 1). Let I(p) be the intensity of the pixel p in the image I, and I*(p, d) the warping of I by the homography H:

I*(p, d) = I(Hp) = I((R − Tn^T/d) p)    (1)

The best estimation of the homography H corresponds to the minimum global error of the difference between the warped image and the reference view. In our application, we extend this aspect. We take the perspective view I_p as the reference, since manipulating a neighborhood on a plane is more convenient than on a sphere. The image obtained with the fisheye camera is projected onto the sphere and then onto the reference plane by homography. Notice that our cameras have the same orientation, so the region of the fisheye image under consideration has fewer distortions and a better resolution than the rest of the image. The two images can then be compared by subtraction (see fig. 3). We note I_p the perspective image, I_s the fisheye image projected onto the sphere and I*_s the image projected by homography onto the reference frame.

Fig. 3. Mixed plane-sweeping.
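To make the comparison step concrete, here is a minimal sketch of a sweep over candidate altitudes. The warping of the fisheye/sphere image into the perspective frame is abstracted behind a caller-supplied function, and the function names and the sum-of-absolute-differences score are our illustrative choices, not the paper's exact implementation:

```python
import numpy as np

def homography(R, T, n, d):
    # Plane-induced homography H = R - T n^T / d (eq. 1): R, T from calibration,
    # n the ground-plane normal, d the candidate altitude.
    return R - np.outer(T, n) / d

def sweep_altitude(I_p, warp, R, T, n, d_candidates):
    """Pick the altitude whose induced homography best aligns the two views.

    I_p: reference perspective image (2D array).
    warp(H): assumed caller-supplied function returning the fisheye image
             warped into the perspective frame by homography H.
    """
    best_d, best_err = None, np.inf
    for d in d_candidates:
        H = homography(R, T, n, d)
        I_w = warp(H)
        err = np.abs(I_p - I_w).sum()   # global SAD between warped and reference
        if err < best_err:
            best_d, best_err = d, err
    return best_d
```

The key design point is that only the scalar d varies during the sweep; R, T and n stay fixed, so each candidate costs one warp and one image subtraction.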

III. PLANE-SWEEPING OF MIXED VIEWS

We propose to estimate the altitude d and to segment the ground plane by mixed plane-sweeping, with R, T and n known by calibration and attitude estimation. First, we present the homography used in our models. Secondly, we expose the plane-sweeping of mixed views algorithm.

A. Sphere to plane homography

Given a mixed stereo rig modeled by a plane and a unit sphere, we propose in this part to define the homography of the 3D ground plane that exists between the two views from different types of cameras (see fig. 4). In [17], a homography links two projections of a 3D plane on two planes. In [24], a homography links two projections of a 3D plane on two spheres:

H = R − Tn^T/d    (2)

Let us consider:
• X_p, a point of the 3D plane projected on the perspective view.
• X*_p, the projection of X_p from a perspective view to another perspective view by homography.
• X_s, a point of the 3D plane projected on the sphere.

We have the following relation (3) for a homography between two perspective views:

X*_p ∼ H^{-1} X_p    (3)

We now replace these two planes of projection by a planar and a spherical projection. We get, up to scale:

X_s ∼ X*_p / ||X*_p|| ∼ H^{-1} X_p    (4)
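Numerically, equations (3) and (4) amount to transferring a homogeneous plane point through H^{-1} and then renormalizing it onto the unit sphere. A small sketch (the function name is ours):

```python
import numpy as np

def plane_point_to_sphere(Xp, H):
    """Map a perspective-view point of the ground plane to the unit sphere.

    Implements X_p* ~ H^{-1} X_p followed by X_s = X_p* / ||X_p*||:
    the planar projection is transferred by the inverse homography,
    then radially projected onto the unit sphere of the fisheye model.
    """
    Xp = np.asarray(Xp, dtype=float)      # homogeneous plane coordinates
    Xstar = np.linalg.solve(H, Xp)        # H^{-1} X_p without forming the inverse
    return Xstar / np.linalg.norm(Xstar)  # unit-sphere point X_s
```

Because both relations hold only up to scale, the normalization step is what removes the scale ambiguity on the sphere side.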

Fig. 4. Sphere/plane homography.

As discussed previously, the homography H depends on R, T, the attitude (normal n) and the altitude d. The rotation R and translation T are obtained by calibration. The normal n to the ground plane can be obtained by [3], [10], [11] or by an inertial system; the method of [3] has been tested on a fisheye view (fig. 5). Finally, we estimate the altitude d by plane-sweeping.
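For completeness, one common way to turn a roll/pitch attitude estimate into the ground-plane normal n is sketched below. The roll-then-pitch Euler convention is an assumption on our part; [3], [10], [11] each define their own parameterization:

```python
import numpy as np

def ground_normal_from_attitude(roll, pitch):
    """Ground-plane normal in the body/camera frame from roll and pitch (radians).

    Assumption: the normal is vertical in the world frame, so expressing the
    world z-axis in the attitude-rotated frame gives n. Yaw does not affect
    a vertical normal and is therefore omitted.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    # world z-axis expressed in the rotated frame
    return (Rx @ Ry).T @ np.array([0.0, 0.0, 1.0])
```

At zero roll and pitch this reduces to n = (0, 0, 1), i.e. the camera looking straight down at a horizontal ground plane.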

B. Algorithm

To estimate the altitude, our plane-sweeping algorithm performs a top-down search. Let d_min and d_max be the minimum and maximum altitudes to estimate. At each iteration, the best altitude d̂_k is estimated from a range d_k ∈ [d_min, d_max], then the mask G is updated (see algorithm 1). In this algorithm, the mask G corresponds to the segmented ground plane; pixels belonging to the ground plane are shown in white in figure 6(c). In order to obtain a real-time method, we propose to estimate the altitude at time t using time t − 1. We define Δd as the tolerance on the altitude and s as the number of samples per iteration.

Algorithm 1 Altitude and ground plane segmentation algorithm - initialization
Estimation(d_min, d_max, s, Δd)
{Initialization}
a_0 = d_{-1} = d_min; b_0 = d_0 = d_max
G = {p ∈ Pixels}
while |d̂_k − d̂_{k-1}| > Δd do
  {Estimation of the best altitude}
  d̂_{k+1} = argmin_{d ∈ {((b_k − a_k)/(s−1)) t + a_k ; t ∈ [0, s−1]}} Σ_{p ∈ G} |I_P(p) − I*_S(p, d)|
  {Estimation of the inliers/outliers mask}
  G = {p ∈ Pixels ; (Σ_{p1 ∈ W_p} |I_P(p1) − I*_S(p1, d̂)|) / (Σ_{p1 ∈ W_p} I_P(p1)) < thres}
  {Estimation of the new range depending on the sampling}
  a_{k+1} = d̂_{k+1} − (b_k − a_k)/(s−1); b_{k+1} = d̂_{k+1} + (b_k − a_k)/(s−1)
  k = k + 1
end while
Return d̂_k

The estimation is performed in two phases:
• Initialization: we estimate the best altitude within a wide range of altitudes (algorithm 1).
• During flight: we use the altitude estimated at the previous instant to obtain a narrower range d_t ∈ [d_{t−1} − r_d, d_{t−1} + r_d], by substituting d_min = d_{t−1} − r_d and d_max = d_{t−1} + r_d in algorithm 1, with r_d computed as in (eq. 5, 6). This range depends on the vertical velocity v_v of the UAV (about ±5000 mm/s) and on the computation rate of the hardware in frames per second, noted fps:

r_d = v_v / fps    (5)

d_t ∈ [d_{t−1} − r_d, d_{t−1} + r_d]    (6)

As we will see in the following section, our algorithm is able to compute both the altitude and the ground plane in real time with the mixed stereo system.
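The coarse-to-fine search of algorithm 1, together with the range update of (eq. 5, 6), can be sketched as follows. The photometric error is abstracted behind a callback, and the function names and default parameter values are our illustrative choices:

```python
def estimate_altitude(score, d_min, d_max, s=6, delta_d=1.0):
    """Coarse-to-fine altitude search in the spirit of Algorithm 1.

    score(d): assumed caller-supplied photometric error between the reference
    view and the fisheye view warped by the homography induced by altitude d.
    Each iteration samples the current range [a, b] at s points, keeps the
    best sample, and shrinks the range around it until the change in the
    estimate falls below the tolerance delta_d.
    """
    a, b = d_min, d_max
    d_prev, d_best = d_min, d_max
    while abs(d_best - d_prev) > delta_d:
        d_prev = d_best
        step = (b - a) / (s - 1)
        candidates = [a + t * step for t in range(s)]
        d_best = min(candidates, key=score)      # best-scoring candidate altitude
        a, b = d_best - step, d_best + step      # narrowed range for next pass
    return d_best

def next_range(d_prev, vv=5000.0, fps=30.0):
    """Re-center the search range on the previous estimate (eqs. 5-6)."""
    rd = vv / fps
    return d_prev - rd, d_prev + rd
```

Since the interval shrinks geometrically, only a handful of iterations (and hence warps) are needed per frame, which is what makes the CPU implementation fast enough for flight.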

IV. EXPERIMENTAL RESULTS

Our results can be divided in two parts. First, images are processed offline with standard cameras (Sony XCD-V50CR). In the second part, images are processed online for embedded applications, with micro-cameras equipped with M12 fisheye and perspective lenses. In each experiment, we estimate the attitude with an inertial measurement unit (IMU) to validate our approach.

A. Attitude estimation by fisheye lens

For the attitude estimation, we use the measurements of an IMU in order to have the lowest error and the best computation time. In the future, we will use an adaptation of [12] and [3]. The error introduced by this method is at most 3°, and our algorithm is insensitive to low attitude errors: for example, on synthetic images, an attitude error between 3° and 5° introduces an altitude error between only 0.1% and 0.4%. This method, tested on our fisheye lens, estimates the attitude as well as a catadioptric lens does (see fig. 5).

Fig. 5. Error of attitude estimation = 0.69° - red lines are detected edges, green lines are 3D lines projected in the image

B. Altitude estimation and/or ground plane segmentation

We then present two experimental setups, in which the real altitude is measured by a laser telemeter and the error is computed as error = (estim_altitude − real_altitude) / real_altitude:
• the first is made with two cameras with a 447 mm baseline, fixed on a pneumatic telescopic mast; altitude and ground plane estimation are performed offline on a GPU;
• the second is made with two micro-cameras with a 314 mm baseline, embedded on a compact UAV; altitude estimation is performed online by CPU processing.

For the first experiment, on the one hand, we observe an accurate estimation of the altitude on a free ground plane (tab. I), with an error between 0.18% and 3.14%. In the case of obstacles on the ground plane, we observe a higher error, between 7.52% and 8.82%. On the other hand, we observe that the higher our system is, the less accurate the estimation becomes, because of the decrease of image resolution with altitude. Moreover, the accuracy depends on the size of the baseline. Those results are nevertheless well adapted to our application: accuracy is needed during the landing and takeoff phases, i.e. near the ground plane.

TABLE I
ALTITUDE AND GROUND PLANE ESTIMATION WITH AND WITHOUT OBSTACLES - ALGORITHM PARAMETERS FOR THIS TEST: s = 6, thres = 25

Type                               | Ground truth | Estim. altitude | Error
Ground                             | 2187 mm      | 2200 mm         | 0.59%
Ground                             | 3244 mm      | 3250 mm         | 0.18%
Ground + obstacles (low contrast)  | 3244 mm      | 3488 mm         | 7.52%
Ground                             | 4072 mm      | 4200 mm         | 3.14%
Ground                             | 5076 mm      | 5202 mm         | 2.48%
Ground + obstacles                 | 4080 mm      | 4440 mm         | 8.82%


The second aspect of our algorithm is the segmentation of the ground plane, which is well estimated for contrasted areas. In the case of a plane without obstacles, the pneumatic telescopic mast on which the cameras are fixed is correctly labeled as outliers (in dark in the image, fig. 6(c)). For an image composed of a dominant ground plane and walls, the ground plane is segmented as inliers while the walls are segmented as outliers. The detected inlier area is 31%, while an accurate estimation of the inlier area gives 54%. Our algorithm thus segments inliers and outliers globally, which is sufficient to estimate the dominant ground plane for our application.

One aspect to improve in our algorithm is the case of poorly textured planes: when the ground plane or the outliers (walls, objects) are homogeneous or poorly textured, the inlier/outlier segmentation becomes difficult.

Fig. 6. Altitude and ground plane segmentation - 4.8% of inliers - fisheye view (a), perspective view (b), ground plane segmentation (c), sphere to plane homography (d), reference and homography comparison (e)

For the second experiment, we implemented our system on a small quadri-rotor (see fig. 1). The micro-cameras embedded on the UAV (see fig. 7) are plugged into an external laptop to perform online altitude estimation. We tested the accuracy by comparing the altitudes estimated by plane-sweeping to the altitudes measured by the laser telemeter (fig. 8). The altitude is well estimated over the range of altitudes corresponding to the landing and takeoff phases of a UAV, with a mean error of 2.41%. An attached video shows an example of this experiment, and others are available [35].

Fig. 7. Embedded view of UAV - Est. altitude 1378mm

Fig. 8. Comparison between laser altimeter and plane-sweeping - altitude (mm) over the experiment, laser vs. plane-sweeping estimates

C. Performance on GPU, CPU and embedded boards

First, we developed this algorithm on GPU with Brook+ for ATI, which reaches a real-time (30 Hz) frame-rate while estimating the altitude and segmenting the ground plane together. This implementation has been tested on an ATI 4850 with an E8400 3 GHz CPU.

Then, we implemented the algorithm on CPU with many optimizations and without the segmentation of the ground plane. With this implementation we get min: 80 Hz, mean: 180 Hz, max: 250 Hz, which is above video frame-rate and allows us to run the algorithm online. The platform for those tests is a MacBook Pro with a C2D P8400 2.26 GHz CPU. A demonstration has been developed [33]: we use a stereo rig with uEye cameras and obtain the normal from an IMU, and the system estimates the altitude in real time with robustness and accuracy.

An embedded version of our algorithm has been ported to the ARM of a Gumstix Overo Fire with an OMAP3530 processor at 600 MHz. With this implementation we get a frame-rate around 5 Hz, which is not enough for real-time applications but is interesting relative to the power/size ratio.

By developing the algorithm on GPU, CPU and an embedded board, we get complementary results. When the ground plane and the altitude are estimated together, the processing is real time and can be embedded on a UAV with a GPU. For altitude estimation only, the computation is faster and can be embedded on a smaller quadri-rotor UAV. We implemented and validated the CPU altitude estimation on a light quadri-rotor [33]; the algorithm ran on a MacBook Pro in real time and during the flight.

V. CONCLUSIONS AND FUTURE WORKS

We have presented in this paper a hybrid stereo system in which the mixed cameras are related by a homography, which allows estimating both the altitude and the ground plane using plane-sweeping. Compared to algorithms based on feature matching, plane-sweeping is a correspondence-free algorithm: it directly compares the images, tests a range of altitudes and selects the one for which the global error is minimum. First, we implemented this algorithm on GPU and presented good preliminary results in a video [34]. Then, we implemented the altitude estimation on CPU; the laptop version used for demonstration runs at around 180 Hz and has been deployed on a real UAV. A second version has been developed on an embedded board the size of a stick of gum, with a frame-rate of 5 Hz. Note that the video shows a higher computation time, due to the recording of the video during the flight.

Perspectives of this work are to improve the segmentation on poorly textured surfaces and to implement a fully onboard version of our approach.

VI. ACKNOWLEDGMENTS

This work is supported by Région Picardie Project ALTO. The experiments have been realized on the UAV platform of Heudiasyc with the cooperation of Luis-Rodolfo Garcia-Carrillo and Eduardo Rondon. A mixed calibration software has been developed with the cooperation of Guillaume Caron.

REFERENCES

[1] G. Barrows, C. Neely and K. Miller, "Optic flow sensors for MAV navigation," in Fixed and Flapping Wing Aerodynamics for Micro Air Vehicle Applications, ser. Progress in Astronautics and Aeronautics, T. J. Mueller, Ed. AIAA, 2001, vol. 195, pp. 557-574.
[2] S. Baker and S. K. Nayar, "A Theory of Single-Viewpoint Catadioptric Image Formation," International Journal of Computer Vision, 1999.
[3] J.-C. Bazin, I. Kweon, C. Demonceaux and P. Vasseur, "UAV Attitude Estimation by Vanishing Points in Catadioptric Images," in Proceedings of IEEE International Conference on Robotics and Automation, 2008.
[4] A. Beyeler, C. Mattiussi, J.-C. Zufferey and D. Floreano, "Vision-based Altitude and Pitch Estimation for Ultra-light Indoor Microflyers," in Proceedings of IEEE International Conference on Robotics and Automation, 2006.
[5] G. Caron, E. Marchand and E. Mouaddib, "Single Viewpoint Stereoscopic Sensor Calibration," in Int. Symp. on Image/Video Communications over fixed and mobile networks, 2010.
[6] J. Chahl, M. Srinivasan and H. Zhang, "Landing strategies in honeybees and applications to uninhabited airborne vehicles," The International Journal of Robotics Research, vol. 23, no. 2, pp. 101-110, 2004.
[7] A. Cherian, J. Andersh, V. Morellas, N. Papanikolopoulos and B. Mettler, "Autonomous Altitude Estimation Of A UAV Using A Single Onboard Camera," in Proceedings of IEEE International Conference on Intelligent and Robotic Systems, 2009.
[8] R. T. Collins, "A space-sweep approach to true multi-image matching," in Proceedings of IEEE Computer Vision and Pattern Recognition, 1996.
[9] J. Courbon, Y. Mezouar, L. Eck and P. Martinet, "A Generic Fisheye camera model," in Proceedings of IEEE International Conference on Intelligent and Robotic Systems, 2007.
[10] C. Demonceaux, P. Vasseur and C. Pégard, "Robust Attitude Estimation with Catadioptric Vision," in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006.
[11] C. Demonceaux, P. Vasseur and C. Pégard, "Omnidirectional vision on UAV for attitude computation," in Proceedings of IEEE International Conference on Robotics and Automation, 2006.
[12] C. Demonceaux, P. Vasseur and C. Pégard, "UAV Attitude Computation by Omnidirectional Vision in Urban Environment," in Proceedings of IEEE International Conference on Robotics and Automation, 2007.
[13] D. Gallup, J.-M. Frahm, P. Mordohai, Y. Qingxiong and M. Pollefeys, "Real-Time Plane-Sweeping Stereo with Multiple Sweeping Directions," in Proceedings of IEEE Computer Vision and Pattern Recognition, 2007.
[14] P. Garcia-Padro, G. Sukhatme and J. Montgomery, "Towards vision-based safe landing for an autonomous helicopter," Robotics and Autonomous Systems, 2000.
[15] I. Geys, T. P. Koninckx and L. V. Gool, "Fast Interpolated Cameras by combining a GPU based Plane Sweep with a Max-Flow Regularisation Algorithm," in Proceedings of IEEE 3D Processing, Visualization and Transmission, 2004.
[16] W. Green, P. Oh, K. Sevcik and G. Barrows, "Autonomous landing for indoor flying robots using optic flow," in ASME International Mechanical Engineering Congress and Exposition, vol. 2, 2003, pp. 1347-1352.
[17] R. Hartley and A. Zisserman, "Multiple View Geometry in computer vision," Cambridge, 2nd edition, 2003.
[18] J. Hoffmann, M. Jungel and M. Lotzsch, "Vision Based System for Goal-Directed Obstacle Avoidance," in 8th International Workshop on RoboCup, 2004.
[19] S. Hrabar and G. Sukhatme, "Omnidirectional vision for an autonomous helicopter," in Proceedings of IEEE International Conference on Robotics and Automation, 2004.
[20] I.-K. Jung and S. Lacroix, "High resolution terrain mapping using low altitude aerial stereo imagery," in Proceedings of IEEE International Conference on Computer Vision, 2003.
[21] Y. Kim and H. Kim, "Layered ground floor detection for vision-based mobile robot navigation," in Proceedings of IEEE International Conference on Robotics and Automation, 2004.
[22] B. Liang and N. Pears, "Visual Navigation using Planar Homographies," in Proceedings of IEEE International Conference on Robotics and Automation, 2002.
[23] Y. Ma, S. Soatto, J. Kosecka and S. Shankar Sastry, "An invitation to 3D Vision," Springer, 2003.
[24] C. Mei, S. Benhimane, E. Malis and P. Rives, "Homography-based Tracking for Central Catadioptric Cameras," in Proceedings of IEEE Intelligent Robots and Systems, 2006.
[25] C. Mei and P. Rives, "Single View Point Omnidirectional Camera Calibration from Planar Grids," in Proceedings of IEEE International Conference on Robotics and Automation, 2007.
[26] M. Meingast, C. Geyer and S. Sastry, "Vision Based Terrain Recovery for Landing Unmanned Aerial Vehicles," in Proceedings of IEEE International Conference on Decision and Control, 2004.
[27] M. Sanfourche, G. Besnerais and S. Foliguet, "Height estimation using aerial side looking image sequences," in ISPRS, vol. XXXIV, part 3/W8, Munich, 2003.
[28] S. Saripalli, J. Montgomery and G. Sukhatme, "Vision based autonomous landing of an unmanned aerial vehicle," in Proceedings of IEEE International Conference on Robotics and Automation, 2002.
[29] C. Sharp, O. Shakernia and S. Sastry, "A vision system for landing an unmanned aerial vehicle," in Proceedings of IEEE International Conference on Robotics and Automation, 2001.
[30] P. Sturm, "Mixing Catadioptric and Perspective Cameras," in Proceedings of IEEE Omnidirectional Vision, 2002.
[31] X. Ying and Z. Hu, "Can We Consider Central Catadioptric Cameras and Fisheye Cameras within a Unified Imaging Model," in European Conference on Computer Vision, 2004.
[32] J. Zhou and B. Li, "Robust Ground Plane Detection with Normalized Sequences from a Robot Platform," in Proceedings of IEEE International Conference on Image Processing, 2006.
[33] http://www.youtube.com/watch?v=NqGL8h9zGd0, 2010.
[34] http://www.youtube.com/watch?v=UjiT3q9VN1g, 2010.
[35] http://www.youtube.com/watch?v=ubXzf0eLud4, 2010.
