Conference Paper

Performance evaluation in obstacle avoidance

Authors:
Clint Nous, Roland Meertens, Christophe De Wagter and Guido de Croon
Abstract—No quantitative procedure currently exists to evaluate the obstacle avoidance capabilities of robotic systems. Such
an evaluation method is not only needed to compare different
avoidance methods, but also to determine the operational limits
of autonomous systems. This work proposes an evaluation
framework which can find such limits. The framework com-
prises two types of tests: detection tests and avoidance tests.
For each type, both environment and performance metrics
need to be defined. For detection tests such metrics are well
known, but for avoidance tests such metrics are not readily
available. Therefore a new set of metrics is proposed. The
framework is applied to a UAV that uses stereo vision to detect
obstacles. Three different avoidance methods are compared in
environments of varying difficulty.
I. INTRODUCTION
Autonomous flight with aerial robots has many promising
applications, such as surveillance, inspection or package
delivery. To carry out such tasks a reliable obstacle avoidance
system is essential. Many obstacle avoidance systems are
available [1], but it is unknown what the performance of
these systems is. No quantitative evaluation procedure to
measure the performance of obstacle avoidance systems is
available. Such a procedure is required to determine the
reliability of obstacle avoidance systems and to find in what
conditions these systems can safely operate. Knowledge of
these operational conditions is crucial when deploying UAVs
for practical applications.
The functioning of an obstacle avoidance system is often
demonstrated in a single environment. Environments found
in the literature are diverse, with among others: forests,
buildings, hallways and sparse obstacle courses. It is dif-
ficult to make performance predictions based on a single
environment since the performance of an obstacle avoidance
system depends on the environment in which it operates. For
the same reason it is difficult to compare obstacle avoidance
methods which are demonstrated in different environments.
Therefore a standard evaluation procedure is needed. Such
a procedure would: (1) provide a quantitative measure of
the system performance, (2) assist in designing engineering
solutions, (3) compare obstacle avoidance methods and (4)
allow accurate assessment of the state of the art. Without a
good evaluation method, the development of new algorithms is likely to lead to ad-hoc solutions, which is currently seen in the field of obstacle avoidance.

Micro Air Vehicle laboratory, Control and Simulation department, Delft University of Technology g.c.h.e.decroon@tudelft.nl.
(c)2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Original publication: http://ieeexplore.ieee.org/document/7759532/.

Fig. 1: We introduce a framework for evaluating obstacle avoidance methods and apply it to a drone with an omnidirectional stereo vision system (left). A novelty is the inclusion of quantifiable environment complexity metrics (right).
An attempt to create such an evaluation method was made by Mettler et al. [2]. In this method, six simple obstacle courses and an urban environment are used to test the performance of an obstacle avoidance algorithm. Unfortunately, Mettler et al. do not motivate the choice of the selected tests, nor explain how representative these tests are of the overall performance of a system.
Besides this work and some papers in which obstacle
avoidance algorithms are compared [3] [4], researchers have
not attempted to quantify or benchmark the performance
of obstacle avoidance algorithms. This is remarkable, con-
sidering the countless research contributions done in the
field. Such evaluation methods and benchmark data sets are
common practice in other research fields such as computer
vision or control engineering [5].
In this paper an evaluation framework is proposed which
makes it possible to quantify the performance of an obstacle
avoidance system. A key aspect of this framework is to
quantify the environment using specific metrics. First the
evaluation framework and its metrics are discussed (Sec-
tion II & III) after which the framework is applied to a
robotic application (Section IV).
II. EVALUATION FRAMEWORK
Developing an evaluation framework for obstacle avoid-
ance is difficult due to three aspects: (1) Performance de-
pends on the complete control loop, (2) There is a high
variety of operational conditions, (3) Each obstacle avoidance
method is developed for a different set of environments.
The complete control loop of an obstacle avoidance system is shown in Figure 2. In the figure, four main functions are identified (A-D), shown by dotted squares. For each function a 'cloud' is drawn to visualise the factors that impact its performance, specifying in which way the performance depends on the control loop.

Fig. 2: Obstacle avoidance control loop in which the performance dependencies of four functions (A-D) are visualised.
This paper focuses on the obstacle detection function (A) and the calculation of the avoidance manoeuvre (B), as these are the primary functions of an obstacle avoidance system. The factors which influence the performance of the detection function are visualised by the green cloud. Three factors are present: state, noise and the environment. The state determines the distance to obstacles and also the velocity; both influence detection performance. The noise arrow represents the internal noise of the sensor and has a direct impact on the measurement. Finally, the environment conditions to which the detection sensor is sensitive are represented by 'Environment A'.
The factors that influence the performance of function B are shown by the blue 'cloud'. All blocks in the control loop affect this performance, and noise, disturbances and external conditions are included as well. Lastly, the environment conditions to which the avoidance manoeuvre is sensitive are represented as 'Environment B'. For completeness, the 'clouds' of functions C and D are drawn as well.
Now that an overview has been presented of what affects the performance of obstacle avoidance functions, it can be used to define evaluation tests. Each factor in the clouds of functions A and B can be used as an independent variable in a performance test. In this paper the focus is put on the environment factor; the other factors are assumed to be constant, and two-factor interactions are not considered.
To quantify the environment factor (such that it can be used as an independent variable), it needs to be described with quantitative metrics; without such metrics it is difficult to make performance predictions. The definition of these metrics is discussed in the next section. Besides the independent variable, a dependent variable is needed to quantify the performance, which is done by specifying a performance metric. Both types of metrics are discussed for function A (detection) and function B (avoidance) in the next section.
III. ENVIRONMENT AND PERFORMANCE METRICS
The environment and performance metrics selected for the
evaluation measurements depend on the obstacle avoidance
system that is used. For the detection function these metrics
depend on the sensor and its processing, for the avoidance
function these metrics depend on the avoidance algorithm.
Also some general characteristics influence the type of
metric. For instance, an obstacle avoidance task with static
obstacles requires different environment metrics than an
obstacle avoidance task with dynamic obstacles. In the
dynamic case, additional factors such as the velocities and
reactions of the obstacles play an important role and should
be captured by the environment metrics. Whether the task has the UAV move in 2D or 3D has a similar influence on the metrics, as 2D environment metrics may not directly generalise to the 3D case. Although the framework is applicable to all such characteristics, the focus here will be on metrics for a 2D environment with static obstacles.
A. Detection
The goal of the detection measurement is to determine
under what conditions obstacles can be detected. To define
metrics for these tests a broad overview of detection methods
is required. The main methods can be divided based on the
detection sensor that is used [6]. The six main sensors are:
monocular vision, stereo vision, infrared, sonar, laser and
radar. In this discussion cooperative sensors such as ADS-B
are omitted. First the environment metrics are discussed.
1) Environment metric: Each sensor is sensitive to different environment characteristics. Fortunately, these are fairly well known. An overview is given in Table I, which specifies the relevant metrics for each sensor. The distance, for example, is relevant for all sensors, but the illumination is only relevant for monocular and stereo vision.
TABLE I: Relevant environment metrics for detection sen-
sors
Monocular vision Stereo vision Infrared Sonar Laser Radar
Distance X X X X X X
Reflectivity X X X X X
Illumination X X
Texture X X
Illumination (Infrared) X
Inclination X X X
Transparency X X X
Radar cross section X
2) Performance metric: The most straightforward performance metrics for detection are the distance error and its variance. Another metric is the Receiver Operating Characteristic (ROC) curve, in which true positives and false positives are plotted as a function of a detection threshold. A third metric seen in the literature is the computational time. Since the computational time typically does not depend on the environment, it is treated here not as a performance metric but as a condition.
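As an illustration, the ROC curve for a threshold-based detector can be computed directly from detector scores on frames with and without an obstacle. The following sketch assumes a scalar detection score per frame; the function and variable names are illustrative, not from the paper:

```python
def roc_points(scores_with_obstacle, scores_without, thresholds):
    """One (false positive rate, true positive rate) point per threshold,
    for a detector that reports an obstacle when its score exceeds the
    threshold."""
    points = []
    for th in thresholds:
        tp = sum(s > th for s in scores_with_obstacle)   # correct detections
        fp = sum(s > th for s in scores_without)         # false alarms
        points.append((fp / len(scores_without),
                       tp / len(scores_with_obstacle)))
    return points
```

Sweeping the threshold from high to low traces the curve from (0, 0) towards (1, 1); a detector is better the closer its curve comes to the top-left corner.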
B. Avoidance
The goal of the avoidance tests is to determine under
which conditions detected obstacles can be avoided. Again
relevant metrics need to be selected, which depend on the method that is used. Unfortunately, no simple division can be made between the wide diversity of avoidance methods. Methods vary from simple rule-based instructions to complex path planners, and even within path planning a large variety exists. All these methods can be sensitive to different environment conditions. A table similar to Table I needs to be constructed in which the columns represent the methods and the rows the complexity metrics, but it is unknown what these metrics should be. In the following, a novel set of such metrics is proposed.
1) Environment metric: For detection sensors the relation
between the environment and the performance is often dis-
cussed in literature, but for avoidance algorithms it is not. As
mentioned in the introduction, most research contributions
use a specific environment without a motivation or specific
metric. Therefore not much is known about the performance
of these algorithms in different environments.
Only a few environment metrics are seen in the literature: the width of the obstacles, their in-between distance or the obstacle density [7], [8]. These metrics do not take the size of the UAV into account, while this is essential: it is more challenging for a UAV with a radius of 0.5m to fly through obstacles with an in-between distance of 1.0m than it is for a UAV with a radius of 0.1m. In the following, a new set of non-dimensional environment metrics is proposed which takes the properties of the UAV into account.
a) Traversability: The first metric is the traversability, which is related to the obstacle density. An obstacle avoidance task becomes more difficult when the distance between obstacles becomes smaller; this difficulty is quantified by the traversability. It is calculated by selecting random positions with multiple random headings in a flight area. For each position and heading, the maximum distance s is determined in which no obstacle is present (when flying with a constant heading). The average of these distances over n samples gives a measure of how densely packed the environment is, and therefore how challenging the task is. In Figure 1 the blue lines display these distances for four positions with multiple headings. The calculation is shown by Equation 1, in which the value is divided by the radius r of the UAV to make the metric non-dimensional.

Traversability = (1 / (n r)) * sum_{i=1}^{n} s_i    (1)
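A Monte Carlo estimate of Equation 1 can be sketched as follows. For illustration, obstacles are modelled as circles (the metric itself makes no assumption on obstacle shape); all names and default parameters are illustrative:

```python
import math
import random

def free_distance(x, y, heading, obstacles, max_range):
    """Distance flown along a constant heading before hitting an obstacle.

    obstacles: list of (cx, cy, radius) circles, a modelling convenience.
    """
    best = max_range
    dx, dy = math.cos(heading), math.sin(heading)
    for cx, cy, r in obstacles:
        t = (cx - x) * dx + (cy - y) * dy        # along-ray projection
        if t <= 0:
            continue                              # obstacle is behind us
        d2 = (cx - x - t * dx) ** 2 + (cy - y - t * dy) ** 2
        if d2 < r * r:                            # ray pierces the circle
            best = min(best, max(t - math.sqrt(r * r - d2), 0.0))
    return best

def traversability(obstacles, area, uav_radius, n=1000, max_range=100.0, seed=0):
    """Monte Carlo estimate of Equation 1: the mean free distance over n
    random positions and headings, divided by the UAV radius r."""
    (xmin, xmax), (ymin, ymax) = area
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, y = rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)
        h = rng.uniform(0.0, 2.0 * math.pi)
        total += free_distance(x, y, h, obstacles, max_range)
    return total / (n * uav_radius)
```

In an empty area the estimate saturates at max_range / r; denser environments give smaller values, matching the intuition that a lower traversability means a harder task.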
b) Collision state percentage: The second metric is the
collision state percentage, which combines the dynamical
constraints of the robot with the available free space. For
example, for a fixed wing UAV, flying inside a room with
obstacles becomes more difficult when the UAV’s turning ra-
dius increases, but also when the size of the room decreases.
This effect is quantified by calculating the percentage of
states for which a collision is unavoidable given the robot’s
dynamical constraints. When all possible trajectories lead to
a collision, the state is marked as a 'collision state'. These are shown by the red dots in Figure 1 for initial headings of 0 rad. In the figure, the propagated trajectories have been calculated using a minimum turning radius which depends on the UAV dynamics, the velocity and the delay in the system. For quadrotors, a maximum braking acceleration (minimum braking distance) can of course be taken into account as well.
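A sketch of this computation is given below for the heading-0 case shown in Figure 1. As a simplifying assumption (not the paper's exact procedure), a state is marked safe if at least one minimum-radius loiter circle, full left or full right turn, is collision-free in a rectangular arena with circular obstacles:

```python
import math

def circle_free(cx, cy, R, arena, obstacles, uav_r):
    """True if a loiter circle of radius R centred at (cx, cy) stays inside
    the rectangular arena and clear of all (ox, oy, r) circular obstacles."""
    xmin, xmax, ymin, ymax = arena
    if not (xmin + uav_r <= cx - R and cx + R <= xmax - uav_r and
            ymin + uav_r <= cy - R and cy + R <= ymax - uav_r):
        return False
    for ox, oy, r in obstacles:
        # the path swept by the UAV must not intersect the obstacle
        if abs(math.hypot(cx - ox, cy - oy) - R) < r + uav_r:
            return False
    return True

def is_collision_state(x, y, heading, R_min, arena, obstacles, uav_r):
    """Approximation: the state is safe if at least one minimum-radius
    loiter circle (left or right) is collision-free."""
    for direction in (+1, -1):
        cx = x - direction * R_min * math.sin(heading)
        cy = y + direction * R_min * math.cos(heading)
        if circle_free(cx, cy, R_min, arena, obstacles, uav_r):
            return False
    return True

def collision_state_percentage(R_min, arena, obstacles, uav_r, grid=0.2):
    """Percentage of grid states (heading 0, as in Figure 1) from which a
    collision is unavoidable under this approximation."""
    xmin, xmax, ymin, ymax = arena
    total = bad = 0
    x = xmin
    while x <= xmax:
        y = ymin
        while y <= ymax:
            total += 1
            bad += is_collision_state(x, y, 0.0, R_min, arena,
                                      obstacles, uav_r)
            y += grid
        x += grid
    return 100.0 * bad / total
```

The sketch captures the two trends in the text: the percentage grows when the turning radius R_min increases and when the arena shrinks.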
c) Average avoidance length: The third proposed metric evaluates the size of the obstacles. This metric quantifies the difference between a forest environment with thin obstacles and a building environment with wide obstacles. It is calculated by averaging the lateral movement needed to avoid obstacles at each time-step during a flight. The lateral movement is the sum of the (projected) width of the obstacle and the radius of the UAV, as shown by the green lines in Figure 1. Again the metric is made non-dimensional by dividing it by the radius of the UAV.
d) Other metrics: Two other metrics, which are not
discussed in detail, are the average orientation of obstacles
and the percentage of dead-ends in an airspace. These
have been inspired by known weak points in force field
methods and path planners. This set of five metrics could be
expanded further by introducing more novel metrics. Please
note that none of the environment metrics assume specific
geometric attributes of the obstacles, so that they are also
valid for irregularly shaped obstacles. Moreover, while we
present the metrics for a 2D scenario, generalisation to 3D
is straightforward for most metrics.
2) Performance metrics: The performance metrics are
again fairly straightforward. Generally two main types of
performance metrics are seen: success rate and path opti-
mality. There exists a hierarchy between success rate and
path optimality, since path optimality can only be determined
when an obstacle is successfully avoided.
For the success rate three scenarios are distinguished:
successful flights which reach their goal safely, unsuccessful
flights which do not reach the goal but do not collide either,
and flights which lead to a collision. The performance metric
specifies the percentage for each scenario.
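Tallying the three scenarios is straightforward; a minimal sketch in which the label names are illustrative:

```python
from collections import Counter

def scenario_percentages(outcomes):
    """Percentage of flights per scenario. The three labels mirror the
    text: 'success' (goal reached safely), 'no_goal' (no collision but
    goal not reached) and 'collision'."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return {k: 100.0 * counts[k] / n
            for k in ("success", "no_goal", "collision")}
```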
Two secondary performance metrics evaluate the optimal-
ity of the successful flights. This can be done in multiple
ways. In this framework the choice is made to focus on two
optimality metrics: travelled distance and average velocity.
The duration and energy usage can be derived from these
metrics.
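Both optimality metrics follow directly from the logged trajectory; a minimal sketch assuming (x, y) positions sampled at a fixed time-step:

```python
import math

def path_optimality(trajectory, dt):
    """Travelled distance and average velocity of a successful flight,
    from (x, y) positions logged every dt seconds (an assumption of
    this sketch). Duration and energy use can be derived from these."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:]))
    duration = dt * (len(trajectory) - 1)
    return dist, dist / duration
```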
IV. EVALUATION OF A ROBOTIC APPLICATION
In this section the framework is applied to a UAV equipped
with six stereo vision systems placed in a hexagon (see
Figure 6). In the following subsections the performances of
three different avoidance strategies are determined.
A. Detection Performance
The six 4-gram stereo cameras [9] create a 360° view of the environment. Each camera produces two small images of 96 × 128 px, for which pixels are sparsely matched on the stereo board to create a disparity map. This map is created using the Sum-of-Absolute-Differences scheme presented in [10]. The algorithm runs at 10 fps. For stereo vision, five performance tests are suggested in Table I; three of these are presented in the following paragraphs.
1) Distance: For the first test, the distance is selected as the independent variable. The performance effect of the obstacle distance is measured by placing a 1 × 1 m obstacle (Figure 3) in front of the stereo camera. The distance to this obstacle is varied from 0.5 m to 4 m. The result can be seen in Figure 3.

Fig. 3: Test obstacle (left); Results of the distance test, red = ground truth, blue = stereo measurements (right).

The figure shows that an obstacle can be measured accurately up to three meters. For larger distances the standard deviation increases rapidly; at four meters a standard deviation of more than 1 m is present. This increase in variance is fundamental to stereo vision. For this detection sensor a range of 3 m can therefore be assumed.
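The quadratic growth of the depth error is a first-order property of stereo triangulation: with disparity d = f·b/Z, a disparity error σ_d maps to a depth error σ_Z ≈ Z²/(f·b)·σ_d. A sketch with illustrative focal length, baseline and matching-error values (assumptions, not the tested camera's specifications):

```python
def depth_std(z_m, focal_px, baseline_m, disparity_std_px):
    """First-order stereo depth uncertainty.

    With disparity d = f*b/Z (f in pixels, b in meters), an error sigma_d
    on the matched disparity propagates to sigma_Z ~= Z**2/(f*b) * sigma_d,
    so the standard deviation grows quadratically with distance.
    """
    return z_m ** 2 / (focal_px * baseline_m) * disparity_std_px

# Illustrative values: 300 px focal length, 6 cm baseline, 0.5 px error.
errors = {z: depth_std(z, 300.0, 0.06, 0.5) for z in (1.0, 2.0, 3.0, 4.0)}
```

Doubling the distance quadruples the expected error, consistent with the rapid growth of the standard deviation beyond 3 m seen in Figure 3.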
2) Illumination: The second selected environment metric is the illumination. Measurements were conducted in a theatre in which the light exposure could be controlled. Tests in an illumination regime from 284 lx to 0.05 lx were conducted. The results of these measurements are shown in Figure 4. A decrease in illuminance results in a small increase in detection error, but the measurement remains remarkably accurate for distances of 0.5 to 3 meters: the stereo camera is still able to detect obstacles down to an illumination of 10^0.7 lx. A different behaviour is seen at a distance of 4 m. At an illumination of 10^0.85 lx the distance estimate increases up to a value of 9 m; when the illuminance is lowered further, the distance measurement drops to a value of 0.4 m. This large underestimation is caused by a different illumination of the left and right camera. So not only the illuminance is critical in detecting obstacles, but also the distribution of light. According to Figure 4, a minimal illuminance of 7 lx is required to obtain results at which the different distances can be well discerned.
Fig. 4: Results of the illuminance test, distance of 0.5 m = blue to 4 m = red (left); Disparity map from the test obstacle at 2 m (right).
3) Texture contrast: The third metric is the texture contrast. One 'ray' of the test obstacle in Figure 3 is analysed. In each ray the grey-scale decreases from white in the centre to black at the edges, so that the contrast increases from zero to the maximum between black and white. At a certain contrast the stereo algorithm is able to find a match, as shown in Figure 4. The distance measurements of the matches found inside the red area are shown in Figure 5.

Fig. 5: Results of the contrast test (left); Contrast vs illuminance test, blue = required contrast, red = max contrast (right).

The figure shows that the contrast does not influence the accuracy of the measurement but rather the point at which a detection is possible. Since a pixel difference depends on the contrast of the obstacle and the illuminance, it is interesting to see how this switching point changes when the illuminance is decreased. This relation is also shown in Figure 5. The red line in the figure shows the difference in 8-bit grey-scale at the edge of the panel; the contrast decreases when the illumination is decreased. The blue line shows the 8-bit grey-scale value at which a first match is found. A minimum contrast of 20 (8-bit grey-scale) is required to find a match. This value increases when the illuminance is decreased, which can be explained by the increased noise in the image: since a point is only accepted when it 'stands out' from the other pixels, a higher value is needed to find a disparity.
B. Avoidance
Now that an idea of the detection performance is present,
the performance of the avoidance manoeuvre can be anal-
ysed. Three avoidance strategies are applied: A force field
method based on the work from Kandil et al. [11], a potential
field method based on the work from Huang et al. [12] and
a simple rule based method. This method selects the heading
closest to the goal heading in which no obstacle is present.
The open source code is part of the Paparazzi project.
Fig. 6: Snapshot (left) and plotted top-view (right) of a traversability performance test.
For these methods it is not clear which metrics are relevant; therefore all metrics proposed in Section III are used for the performance tests. The measurements can be performed in simulation or in real flight. For both, it is important to state the assumptions made in the complete control loop, as described in Section II. Here the results of real-flight tests are presented; videos of the experiments can be found in the MAV-lab's YouTube playlist. The measurements are performed on an AR.Drone 2.0 using the Paparazzi autopilot. An INDI attitude controller [13] is used, combined with a PID velocity controller. The states (drone position, velocity) are estimated using an OptiTrack system. Actuator noise and disturbances are those of the real system at the time of the test. The tests are performed in an arena with an illuminance of 300 lx and objects with a high contrast, so that the influence of the detection sensor on the avoidance performance is minimised. The remaining noise is a function of the distance, as seen in Figure 3. In the following, the performance is discussed for three environment metrics: traversability, collision state percentage and average avoidance length.
1) Traversability: The performance under different traversability values is tested by decreasing the distance between three obstacles. The obstacles consist of 1 m square blocks, and their in-between distance is varied from 1 m to 3 m. The test with the lowest traversability value is shown in Figure 6.
A total of five tests were performed, each consisting of five flights. The results can be seen in Figures 7 and 8. Figure 7 shows the percentages of the three flight scenarios discussed in Section III.
Fig. 7: Percentage of reaching the goal (green), not reaching
the goal but no collision (orange), and collisions (red).
It can be seen that all three methods are unable to reach their goal for a traversability of 3.5. For a value of 6.1 and higher, the majority of flights for the force and potential field methods are successful. The rule-based method, however, still has a high collision percentage at this value. This higher percentage is likely the cost of its relatively high velocity. This velocity difference can be seen in Figure 8, which shows the path optimality of the successful flights for each method.
Fig. 8: Path optimality vs traversability; Force field (blue),
Potential field (Red), Rule based (yellow).
2) Collision state percentage: The performance under different collision state percentages is measured by decreasing the size of the room in which the UAV flies. To prevent the UAV from only flying in straight lines, a 1 m square obstacle is placed in the middle of the flight arena. A reference velocity of 1 m/s is used. An example of a test flight is shown in Figure 9.

Fig. 9: Snapshot (left) and plotted top-view (right) of a collision state percentage performance test.

Again the percentages of the three flight scenarios are evaluated, as well as the path optimality. The force field method performed best: it was able to perform successful flights up to a collision state percentage of 19%. The potential field and rule-based methods are less successful, avoiding obstacles only up to a collision state percentage of 13%. For path optimality the same hierarchy was present as the one shown in Figure 8. A decreased average velocity for all methods could also be observed when the collision state percentage was increased.
3) Average avoidance length: The effect of the length of the avoidance manoeuvre is measured by increasing the width of a single obstacle in front of the UAV. The smallest obstacle has a width of 0.3 m, the largest a width of 4 m. The results of these flights are shown in Figure 10.

Fig. 10: Percentage of reaching the goal (green), not reaching the goal but no collision (orange), and collisions (red).

Figure 10 shows that the force field method clearly depends on the avoidance length: for a value of 3.7 or higher it is not able to avoid the obstacle and instead gets stuck in a local minimum. The potential field method is able to avoid obstacles for all values. The rule-based method is able to avoid large obstacles but unable to avoid small ones. Again the path optimality was analysed. Similar travelled distances were observed for the potential field and rule-based methods; the force field method was less optimal, with larger travelled distances.
C. Conclusion of the performance tests
The presented test results provide a quantitative analysis of an obstacle avoidance method. Such an analysis is new in the field of obstacle avoidance and can be used to define the operational conditions in which an obstacle avoidance system can safely fly. The results also provide a comparison between three avoidance methods; which method is best depends on the design requirements.
For the detection function, the found performance limits can be summarised as follows: distance < 3 m, illuminance > 7 lx, and texture contrast > 20 (8-bit grey-scale). For the avoidance tests the results are summarised in Table II.
TABLE II: Performance summary

Metric                    Force Field   Potential Field   Rule Based
Traversability            > 3.9         > 3.9             > 5.9
Collision state factor    < 19%         < 13%             < 13%
Avoidance length          < 2.4         all               > 1.6
Orientation angle         > 1           all               all
Dead-end factor           = 0           = 0               all
The table shows the limits of the proposed methods for
each metric. The dead-end factor and average orientation are
included as well. Such a table could be used by engineers to
design obstacle avoidance systems which can operate under
a specific set of operational conditions.
V. CONCLUSION
A new framework was proposed, which allows the quan-
tification of the strengths and weaknesses of an obstacle
avoidance system. The framework identifies parts of the
entire obstacle avoidance control loop that can be tested sep-
arately, and introduces novel performance and environment
metrics. The application of the framework to a specific UAV 2D avoidance task shows that the metrics make it possible to identify the limits of the avoidance system in an objective and quantifiable manner. In this sense, the framework hopefully forms an important step towards a more solid design, evaluation and comparison of obstacle avoidance methods for robotics.
REFERENCES
[1] C. Goerzen, Z. Kong, and B. Mettler, "A survey of motion planning algorithms from the perspective of autonomous UAV guidance," Journal of Intelligent & Robotic Systems, vol. 57, 2010.
[2] B. Mettler, Z. Kong, C. Goerzen, and M. Whalley, "Benchmarking of obstacle field navigation algorithms for autonomous helicopters," Proceedings of the 66th Annual Forum of the American Helicopter Society, pp. 1–18, 2010.
[3] A. Alexopoulos, A. Kandil, P. Orzechowski, and E. Badreddin, "A comparative study of collision avoidance techniques for unmanned aerial vehicles," 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1969–1974, 2013.
[4] A. Yufka, "Performance comparison of bug algorithms," IATS, 2009.
[5] N. Thacker, A. Clark, J. Barron, J. Ross Beveridge, P. Courtney, W. Crum, V. Ramesh, and C. Clark, "Performance characterization in computer vision: A guide to best practices," Computer Vision and Image Understanding, vol. 109, pp. 305–334, 2008. [Online]. Available: http://dx.doi.org/10.1016/j.cviu.2007.04.006
[6] B. M. Albaker and N. A. Rahim, "Unmanned aircraft collision detection and resolution: Concept and survey," Proceedings of the 2010 5th IEEE Conference on Industrial Electronics and Applications, pp. 248–253, 2010.
[7] K. Sebesta and J. Baillieul, "Animal-inspired agile flight using optical flow sensing," Proceedings of the IEEE Conference on Decision and Control, no. 1, pp. 3727–3734, 2012.
[8] S. Karaman and E. Frazzoli, "High-speed flight in an ergodic forest," Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2899–2906, 2012.
[9] C. De Wagter, S. Tijmons, B. D. W. Remes, and G. C. H. E. de Croon, "Autonomous flight of a 20-gram flapping wing MAV with a 4-gram onboard stereo vision system," IEEE International Conference on Robotics & Automation (ICRA), pp. 4982–4987, 2014.
[10] D. Scharstein and R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, vol. 47, no. 1–3, pp. 7–42, 2002.
[11] A. A. Kandil, A. Wagner, A. Gotta, and E. Badreddin, "Collision avoidance in a recursive nested behaviour control structure for unmanned aerial vehicles," Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 4276–4281, 2010.
[12] W. H. Huang, B. R. Fajen, J. R. Fink, and W. H. Warren, "Visual navigation and obstacle avoidance using a steering potential function," Robotics and Autonomous Systems, vol. 54, pp. 288–299, 2006.
[13] E. J. J. Smeur, Q. Chu, and G. C. H. E. de Croon, "Adaptive incremental nonlinear dynamic inversion for attitude control of micro air vehicles," Journal of Guidance, Control, and Dynamics, vol. 39, no. 3, pp. 450–461, 2016.
... Such comparisons are hardly done in the literature, which makes it hard to compare different studies on obstacle avoidance in general. In [53], we have presented several objective, quantitative metrics that are able to characterize the difficulty for a robot to avoid obstacles in a given environment. One of the insights behind these metrics is that they have to take into account the size and dynamics of the robot. ...
... Please note that the traversability is a more effective metric than 'obstacle density' (e.g., surface of the flight area covered by obstacles [34]), as the latter would give the same value for a single big obstacle in a corner of a room as for many thin obstacles in the middle of a room. Second, [53] also introduced the collision state percentage, which is based on an important aspect not covered by the traversability. Specifically, the traversability gives the same value for two robots of identical radius but traveling at different speeds. ...
... We have introduced other metrics in [53], but these two metrics are very relevant to indoor autonomous flight. They give a more formal explanation of why obstacle avoidance becomes easier if a vehicle becomes smaller and is at least able to fly slower. ...
Conference Paper
Full-text available
Indoor navigation has been a major focus of drone research over the last few decades. The term "indoor" arose because, in outdoor environments, drones could rely on global navigation systems such as GPS for their position and velocity estimates. By focusing on unknown indoor environments, the research community had to develop solutions using only onboard sensors and processing. In this article, we present an overview of the state of the art and the remaining challenges in this area, with a focus on small drones.
... The experimental focus of this study was on supporting its main contribution, the mapping task. Further benchmarking of the trajectory planner in experiments and simulation, for instance against the benchmarks of (Mettler, Kong, Goerzen, & Whalley, 2010; Nous, Meertens, Wagter, & de Croon, 2016), is left as future work. ...
Article
Full-text available
Achieving the autonomous deployment of aerial robots in unknown outdoor environments using only onboard computation is a challenging task. In this study, we have developed a solution to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments, with the main capability of providing an obstacle map of the area of interest in a short period of time. We focus on use cases where no obstacle maps are available beforehand, for instance, in search and rescue scenarios, and on increasing the autonomy of drones in such situations. Our vision-based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude acquiring overlapping nadir images, while creating a high-quality sparse map of the environment by using a state-of-the-art photogrammetry method. Second, this map is georeferenced, densified by fitting a mesh model and converted into an Octomap obstacle map, which can be continuously updated while performing a task of interest near the ground or in the vicinity of objects. The generation of the overview obstacle map is performed in almost real time on the onboard computer of the drone; a map of size 100 m × 75 m is created in ≈2.75 min, leaving enough time for the drone to execute other tasks inside the area of interest during the same flight. We evaluate quantitatively the accuracy of the acquired map and the characteristics of the planned trajectories. We further demonstrate experimentally the safe navigation of the drone in an area mapped with our proposed approach.
... Unfortunately, these sensors suffer from limited range. For example, a Microsoft Kinect RGB-D sensor has a declared maximum range of 5 meters, while in stereo rigs with baselines compatible with MAV sizes, performance starts to degrade at distances larger than 3 meters [16]. Range can be slightly increased by employing larger baselines, but on MAVs this is usually difficult due to payload and encumbrance constraints. ...
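The range limitation noted here follows directly from stereo geometry: depth uncertainty grows quadratically with distance. A hedged numeric sketch; the focal length, baseline, and disparity noise below are assumed round numbers, not the specifications of any particular sensor:

```python
def depth_error(z, focal_px, baseline_m, disparity_err_px=0.5):
    # From Z = f*b/d, a disparity error of Δd maps to a depth error of
    # approximately Z^2 * Δd / (f * b): quadratic growth with range.
    return z ** 2 * disparity_err_px / (focal_px * baseline_m)

# Assumed MAV-sized rig: 6 cm baseline, 300 px focal length.
for z_m in (1.0, 3.0, 5.0):
    print(f"{z_m:.0f} m -> ±{depth_error(z_m, 300.0, 0.06):.2f} m")
```

For this assumed rig the uncertainty at 3 m is already about 25 cm, consistent with the degradation reported above, and doubling the range quadruples the error.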
Article
Full-text available
In this work, we give a new twist to monocular obstacle detection. Most existing approaches either rely on Visual SLAM systems or on depth estimation models to build 3D maps and detect obstacles. Despite their success, these methods are not specifically devised for monocular obstacle detection. In particular, they are not robust to changes in appearance and camera intrinsics, or to texture-less scenarios. To overcome these limitations, we propose an end-to-end deep architecture that jointly learns to detect obstacles and estimate their depth. The multi-task nature of this strategy strengthens both the obstacle detection task, with more reliable bounding boxes and range measures, and the depth estimation task, with robustness to scenario changes. We call this architecture J-MOD². We prove the effectiveness of our approach with experiments on sequences with different appearance and focal lengths. Furthermore, we show its benefits on a set of simulated navigation experiments in which a MAV explores an unknown scenario and plans safe trajectories by using our detection model.
Article
Full-text available
Incremental nonlinear dynamic inversion is a sensor-based control approach that promises to provide high-performance nonlinear control without requiring a detailed model of the controlled vehicle. In the context of attitude control of micro air vehicles, incremental nonlinear dynamic inversion only uses a control effectiveness model and uses estimates of the angular accelerations to replace the rest of the model. This paper provides solutions for two major challenges of incremental nonlinear dynamic inversion control: how to deal with measurement and actuator delays, and how to deal with a changing control effectiveness. The main contributions of this article are 1) a proposed method to correctly take into account the delays occurring when deriving angular accelerations from angular rate measurements; 2) the introduction of adaptive incremental nonlinear dynamic inversion, which can estimate the control effectiveness online, eliminating the need for manual parameter estimation or tuning; and 3) the incorporation of the momentum of the propellers in the controller. This controller is suitable for vehicles that experience a different control effectiveness across their flight envelope. Furthermore, this approach requires only very coarse knowledge of model parameters in advance. Real-world experiments show the high performance, disturbance rejection, and adaptiveness properties.
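The core incremental idea can be shown with a scalar toy model. The trivial plant ω̇ = g·u and all numbers below are assumptions for illustration, not the controller from the paper: each step corrects the previous actuator command by the inverted effectiveness estimate times the measured acceleration error, so even a coarse effectiveness model converges to the commanded acceleration.

```python
def simulate_indi(g_true=2.0, g_model=3.0, alpha_cmd=1.0, steps=20):
    # Incremental law: u_new = u_old + (alpha_cmd - alpha_meas) / g_model,
    # where g_model is only a rough estimate of the true effectiveness.
    u = 0.0
    for _ in range(steps):
        alpha_meas = g_true * u           # "measured" angular acceleration
        u += (alpha_cmd - alpha_meas) / g_model
    return g_true * u                     # achieved angular acceleration

print(simulate_indi())  # approaches alpha_cmd despite the 50% model error
```

In this scalar setting the tracking error shrinks by a factor |1 − g_true/g_model| each step, so the loop converges whenever the effectiveness estimate has the correct sign and is not underestimated by more than a factor of two.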
Article
Full-text available
This paper describes a framework for performance evaluation of autonomous guidance systems using benchmarks. The benchmarks consist of six simple problems containing two or fewer geometric primitives, a high resolution height map scanned from an urban area, and a simple vehicle model with hard velocity and acceleration constraints. Performance metrics used to compare trajectories are presented. Two trajectory optimization methods are used to generate near-optimal baseline trajectories: numerical optimization using Nonlinear Programming for simple problems, and the Motion Primitive Automaton for problems with complex terrain. As an example, the Obstacle Field Navigation system developed by the Army Aeroflightdynamics Directorate is compared to the baseline.
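One simple metric of the kind used in such benchmark comparisons is the ratio of a candidate trajectory's length to that of a near-optimal baseline. A minimal sketch; the function names and example trajectories are hypothetical:

```python
import numpy as np

def path_length(points):
    # Sum of Euclidean segment lengths along a polyline.
    pts = np.asarray(points, dtype=float)
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

def length_ratio(candidate, baseline):
    # >= 1.0 for a valid baseline; 1.0 means the candidate matches it.
    return path_length(candidate) / path_length(baseline)

baseline = [(0.0, 0.0), (10.0, 0.0)]                # near-optimal straight line
candidate = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]   # avoidance detour
print(round(length_ratio(candidate, baseline), 3))
```

The same pattern extends to the other metrics mentioned (e.g., time or control effort relative to the baseline trajectory).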
Conference Paper
Full-text available
Autonomous flight of Flapping Wing Micro Air Vehicles (FWMAVs) is a major challenge in the field of robotics, due to their light weight and the flapping-induced body motions. In this article, we present the first FWMAV with onboard vision processing for autonomous flight in generic environments. In particular, we introduce the DelFly ‘Explorer’, a 20-gram FWMAV equipped with a 0.98-gram autopilot and a 4.0-gram onboard stereo vision system. We explain the design choices that permit carrying the extended payload, while retaining the DelFly’s hover capabilities. In addition, we introduce a novel stereo vision algorithm, LongSeq, designed specifically to cope with the flapping motion and the desire to attain a computational effort tuned to the frame rate. The onboard stereo vision system is illustrated in the context of an obstacle avoidance task in an environment with sparse obstacles.
Conference Paper
Full-text available
In this study, the Bug1, Bug2, and DistBug motion planning algorithms for mobile robots are simulated and their performances are compared. These motion planning algorithms are applied on a Pioneer mobile robot in the MobileSim simulation environment. Sonar range sensors are used as the sensing elements. This study shows that mobile robots construct a new motion plan using the Bug algorithms only when they encounter an unknown obstacle while moving toward the goal. Each of the Bug algorithms is tested separately on an identical configuration space. At the end of this study, a performance comparison of the Bug algorithms is presented. Keywords: Bug, Pioneer, robots, sonar, MobileSim. Paper videos can be reached from this URL: http://www.youtube.com/playlist?list=PLENSkat0854tTGxwtIYy3JOrSsPGFx074
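As a concrete illustration of the Bug2 logic compared in that study, a minimal sketch of its leave condition (the function and tolerance are assumptions, not taken from the paper): while wall-following, the robot abandons the obstacle boundary once it is back on the start-goal line (the m-line) and strictly closer to the goal than at the point where it first hit the obstacle.

```python
import math

def bug2_should_leave(hit_point, current, start, goal, tol=1e-6):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Perpendicular distance from `current` to the m-line through start-goal.
    (x1, y1), (x2, y2) = start, goal
    off = abs((y2 - y1) * current[0] - (x2 - x1) * current[1]
              + x2 * y1 - y2 * x1) / dist(start, goal)
    on_mline = off < tol
    return on_mline and dist(current, goal) < dist(hit_point, goal)
```

With start (0,0), goal (10,0) and hit point (3,0), the robot leaves the boundary at (6,0), but keeps wall-following at (6,1) (off the m-line) or at (2,0) (no closer to the goal than the hit point).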
Article
Full-text available
This document provides a tutorial on performance characterization in computer vision. It explains why learning to characterize the performance of vision techniques is crucial to the discipline's development. It describes the usual procedure for evaluating vision algorithms and its statistical basis. The use of a software tool, a so-called test harness, for performing such evaluations is described. The approach is illustrated on an example technique.
Conference Paper
In this paper, three collision avoidance methods for an unmanned aerial vehicle (UAV) are tested and compared to one another. A quadrocopter dynamic model with attitude and velocity controllers, a trajectory generator, and a selection of collision avoidance approaches were implemented. The first collision avoidance method is based on a geometric approach which computes a direction of avoidance from the flight direction and simple geometric equations. The second technique uses virtual repulsive force fields causing the UAV to be repelled by obstacles. The last method is a grid-based online path re-planning algorithm with A* search that finds a collision-free path during flight. Various flight scenarios were defined including static and dynamic obstacles.
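The second method above, virtual repulsive force fields, can be sketched as follows. This is a generic Khatib-style formulation, not necessarily the one used in the paper; the gains and influence radius are assumed values:

```python
import numpy as np

def repulsive_force(pos, obstacle, gain=1.0, influence=2.0):
    # Push away from the obstacle, active only inside the influence radius;
    # the magnitude grows without bound as the distance approaches zero.
    diff = np.asarray(pos, float) - np.asarray(obstacle, float)
    d = float(np.linalg.norm(diff))
    if d >= influence or d == 0.0:
        return np.zeros_like(diff)
    return gain * (1.0 / d - 1.0 / influence) / d ** 2 * (diff / d)

def attractive_force(pos, goal, gain=0.5):
    # Simple linear pull toward the goal.
    return gain * (np.asarray(goal, float) - np.asarray(pos, float))

# Commanded direction = sum of attraction and repulsion.
cmd = attractive_force([0, 0], [10, 0]) + repulsive_force([0, 0], [1, 0])
```

Summing the two fields gives the commanded flight direction; obstacles outside the influence radius contribute nothing, which keeps the method cheap but also makes it prone to local minima.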
Article
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.
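A minimal instance of one point in the design space this taxonomy describes, local SAD cost aggregation with winner-take-all optimization (the window size and search range are arbitrary illustrative choices):

```python
import numpy as np

def sad_disparity(left, right, max_disp, half_win=1):
    # For each left-image pixel, pick the disparity whose right-image
    # window minimizes the sum of absolute differences (SAD).
    h, w = left.shape
    k = 2 * half_win + 1
    L = np.pad(left.astype(float), half_win)
    R = np.pad(right.astype(float), half_win)
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp + 1, x + 1)):
                cost = np.abs(L[y:y+k, x:x+k] - R[y:y+k, x-d:x-d+k]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On a synthetic pair where the right image is the left shifted by two pixels, interior pixels recover a disparity of 2; real algorithms in the taxonomy differ mainly in the cost, aggregation, and optimization stages swapped into this skeleton.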
Conference Paper
In this paper, the collision avoidance behaviour in a recursive nested behaviour control structure for Unmanned Aerial Vehicles is discussed. The architecture is an extension of the behaviour-based recursive control structure, which has been applied successfully in mobile robot applications. The system structure, as an abstraction of multiple cascaded control loops with feedback mechanisms, is robust against disturbances and model uncertainties. Obstacle awareness and collision avoidance are considered the most important issues in the field of Unmanned Aerial Vehicles (UAVs); thus, more attention is paid to this topic. The collision avoidance utilizes a potential field method tailored to the UAV application domain. The formulation of repulsive forces based on radar sensor measurements is presented. The implementation of the collision avoidance system is also described.
Conference Paper
The utilization of unmanned aerial vehicles requires the ability to navigate in urban or unknown terrain, where many moving and/or stationary obstacles of different types and sizes may endanger the safety of the mission. Large efforts have been devoted to resolving conflicts involving unmanned aircraft. This paper explores the fundamental concepts and presents an up-to-date survey of the collision sensing, detection, and resolution methods deployed for aircraft, especially for unmanned aerial vehicles. The collision avoidance concept is demonstrated by proposing generic functions carried out by collision avoidance systems, with special emphasis on context-aware implementation. These functions are then presented together with design factors that are subsequently used to classify the major collision avoidance methods.