Challenges of Autonomous Flight in Indoor Environments
Guido de Croon and Christophe De Wagter1
Abstract— Indoor navigation has been a major focus of drone
research over the last few decades. The main reason for the
term “indoor” came from the fact that in outdoor environments,
drones could rely on global navigation systems such as GPS for
their position and velocity estimates. By focusing on unknown
indoor environments, the research had to focus on solutions
using onboard sensors and processing. In this article, we present
an overview of the state of the art and remaining challenges in
this area, with a focus on small drones.
I. INTRODUCTION

The miniaturization of electronics has allowed the creation
of small flying robots, termed Micro Air Vehicles (MAVs) in
the scientific domain, which are capable of performing
interesting missions, such as surveying an area of agricultural
land. Initially, research on MAVs focused on the use of fixed
wing MAVs in outdoor environments. Early autopilots often
used thermopiles that measured the temperature differences
between earth and sky to get absolute attitude measurements
[11], and the Global Positioning System (GPS) to get posi-
tion and velocity estimates useful for navigation. Both these
systems relied on the MAV flying above obstacle height in
an outdoor environment. Still, since the thermopiles were
quite sensitive to suboptimal weather conditions and gave
problems near obstacles upon landing, they were readily re-
placed by attitude measurements from Inertial Measurement
Units (IMUs). The combination of IMU and GPS ensured the
huge success of MAVs - now commonly called “drones”.
This success is partly due to flying high in the sky. In
an outdoor environment, far away from any obstacles, an
MAV just needs to know its attitude and position in order
to perform a valuable observation mission that otherwise
has to be performed with a much more expensive manned
alternative. By contrast, a driving or walking robot typically
operates in a much more complex, cluttered environment,
in which physical interaction plays a much bigger role.
It may come as no surprise that, subsequently, researchers
started to direct their attention to flying closer to the ground,
or even in indoor environments. This change of environment
immediately raises problems such as obstacle avoidance and
velocity estimation that are difficult in general, but are even
more difficult on light-weight and relatively fast-moving
robots such as MAVs.
1 Micro Air Vehicle Laboratory, Faculty of Aerospace Engineering,
Delft University of Technology, Delft, the Netherlands. This work
has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no
longer be accessible.

Fig. 1. Is it more difficult to fly indoors or outdoors? In this article,
we discuss the specific challenges of indoor environments for autonomous
flight, and the consequences these environments have on drone design
and artificial intelligence. Left: images from an indoor office environment.
Right: images from an outdoor environment.

The research topic of "indoor autonomous flight" has been
a major challenge in the robotics domain for more than a
decade, and an enormous amount of progress has been made
over the years. In this article, we discuss what we consider
the major issues, developments, and remaining challenges in
this area. Our focus will be on small drones, down to what
many would consider tiny: weights ranging from 1 kg down
to the order of tens of grams. With
our discussion, we hope to provide novel insights into the
matters that are important for autonomous indoor flight.
The remainder of the article is structured as follows.
First we discuss whether and why indoor environments are
actually more difficult to fly in than outdoor environments
(Section II). Subsequently, we investigate the consequences
of the indoor environment on the suitability of different drone
designs and artificial intelligence techniques in Sections III
and IV. We draw conclusions in Section V.
II. IS IT MORE DIFFICULT TO FLY INDOORS?

Why is it so difficult to achieve autonomous flight in indoor
environments? Multiple reasons underlie this difficulty,
two of which are discussed in the following subsections.
A. Difficulties of flying in indoor environments
Indoor environments are (semi-)closed spaces that gener-
ally contain less space for flight than most outdoor envi-
ronments. Often indoor environments are quite narrow and
cluttered with obstacles. These properties make them much
more difficult for drones to fly in. Although intuitively this
is quite evident, it must be noted that there is a gradual
scale from very narrow indoor environments (an office) to
Fig. 2. Left: The traversability of an environment is determined by
sampling positions and motion directions in the environment. For each
sampled position and direction, it is determined after which distance the
robot collides. The traversability then is the expected distance-to-collision.
The bottom two rectangles show rooms that have an identical obstacle
density (obstacle surface / total surface), but which are of a different obstacle
avoidance difficulty, as captured by the traversability. Right: The collision
state percentage captures the part of the (state) space in which the robot
cannot avoid a collision anymore. The figure shows these areas for two
drones of equal size (red and green), but flying at a different speed and
hence with different ‘braking distances’. The bottom two rooms show two
very different flying entities. The room is much easier to fly in for the
housefly than for the large fixed wing UAV.
spacious indoor environments (a gym). Moreover, a similar
scale exists outdoors, where flying through an apple tree
orchard is more challenging than flying high over flat land.
So how does flying in an outdoor field with sparse trees
(e.g., [55]) compare to flying in a large office space (e.g.,
[22])? Such comparisons are hardly done in the literature,
which makes it hard to compare different studies on obstacle
avoidance in general. In [53], we have presented several
objective, quantitative metrics that are able to characterize
the difficulty for a robot to avoid obstacles in a given
environment. One of the insights behind these metrics is
that they have to take into account the size and dynamics
of the robot. A small office room provides ample space to a
relatively slow fly, but would be far too small for a 1 kg,
1 m wing-span fixed wing MAV.
Here we mention the two metrics that in our opinion
are most relevant to indoor flight. The first metric is the
traversability (Fig. 2, left), which represents the expected
distance that a robot can move straight before hitting an
obstacle. To determine the traversability T, the robot can
be (virtually) placed in ndifferent initial positions with
a random heading, always flying straight until it hits an
obstacle. This leads to the following formula:
where nis the number of initial positions and dis the
distance traveled until a collision. Please note that the
traversability is a more effective metric than ‘obstacle den-
sity’ (e.g., surface of the flight area covered by obstacles
[34]), as the latter would give the same value for a single
big obstacle in a corner of a room as for many thin obstacles
in the middle of a room.
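The sampling procedure described above can be sketched in a few lines. The code below is a minimal illustration of the idea, assuming the 2D environment is given as an occupancy predicate; the function names and grid abstraction are our own, not code from the cited work:

```python
import math
import random

def traversability(occupied, width, height, n=1000, step=0.1, max_d=1000.0):
    """Monte-Carlo estimate of the traversability T: the expected
    straight-line distance-to-collision from random free positions
    with random headings. `occupied(x, y)` returns True on obstacles
    and outside the room, so walls count as obstacles."""
    total = 0.0
    for _ in range(n):
        # Sample a collision-free starting position.
        while True:
            x, y = random.uniform(0, width), random.uniform(0, height)
            if not occupied(x, y):
                break
        # Sample a random heading and march forward until collision.
        a = random.uniform(0.0, 2.0 * math.pi)
        dx, dy = step * math.cos(a), step * math.sin(a)
        d = 0.0
        while d < max_d and not occupied(x, y):
            x, y = x + dx, y + dy
            d += step
        total += d
    return total / n  # T = (1/n) * sum of the sampled distances d_i
```

For an empty room (walls only) this yields a few meters; adding obstacles lowers T even when the obstacle density stays the same, which is exactly the point of the metric.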
Second, [53] also introduced the collision state percentage,
which is based on an important aspect not covered by the
traversability. Specifically, the traversability gives the same
value for two robots of identical radius but traveling at
different speeds, even though it is obvious that flying twice as
fast in the same room is more difficult. The reason for this
is that flying faster requires the MAV to react earlier, as otherwise
it may not be able to stop or turn in time. The collision state
percentage, then, is the percentage of the space in which
the MAV can no longer avoid a collision given a specific
flight speed. For a quadrotor this would depend on the
maximal deceleration, for a fixed wing on its flight speed
and corresponding maximal bank angle. Again, this metric
can be determined by means of sampling in the environment.
To determine this value, we typically use approximations
in order to get at least a coarse idea of the collision state
percentage. For example, one can assume that a quadrotor
corresponds to a sphere in 3D or a circle in 2D and that
it always just brakes while flying in a straight line (Fig. 2,
right). For a fixed wing, one can check if a circular trajectory
with maximum bank angle starting from the current position
is obstacle-free. Of course, for accurate estimates, one can
also use collision checking of more complex drone models
in a realistic simulator and / or use path planners instead of
an assumption on the avoidance maneuver.
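Under the simplest of these approximations, a circular quadrotor that can only brake in a straight line, the collision state percentage can be sampled as follows. This is our own minimal sketch of the idea, not the exact procedure of [53]:

```python
import math
import random

def collision_state_percentage(occupied, width, height, radius,
                               speed, max_decel, n=1000, step=0.05):
    """Fraction of sampled states (free position + heading) in which a
    circular quadrotor flying at `speed` can no longer avoid a collision,
    assuming it can only brake in a straight line with deceleration
    `max_decel`. `occupied(x, y)` marks obstacles and walls."""
    brake_dist = speed ** 2 / (2.0 * max_decel)
    collisions = 0
    samples = 0
    while samples < n:
        x, y = random.uniform(0, width), random.uniform(0, height)
        if occupied(x, y):
            continue  # only consider states where the vehicle itself is free
        a = random.uniform(0.0, 2.0 * math.pi)
        dx, dy = step * math.cos(a), step * math.sin(a)
        d = 0.0
        # March along the heading over the stopping distance (plus the
        # vehicle radius) and check whether an obstacle is hit.
        while d < brake_dist + radius and not occupied(x, y):
            x, y = x + dx, y + dy
            d += step
        if d < brake_dist + radius:
            collisions += 1
        samples += 1
    return 100.0 * collisions / n
```

Doubling the flight speed quadruples the braking distance, so the same room yields a much higher collision state percentage for a faster vehicle, matching the intuition in Fig. 2.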
We have introduced other metrics in [53], but these two
metrics are very relevant to indoor autonomous flight. They
give a more formal explanation of why obstacle avoidance
becomes easier if a vehicle becomes smaller and is at least
able to fly slower. Moreover, they allow us to compare different
environments, be they indoor or outdoor (where for real
drones and environments it is best to create an approximate
simulation model). In particular, the metrics suggest that
indoor environments often represent a bigger challenge than
outdoor environments, because the enclosing elements such
as walls significantly reduce the traversability and increase
the collision state percentage.
Of course, there are properties of indoor environments
that make flying easier. A major such property is that
indoor environments are typically shielded much better from
wind and wind gusts than outdoor environments. Other
weather phenomena, such as fog or rain, also do not play a
role in indoor environments.
B. Difficulties of sensing in indoor environments
Whereas the previous subsection treated the spatial layout
of the environment, indoor environments also differ from
outdoor environments in that they are usually made
by humans, for humans.
In particular, both the 'by humans' and the 'for humans'
have as a consequence that objects in these environments are
very different, for instance in terms of visual appearance,
from natural objects in outdoor environments. Especially for
robotic vision this has enormous consequences. Generally
speaking, indoor environments have a much bigger variety
of colors and textures than outdoor environments (see, e.g.,
the pictures in Figure 1). At the same time, indoors there
can be big close-to-texture-less objects such as white walls.
These objects pose particular challenges, as well-known
visual cues relying on projective geometry such as stereo
vision and optical flow require texture for matching. This
is the reason why still in many studies experiments are
performed in very well-textured indoor spaces, or texture is
applied on purpose to help the robot [72]. A lot of texture
is also not always good; Objects such as walls can feature
very repetitive textures, which also represent a difficulty
for the mentioned visual cues. Other human-specific objects
also pose a problem especially for vision, think of large
window panes and mirrors. Visually detecting these objects
is obviously possible - as we humans are in most cases able
to - but requires more complex visual processing.
The visibility of the sky in outdoor environments is also
an essential difference from indoor environments, as it has a
major influence on lighting and is also itself quite
recognizable. In [43], [17], it is proposed to use only the detection
of the sky to avoid obstacles, a strategy that obviously is
not applicable indoors. The direction of the main light
source in the scene is also a cue that can be used outdoors for
attitude control, but indoors it becomes much less reliable. It is
assumed to play a role in the attitude estimation of insects
[14], although insects are obviously still able to control their
attitude in indoor environments. Moreover, the polarization
of the sky is used by many insects as a compass [48], while
the MEMS-based magnetometers used for determining the
heading of MAVs do not work well indoors, due to
electromagnetic disturbances from building materials
(metals) and electronic devices. It is the same use of
materials that often blocks the line-of-sight from MAVs to
satellites in orbit and causes multi-path effects, significantly
deteriorating position and velocity estimates by means of GPS.
This means that indoors, MAVs have to rely on other sensors
if they want to determine heading, position, and velocity.
III. CONSEQUENCES FOR DRONE DESIGN

Flying in an indoor environment has radical consequences
for the type of drone that can be used. As reasoned above,
drones have to be small and at least be able to fly slowly,
while at the same time having to deal with less wind. These
requirements favor certain drone designs over others.
In particular, they are very detrimental for fixed wing
MAVs, which rely on their wing surface area and flight speed
for lift. The smaller size and slower flight speed also leads to
a different aerodynamic regime, which is captured by a lower
Reynolds number. Flight at lower Reynolds numbers means
relatively more viscous air flow, which further reduces the
lift provided by fixed wings [61].
Rotorcraft make use of similar aerodynamic phenomena,
but still achieve sufficient lift by having the rotors spin
at very high angular speeds. A downside of these
fast-spinning rotors is that if they collide with an obstacle,
the vehicle immediately loses lift and flips over. Of course,
the rotors can be protected at the cost of some extra struc-
tural weight [9], [50]. Still, almost all indoor experiments
performed in the literature have been done with rotorcraft,
which have as a big advantage that they are relatively
easy in terms of physical design.

Table: example drones per category, with weight, size, and main sensor.

  Fixed wing:
    MC2 [73]              10.3 g    36 cm     1D CMOS camera
  Rotorcraft:
    Ladybird [10]         46 g      12.0 cm   ADNS9500 optical flow sensor
    ARDrone [12]          420 g     58.4 cm   Sonar, HD camera, bottom camera
    Asctec Pelican [35]   1650 g    65.1 cm   Laser scanner, camera
    Gimbal [9]            385 g     34 cm     Collision-resistant cage
  Flapping wing:
    DelFly [19]           16 g      28.0 cm   Stereo VGA cameras
  Lighter than air:
    Blimp2b [71]          200 g     110 cm    TSL3301 1D camera

A main challenge with
rotorcraft is that they are inherently unstable, meaning that
they require active attitude control. In the short term, this
can be provided by an IMU and autopilot. However, in the
longer term, uncorrected attitude estimates will develop
a bias, resulting in an acceleration of the drone in a specific
direction. In narrow indoor environments, only a few seconds
of position drift will be sufficient to hit an obstacle. The
attitude biases can be corrected for by means of velocity or
position measurements. This is a key reason why initially
indoor environments were so challenging. Outdoors, GPS
measurements can provide both desired quantities, while
indoors, solutions now at least provide velocity by relying
on a combination of visual measurements and additional
sensors. Examples are the combination of optical flow from a
down-looking camera and a downward-pointing sonar [12],
and the combination of more general visual odometry and
inertial measurements (Visual Inertial Odometry) [60], [64],
[40]. Of course, position estimates can also be obtained,
for instance by means of full-fledged visual Simultaneous
Localization And Mapping (visual SLAM) [4], [25]. These
methods will be discussed later in more detail, but for now it
is sufficient to realize that this initial challenge is for a large
part solved, especially if the MAV has sufficient sensors and
computation power.
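As a concrete illustration of such a velocity solution, the sketch below scales rotation-compensated optical flow from a down-looking camera to a metric velocity with the sonar height (v = omega * h over a flat floor) and fuses it with integrated accelerations in a complementary filter. This is a simplified, hypothetical filter for illustration, not the actual AR.Drone or VIO implementation:

```python
class VelocityEstimator:
    """Minimal complementary filter: integrate accelerations for a
    smooth, high-rate prediction, and correct the drifting estimate
    with the (noisier, slower) flow-based velocity measurement."""

    def __init__(self, gain=0.2):
        self.gain = gain  # how strongly each flow measurement corrects
        self.vx, self.vy = 0.0, 0.0

    def predict(self, ax, ay, dt):
        # Accelerometer integration alone drifts due to attitude bias.
        self.vx += ax * dt
        self.vy += ay * dt

    def correct(self, flow_x, flow_y, height):
        # Metric velocity from rotation-compensated flow: v = omega * h.
        mx, my = flow_x * height, flow_y * height
        self.vx += self.gain * (mx - self.vx)
        self.vy += self.gain * (my - self.vy)
        return self.vx, self.vy
```

With a true forward speed of 1 m/s at 2 m height, the flow is 0.5 rad/s and the estimate converges to 1 m/s within a few dozen updates; the flow correction is exactly what bounds the attitude-bias drift described above.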
The requirements for indoor flight actually fit very well
with flapping wing designs. Inspired by natural fliers, these
MAVs also make use of unsteady aerodynamic phenomena
common at low Reynolds numbers. The comparison with
rotorcraft is an active scientific debate, but it seems that at
small scales, when the viscous effects of the air become more
predominant, flapping wing propulsion can be more efficient.
For now, however, the design of flapping wing MAVs is
much more complex than that of rotorcraft, making it a
research topic of itself [69], [36], [41], [19]. Some successful
designs have been created, though, and they already show
some interesting properties. For instance, they can fly both
fast and slow, where faster flight is more efficient since the
wings then also provide lift. As an example, at efficient fast
forward speed, the 16 gram DelFly II can fly for 25 minutes,
which is a flight time far beyond the reach of smaller but
heavier quadrotors (e.g., the ladybirds used in [44]). Flapping
wing MAVs also do not need high wing speeds, and
at the end of each stroke the wings have zero velocity. As a
consequence, when they collide with an obstacle, they gently
bounce off. In comparison, rotorcraft lose lift at the side of the
obstacle and are at risk of flipping over. Interestingly, flapping wing MAVs
with tails have a passively stable attitude. This means that
they do not require active attitude control, and – in the
absence of external disturbances – a tailed flapping wing
MAV will just fly straight until it collides with an obstacle.
This desirable property has enabled early experiments with
obstacle avoidance [16], [20] in the time that attitude control
on such small vehicles was still a considerable challenge. On
the downside, tailed designs are less agile, can only hover
with considerable difficulty [67], and are easily perturbed
even by modest drafts. Tailless flapping wing MAVs that
steer with their wings are a solution to these problems [36],
[41], [54], but then again are inherently unstable in attitude,
requiring active attitude control.
Another category not often studied is the lighter-than-
air type of drone, such as blimps [71], [62]. Depending
on the weight that has to be carried, these designs can
actually be quite large. A large surface makes such a drone
sensitive to drafts, while the control authority of these drones
is typically not great. A mission type in which this type
of drone could excel is of course a long-term observation
mission, in which flight time is at a premium. Other drone types
could perform such missions only when able to perch in one
way or another [38], [63], [45]. Lighter-than-air drones are
perhaps the category that can best handle collisions, neither
incurring damage to the environment nor (except in the
case of very sharp extremities on the obstacle) losing
their own lift capability in any way.
A final category that merits attention is the ‘hybrid’
category that combines a fixed wing with a set of rotors
that allow it to also hover and perform vertical take-off
and landing (VTOL). Outdoors, such vehicles promise to
combine the VTOL capabilities of pure rotorcraft with the
flight range and endurance of fixed wings [33], [58], [32],
[21]. However, due to the reasons set out for the fixed
wings above, this category may not have the same advantages
indoors. Still, there may be a niche for such vehicles in larger
indoor environments.
The evaluation of drone types above focused on objective
properties such as their ability to generate lift while flying
fast or slow, on the passive or active attitude stability, and the
way in which they deal with collisions. This last property is
especially important, since indoor environments are typically
populated by humans, making safety of utmost importance.
The presence of humans also suggests that successful indoor
drone designs will also depend on more ‘soft’, subjective
properties.

Table: example sensors with their approximate weight and power consumption.

  Laser scanner (Hokuyo URG-04LX-UG01)   160 g    2.5 W
  Radar                                  750 g    30 W
  Sonar                                  4.5 g    0.05 W
  Tiny infrared sensor                   0.5 g    0.06 W
  Stereo camera (Intel RealSense)        9.4 g    3.5 W
  Monocular CMOS camera                  1.0 g    0.1 W
  Event-based camera (eDVS, iniLabs)     23 g     0.15 W

In our experience, people regard different drone designs in a
different manner, with perhaps more sympathy
for flapping wing and lighter-than-air drones than rotorcraft.
Of course, this observation may be subjective, as the authors
are heavily invested in flapping wing MAV design, but the
topic definitely merits more attention.
IV. CONSEQUENCES FOR ARTIFICIAL INTELLIGENCE

Flying indoors with small drones also poses an enormous
challenge for drone intelligence. Specifically, the artificial
intelligence that is highly successful on self-driving cars [65],
[29] is difficult to miniaturize to small drones. This applies
both to the involved sensors and to the computational power.
A. Sensing
On the sensor side, active sensors such as laser scanners
have been extremely successful. However, by their nature,
they are quite power hungry and require active mechanisms
to see distances at many positions in the field of view. At
the cost of scanning distances in only a single plane, they have
been miniaturized for use on quadrotors [5], [30].
The favorite exteroceptive sensor for MAVs is the camera
- being a power-efficient, passive sensor that captures rich
information in a large field of view. The main challenge with
a camera is that it provides a lot of data (pixel values), but
not directly the type of information that can be used for nav-
igation. Consequently, most of the literature on autonomous
flight has focused on retrieving 3D information from the 2D
images. It was mentioned above that many successful algo-
rithms now exist for instance for visual odometry [26] and
even SLAM [25]. As these types of algorithms are based on
projective geometry, they initially had significant weaknesses
(only sparse 3D measurements, problems dealing with blur,
little texture, etc.). That is why some approaches tried to
complement the cues from projective geometry with visual
appearance cues, which are captured by still images [18].
The work by Saxena [59] focused on seeing dense distances
in still images, which has spurred a whole research field -
where now robots learn to see distances by themselves [39],
[28], [70]. We believe that these self-learning algorithms for
depth perception are an important key to solving many of
the problems mentioned above concerning the variety of objects
present in human-made indoor environments.
Even though the camera itself is power efficient, it cannot
be separated from the subsequent processing, which can be
very power-hungry. For small drones, this is
currently the major challenge. One may adopt a strategy of
leaning back and hoping that this problem will be solved by
others attempting to uphold Moore’s law (increasing compu-
tational budgets) and Koomey’s law (less energy expenditure
for computations). However, in aerospace there is always a
huge drive for efficiency, even more so on tiny vehicles. To
illustrate, the 16 gram DelFly II uses on average 1 Watt for
flying. Spending 9 Watt on GPU processing of images then
is out of the question, as it would have a deleterious effect
on the flight range and endurance. It is important to note that
many nonlinear effects are in play here. For example, using
twice the amount of power does not lead to half the flight
time, but less, as batteries drain faster at a higher load.
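This load effect can be captured to first order by Peukert's law; the exponent below is an assumed, illustrative value (the law originates from lead-acid batteries, but lithium-polymer cells show a qualitatively similar load dependence):

```python
def flight_time_hours(capacity_ah, rated_hours, current_a, peukert_k=1.2):
    """Peukert's law: t = H * (C / (I * H))^k, with C the rated capacity,
    H the reference discharge time of the rating, I the actual discharge
    current, and k > 1 the Peukert exponent (k = 1 would be an ideal
    battery whose runtime halves exactly when the current doubles)."""
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** peukert_k

# Drawing twice the current yields LESS than half the flight time:
t_1x = flight_time_hours(1.0, 1.0, 1.0)  # 1.0 h at the rated 1 A
t_2x = flight_time_hours(1.0, 1.0, 2.0)  # ~0.44 h rather than 0.5 h
```

So adding a 9 W processor to a 1 W airframe does not merely divide the flight time by ten; the higher load makes it worse still.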
A promising vision sensor for tackling the above-
mentioned problems is the event-based camera (e.g., [15]).
Instead of capturing full images at a fixed frame rate, it
captures light changes at individual pixels, sending such
events asynchronously, at very high frequencies. Event-
based cameras are highly power efficient, can capture rapid
changes in the environment, and have a high dynamic
range. Moreover, the events coming from the camera lend
themselves well for processing by spiking neural networks
[42], which hold the potential to perform power-efficient
processing. Challenges in this area lie in the improvement
of the hardware (achieving a lower weight of the sensing
package and higher resolutions) and the development of
novel vision algorithms able to handle the event-based vision
inputs. Initial successes have been obtained with event-based
cameras, even already performing indoor SLAM [68].
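To give an idea of what event-based data looks like, the sketch below converts a window of asynchronous events into a signed per-pixel count, one of the simplest frame-like representations that conventional vision algorithms can consume; the (x, y, timestamp, polarity) tuple format is an assumption for illustration:

```python
def event_count_image(events, width, height, t_start, t_end):
    """Accumulate asynchronous events into a signed per-pixel count.
    Each event is (x, y, timestamp, polarity): polarity True means the
    pixel got brighter, False means darker. Only events inside the
    [t_start, t_end) window contribute."""
    img = [[0] * width for _ in range(height)]
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            img[y][x] += 1 if polarity else -1
    return img
```

Note that pixels without any brightness change produce no events at all, which is where much of the power efficiency of these sensors comes from.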
B. Acting
The same drive for efficiency applies to the determination
of actions. A highly successful combination on larger
robots is to combine metric SLAM with a planner that
can determine the right sequence of actions to reach a
desired navigation goal [66]. However, this combination is
computationally quite expensive. Here, we will focus on the
alternatives that promise to be much more efficient.
The alternatives for action determination are typically des-
ignated as ‘behavior-based’ or ‘reactive’ approaches. Broadly
speaking, the idea behind these approaches is that detailed
models or maps of the robot’s environment are not necessary
to achieve successful behavior [3], [13]. Furthermore, robots
can save on computational complexity by exploiting proper-
ties of their own body and sensors and of the environment,
e.g., by means of sensory-motor coordination [52]. Smart,
but simple behaviors can suffice to solve complex problems.
Many fields of research could be said to fall into this
category, such as purposive vision [1], active vision [7], [2],
evolutionary robotics [51], ecological robotics [24], and a lot
of the work performed in bio-inspired robotics [27], [57], [6].
Whenever these approaches are referred to from outside
of the respective communities, the emphasis is mostly on
their limitations. Although we agree that these approaches
are still limited, they are not limited in the way typically
described. In particular, the word ‘reactive’ is often equated
with ‘memoryless’, after which it is noted that memoryless
control is obviously very limited. Although in itself this is
true, the controllers proposed in the fields above are often not
memoryless at all. Much of the work in evolutionary robotics
focuses on neural networks with memory [8], and also many
bio-inspired studies involve memory [56]. The latest kid on
the block may be end-to-end deep reinforcement learning
[47], which also departs from a predetermined sense-think-
act cycle, and leaves any internal representations up to the
neural learning process. The idea is that not constraining
the representation by our human preconceptions about the task
may lead to novel solutions. Obviously, this approach will be
memoryless if the network is completely feed-forward and
all inputs to the network are current sensory inputs. However,
often memory structures such as Long Short Term Memory
(LSTM) nodes are used [31].
Arguably, though, after the initial promising
results of the behavior-based approach, it has so far failed to
scale up to more complex tasks. Successes have been achieved
with 'limited' tasks, such as landing, height control, and
obstacle avoidance. However, the main missing element may
actually be a promising alternative to SLAM-based naviga-
tion. In evolutionary robotics, we do not know of any works
that touch upon this issue, but in the deep reinforcement
learning field, some works go in this direction [46], [37].
Still, for now it is hard to imagine these methods
working in an unknown environment not encountered in
simulation. Perhaps the most promising approaches are bio-
inspired studies that focus on visual odometry and homing,
inspired by theories on how ants find their nest location
after a foraging trip [49], [23]. The trade-off that is made
then, is that the robot employing such a method may not be
able to plan an efficient path to any known location in the
environment (as it could with SLAM), but it is able to return
home after exploring the environment. Until now, these
methods have only been applied for short trajectories either
outdoors [23] or in virtual environments [49]. Moreover, the
employed vision techniques are still quite expensive in terms
of computation and memory.
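A toy version of such snapshot-based homing can be sketched as follows: the robot stores a 'snapshot' of what home looks like and greedily steps in whichever direction makes the current view most similar to that snapshot. The panoramic view is abstracted here as a position-dependent feature vector; all names and the landmark abstraction are our own illustration of the principle, not any of the cited methods:

```python
import math

def home_by_snapshot(view, start, home_view, step=0.2, max_steps=200):
    """Greedy snapshot homing: repeatedly test a few candidate headings
    and take the step that most reduces the squared difference between
    the current view and the stored home snapshot."""
    def diff(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    x, y = start
    for _ in range(max_steps):
        candidates = [(x + step * math.cos(a), y + step * math.sin(a))
                      for a in (k * math.pi / 4 for k in range(8))]
        best = min(candidates, key=lambda p: diff(view(*p), home_view))
        if diff(view(*best), home_view) >= diff(view(x, y), home_view):
            break  # no candidate improves: we are (approximately) home
        x, y = best
    return x, y

# A stand-in 'panorama': distances to three fixed landmarks.
LANDMARKS = [(5.0, 0.0), (0.0, 5.0), (-4.0, -3.0)]

def panorama(x, y):
    return [math.hypot(x - lx, y - ly) for lx, ly in LANDMARKS]
```

Starting a couple of meters away with the snapshot taken at home, the greedy descent ends within roughly one step length of home, without any map or planner, which also illustrates the trade-off mentioned above: such a method can return home, but cannot plan an efficient path to an arbitrary known location.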
In this short article, we cannot do justice to all the
fields mentioned above, which aim to provide robots with
a computationally efficient artificial intelligence for solving
complex tasks. Our main goal here was to make clear that
behavior-based approaches are less limited than often thought
- as they are not by definition memoryless - and that the
main hurdle still to be taken is achieving successful navigation
in unknown, indoor environments with computationally
extremely restricted drones.
V. CONCLUSIONS

In this short position paper, we have argued that indoor
autonomous flight is indeed a challenging topic that still
merits special attention. We argued that the properties of
indoor environments present a drive towards small drones
that are able to fly slowly. This favors designs such as
rotary wing, flapping wing, and lighter-than-air MAVs, all with
their advantages and challenges, where we believe that the
final success of these design types will not only depend
on objective properties such as safety and price but also
on subjective properties such as how pleasant small drones
are found to be. Having small drones also has significant
consequences on the type of artificial intelligence that they
need to use for autonomous flight. We have argued that
more bio-inspired or behavior-based approaches are very
promising for small drones, but that the main challenge lies
in the development of a successful navigation capability in
unknown environments.
So, although an impressive amount of progress has been
made in autonomous indoor flight, many challenges remain
before small drones can reliably fly in indoor environments.
The state-of-the-art is such, though, that research on specific
indoor applications should commence, in order to identify
the difficult practical and technological challenges that still
lie between the current state-of-the-art and successful real-
world applications.
REFERENCES

[1] John Aloimonos. Purposive and qualitative active vision. In Pattern
Recognition, 1990. Proceedings., 10th International Conference on,
volume 1, pages 346–360. IEEE, 1990.
[2] John Aloimonos, Isaac Weiss, and Amit Bandyopadhyay. Active
vision. International journal of computer vision, 1(4):333–356, 1988.
[3] Ronald C Arkin. Behavior-based robotics. MIT press, 1998.
[4] Jorge Artieda, José M. Sebastián, Pascual Campoy, Juan F. Correa,
Iván F. Mondragón, Carol Martínez, and Miguel Olivares. Visual 3-D
SLAM from UAVs. Journal of Intelligent and Robotic Systems, 55(4-
5):299, 2009.
[5] Abraham Bachrach, Ruijie He, and Nicholas Roy. Autonomous flight
in unknown indoor environments. International Journal of Micro Air
Vehicles, 1(4):217–228, 2009.
[6] Emily Baird, Mandyam V Srinivasan, Shaowu Zhang, Richard Lam-
ont, and Ann Cowling. Visual control of flight speed and height in
the honeybee. In International Conference on Simulation of Adaptive
Behavior, pages 40–51. Springer, 2006.
[7] Dana H Ballard. Animate vision. Artificial intelligence, 48(1):57–86, 1991.
[8] Randall D Beer. On the dynamics of small continuous-time recurrent
neural networks. Adaptive Behavior, 3(4):469–509, 1995.
[9] Adrien Briod, Przemyslaw Kornatowski, Jean-Christophe Zufferey,
and Dario Floreano. A collision-resilient flying robot. Journal of
Field Robotics, 31(4):496–509, 2014.
[10] Adrien Briod, Jean-Christophe Zufferey, and Dario Floreano. Optic-
flow based control of a 46g quadrotor. In Workshop on Vision-based
Closed-Loop Control and Navigation of Micro Helicopters in GPS-
denied Environments, IROS 2013, number EPFL-CONF-189879, 2013.
[11] Pascal Brisset, Antoine Drouin, Michel Gorraz, Pierre-Selim Huard,
and Jeremy Tyler. The Paparazzi Solution. In MAV 2006, 2nd US-
European Competition and Workshop on Micro Air Vehicles, page pp
xxxx, Sandestin, United States, October 2006.
[12] Pierre-Jean Bristeau, François Callou, David Vissiere, and Nicolas
Petit. The navigation and control technology inside the AR.Drone micro
UAV. IFAC Proceedings Volumes, 44(1):1477–1484, 2011.
[13] Rodney Allen Brooks. Cambrian intelligence: The early history of the
new AI, volume 97. MIT press Cambridge, MA, 1999.
[14] Javaan Chahl and Akiko Mizutani. Biomimetic attitude and orientation
sensors. IEEE Sensors Journal, 12(2):289–297, 2012.
[15] Jorg Conradt, Raphael Berner, Matthew Cook, and Tobi Delbruck. An
embedded aer dynamic vision sensor for low-latency pole balancing.
In Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th
International Conference on, pages 780–785. IEEE, 2009.
[16] G. C. H. E. de Croon, K. M. E. De Clercq, Rick Ruijsink, Bart Remes,
and Christophe De Wagter. Design, aerodynamics, and vision-based
control of the delfly. International Journal of Micro Air Vehicles,
1(2):71–97, June 2009.
[17] G. C. H. E. de Croon, C. De Wagter, B. D. W. Remes, and R. Ruijsink.
Sky segmentation approach to obstacle avoidance. In Aerospace
Conference, 2011 IEEE, pages 1–16, Big Sky, MT, USA, March 2011.
[18] G. C. H. E. de Croon, E. De Weerdt, C. De Wagter, B. D. W. Remes,
and Rick Ruijsink. The appearance variation cue for obstacle avoid-
ance. In Robotics and Biomimetics (ROBIO), 2010 IEEE International
Conference on, pages 1606–1611, Tianjin, China, December 2010.
[19] Guido C. H. E. de Croon, Mustafa Perc¸in, Bart D. W. Remes,
Rick Ruijsink, and Christophe De Wagter. The DelFly: Design,
Aerodynamics, and Artificial Intelligence of a Flapping Wing Robot.
Springer Netherlands, 1 edition, 2016.
[20] Guido CHE de Croon, MA Groen, Christophe De Wagter, Bart Remes,
Rick Ruijsink, and Bas W van Oudheusden. Design, aerodynamics and
autonomy of the delfly. Bioinspiration & biomimetics, 7(2):025003, 2012.
[21] Christophe De Wagter, Rick Ruijsink, Ewoud Smeur, Kevin van
Hecke, Freek van Tienen, Erik van der Horst, and Bart Remes. Design,
control and visual navigation of the delftacopter. arXiv preprint
arXiv:1701.00860, 2017.
[22] Christophe De Wagter, Sjoerd Tijmons, Bart DW Remes, and
Guido CHE de Croon. Autonomous flight of a 20-gram flapping wing
mav with a 4-gram onboard stereo vision system. In Robotics and
Automation (ICRA), 2014 IEEE International Conference on, pages
4982–4987. IEEE, 2014.
[23] Aymeric Denuelle and Mandyam V Srinivasan. Snapshot-based
navigation for the guidance of uas. In Proceedings of the Australasian
Conference on Robotics and Automation, Canberra, Australia, pages
2–4, 2015.
[24] Andrew P Duchon, Leslie Pack Kaelbling, and William H Warren.
Ecological robotics. Adaptive Behavior, 6(3-4):473–507, 1998.
[25] Jakob Engel, Thomas Schöps, and Daniel Cremers. Lsd-slam: Large-
scale direct monocular slam. In European Conference on Computer
Vision, pages 834–849. Springer, 2014.
[26] Christian Forster, Matia Pizzoli, and Davide Scaramuzza. Svo: Fast
semi-direct monocular visual odometry. In Robotics and Automation
(ICRA), 2014 IEEE International Conference on, pages 15–22. IEEE, 2014.
[27] Nicolas Franceschini, Jean-Marc Pichon, and Christian Blanes. From
insect vision to robot vision. Phil. Trans. R. Soc. Lond. B,
337(1281):283–294, 1992.
[28] Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, and Ian Reid. Un-
supervised cnn for single view depth estimation: Geometry to the
rescue. In European Conference on Computer Vision, pages 740–756.
Springer, 2016.
[29] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for
autonomous driving? the kitti vision benchmark suite. In Computer
Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on,
pages 3354–3361. IEEE, 2012.
[30] Slawomir Grzonka, Giorgio Grisetti, and Wolfram Burgard. A fully au-
tonomous indoor quadrotor. IEEE Transactions on Robotics, 28(1):90–
100, 2012.
[31] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory.
Neural computation, 9(8):1735–1780, 1997.
[32] Menno Hochstenbach, Cyriel Notteboom, Bart Theys, and Joris
De Schutter. Design and control of an unmanned aerial vehicle for
autonomous parcel delivery with transition from vertical take-off to
forward flight–vertikul, a quadcopter tailsitter. International Journal
of Micro Air Vehicles, 7(4):395–405, 2015.
[33] J Holsten, T Ostermann, and D Moormann. Design and wind tunnel
tests of a tiltwing uav. CEAS Aeronautical Journal, 2(1-4):69–79, 2011.
[34] S. Karaman and E. Frazzoli. High-speed flight in an ergodic forest.
In 2012 IEEE International Conference on Robotics and Automation,
pages 2899–2906, May 2012.
[35] Lukas Kaul, Robert Zlot, and Michael Bosse. Continuous-time
three-dimensional mapping for micro aerial vehicles with a passively
actuated rotating laser scanner. Journal of Field Robotics, 33(1):103–
132, 2016.
[36] Matthew Keennon, Karl Klingebiel, and Henry Won. Development
of the nano hummingbird: A tailless flapping wing micro air vehicle.
In 50th AIAA aerospace sciences meeting including the new horizons
forum and aerospace exposition, page 588, 2012.
[37] Michał Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and
Wojciech Jaśkowski. Vizdoom: A doom-based ai research platform
for visual reinforcement learning. In Computational Intelligence and
Games (CIG), 2016 IEEE Conference on, pages 1–8. IEEE, 2016.
[38] Mirko Kovač, Jürg Germann, Christoph Hürzeler, Roland Y Siegwart,
and Dario Floreano. A perching mechanism for micro aerial vehicles.
Journal of Micro-Nano Mechatronics, 5(3-4):77–91, 2009.
[39] Kevin Lamers, Sjoerd Tijmons, Christophe De Wagter, and Guido
de Croon. Self-supervised monocular distance learning on a
lightweight micro air vehicle. In Intelligent Robots and Systems
(IROS), 2016 IEEE/RSJ International Conference on, pages 1779–
1784. IEEE, 2016.
[40] Stefan Leutenegger, Simon Lynen, Michael Bosse, Roland Siegwart,
and Paul Furgale. Keyframe-based visual–inertial odometry using non-
linear optimization. The International Journal of Robotics Research,
34(3):314–334, 2015.
[41] Kevin Y Ma, Pakpong Chirarattananon, Sawyer B Fuller, and Robert J
Wood. Controlled flight of a biologically inspired, insect-scale robot.
Science, 340(6132):603–607, 2013.
[42] Wolfgang Maass. Networks of spiking neurons: the third generation
of neural network models. Neural networks, 10(9):1659–1671, 1997.
[43] Tim G McGee, Raja Sengupta, and Karl Hedrick. Obstacle detection
for small autonomous aircraft using sky segmentation. In Robotics
and Automation, 2005. ICRA 2005. Proceedings of the 2005 IEEE
International Conference on, pages 4679–4684. IEEE, 2005.
[44] Kimberly McGuire, Guido de Croon, Christophe De Wagter, Karl
Tuyls, and Hilbert Kappen. Efficient optical flow and stereo vision for
velocity estimation and obstacle avoidance on an autonomous pocket
drone. IEEE Robotics and Automation Letters, 2(2):1070–1076, 2017.
[45] Daniel Mellinger, Nathan Michael, and Vijay Kumar. Trajectory gen-
eration and control for precise aggressive maneuvers with quadrotors.
The International Journal of Robotics Research, 31(5):664–674, 2012.
[46] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andrew J
Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre,
Koray Kavukcuoglu, et al. Learning to navigate in complex environ-
ments. arXiv preprint arXiv:1611.03673, 2016.
[47] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves,
Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Play-
ing atari with deep reinforcement learning. arXiv preprint
arXiv:1312.5602, 2013.
[48] Ralf Möller, Dimitrios Lambrinos, Rolf Pfeifer, Thomas Labhart, and
Rüdiger Wehner. Modeling ant navigation with an autonomous agent.
From animals to animats, 5:185–194, 1998.
[49] Ralf Möller and Andrew Vardy. Local visual homing by matched-filter
descent in image distances. Biological cybernetics, 95(5):413–430, 2006.
[50] Yash Mulgaonkar, Anurag Makineni, Luis Guerrero-Bonilla, and Vijay
Kumar. Robust aerial robot swarms without collision avoidance. IEEE
Robotics and Automation Letters, 3(1):596–603, 2018.
[51] Stefano Nolfi, Josh C Bongard, Phil Husbands, and Dario Floreano.
Evolutionary robotics, 2016.
[52] Stefano Nolfi and Domenico Parisi. Exploiting the power of sensory-
motor coordination. In European Conference on Artificial Life, pages
173–182. Springer, 1999.
[53] Clint Nous, Roland Meertens, Christophe De Wagter, and Guido C.
H. E. de Croon. Performance evaluation in obstacle avoidance. In
Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International
Conference on, pages 3614–3619, Daejeon, South Korea, October
2016. IEEE.
[54] Hoang Vu Phan and Hoon Cheol Park. Remotely controlled flight of
an insect-like tailless flapping-wing micro air vehicle. In Ubiquitous
Robots and Ambient Intelligence (URAI), 2015 12th International
Conference on, pages 315–317. IEEE, 2015.
[55] Stéphane Ross, Narek Melik-Barkhudarov, Kumar Shaurya Shankar,
Andreas Wendel, Debadeepta Dey, J Andrew Bagnell, and Martial
Hebert. Learning monocular reactive uav control in cluttered natural
environments. In Robotics and Automation (ICRA), 2013 IEEE
International Conference on, pages 1765–1772. IEEE, 2013.
[56] Franck Ruffier and Nicolas Franceschini. Octave: a bioinspired visuo-
motor control system for the guidance of micro-air-vehicles. In
Bioengineered and bioinspired systems, volume 5119, pages 1–13.
International Society for Optics and Photonics, 2003.
[57] Franck Ruffier, Stéphane Viollet, S Amic, and N Franceschini. Bio-
inspired optical flow circuits for the visual guidance of micro air
vehicles. In Circuits and Systems, 2003. ISCAS’03. Proceedings of
the 2003 International Symposium on, volume 3, pages III–III. IEEE, 2003.
[58] Adnan S Saeed, Ahmad Bani Younes, Shafiqul Islam, Jorge Dias,
Lakmal Seneviratne, and Guowei Cai. A review on the platform
design, dynamic modeling and control of hybrid uavs. In Unmanned
Aircraft Systems (ICUAS), 2015 International Conference on, pages
806–815. IEEE, 2015.
[59] Ashutosh Saxena, Min Sun, and Andrew Y Ng. Make3d: Learning
3d scene structure from a single still image. IEEE transactions on
pattern analysis and machine intelligence, 31(5):824–840, 2009.
[60] Davide Scaramuzza and Friedrich Fraundorfer. Visual odometry
[tutorial]. IEEE robotics & automation magazine, 18(4):80–92, 2011.
[61] Wei Shyy, Yongsheng Lian, Jian Tang, Dragos Viieru, and Hao Liu.
Aerodynamics of low Reynolds number flyers, volume 22. Cambridge
University Press, 2007.
[62] Bastian Steder, Giorgio Grisetti, Cyrill Stachniss, and Wolfram Bur-
gard. Visual slam for flying vehicles. IEEE Transactions on Robotics,
24(5):1088–1093, 2008.
[63] Timothy Stirling, James Roberts, Jean-Christophe Zufferey, and Dario
Floreano. Indoor navigation with a swarm of flying robots. In Robotics
and Automation (ICRA), 2012 IEEE International Conference on,
pages 4641–4647. IEEE, 2012.
[64] Petri Tanskanen, Tobias Naegeli, Marc Pollefeys, and Otmar Hilliges.
Semi-direct ekf-based monocular visual-inertial odometry. In In-
telligent Robots and Systems (IROS), 2015 IEEE/RSJ International
Conference on, pages 6073–6078. IEEE, 2015.
[65] Sebastian Thrun, Mike Montemerlo, Hendrik Dahlkamp, David
Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan
Halpenny, Gabriel Hoffmann, et al. Stanley: The robot that won the
darpa grand challenge. Journal of field Robotics, 23(9):661–692, 2006.
[66] E. G. Tsardoulias, A. Iliakopoulou, A. Kargakos, and L. Petrou. A
review of global path planning methods for occupancy grid maps
regardless of obstacle density. Journal of Intelligent & Robotic
Systems, 84(1):829–858, Dec 2016.
[67] JL Verboom, Sjoerd Tijmons, C De Wagter, B Remes, Robert Babuska,
and Guido CHE de Croon. Attitude and altitude estimation and
control on board a flapping wing micro air vehicle. In Robotics and
Automation (ICRA), 2015 IEEE International Conference on, pages
5846–5851. IEEE, 2015.
[68] Antoni Rosinol Vidal, Henri Rebecq, Timo Horstschaefer, and Davide
Scaramuzza. Ultimate slam? combining events, images, and imu for
robust visual slam in hdr and high-speed scenarios. IEEE Robotics
and Automation Letters, 3(2):994–1001, 2018.
[69] Patrick Zdunich, Derek Bilyk, Marc MacMaster, David Loewen, James
DeLaurier, Roy Kornbluh, Tom Low, Scott Stanford, and Dennis
Holeman. Development and testing of the mentor flapping-wing micro
air vehicle. Journal of Aircraft, 44(5):1701–1711, 2007.
[70] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe.
Unsupervised learning of depth and ego-motion from video. In CVPR,
volume 2, page 7, 2017.
[71] Jean-Christophe Zufferey, Alexis Guanella, Antoine Beyeler, and
Dario Floreano. Flying over the reality gap: From simulated to real
indoor airships. Autonomous Robots, 21(3):243–254, 2006.
[72] Jean-Christophe Zufferey, Adam Klaptocz, Antoine Beyeler, Jean-
Daniel Nicoud, and Dario Floreano. A 10-gram vision-based flying
robot. Advanced Robotics, 21(14):1671–1684, 2007.