Pilot-Scale Development of a UAV-UGV Hybrid
with Air-Based UGV Path Planning
Nikolas Giakoumidis, Jin U Bak
New York University Abu Dhabi
Abu Dhabi, UAE
{jub205; giakoumidis}@nyu.edu
Javier V. Gómez
Universidad Carlos III de Madrid
Madrid, Spain
jvgomez@ing.uc3m.es
Arber Llenga
Alexander Technological Educational Institute of
Thessaloniki
Thessaloniki, Greece
lengasandreas@gmail.com
Nikolaos Mavridis
New York University Abu Dhabi
Abu Dhabi, UAE
nikolaos.mavridis@nyu.edu
Abstract—Traditionally, UAVs and mobile robots are viewed as two separate entities. However, a closer examination of their synergies justifies a more unified conception: a closely-coupled system in which both are seen as separable parts of the body of a unitary hybrid symbiotic system – essentially, one robotic entity whose body parts can separate temporarily and rejoin later. In this paper, we describe a prototype system consisting of a small-scale indoor pilot version of a much larger outdoor full-scale system, as an illustration of this concept. Such indoor pilot versions have multiple advantages, as we shall show. In our prototype, a mobile robot (UGV) serves as a transport as well as a recharge station for a lightweight quad-rotor UAV, while the UAV serves as a separable long-range vision system for the UGV, providing top-down views of its environment, which are stitched and transformed into maps and utilized for the navigation of the robot hybrid. Multiple avenues of extension of our system and the concept are also introduced, illustrating the power of the separable-body heterogeneous symbiotic multi-robot system concept.
Keywords- UAV, UGV, hybrid robot, pilot-scale
I. INTRODUCTION
Unmanned Aerial Vehicles (UAVs) are limited by strong constraints regarding their capability of carrying large loads in relation to their size. Their flight time and range are subject to similar constraints, even more so in the case of VTOL aircraft, which require large amounts of energy even for hovering at a point. Such limitations are also one of the main reasons that engineers often under-instrument UAVs: given weight and energy constraints, heavy or energy-hungry sensing, actuation, communication, or processing subsystems cannot be included in the payload. However, UAVs still have many advantages in comparison to mobile robots: they operate in a large 3-D workspace which usually contains far fewer obstacles, and they can often reach much higher instantaneous velocities than ground vehicles. Most importantly, they can observe a much larger area through their sensors, given their much higher altitude, the much fewer occlusions they are subject to, and their enhanced ability to reach, through flight, positions and viewpoints where occlusions are further minimized.
On the other hand, mobile robots have numerous
advantages too: they can carry a much larger load than
comparable UAVs, and thus they can carry much larger power
sources, as well as heavier or more power-hungry sensing,
actuation, and processing subsystems. However, given that
they have to stay in contact with the ground, they are much
more susceptible to obstacles, harsh terrain, as well as
occlusions and other sensing limitations.
The combination of these two types of robots, though, and their conceptualization as a unitary heterogeneous hybrid symbiotic system with on-the-fly separable and recombinable body parts, can create highly beneficial synergies: for example, sensing can become distributed, with top-down and advance longer-range views coming from the UAV, while the power needs of the UAV, as well as transport and a safe docking and hiding area, can be provided by the ground vehicle. Thus, the combined unitary hybrid can do much more than the sum of the abilities of its parts. Numerous examples of application areas for such hybrid robots exist: autonomous observation and fire-fighting of forest areas; safe navigation and early hostility detection for mobile robots situated in an unknown environment; even space applications. Of course, aircraft carriers are a good example of a widely-deployed manned analogue of such hybrids.
In this paper, we present an example pilot-scale system, which was developed and tested indoors in our lab. This indoor autonomous hybrid system is of smaller size and power than a full-scale outdoor implementation; however, it has several advantages when it comes to research and development. First of all, cost: algorithms which can be transferred with only a few parameter adjustments to full-scale systems can be developed cheaply, without the need for expensive hardware. Second, minimization of test-time damage due to free fall from high altitudes; and third, the possibility of utilizing precise indoor localization systems and other such equipment, which could not be utilized during outdoor development. Thus, we believe that such small-scale pilots are highly valuable for catalyzing accelerated research that can later be transferred to full-scale outdoor systems.
In this paper, we start by presenting relevant existing work, then describe the methods used and the results obtained, provide a discussion, and finally conclude the paper, after having introduced the multiple potential avenues for extension of the system and concept.
II. RELATED WORK
In the past, related research has taken place in the area of multi-agent systems, such as UAV-UGV cooperation [1][2][3] and multi-robot systems [4][5], including multi-UAV [6][7] and multi-UGV [8][9][10] teams. UAVs and UGVs have several advantages and disadvantages over each other in terms of mobility, payload, and perception abilities, and past work [2][3] focuses on combining the advantages of UAVs and UGVs to create a multi-robot system that can be implemented to carry out a specific task, as in [1][2][7]. There have been many approaches to the mapping and exploration of unknown environments with multi-robot systems [4][5][9][11], consisting of several UAVs or UGVs, or both together as a system. Research in multi-robot systems has been pursued because it has many advantages and applications in fields such as the military, surveillance, etc. In [7], a team of UAVs is utilized for cooperative forest fire surveillance. Different types of vehicles can also be grouped together, with UAVs and UGVs working simultaneously to achieve a specific goal such as exploring an unknown area and creating a map of it. Studies have addressed not only the applications of multi-robot systems, but also how to optimize decision making for area exploration [11] and path planning [9] for the mobile robots. When multiple robots are present and must be controlled simultaneously, the system requires careful attention to how each robot behaves in order to avoid any collision or other unexpected event [11]. Similar to our research, considerable work has been done on the mapping and exploration of unknown environments [4][5][9][11]. But past work tends to use one type of mobile robot, either UAVs [6][7][9] or UGVs [3][10], to fulfill the task. It also focuses on outdoor activities using sensors such as GPS and stereo-vision cameras. In our work, we combine a UAV with a UGV to explore and map an unknown indoor environment, using motion capture to obtain the location of the UAV and a camera installed on the UAV to take aerial photos. There has been similar work on indoor environment mapping [8] using UGVs and a SICK laser. In our work, the UAV provides clear aerial photos of the unknown environment, and the main system installed on the UGV creates a map of the area explored by the UAV, then performs path planning to move optimally from one point to another on the map.
III. METHODS AND RESULTS
The purpose of the prototype system that was designed and implemented is to carry out a mission of finding an optimal navigation path for a mobile robot in an indoor environment, starting from a fixed initial point and going towards a desired target point, in the minimum possible time. In the original experiment, in order not to be hindered by localization, which is not the focus of our attention, we worked on the assumption that we can localize both the mobile robot and the UAV in terms of position and pose; these were obtained through a Vicon motion capture system. Our system, though, had no prior information regarding its local environment and the obstacles around it, and no sensors on the mobile robot were used for that purpose. To achieve our goal, we utilized a number of hardware and software modules, which we describe below and which can be seen in the system architecture in Fig. 1.
Fig. 1. Modular system architecture.
A. Unmanned Ground Vehicle (UGV)
The ground vehicle that was utilized is the MobileRobots Pioneer P3-DX, which is equipped with two actively driven wheels of fixed direction placed at the front of the robot, and one passive caster wheel which rotates freely depending on the movement of the robot. This vehicle can move forwards or backwards, rotate clockwise or anticlockwise, or perform a combination of the above motions. Given our computational needs, the robot was equipped with a laptop computer, which performed the task of controlling the system through the software that we created. The motion of the UGV was commanded through the ARIA software library, via C++ code, by executing external commands arriving through UDP sockets, which are translated to special commands sent over a serial link to the robot's onboard microcontroller, which directly controls the motors of the robot.
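As a minimal sketch of this command path (not our full controller), the fragment below receives a hypothetical textual datagram of the form "VEL <mm/s> <deg/s>" on an assumed UDP port 7000 and forwards it to the robot through ARIA's standard velocity interface (ArRobot::setVel, ArRobot::setRotVel); the actual packet format of our controller differs:

  #include <Aria.h>          // ARIA robot control library
  #include <cstdio>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(int argc, char** argv) {
    Aria::init();
    ArArgumentParser parser(&argc, argv);
    ArRobot robot;
    ArRobotConnector connector(&parser, &robot);
    if (!connector.connectRobot()) Aria::exit(1);  // serial link to the microcontroller
    robot.runAsync(true);
    robot.enableMotors();

    // Hypothetical UDP command endpoint: "VEL <mm/s> <deg/s>" per datagram.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(7000);
    bind(sock, (sockaddr*)&addr, sizeof(addr));

    char buf[128];
    while (true) {
      ssize_t n = recv(sock, buf, sizeof(buf) - 1, 0);
      if (n <= 0) continue;
      buf[n] = '\0';
      double v, w;
      if (std::sscanf(buf, "VEL %lf %lf", &v, &w) == 2) {
        robot.lock();        // ARIA requires locking around motion commands
        robot.setVel(v);     // translational velocity, mm/s
        robot.setRotVel(w);  // rotational velocity, deg/s
        robot.unlock();
      }
    }
  }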
B. Localization Module
In order to know the position and pose of both the UAV and the UGV, we utilized a Vicon motion capture system comprised of six cameras placed perimetrically on the walls of the room where our experiments took place. Special custom software runs on a dedicated PC and outputs a 2 kHz time series of high-precision (1 mm accuracy) measurements of the positions of the optical markers, which in our case were placed on the UAV and UGV so that we could calculate their position and pose in real time. The output is sent out over sockets using the UDP protocol.
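For illustration, planar position and pose can be recovered from as few as two labeled markers. The sketch below assumes, hypothetically, one marker at the rear and one at the front of the UGV, both already decoded from the UDP stream (the actual Vicon packet layout is omitted):

  #include <cmath>

  struct Vec3 { double x, y, z; };
  struct Pose2D { double x, y, yaw; };

  // Recover the planar pose of the UGV from two labeled optical markers,
  // given (hypothetically) in the motion-capture world frame.
  Pose2D poseFromMarkers(const Vec3& rear, const Vec3& front) {
    Pose2D p;
    p.x = 0.5 * (rear.x + front.x);        // position: midpoint of the markers
    p.y = 0.5 * (rear.y + front.y);
    p.yaw = std::atan2(front.y - rear.y,   // heading: direction rear -> front
                       front.x - rear.x);
    return p;
  }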
C. Unmanned Aerial Vehicle (UAV)
The flying part of the system is a low-cost quadrotor VTOL, namely the AR.Drone by Parrot, which can move within 3-D space using a proprietary stabilization and control algorithm running in firmware. It is equipped with two color cameras of 240x320 pixel resolution, and sonar sensors for measuring altitude. Its onboard wireless communications system uses WiFi (802.11), over which UDP packets are exchanged bidirectionally with the client controller application. The commands sent to the UAV in our case include setting the desired roll and pitch angles, the desired yaw rotation speed, and the desired altitude ascent/descent rate. The output of the system is the change of the rotation speed of every rotor, so that the above desired values can be sustained.
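For concreteness, the sketch below shows how such a command is encoded in the AR.Drone's documented AT-command protocol: an AT*PCMD datagram, sent over UDP to the drone's command port (5556), carrying normalized roll, pitch, vertical-speed, and yaw-rate setpoints in [-1, 1], each IEEE-754 float reinterpreted as a 32-bit integer. Our actual controller was written in LabVIEW; this is an equivalent C++ fragment:

  #include <cstdint>
  #include <cstdio>
  #include <cstring>

  // The protocol transmits each normalized float's bit pattern as an int32.
  static int32_t floatBits(float v) {
    int32_t i;
    std::memcpy(&i, &v, sizeof(i));
    return i;
  }

  // Build one AT*PCMD progressive-command string into 'out'.
  int buildPcmd(char* out, size_t len, unsigned seq,
                float roll, float pitch, float gaz, float yawRate) {
    const int flag = 1;  // 1 = progressive commands enabled (0 = hover)
    return std::snprintf(out, len, "AT*PCMD=%u,%d,%d,%d,%d,%d\r",
                         seq, flag, floatBits(roll), floatBits(pitch),
                         floatBits(gaz), floatBits(yawRate));
  }

  // Example: gentle forward pitch, holding altitude and heading:
  //   char buf[128];
  //   buildPcmd(buf, sizeof(buf), seq++, 0.0f, -0.1f, 0.0f, 0.0f);
  //   ...then send buf over UDP to port 5556 of the drone.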
D. UAV Navigation Module
The UAV navigation and data collection algorithm was implemented in the LabVIEW environment. This application controls the pitch, roll, yaw, and altitude of the UAV, and receives telemetry as well as video from the UAV in real time. Upon initial execution, the algorithm is in an idle state, waiting to receive the "Job Start" command from the central controller module. When it receives this command, it starts the data collection process from the localization subsystem and from the UAV, and then launches the UAV from the mobile robot on which it had landed. After having stabilized the flight altitude, it starts the navigation of the UAV, moving, for the purposes of our experiment, between 21 predefined setpoints above the unknown environment where the hybrid system resides. These points were selected in order to provide a sampling grid, and depend on the camera view angle, the flight altitude, the flight time, and other such characteristics. The density and number of points were also selected on the basis of the performance of the stitching algorithm, and the resulting navigation performance of the maps generated from the stitched images. The controllers that were used minimize position, direction, and altitude errors; they are of PID type, and were tuned using the Ziegler-Nichols method. If the UAV is within an accuracy radius of 1.5 cm, it remains there for approximately 5 seconds, in order to be further stabilized and so that the camera views the ground from a parallel plane. If these conditions are met, the camera image is transmitted. After all 21 points have been reached and images acquired, the UAV is driven to landing mode, and after a successful landing, the "Job Done" signal is sent to the central controller, and the algorithm returns to idle mode.
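The sketch below illustrates the two mechanisms just described, in C++ rather than our LabVIEW implementation: a per-axis PID loop, annotated with the classic Ziegler-Nichols gain rule, and the 1.5 cm / 5 s setpoint dwell logic. Any gains derived this way are placeholders, not our tuned values:

  // One PID loop per controlled axis (x, y, altitude, yaw). With the ultimate
  // gain Ku and oscillation period Tu found experimentally, the classic
  // Ziegler-Nichols rule gives Kp = 0.6*Ku, Ki = 1.2*Ku/Tu, Kd = 0.075*Ku*Tu.
  struct Pid {
    double kp, ki, kd;
    double integral = 0.0, prevErr = 0.0;
    double step(double err, double dt) {
      integral += err * dt;
      double deriv = (err - prevErr) / dt;
      prevErr = err;
      return kp * err + ki * integral + kd * deriv;
    }
  };

  // Setpoint dwell logic: hold each setpoint for ~5 s once within 1.5 cm.
  struct SetpointMonitor {
    double holdTime = 0.0;
    bool update(double distToSetpoint, double dt) {
      if (distToSetpoint <= 0.015) holdTime += dt;  // meters
      else holdTime = 0.0;
      return holdTime >= 5.0;  // true: stable; capture image, next setpoint
    }
  };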
E. Image Stitching (Mapping)
In this project, one of the most important tasks was to be able to create a clear map of the explored area using aerial photos taken from the UAV. This map is then processed for path planning for the UGV. Creating a map of the unknown environment requires a sufficient number of photos of the area, so that the different images can be stitched together into a larger image of the explored area. The stitching task was implemented in MATLAB, using the computer vision and image processing toolboxes. To stitch images, the Scale-Invariant Feature Transform (SIFT) algorithm [12] is used, one of the most widely used algorithms in computer vision for detecting and extracting keypoints in images. First, two overlapping images are taken, and the SIFT algorithm is applied to each image to extract its keypoints. After extracting the keypoints of each image, the Random Sample Consensus (RANSAC) algorithm [13] is used to remove outliers among the keypoint matches and then solve the correspondence problem: RANSAC finds the homography between the two images using a set of corresponding points. Once the corresponding points and the homography are known, the two overlapping images can be stitched together by aligning them and blending the overlapping parts.
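The following is an equivalent sketch of this pairwise pipeline using OpenCV (version 4.4 or later) in C++, rather than the MATLAB toolboxes we actually used; it assumes at least four good matches survive the ratio test:

  #include <opencv2/opencv.hpp>
  #include <vector>

  // Stitch image b onto image a using SIFT [12] + RANSAC [13] (no blending).
  cv::Mat stitchPair(const cv::Mat& a, const cv::Mat& b) {
    auto sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> kpA, kpB;
    cv::Mat descA, descB;
    sift->detectAndCompute(a, cv::noArray(), kpA, descA);  // SIFT keypoints
    sift->detectAndCompute(b, cv::noArray(), kpB, descB);

    // Match descriptors, keeping matches that pass Lowe's ratio test.
    cv::BFMatcher matcher(cv::NORM_L2);
    std::vector<std::vector<cv::DMatch>> knn;
    matcher.knnMatch(descA, descB, knn, 2);
    std::vector<cv::Point2f> ptsA, ptsB;
    for (const auto& m : knn)
      if (m.size() == 2 && m[0].distance < 0.75f * m[1].distance) {
        ptsA.push_back(kpA[m[0].queryIdx].pt);
        ptsB.push_back(kpB[m[0].trainIdx].pt);
      }

    // RANSAC rejects outlier correspondences while fitting the homography.
    cv::Mat H = cv::findHomography(ptsB, ptsA, cv::RANSAC, 3.0);

    // Warp b into a's frame and overlay a.
    cv::Mat canvas;
    cv::warpPerspective(b, canvas, H, cv::Size(a.cols * 2, a.rows * 2));
    a.copyTo(canvas(cv::Rect(0, 0, a.cols, a.rows)));
    return canvas;
  }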
F. Path-Planning and Navigation of UGV
a) Fast Marching and Path Planning
The Fast Marching method is a level set method, proposed
by Osher and Sethian [14], [15] to solve the Eikonal equation:
1 = F(x) |∇T(x)|     (1)
which describes the motion of a front wave propagating in a non-homogeneous medium, where the speed F does not have to be the same everywhere but is always non-negative. T(x) is called the arrival function; it computes the time the wave takes to reach a point x. The T(x) function originates from a wave that grows from one single point (the global minimum) at the source. As F > 0, the wave only grows, and hence points farther from the source have greater T. A local minimum would imply that a point has a smaller T value than a neighbouring point which is closer to the source, which is impossible, as that neighbour must have been reached by the wave first.
b) Application to Path Planning
The front-wave expansion speed F can be directly assigned from the values of the environment modeled as a gridmap, where each cell is 0 for obstacles and collision-free space is labeled 1. To obtain a path from a point p to a point q, a wave is expanded from q until it reaches p. Due to the wave expansion properties, the wavefront follows the shortest-time path between p and q. Moreover, since the propagation speed is constant, the trajectory is also the shortest in terms of distance. The consequence is that the T function is local-minima-free, with one global minimum at the goal point. Applying gradient descent over the T function from the initial point, the goal point will be reached. Fast Marching ensures that the path obtained is unique and complete.
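For concreteness, a minimal self-contained sketch of this wave expansion is given below (C++, first-order Fast Marching on a grid with unit spacing; our actual implementation differs in detail). The path itself is then extracted by gradient descent on the returned T field:

  #include <algorithm>
  #include <cmath>
  #include <limits>
  #include <queue>
  #include <utility>
  #include <vector>

  // First-order Fast Marching on a w x h grid. speed[i] is F >= 0 per cell;
  // cells in 'sources' get T = 0. Returns T solving |grad T| * F = 1.
  std::vector<double> fmm(int w, int h, const std::vector<double>& speed,
                          const std::vector<int>& sources) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> T(w * h, INF);
    std::vector<char> done(w * h, 0);
    using Node = std::pair<double, int>;              // (tentative T, cell)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> open;
    for (int s : sources) { T[s] = 0.0; open.push({0.0, s}); }

    auto update = [&](int x, int y) {                 // solve the local Eikonal
      int i = y * w + x;
      if (speed[i] <= 0.0) return;                    // obstacle: never reached
      auto val = [&](int xx, int yy) {
        return (xx < 0 || yy < 0 || xx >= w || yy >= h) ? INF : T[yy * w + xx];
      };
      double tx = std::min(val(x - 1, y), val(x + 1, y));
      double ty = std::min(val(x, y - 1), val(x, y + 1));
      double f = 1.0 / speed[i];                      // grid step h = 1
      double t;
      if (std::fabs(tx - ty) < f) {                   // both neighbors contribute
        double diff = tx - ty;
        t = 0.5 * (tx + ty + std::sqrt(2.0 * f * f - diff * diff));
      } else {
        t = std::min(tx, ty) + f;                     // one-sided update
      }
      if (t < T[i]) { T[i] = t; open.push({t, i}); }
    };

    while (!open.empty()) {
      int i = open.top().second;
      open.pop();
      if (done[i]) continue;
      done[i] = 1;
      int x = i % w, y = i / w;
      if (x > 0) update(x - 1, y);
      if (x + 1 < w) update(x + 1, y);
      if (y > 0) update(x, y - 1);
      if (y + 1 < h) update(x, y + 1);
    }
    return T;
  }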
Fig. 2 shows the result of applying FM to find a trajectory. The calculated path, although the shortest in length, runs too close to the obstacles, so it might not be a safe path. This forces the robot to move slowly when close to obstacles in order to avoid collisions; therefore, the path might also not be the shortest in time. In the following paragraphs we introduce another method based on Fast Marching which solves this problem: Fast Marching Square (FM2).
Fig. 2. Binary map and the path obtained using Fast Marching. The front
wave is also drawn (red).
c) FM2 Path Planning Method
In the previous subsection, there was just one wave source, at the target point. Here, all the obstacles are sources of the wave, and hence several waves are expanded at the same time [16]. As pixels get farther from the obstacles, the computed T value grows. This generates a new map that can be seen as a slowness map. If we consider the T value as a measure proportional to the maximum allowed speed of the robot at each point, we can see that speeds are lower when a pixel is close to obstacles, and greater far from them. In fact, a robot whose speed at each point is given by the T value will never collide, as T → 0 when approaching the obstacles. Now, if we expand a wave from one point of the gridmap, taking the expansion speed to be F(x, y) = T(x, y), where F(x, y) is the speed at point (x, y) and T(x, y) is the value of the slowness map at (x, y), the expansion speed depends on the position. As the slowness map provides the maximum safe speed of the robot, the obtained trajectory is the fastest path (in time), assuming the robot moves at the maximum allowed speed at every point. Fig. 3 shows the performance of this algorithm.
Fig. 3. Left - Slowness map and the path obtained with FM2. Right - Wavefronts at each iteration. The path is perpendicular to each front wave.
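Building on the fmm() sketch above (and assuming its headers), FM2 then reduces to two wave expansions, as sketched below; the saturation distance at which the robot is allowed full speed is an application-dependent parameter:

  // FM2 on top of the fmm() sketch above: obstacles seed the first wave, and
  // the saturated, normalized arrival times become the allowed-speed map.
  std::vector<double> fm2SpeedMap(int w, int h,
                                  const std::vector<double>& binary,  // 1 free, 0 obstacle
                                  double saturation) {                // full-speed distance
    std::vector<int> obstacleCells;
    for (int i = 0; i < w * h; ++i)
      if (binary[i] == 0.0) obstacleCells.push_back(i);
    std::vector<double> ones(w * h, 1.0);
    std::vector<double> D = fmm(w, h, ones, obstacleCells);  // distance-to-obstacle wave
    std::vector<double> W(w * h);
    for (int i = 0; i < w * h; ++i)
      W[i] = std::min(D[i] / saturation, 1.0);               // speed -> 0 near obstacles
    return W;
  }

  // Second wave: T2 = fmm(w, h, W, {goalCell}); gradient descent on T2 from
  // the start cell yields the fastest path under the safety speed limit.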
d) Integration of Fast Marching Square in the UAV-UGV
Cooperation
As detailed in the previous section, the FM2 algorithm is based on a gridmap representation of the environment. Therefore, the processing of the stitched images obtained in Section III-E is focused on creating a binary map in which the obstacles are represented in black (value 0) and the free space, in which the UGV is able to navigate, in white (value 1).
For this purpose, simple computer vision algorithms are employed. The steps described below are highly dependent on the application, the kind and size of the UGV, and also the type of environment (city, forest, desert, etc.). Assuming that the obstacles in the environment tend to be of a very different color from the walkable ground, the algorithm used to obtain the occupancy map is the following. First, the HSV histograms of the image are equalized in order to ease the following steps. Then, the ground is segmented based on HSV color space values: all those objects which have a hue value within a specific range are labeled as obstacles. A binary image is created in which the obstacles have value 1 (white) and the rest of the pixels have value 0 (black). Since this segmentation is likely to detect very small parts of the image as obstacles (due to noise), a morphological opening of the image is necessary. This consists of an erosion of the image, to remove the small objects, followed by a dilation, which restores the size of the large obstacles. The size of the structuring element depends on the cell size employed, the elevation of the UAV while obtaining the map, and so on. Finally, to be able to use this map for the path planning algorithm, the value of each pixel is inverted (Fig. 4).
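A sketch of these steps using OpenCV in C++ is shown below (an illustrative fragment, not our actual processing code); the hue range given is a hypothetical placeholder, as discussed in the next paragraph:

  #include <opencv2/opencv.hpp>
  #include <vector>

  // Occupancy-map extraction following the steps above.
  cv::Mat occupancyFromAerial(const cv::Mat& stitchedBgr, int kernelPx) {
    // 1. Equalize in HSV to ease segmentation (value channel, cf. Fig. 4b-c).
    cv::Mat hsv;
    cv::cvtColor(stitchedBgr, hsv, cv::COLOR_BGR2HSV);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    cv::equalizeHist(ch[2], ch[2]);
    cv::merge(ch, hsv);

    // 2. Label pixels whose hue falls in the obstacle range (placeholder range).
    cv::Mat obstacles;
    cv::inRange(hsv, cv::Scalar(0, 60, 60), cv::Scalar(30, 255, 255), obstacles);

    // 3. Morphological opening: erosion removes small noise blobs, dilation
    //    restores the size of the remaining large obstacles.
    cv::Mat kernel = cv::getStructuringElement(
        cv::MORPH_ELLIPSE, cv::Size(kernelPx, kernelPx));
    cv::morphologyEx(obstacles, obstacles, cv::MORPH_OPEN, kernel);

    // 4. Invert: free space becomes white for the FM2 planner.
    cv::Mat freeSpace;
    cv::bitwise_not(obstacles, freeSpace);
    return freeSpace;                          // 255 = free, 0 = obstacle
  }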
We acknowledge that the obstacle detection algorithm could be improved so as to detect the obstacles without hardcoding the range of hue values, a field in which there is abundant literature. However, the main objective of this paper is not obstacle detection, but the collaboration between the UAV and the UGV.
The goal point is labeled with a color marker in the real scenario, and the image is cropped so that this goal point is close to the (0,0) coordinates of the image. Using the current coordinates of the UGV as the initial point, the FM2 algorithm is employed. An example of the path obtained is shown in Fig. 5.
Fig. 4. Algorithm results. a) Initial stitched image. b) Value (third HSV dimension) of the initial map; in this case it is the best channel to use in the segmentation. c) Value channel equalized. d) Equalized value channel, binarized. e) Same as d) after the opening operation, with the goal point at the (1,1) coordinates of the image.
Fig. 5. Left – Velocities map obtained from the computer vision algorithm. Right – FM2 path obtained using the top point as the initial point and the bottom-left corner as the goal point.
IV. DISCUSSION OF FUTURE STEPS
There exist multiple avenues for extension of our system and concept. First, one could extend the separable-body unitary hybrid robot concept to include more than two robots, and also types of robots other than UAVs and UGVs; an interesting stream of research is taking place in this direction worldwide. Second, in the case of our system, we plan to further extend our quantitative evaluations of performance, and go through a further cycle of refinements and augmentations. Finally, we plan to investigate porting the system to a full-scale outdoor implementation, and to report on the actual issues that arise during such an attempt, in order to provide a number of guidelines for this process that could potentially be usable for other systems too.
V. CONCLUSION
Motivated by the numerous potential synergies that exist, in this paper we have considered the conceptualization of a combination of a UGV with a UAV as a unitary separable-body hybrid heterogeneous system. After discussing the numerous advantages of an intermediate pilot-scale system before an actual full-scale implementation, we presented a pilot-scale prototype. In our system, a mobile robot UGV serves as a transport as well as a recharge station for a lightweight quad-rotor UAV, while the UAV serves as a separable long-range vision system for the UGV, providing top-down views of its environment, which are stitched and transformed into maps and utilized for the navigation of the robot hybrid. We presented in detail our novel path planning and navigation method, as well as quantitative and graphical results. Multiple avenues of extension of our system and the concept were also introduced, illustrating the power of the separable-body heterogeneous symbiotic multi-robot system concept.
REFERENCES
[1] H. G. Tanner, “Switched UAV-UGV cooperation scheme for target
detection,” IEEE International Conference on Robotics and Automation,
2007, pp. 3457-3462.
[2] E. Z. MacArthur, D. Carl and C. Crane, “Use of cooperative unmanned
air and ground vehicles for detection and disposal of
mines,” Proceedings of SPIE-The International Society for Optical
Engineering, 2005, Vol. 5999, pp. 94-101.
[3] K. DonaldE, E. Z. MacArthur, D. Carl and C. Crane, “Unmanned Ground Vehicle state estimation using an Unmanned Air Vehicle,” Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, 2007, pp. 473-478.
[4] W. Burgard, D. Fox, M. Moors, R. Simmons, and S. Thrun.
“Collaborative multi-robot exploration,” In Proceedings of the IEEE
International Conference on Robotics and Automation, 2000, pp.1340–
1345.
[5] R. G. Simmons, D. Apfelbaum, W. Burgard, D. Fox, M. Moors, S. Thrun, and H. L. S. Younes, “Coordination for multi-robot exploration and mapping,” Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence, 2000, pp. 852-858.
[6] Y. Yang, A. Minai, and M. Polycarpou, “Evidential map-building approaches for multi-UAV cooperative search,” in Proceedings of the American Control Conference, 2005, pp. 116-121.
[7] D. W. Casbeer, D. B. Kingston, R. W. Beard, T. W. McLain, S. M. Li, and R. Mehra, “Cooperative forest fire surveillance using a team of small unmanned air vehicles,” International Journal of Systems Science, 2006, vol. 37, no. 6, pp. 351-360.
[8] H. H. Gonzalez-Banos and J.-C. Latombe, “Navigation strategies for exploring indoor environments,” The International Journal of Robotics Research, vol. 21, no. 10-11, 2002, pp. 829-848.
[9] M.T. Bryson and S. Sukkarieh, “Co-operative localisation and mapping
for multiple UAVs in unknown environments,” IEEE/AIAA Aerospace
Conference, 2007, pp.1-12.
[10] K. Fregene, D. Kennedy, R. Madhavan, L. E. Parker, D. W. L. Wang,
“A class of intelligent agents for coordinated control of outdoor terrain
mapping UGVs,” Eng. Appl. of AI 18(5), 2005, pp.513-531.
[11] S. J. Moorehead, “Autonomous surface exploration for mobile robots,” Technical Report CMU-RI-TR-01-30, The Robotics Institute, Carnegie Mellon University, 2001.
[12] D. G. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the International Conference on Computer Vision, vol. 2, 1999, pp. 1150-1157.
[13] M. A. Fischler and R. C. Bolles. "Random Sample Consensus: A
paradigm for model fitting with applications to image analysis and
automated cartography". Comm. of the ACM 24 (6), 1981, pp. 381–395.
[14] S. Osher and J. A. Sethian, “Fronts propagating with curvature
dependent speed: Algorithms based on Hamilton-Jacobi formulations,”
Journal of Computational Physics, no. 79, 1988, pp. 12–49.
[15] J. A. Sethian, “A Fast Marching Level Set Method for Monotonically Advancing Fronts,” Proceedings of the National Academy of Sciences, vol. 93, no. 4, 1996, pp. 1591-1595.
[16] S. Garrido, L. Moreno, M. Abderrahim and D. Blanco, “FM2: A Real-
Time Sensor-Based Feedback Controller For Mobile Robots,”
International Journal of Robotics and Automation, vol. 24, no. 1, 2009,
pp. 48-65.