An educational simulation platform for Unmanned Aerial Vehicles
aimed to detect and track moving objects
Giuseppe Silano and Luigi Iannelli
1. Motivation
During the last ten years, much effort has been put into the research field of (semi)
autonomous unmanned aerial vehicles (UAVs). Considering the strong increase in
the use of UAVs for
•inspection and surveillance purposes,
•detecting and tracking arbitrary moving objects,
there is a growing need for tools that help understand what happens when new
applications are developed.
The goal: a complete software platform for testing different algorithms for a UAV
moving in a simulated 3D environment is more and more important for the whole
design process, as well as for educational purposes.
2. System Description
Several tools (either open source or proprietary) are available: Gazebo, V-REP,
Webots, etc. We looked for the "simplest" solution, one familiar to control
engineering students: Matlab/Simulink and the MathWorks Virtual Reality Toolbox.
The toolbox simulates a 3D world in which the interaction between complex
dynamic systems and the surrounding environment can be observed. In particular,
the tool has been employed for simulating a drone following a car. Given this
scenario, we started from one of the examples available on the MathWorks platform,
which describes a rather detailed model of the car dynamics, with the car moving
along a non-trivial path.
The key idea: the camera position and orientation are replaced by the drone position
and orientation, as determined by the dynamical equations [1].
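VRML-based virtual worlds such as the toolbox's express a viewpoint orientation in axis-angle form, while the drone model produces Euler angles. A minimal conversion sketch (written in Python for illustration; the function name and the ZYX angle convention are our assumptions, not the toolbox API) is:

```python
import math

def euler_to_axis_angle(phi, theta, psi):
    """Convert ZYX Euler angles (roll phi, pitch theta, yaw psi) to the
    axis-angle form used by VRML/X3D viewpoint orientation fields."""
    # Build the quaternion for the yaw-pitch-roll composition.
    cr, sr = math.cos(phi / 2), math.sin(phi / 2)
    cp, sp = math.cos(theta / 2), math.sin(theta / 2)
    cy, sy = math.cos(psi / 2), math.sin(psi / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    # Convert the quaternion to (axis, angle).
    angle = 2 * math.acos(max(-1.0, min(1.0, w)))
    s = math.sqrt(max(1.0 - w * w, 0.0))
    if s < 1e-9:                      # no rotation: any axis works
        return (1.0, 0.0, 0.0), 0.0
    return (x / s, y / s, z / s), angle
```

At each simulation step the drone pose (position plus this axis-angle orientation) would be written into the virtual world's camera node.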
Figure 1: The control scheme. The reference generator computes the references
(x_r, y_r, z_r, ψ_r) from the pixel errors (e_px, e_py) and the measured bounding
box area; the drone control block turns them into the commands (ż_c, ψ̇_c, φ_c, θ_c);
the drone dynamics produce the pose (x_d, y_d, z_d, φ_d, θ_d, ψ_d) that drives the
camera in the Matlab virtual world, which returns the image (IMG) of the car,
closing the loop. The subscript c indicates the commands, r the references and
d the drone.
3. Vision Based Target Detection
The camera extends the aircraft's sensory capacity and makes it possible to
develop an automatic control that commands the UAV through the image-based visual
servoing approach. The vision based target detection has been subdivided into three
phases:
•Classifier Learning Phase, in which the machine learning technique
(Viola & Jones algorithm) is trained in order to detect the target.
A Matlab script manages the frame acquisition and the computing phase. To this
aim, the drone has been simulated moving along a spiral trajectory around the
car, parked in its initial state. The script takes into account the slight
differences between the classic fixed reference system and that of the virtual world.
[Figure: acquisition geometry. The camera moves on a sphere of radius r around
the car, with azimuth α ∈ [0, 2π] and elevation β ∈ [0, π/2].]
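The spiral acquisition trajectory can be sketched as follows (Python for illustration; the function name, the number of turns and the car-at-origin assumption are ours, not the poster's script):

```python
import math

def spiral_camera_poses(radius, n_frames, turns=4):
    """Sample camera positions on a spherical spiral around the parked car:
    the azimuth alpha sweeps [0, 2*pi) repeatedly while the elevation beta
    rises from 0 to pi/2, so the car is seen from every side and height."""
    poses = []
    for k in range(n_frames):
        t = k / (n_frames - 1)
        beta = t * math.pi / 2                              # elevation in [0, pi/2]
        alpha = (t * turns * 2 * math.pi) % (2 * math.pi)   # azimuth in [0, 2*pi)
        x = radius * math.cos(beta) * math.cos(alpha)
        y = radius * math.cos(beta) * math.sin(alpha)
        z = radius * math.sin(beta)
        poses.append((x, y, z))
    return poses
```

Each sampled pose would be applied to the virtual camera and one frame captured for the training set.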
A high number of images was needed in order to train the classifier. The images
have been divided into two groups: positive images (containing the target, 2626)
and negative images (5252), achieving a 1:2 ratio.
•Bounding Box Selection, where an algorithm is designed to obtain a
unique box surrounding the target.
The car is only partially detected in spite of the high number of images used in
the learning process, although there are no detection errors. On the other hand,
the partial detections introduce enough "useful noise" to help the detection.
[Figure: all bounding boxes returned by the classifier, the average bounding box,
and the maximum bounding box.]
Since the classifier returns several different bounding boxes, an algorithm is
needed to obtain a unique box surrounding the target. A Matlab script computes
the "maximum" bounding box from the boxes returned by the classifier.
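The poster does not detail this step; one plausible reading of the "maximum" bounding box, sketched here in Python, is the smallest box enclosing every detection (the (x, y, w, h) box format is an assumption):

```python
def maximum_bounding_box(boxes):
    """Merge the classifier detections, each given as (x, y, w, h),
    into the single smallest box that encloses all of them."""
    x1 = min(b[0] for b in boxes)                  # leftmost edge
    y1 = min(b[1] for b in boxes)                  # topmost edge
    x2 = max(b[0] + b[2] for b in boxes)           # rightmost edge
    y2 = max(b[1] + b[3] for b in boxes)           # bottommost edge
    return (x1, y1, x2 - x1, y2 - y1)
```

This keeps the whole (possibly partially detected) car inside one box, at the cost of some background pixels.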
•Tracking Algorithm, the classifier is replaced by the tracking algorithm
to reduce the computational burden.
The classifier is used to detect the target (the car) only at the first step or
in case of partial occlusion. Otherwise, a Continuously Adaptive Mean-Shift
(CAMShift) algorithm performs the tracking by searching for the probability
distribution pattern of the target in a local adaptive-size window within the frame.
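CAMShift builds on the mean-shift search. The core window-shifting step can be sketched in plain Python as follows (a sketch only: full CAMShift additionally adapts the window size and orientation from the tracked probability mass, which is omitted here):

```python
def mean_shift(prob, win, n_iter=20):
    """One mean-shift search: repeatedly move the window (x, y, w, h)
    so that it is centred on the centroid of the probability mass it
    covers in the 2-D map `prob` (rows of equal length)."""
    x, y, w, h = win
    rows, cols = len(prob), len(prob[0])
    for _ in range(n_iter):
        m = mx = my = 0.0
        for i in range(max(0, y), min(rows, y + h)):
            for j in range(max(0, x), min(cols, x + w)):
                m += prob[i][j]
                mx += j * prob[i][j]
                my += i * prob[i][j]
        if m == 0:                     # no mass under the window: give up
            break
        nx = int(round(mx / m - w / 2))
        ny = int(round(my / m - h / 2))
        if (nx, ny) == (x, y):         # converged
            break
        x, y = nx, ny
    return (x, y, w, h)
```

In the tracker, `prob` would be the back-projection of the target's colour histogram onto the current frame.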
4. Flight Control System
In our application we considered a drone with four rotors, and a pose controller has
been designed based on a classical dynamic model, as described in [1]. The flight
control system has been split into two parts:
•Reference Generator, that uses the information extracted from the
images to generate the path to follow.
[Figure: frame geometry, with the frame origin (x0, y0), the image centroid
(x_img, y_img) and the bounding box centroid (x_bb, y_bb).]
In the following phase the image centroid (x_img, y_img) and the bounding box
centroid (x_bb, y_bb) are computed, as well as the distance vector between the
centroids and the bounding box area, as in [2].
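The centroid and area computations can be sketched as follows (Python for illustration; the frame size and the (x, y, w, h) box format are assumptions):

```python
def pixel_errors(frame_w, frame_h, bbox):
    """Distance vector between the image centroid and the bounding box
    centroid, plus the box area used as a depth cue (cf. [2])."""
    x, y, w, h = bbox
    x_img, y_img = frame_w / 2, frame_h / 2    # image centroid
    x_bb, y_bb = x + w / 2, y + h / 2          # bounding box centroid
    e_px, e_py = x_img - x_bb, y_img - y_bb    # centroid error vector
    area = w * h                               # grows as the drone closes in
    return e_px, e_py, area
```

The errors e_px and e_py feed the attitude reference loops, while the area error feeds the position reference loop.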
The reference generator is decomposed into two parts: the attitude and the
position controller. It tunes the angles (ψd, φd and θd) trying to overlap the
image and bounding box centroids. The angles are later used to tune the reference
position.
[Figure: attitude and position reference control loops. In the attitude loops,
a PI controller maps the horizontal centroid error e_px = x_img − x_bb to ∆ψr,
giving ψr = ψinit + ∆ψr, and a PI controller maps the vertical centroid error
e_py = y_img − y_bb to ∆θr, giving θr = θinit + ∆θr. In the position loops, a
PID controller yields zr = zinit + ∆zr, a PI controller yields yr = yinit + ∆yr,
and a PI controller maps the area error e_area = area_ref − area_mes to ∆xr,
giving xr = xinit + ∆xr.]
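The PI loops of the reference generator can be sketched as follows (the gains and sampling time are illustrative, not taken from the poster):

```python
class PI:
    """Discrete PI controller (forward-Euler integral), as used in the
    attitude and position reference loops of the generator."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# One update of the heading reference (all numbers illustrative):
x_img, x_bb = 320.0, 280.0          # image and bounding box centroids
psi_init = 0.0
pi_psi = PI(kp=0.002, ki=0.0005, dt=0.02)
psi_r = psi_init + pi_psi.step(x_img - x_bb)   # psi_r = psi_init + delta_psi_r
```

The θr, yr and xr loops follow the same pattern with e_py and e_area as inputs; the zr loop adds a derivative term.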
•Integral Backstepping, used as the controller for the trajectory path
tracking.
The trajectory control strategy makes the aircraft attitude and position follow
the reference generator outputs.
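As a hedged illustration of the technique (not the quadrotor controller of [1]), integral backstepping on a scalar double integrator, the 1-D analogue of one translational axis, can be written as:

```python
def simulate_ib(x_ref=1.0, c1=2.0, c2=2.0, lam=1.0, dt=0.001, t_end=15.0):
    """Integral backstepping on the double integrator x_dot = v, v_dot = u.
    With z1 = x_ref - x, chi = integral of z1, virtual velocity
    v_d = c1*z1 + lam*chi and z2 = v_d - v, the Lyapunov-based control law
    u = (1 + lam - c1**2)*z1 + (c1 + c2)*z2 - c1*lam*chi
    renders V = (lam*chi**2 + z1**2 + z2**2)/2 decreasing."""
    x = v = chi = 0.0
    for _ in range(int(t_end / dt)):
        z1 = x_ref - x                   # tracking error
        z2 = c1 * z1 + lam * chi - v     # error on the virtual velocity
        u = (1 + lam - c1 ** 2) * z1 + (c1 + c2) * z2 - c1 * lam * chi
        chi += z1 * dt                   # integral of the tracking error
        v += u * dt                      # forward-Euler integration
        x += v * dt
    return x
```

The integral state chi removes steady-state error under constant disturbances, which is the reason for preferring integral backstepping over plain backstepping here.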
References
[1] S. Bouabdallah and R. Siegwart. Backstepping and sliding-mode techniques
applied to an indoor micro quadrotor. In Proceedings of the 2005 IEEE
International Conference on Robotics and Automation, pages 2247–2252, April
2005.
[2] J. Pestana, J. L. Sanchez-Lopez, S. Saripalli, and P. Campoy. Computer vision
based general object following for GPS-denied multirotor unmanned vehicles. In
American Control Conference, pages 1886–1891, 2014.
Università degli Studi del Sannio
Dipartimento di Ingegneria, Benevento.
Web: https://www.ding.unisannio.it
E-mail: {giuseppe.silano,luigi.iannelli}@unisannio.it