A Full Distributed Multipurpose Autonomous
Flight System Using 3D Position Tracking and
ROS
Gustavo Gargioni*, Marco Peterson†, J. B. Persons‡, Kevin Schroeder, Ph.D.§, Jonathan Black, Ph.D.¶
*†§¶ Kevin T. Crofton Department of Aerospace and Ocean Engineering
Email: *gargioni@vt.edu, †marco7@vt.edu, §schroeder@vt.edu, ¶jonathan.black@vt.edu
‡ Bradley Department of Electrical and Computer Engineering
Email: ‡persons@vt.edu
Virginia Tech
Blacksburg, VA, 24073
Abstract—This document describes an approach to developing a fully distributed multipurpose autonomous flight system. With a set of hardware, software, and standard flight procedures for multiple unmanned aerial vehicles (UAVs), it is possible to achieve a relatively low-cost, plug-and-play, fully distributed architecture for multipurpose applications. The resulting system comprises an OptiTrack motion capture system, a Pixhawk flight controller, a Raspberry Pi companion computer, and the Robot Operating System (ROS) for inter-node communication. The architecture leverages a secondary PID controller, using MAVROS, an open-source ROS package, for onboard processing and interfacing with the flight controller. It features a procedure that receives the position vector from the OptiTrack system and returns the desired velocity vector at each time step, which eases integration for researchers. The result is a reliable, easy-to-use autonomous system for multipurpose engineering research. To demonstrate its extensibility, this paper presents a robotics navigation experiment built on the fundamentals of Markov Decision Processes (MDPs), running wirelessly at 60 Hz with a network latency below 2 ms. This paper argues that fully distributed systems should be embraced because they maintain system reliability while lowering cost and simplifying the ground station implementation. Combined with deliberate software architecture choices, this encourages and facilitates the use of autonomous systems for transdisciplinary research.
Contents
I Introduction and Motivation
II Hardware
III Software
IV Distributed ROS-Based System
V Markov Decision Process Implementation
VI Demonstrative Experiments
VII Conclusion
References
Biographies
I. INTRODUCTION AND MOTIVATION
In recent years, the use of the Robot Operating System (ROS) [1] in unmanned aerial vehicle research has become ubiquitous, particularly for studying dynamics models, controls, and multi-drone operations. With the proliferation of ever-more-capable drone platforms, sensors, and add-ons, the systems and research possibilities are increasing in number and complexity. Because matching a system choice to each research project is complex, developing a system that can handle projects from across research areas is challenging; as a result, a significant number of robotics-based projects do not move past simulated environments. Due in part to unique research focuses and approaches, a common solution is for each research group in the same institution to develop its own autonomous systems laboratory, a process which is both expensive and time-consuming. Therefore, finding an optimal setup that can encompass multipurpose research at a low marginal cost is desirable.
A key barrier to live experimentation is the overwhelming
amount of effort that researchers need to spend on how
to use and integrate autonomous systems with their
experiments. While simulation can be a cost-effective
means of testing, algorithms and systems must be verified
on real-world hardware before they are trusted by users.
Many graduate students, faculty, and researchers do not take UAS projects further simply because this integration requires additional skill sets and resources, such as incorporating hardware like LIDAR and GPS or securing airspace and licensed UAS pilots. Therefore, a solution to this problem might lie in reducing the complexity of autonomous lab operations.
One means of simplifying operations and reducing
setup time between experiments is compartmentalization
of mission-specific hardware and software using
standardized interfaces with laboratory architecture.
Compartmentalization of components and encapsulation
of code provide a user-friendly environment for
multidisciplinary professionals that can smoothly transition
between different experiments. Not all researchers, scientists, and engineers are computer science professionals, and in most cases they need a fast, easy way to implement and run their experiments. A simple flight application programming interface (API), expressed in terms of position and velocity vectors, would reduce the development burden and could serve as common ground for all researchers, as sketched below.
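As an illustration of what such an interface could look like, the following is a minimal, hypothetical sketch of a position-and-velocity flight API; the class and method names are our own invention, not the interface of the architecture described in this paper.

```python
# Hypothetical sketch of a minimal flight API expressed purely in
# position and velocity vectors; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VehicleState:
    position: tuple = (0.0, 0.0, 0.0)  # (x, y, z) in meters, lab frame
    velocity: tuple = (0.0, 0.0, 0.0)  # (u, v, w) in m/s

@dataclass
class FlightAPI:
    """What a researcher programs against: set a goal, read the state."""
    state: VehicleState = field(default_factory=VehicleState)
    goal: tuple = (0.0, 0.0, 0.0)

    def goto(self, x, y, z):
        """Request a new desired position; the lab handles the control."""
        self.goal = (x, y, z)

# A researcher's entire integration could reduce to calls like:
api = FlightAPI()
api.goto(1.0, 2.0, 1.5)
print(api.state.position, '->', api.goal)
```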
To ensure the multipurpose laboratory environment is
accessible to a broad user base, there should be a low
marginal cost associated with experimentation. One way
to achieve this is by equipping the laboratory itself with
the sensors needed for localization. In this manner, the
cost of sensors is amortized over every experiment in the
lab. Without a requirement for onboard sensors, unmanned
systems used in testing can be simpler, cheaper, smaller,
and greater in number. By maintaining the sensors as part
of the lab architecture and using standardized interfaces,
researchers focused on behaviors and decision-making will
be able to offload requirements for sensor development and
integration. If onboard sensors are desired, the laboratory
sensors can later be used for truth data in experimentation.
An additional challenge encountered by researchers using
ROS with multiple agents is the fact that a completely
centralized system (i.e., one in which sensing, processing,
and coordination are done on a ground station) is limited
by ground station processing capacity and communications
network bandwidth and latency. Emulating a distributed
system with a centralized system also overlooks the
interaction of processing and communication and limits
the ability of a project to transition outside the laboratory.
Developing a distributed system which leverages onboard
processing is therefore important both to alleviate ground
station and network dependencies and to facilitate follow-on
in-situ experimentation.
An architecture which incorporates the above features is
described in this document. While the approach taken here
has been used elsewhere, detailed descriptions of how to
implement a successful laboratory architecture are hard to
come by. The authors hope to provide this paper as a
guide to others who are interested in establishing their
own autonomous systems laboratories. As an example of
the type of work that can be done using this architecture,
an implementation of a two-dimensional MDP algorithm
[2] is presented for path planning aboard a multi-rotor
aircraft. An MDP-based path planning algorithm is useful in
demonstrating this architecture because, unlike algorithms
such as dynamic programming, MDP requires the robot
in question to confirm its position prior to determining
its next action, necessitating communication between the
aircraft’s flight control system, path-planning algorithm, and
sensors throughout the mission. The paper explores two
different experiments using MDP-Based Navigation with
onboard drone computing and OptiTrack positioning. The
first experiment consists of running the algorithm pre-flight
and executing policy look-up in flight, while the second
experiment runs both value iteration and the policy selection
algorithm repeatedly while flying. The second approach
becomes quite interesting when adding dynamic obstacles
to the environment.
II. HARDWARE
Fig. 1. Communication architecture between OptiTrack 3D motion tracking, Raspberry Pi, and ground station
A. Pixhawk
The Pixhawk [3] "is an independent open-hardware project providing readily-available, low-cost, and high-end, autopilot hardware designs to the academic, hobby and industrial communities" [3]. This off-the-shelf flight controller is a popular option for multi-rotor platforms. It is a balanced choice for a low-cost board with the capability of being connected to a companion computer through its available Telemetry 2 port, as shown in Figure 2. In this architecture, the Pixhawk accepts MAVLink commands when running the community-developed PX4 flight stack.
B. Raspberry Pi 3
The Raspberry Pi 3 [4] single-board computer provides computationally powerful, lightweight capabilities in a compact, low-cost package. With the connection established to the Pixhawk, the Raspberry Pi is powered via two GPIO pins, one for ground and one for 5 V. Two other GPIO pins are used for UART communication between the Raspberry Pi and the Pixhawk, allowing the Raspberry Pi to send messages to the Pixhawk flight controller, as shown in Fig. 2.
Fig. 2. Pixhawk to Raspberry Pi 3 GPIO pinout
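The architecture itself exchanges messages over this link through MAVROS (Section III), but a quick way to verify the UART wiring is a heartbeat check with pymavlink; the sketch below assumes a typical Pi GPIO serial device path and baud rate, which may differ per setup.

```python
# Quick UART link check between Raspberry Pi and Pixhawk using
# pymavlink (pip install pymavlink). Device path and baud rate are
# typical values for a Pi GPIO serial port; adjust for your setup.
from pymavlink import mavutil

link = mavutil.mavlink_connection('/dev/serial0', baud=921600)
link.wait_heartbeat()  # blocks until the Pixhawk announces itself
print("Heartbeat from system %d, component %d"
      % (link.target_system, link.target_component))
```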
C. Drone Hardware
Although any airframe can be made to work in this system architecture, DJI Flame Wheel F450 and F550 copter platforms were chosen due to their modular design and easily replaceable parts. For single-aircraft operations, the F450 (quadcopter) was adopted as the standard for the architecture, as its smaller size and lower thrust requirement make it more suitable for indoor use. If a 3D printer is available, a customized 3D-printed frame could also be used in the system. Note that different frame designs may lead to different flight dynamics; thus, investigation of structural performance for multi-rotors is another research field that can benefit from this setup.
D. 3D Position Tracking Cameras System
There are several commercial systems on the market that can provide 3D position tracking, and it is also possible to develop one's own. Although not low-cost, the OptiTrack system was chosen for this paper because it is commonly used in the field.
An OptiTrack system, comprising the Motive motion capture software and 18 high-speed tracking cameras (models Prime 13 and 13W), was used in this setup. This hardware is the most expensive part of any setup that requires 3D position tracking; distributing this capital cost among many experiments and research groups is a primary motivator for a multipurpose setup. The OptiTrack 3D position tracking system [5] provides real-time position tracking of any object outfitted with specialized reflective markers. The OptiTrack system tracks position and orientation data accurate to 1 mm at up to 240 Hz, providing precision navigation inputs in lieu of onboard sensors.
E. Networking Hardware
OptiTrack switch - OptiTrack cameras use Cat 6 Ethernet connections for both data transfer and power. A switch is required both to distribute power over the camera array and to serve as a data consolidation conduit through which the Motive software collects and processes incoming packets.
Local network router - To broadcast the collected data throughout a local network and to allow SSH connections to the Raspberry Pi, a simple router meeting the 802.11ax standard will suffice.
III. SOFTWARE
The software architecture proposed in this paper is one of many possibilities; other choices may yield different results and implementation costs. The following subsections present key information on the proposed software architecture.
A. QGroundControl
QGroundControl is an open-source flight planning application that allows users to flash firmware to and parameterize their vehicle as desired. QGroundControl connects to the vehicle through either a direct USB cable or a telemetry radio connection. With this software, we can also calibrate the vehicle's sensors, such as the gyroscope, magnetometer, and accelerometer, and level the vehicle relative to the horizon. The software also allows users to map switches on the radio controllers that UAS pilots would traditionally use for control. Mapping a dedicated kill switch for each aircraft is highly recommended for safety purposes.
B. PX4 Firmware, MAVLink Integration, and Position Estimator
To integrate the PX4 firmware [6] with the external position tracking system, this paper uses the Mocap OptiTrack package. To configure the PX4 firmware to work with Mocap OptiTrack, an available position estimator needs to be selected from the parameter list. The selected PX4 firmware can be configured to use the LPE, EKF, or EKF2 estimator, all of which are extended Kalman filters over 3D position and velocity states. The difference between these options is that, besides the 3D position and velocity states, the EKF also estimates attitude, and EKF2 estimates attitude and wind. PX4 receives external position information through MAVLink messages, mapping them to the vehicle_visual_odometry and vehicle_mocap_odometry uORB topics. The EKF estimator, present in most PX4 firmware versions, subscribes only to the former topic. The LPE estimator subscribes to both topics and can hence process all the above MAVLink messages [7]. Therefore, in this architecture, the estimator parameter must be set to LPE instead of EKF due to the use of Mocap OptiTrack.
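As a sketch of how this selection might be scripted, the snippet below sets the relevant PX4 parameters through MAVROS's standard parameter service. The parameter names and values follow the PX4 documentation for the firmware generation used here (SYS_MC_EST_GROUP = 1 selects LPE, ATT_EXT_HDG_M = 2 selects motion-capture heading) and should be verified against the firmware version in use.

```python
# Sketch: selecting the LPE estimator over MAVROS's parameter service.
# Verify parameter names/values against your PX4 firmware version.
import rospy
from mavros_msgs.srv import ParamSet
from mavros_msgs.msg import ParamValue

rospy.init_node('estimator_setup')
rospy.wait_for_service('/mavros/param/set')
set_param = rospy.ServiceProxy('/mavros/param/set', ParamSet)

# 1 = LPE + attitude_estimator_q, 2 = EKF2 (PX4 SYS_MC_EST_GROUP)
set_param(param_id='SYS_MC_EST_GROUP', value=ParamValue(integer=1, real=0.0))
# 2 = use motion-capture data as the external heading source
set_param(param_id='ATT_EXT_HDG_M', value=ParamValue(integer=2, real=0.0))
```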
C. Motive (OptiTrack Software)
Motive [8] is engineered to track objects in six degrees of freedom "with exacting precision, with support for real-time and offline workflows" [8]. The Motive software package enables users to calibrate and resize the sensor suite, create rigid bodies, and configure a myriad of other options to set up a 3D tracking space for a given number of OptiTrack cameras. Perhaps most importantly, the software provides the interface to broadcast position data of any given rigid body to a series of nodes, as described in "Networking Hardware".
D. Robot Operating System (ROS)
Although "ROS is a flexible framework for writing robot software" [1] and could be used to centrally control all experiment participants, in our configuration each vehicle runs its own Python script on its companion computer, a Raspberry Pi. These scripts make all decisions governing each vehicle's dynamics, providing the advantages of a fully distributed system. However, ROS remains a powerful framework and is used here to share information among the multiple vehicles, the ground station, and the OptiTrack system.
A different middleware could have been chosen to handle this task; however, ROS is a common choice in the field, and there is little to lose by selecting it. Furthermore, should a specific ROS capability or tool be needed, e.g., services, actions, rviz, or rqt, the environment is ready. Moreover, since ROS has an active open-source community, new tools and capabilities can be incorporated into this same setup through updates.
E. MAVROS
MAVROS [9] is an open-source package that facilitates message communication between the flight controller and ROS. After generating the desired message, ROS transmits it over the UART connection between the Raspberry Pi and the flight controller, the Pixhawk. This physical and software connection is available individually for each vehicle in this architecture, as we will explore in detail in the next section.
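For reference, a minimal sketch of the mode-switch-and-arm handshake over MAVROS's standard services might look as follows; the service names are MAVROS defaults, and note that PX4 requires setpoints to already be streaming before it will accept OFFBOARD mode.

```python
# Sketch: arming and switching to OFFBOARD via standard MAVROS services.
# PX4 rejects OFFBOARD unless setpoints are already streaming, so the
# mission script publishes velocity setpoints before this handshake.
import rospy
from mavros_msgs.srv import CommandBool, SetMode

rospy.init_node('offboard_handshake')
rospy.wait_for_service('/mavros/cmd/arming')
rospy.wait_for_service('/mavros/set_mode')
arm = rospy.ServiceProxy('/mavros/cmd/arming', CommandBool)
set_mode = rospy.ServiceProxy('/mavros/set_mode', SetMode)

if set_mode(custom_mode='OFFBOARD').mode_sent and arm(value=True).success:
    rospy.loginfo('Vehicle armed and in OFFBOARD mode')
```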
F. Mocap Optitrack
Mocap OptiTrack is another open-source package: a ROS package that translates the streaming data from the tracking software (Motive) into geometry_msgs/PoseStamped messages [10], [11]. It can also publish to TF, which "is a package that lets the user keep track of multiple coordinate frames over time" [12]. In other words, after the OptiTrack software broadcasts an object's position and orientation, Mocap OptiTrack captures this transmission and publishes into ROS an individual PoseStamped message on the topic corresponding to the respective vehicle's ROS node.
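A minimal subscriber to one vehicle's pose stream might look like the sketch below; the topic name follows mocap_optitrack's common rigid-body naming layout, though the actual name depends on how the package is configured.

```python
# Sketch: receiving a rigid body's pose as published by mocap_optitrack.
# The topic name depends on the package configuration; a common layout
# is /mocap_node/<rigid_body_name>/pose.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    p = msg.pose.position
    rospy.loginfo('drone1 at x=%.3f y=%.3f z=%.3f', p.x, p.y, p.z)

rospy.init_node('pose_listener')
rospy.Subscriber('/mocap_node/drone1/pose', PoseStamped, on_pose)
rospy.spin()
```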
IV. DISTRIBUTED ROS-BASED SYSTEM
Rather than run all mission processes from a single computer, this document describes a setup focused on minimizing the dependence of UAVs on the ground station. In our architecture, all decision-making processes are carried out in a decentralized manner by each agent. Therefore, this setup enables a much lighter ground station computer, where ROS is used only to distribute information throughout the system. Adding and removing agents between experiments is also an easy task, and different experiments may be executed independently in the same run.
The ground station comprises two computers. One is dedi-
cated to the OptiTrack motion capture process and another
runs the ROS core (Figure 3, in blue). Each vehicle has its
own flight controller, PixHawk, and a companion computer,
Raspberry Pi, on which all Python scripts are run indepen-
dently of the ground control station. See vehicle section,
highlighted in green in Fig. 3.
A. OptiTrack System Process
The data loop begins with the OptiTrack motion capture
system. The OptiTrack cameras are highly sensitive to a
specific visual signature of markers placed on the drone
or on other objects to be tracked. The system allows the
operator to group multiple markers together into rigid bodies
and then tracks the position and orientation of each body
created. Although the cameras are capable of operating up
to 240 frames per second, the broadcast is set to 60 frames
per second for latency purposes. The data is broadcast with
the use of OptiTrack’s motion capture software, MOTIVE. In
MOTIVE, each object receives an identification number for
the group of markers that defines a rigid body. These identi-
fication numbers are transmitted attached to data describing
each object’s position and orientation in North East Down
(NED) frames via multicast [13] within the local network.
This process is shown in Fig. 3, marker number 1.
B. ROS Conversion Process
The second ground station computer runs ROS with the Mocap OptiTrack plugin. East-North-Up (ENU) is the standard frame for ROS, while the OptiTrack system generates information in North-East-Down (NED). Mocap OptiTrack, the plugin, gathers all information from the incoming multicast transmission and converts it from NED to ENU as individual PoseStamped messages. The PoseStamped data of each captured object is then published via ROS topics to other ROS nodes in the system (e.g., UAS). This process is shown in Fig. 3, marker number 2.
C. UAV Process
On each vehicle's companion computer, a Raspberry Pi, a script manages the vehicle's mission. First, the initial script starts and commences vehicle takeoff operations. Second, the script enters a loop in which it attempts to switch the flight controller to offboard mode, in which position and orientation information is provided by an offboard sensor (OptiTrack). After achieving offboard mode, at each time step a desired velocity vector is sent from the Raspberry Pi via MAVLink messages to the flight controller for execution.
Fig. 3. Process information flowchart of the architecture
Within each time step, the script gathers PoseStamped data of its own current and desired positions from the corresponding topic published on ROS. The position and orientation data are passed into a PID controller, implemented in a separate Python class, which returns a set of desired velocities to be applied to the vehicle. The expected velocity vector for the quadcopter used in the sample experiment is $\vec{v} = [u, v, w, z]$, where $u$, $v$, and $w$ are linear velocities along the X, Y, and Z axes, and $z$ is the angular velocity about the Z axis. This process occurs at a frequency of 60 Hz and is shown in Figure 3, marker 3.
Due to the encapsulation of the PID class, shown in Figure 3, marker 5, the experiment's mission code can be developed and implemented independently of the platform being used.
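A minimal sketch of such an encapsulated controller is shown below: a per-axis PID that consumes current and desired poses and returns the velocity vector $[u, v, w, z]$, which is then sent as a MAVROS velocity setpoint. The gains and the simplified yaw handling are illustrative, not the tuned values used in the laboratory.

```python
# Sketch: per-axis PID that maps pose error to [u, v, w, z] and
# publishes it as a MAVROS velocity setpoint. Gains are illustrative.
import rospy
from geometry_msgs.msg import TwistStamped

class AxisPID:
    def __init__(self, kp=0.8, ki=0.0, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

class VelocityController:
    """Maps pose error to [u, v, w, z] (linear XYZ plus yaw rate)."""
    def __init__(self):
        self.axes = [AxisPID() for _ in range(4)]

    def update(self, current, desired, dt):
        # current/desired: (x, y, z, yaw) tuples in the ENU lab frame
        errors = [d - c for c, d in zip(current, desired)]
        return [pid.step(e, dt) for pid, e in zip(self.axes, errors)]

rospy.init_node('velocity_pid')
pub = rospy.Publisher('/mavros/setpoint_velocity/cmd_vel',
                      TwistStamped, queue_size=1)

def send(cmd):
    msg = TwistStamped()
    msg.header.stamp = rospy.Time.now()
    msg.twist.linear.x, msg.twist.linear.y, msg.twist.linear.z = cmd[:3]
    msg.twist.angular.z = cmd[3]
    pub.publish(msg)
```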
To demonstrate how this system can be used in experi-
mentation, the following sections discuss the application of
Markov decision processes to path planning onboard a UAV.
V. MARKOV DECISION PROCESS IMPLEMENTATION
As previously stated, compartmentalization of components and encapsulation of code provide a user-friendly environment for multidisciplinary professionals who can smoothly transition between different experiments. In this section, an experiment requiring a different mission controller approach is presented. To run this experiment, the only change needed is to substitute the mission/PID controller represented in Figure 3, marker 5 (the gray box). Details of this experiment follow.
A. Environment Setup
The environment state space was set as a discrete two-dimensional space. The agent, a quadcopter, was modeled as a translational robot in the XY plane. The environment was divided into a grid of cells, 10 in the X axis by 5 in the Y axis, each measuring 82 cm by 82 cm. This size was chosen due to the position-keeping limitations of the drones' flight-control software and gave the drones adequate room for small deviations without leaving a grid square. This standardized grid configuration simplified conversion from the meters-based position information of OptiTrack to the grid-based position information used in the MDP algorithm.
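The conversion essentially reduces to integer division by the cell size; a small sketch follows, which assumes for simplicity that the grid origin coincides with the lab-frame origin.

```python
# Sketch: converting OptiTrack meters to MDP grid cells and back.
# Assumes the grid origin coincides with the lab-frame origin; in
# practice an offset to the first cell's corner would be subtracted.
CELL = 0.82  # cell edge length in meters

def meters_to_grid(x, y):
    return (int(x // CELL), int(y // CELL))

def grid_to_meters(gx, gy):
    # Center of the cell, for use as a position setpoint
    return ((gx + 0.5) * CELL, (gy + 0.5) * CELL)

print(meters_to_grid(3.9, 1.1))  # -> (4, 1)
print(grid_to_meters(4, 1))      # -> (3.69, 1.23)
```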
The agent navigates based on a simple point-to-point path-
planning problem defined by the grid coordinates of the
aircraft, obstacles, goal, and possible states. The set of
actions for the agent was defined as the cardinal directions,
with four options for each state: move North, move South,
move East, and move West. A transition probability matrix
was implemented for the action set, with the probability of
transition set at 0.7 for the intended direction and 0.1 for
the other directions. While these probabilities overstate the
likelihood of moving in an unintended direction, there are
nonetheless variations in the quadcopter’s position-keeping
when changing speed or direction. Based on the chosen
transition probabilities, whenever the quad-copter is in a
grid square adjacent to an obstacle, the MDP algorithm
will calculate a non-zero probability of hitting that obstacle,
causing the aircraft to fly a conservative profile and steer
clear of obstacles when practicable.
B. MDP Algorithm
A generalized discrete value iteration and policy selection
algorithm was chosen for simplicity, as seen in Probabilistic
Robotics [14].
Algorithm MDP_discrete_value_iteration():
    for all $x$ do
        $\hat{V}(x) \leftarrow r_{\min}$
    endfor
    repeat until convergence
        for $i = 1$ to $N$ do
            $\hat{V}(x_i) \leftarrow \gamma \max_u \big[ r(x_i, u) + \sum_{j=1}^{N} \hat{V}(x_j)\, p(x_j \mid u, x_i) \big]$
        endfor
    endrepeat
    return $\hat{V}$

Algorithm policy_MDP($x$, $\hat{V}$):
    return $\operatorname{argmax}_u \big[ r(x, u) + \sum_{j=1}^{N} \hat{V}(x_j)\, p(x_j \mid u, x) \big]$

where:
$\hat{V}(x)$: new value after the next iteration, as a function of $x$
$V(x)$: value prior to the next iteration
$x$: state vector, with $x_j$ the next state and $x_i$ the current state
$r$: reward, as a function of $x$ and $u$
$u$: action taken (move north, south, east, or west)
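For concreteness, a compact Python sketch of this value iteration and policy selection over the 10x5 grid is given below. It mirrors the algorithm above with the reward and transition values of Section V-A, but it is not the authors' drone_MDP.py implementation described in the next subsection; the reward magnitudes are assumed.

```python
# Sketch: discrete value iteration and policy selection on the 10x5
# grid of Section V-A. Reward magnitudes are illustrative assumptions.
NX, NY, GAMMA, TOL = 10, 5, 0.9, 1e-5
ACTIONS = {'N': (0, 1), 'S': (0, -1), 'E': (1, 0), 'W': (-1, 0)}

def rewards(goal, obstacles):
    r = {(x, y): 0.0 for x in range(NX) for y in range(NY)}
    r.update({o: -100.0 for o in obstacles})
    r[goal] = 100.0
    return r

def next_state(s, d):
    nx, ny = s[0] + d[0], s[1] + d[1]
    return (nx, ny) if 0 <= nx < NX and 0 <= ny < NY else None

def q_value(V, r, s, a):
    # 0.7 probability of the intended move, 0.1 for each other one;
    # out-of-bounds moves are penalized like obstacles.
    total = r[s]
    for u, d in ACTIONS.items():
        p = 0.7 if u == a else 0.1
        n = next_state(s, d)
        total += p * (V[n] if n is not None else -100.0)
    return GAMMA * total

def value_iteration(r):
    V = {s: 0.0 for s in r}
    while True:
        newV = {s: max(q_value(V, r, s, a) for a in ACTIONS) for s in V}
        if abs(sum(newV.values()) - sum(V.values())) < TOL:
            return newV
        V = newV

def policy(V, r, s):
    return max(ACTIONS, key=lambda a: q_value(V, r, s, a))

r = rewards(goal=(8, 3), obstacles=[(2, 2), (5, 1), (5, 3), (9, 3)])
V = value_iteration(r)
print(policy(V, r, (0, 2)))  # first move from the start cell
```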
C. MDP Python File
A Python class was implemented in a script named drone_MDP.py. It consists of an MDP class definition and separate discrete value iteration and policy selection functions.
The separate value iteration and policy selection functions enabled the experiment to run the value iteration once (assuming a known, static environment) and select policies upon arrival at each state or, alternatively, to run both functions at each change of state (assuming incomplete knowledge that is progressively updated, or a dynamic environment).
After several methods which convert environmental elements' motion capture coordinates to grid coordinates, the next method of the class is the value iteration function, propagate, which implements the discrete value iteration algorithm. After assigning negative rewards to obstacles, positive rewards to the goal, and zero rewards to other squares in the grid space, propagate updates values based on transition probabilities to other grid squares (states). To discourage a solution which involves flying around the perimeter of the simulated grid space (and along the very real walls of the laboratory), the "out of bounds" squares surrounding the configuration space are given negative rewards equal to those of the obstacles.
The value iteration function employs a discount factor, $\gamma$, which is set to 0.9. Value iteration is repeated in a loop until the sum of all state values, or game value, converges; convergence in this implementation was defined as a difference in successive game values of less than $10^{-5}$.
Other methods were implemented for the policy selection function: find_policy, which creates an array of optimal policies indexed to the possible states using the policy_MDP algorithm above; and policy2nextGrid, which selects the policy from this array which corresponds to the current state. During flight, these functions are called continuously until the goal position is reached.
D. ROS Integration
To provide the necessary input to the MDP algorithm and transmit MDP outputs to the mission code, ROS was used as a global variable repository and transmission method. First, 3D position data of all environment objects is published from OptiTrack to any instances of the flight_MDP class, which contains the mission code. Next, flight_MDP converts that position data to XY grid coordinates, which are then published on another ROS topic. drone_MDP subscribes to this topic and uses the XY grid coordinates to calculate the optimal policy from that position, which it outputs in the form of the desired next XY grid coordinates. Finally, flight_MDP converts these 2D grid coordinates into a desired 3D position, which it passes to the flight controller to solve for a velocity vector and execute.
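The glue between the two classes reduces to a pair of topics; a sketch of this wiring is given below, where the topic names and the Int32MultiArray message choice are illustrative, not the paper's exact implementation.

```python
# Sketch of the flight_MDP <-> drone_MDP topic wiring. Topic names and
# the Int32MultiArray message choice are illustrative assumptions.
import rospy
from std_msgs.msg import Int32MultiArray

rospy.init_node('mdp_wiring_demo')
grid_pub = rospy.Publisher('/drone1/grid_position', Int32MultiArray,
                           queue_size=1)

def on_next_cell(msg):
    gx, gy = msg.data  # desired next cell chosen by drone_MDP
    rospy.loginfo('next waypoint cell: (%d, %d)', gx, gy)
    # flight_MDP would convert (gx, gy) back to a 3D position setpoint

rospy.Subscriber('/drone1/next_grid', Int32MultiArray, on_next_cell)
rospy.spin()
```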
VI. DEMONSTRATIVE EXPERIMENTS
A. Static Experiments
For static experiments, the MDP-based path planning
algorithm is implemented using start, goal, and obstacle
locations hard-coded prior to flight. These experiments
are designed to test the drone’s ability to calculate the
correct path and translate the optimal policy at each
state into the correct physical action, using OptiTrack
telemetry for position-keeping. In the static experiments,
all obstacles are stationary and known in advance, and
propagate is called just once at the beginning of the mission.
A sample static scenario can be seen in Figures 4 and 5. In this experiment the vehicle takes off from position [0,2], with obstacles at positions [2,2], [5,1], [5,3], and [9,3]. The goal is defined at position [8,3]. In this setup, the expected route is apparent, and the risk along the path remains low throughout the environment.
B. Dynamic Experiments
For dynamic experiments, no prior knowledge of the
operating environment is assumed; rather, markers on
the obstacles and goal are detected by OptiTrack and
communicated to the drone via ROS. The drone calls
propagate at a programmed rate of 2 Hz, enabling it to
react to changes in obstacle or goal configurations during
flight.
To demonstrate the possibilities of this capability, a scenario
was created in which one UAV, the ”infiltrator,” must find
a path past another UAV, the ”patrol,” while the patrol
maneuvers in a scripted fashion. The patrol UAV only
subscribes to its own position data and does not react to
the infiltrator. An image of the scenario layout can be seen
in Figure 6.
C. Results
1) Static Experiments: Repeated runs showed that the onboard processor can run propagate and find the optimal path in 32 to 88 ms with a convergence tolerance of $10^{-5}$.
The short processing time in this scenario indicates that
this path planning method can be applied to much larger
static environments without undue delay. One possibility
is to transition to a three-dimensional state space. Due to
limitations of the physical lab space in terms of verti-
cal maneuver, three-dimensional obstacles would have to
be relatively simple, but once an algorithm and control
scheme are in place, the same approach could be scaled to
larger environments. A challenge with implementing three-
dimensional obstacles in the current system architecture is
the requirement for the OptiTrack system to maintain line
of sight to the drones; any obstacles would need to be
sufficiently transparent (e.g. wireframe) to prevent loss of
tracking.
Fig. 4. Quadcopter maneuvering through a grid world using traditional dynamics and controls, Raspberry Pi, and ground station
Fig. 5. MDP-derived, north-up value map corresponding to the obstacle configuration in Figure 4. The UAV is indicated by the black 'X' and its path by green arrows. Note that the photo of the lab in Figure 4 is taken from approximately cell [2,0] of this map, looking east.
The video links below further illustrate the results of the static experiments:
Quad-Copter MDP Navigation Using ROS & OptiTrack: https://youtu.be/bfgOII7fjsA
Quad-Copter MDP Drone Follow: https://youtu.be/ijEeRP9L_Vc
Quad-Copter Selecting a Side When Given an Asymmetrical Obstacle Arrangement: https://youtu.be/lHi5uz9RgRc
2) Dynamic Experiments: The system architecture demon-
strated the ability to provide real-time updates to the MDP
value function to allow for dynamic environments or limited
prior knowledge of the operating area. The results of the
Infiltrator vs. Patrol experiment can be seen in Figures 6
and 7. Of note, at the 5.1 second recalculation point, the
propagation algorithm arrived at a local maximum at state
[1,4], which was only resolved once the patrol UAV moved
out of its [1,7] position. The same issue occurred at the 15.9
second recalculation point but again was resolved when the
patrol UAV changed position.
The problem of local maxima is a known issue with MDP
algorithms like the one which was implemented here, though
it was not observed each time this experiment was run.
The simple fact of variable outcomes is a useful feature of the live experimentation made possible by this system architecture, as a simulation with consistent initial conditions might not illuminate the problem as seen during live experimentation.
Fig. 6. Dynamic scenario in which the infiltrator must find a path past a maneuvering patrol. Using position data from OptiTrack, the infiltrator calculates a new MDP value map twice per second.
Fig. 7. Annotated value map produced during the dynamic experiment. The "infiltrator" is indicated by the black 'X', the "patrol" and its path are in yellow, and the time of each snapshot is listed on the left.
The video link below shows the drone behavior seen in the dynamic experiment:
Quadcopter MDP Navigation with Dynamic Obstacle
https://youtu.be/It7kkVyM2l0
VII. CONCLUSION
While implementing a system as described here requires some up-front cost and effort, the benefits to researchers are obvious. The time frame to set up all hardware and implement the software is around two weeks. The encapsulation of the PID control inside the decision-making process of the agent, with the aid of an API manual, enables researchers to work separately with their scripts before coming to the laboratory. Moreover, all that is required to execute a new experiment is to copy the relevant files onto the UAV companion computer and execute the Python script. Using a wireless connection from their personal computer, researchers can access the UAV companion computer directly to modify their code between runs. Currently, all data is saved in a CSV file containing all the objects' positions, orientations, velocities, and decision-making process data. This data is saved at the frequency of the system, currently 60 Hz, individually by the UAV and ground station, referencing a synchronized ROS clock. Due to the ease of modification and ready production of data, the time spent by each researcher in the laboratory is minimal; the potential for higher use rates and multidisciplinary research are significant advantages of this approach.
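As an example of this logging approach, the sketch below appends one row per cycle stamped with the synchronized ROS clock; the column layout and topic name are illustrative, and the actual logs also include orientation, velocity, and decision-making data.

```python
# Sketch: 60 Hz CSV logging stamped with the synchronized ROS clock.
# Column layout and topic name are illustrative assumptions.
import csv
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node('csv_logger')
latest = {'pose': None}
rospy.Subscriber('/mocap_node/drone1/pose', PoseStamped,
                 lambda m: latest.update(pose=m))

with open('drone1_log.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['ros_time', 'x', 'y', 'z'])
    rate = rospy.Rate(60)  # matches the system frequency
    while not rospy.is_shutdown():
        if latest['pose'] is not None:
            p = latest['pose'].pose.position
            writer.writerow([rospy.Time.now().to_sec(), p.x, p.y, p.z])
        rate.sleep()
```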
The limits of this architecture have yet to be explored. Results demonstrate that an update frequency of 60 Hz synchronized across all independent entities in the system is reasonable for small numbers of agents. This was achieved while maintaining a network latency below 5 ms with multiple agents flying at the same time. The system can run at higher frequencies, up to 240 Hz; however, latency needs further exploration when operating larger numbers of agents.
The combination of OptiTrack motion capture camera ar-
rays, ROS communication protocols, and distributed Rasp-
berry Pi / PixHawk controllers has proved to be a powerful
and flexible tool for experimentation. This project success-
fully applied this tool to 2D navigation in a 3D space,
providing experimental validation of onboard path planning
algorithms. The success and relative ease of operation of this
approach should prove encouraging to researchers who wish
to see their work emerge from simulation into the physical
domain.
REFERENCES
[1] Robot Operating System, "Getting started with ROS," December 2018. [Online]. Available: http://www.ros.org/
[2] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning series). Cambridge, MA: MIT Press, 1998.
[3] PixHawk, "Pixhawk." [Online]. Available: http://pixhawk.org/
[4] Raspberry Pi Foundation, "Raspberry Pi," October 2018. [Online]. Available: https://www.raspberrypi.org/
[5] OptiTrack Systems, "OptiTrack applications," November 2018. [Online]. Available: https://optitrack.com
[6] ArduPilot Developer Site, "Communicating with Raspberry Pi via MAVLink." [Online]. Available: http://ardupilot.org/dev/docs/raspberry-pi-via-mavlink.html
[7] PX4 Developer Site, "Using vision or motion capture systems for position estimation." [Online]. Available: https://dev.px4.io/en/ros/external_position_estimation.html
[8] OptiTrack Systems, "Optical motion capture software." [Online]. Available: https://optitrack.com/products/motive/
[9] DroneCode, "MAVROS." [Online]. Available: https://dev.px4.io/en/ros/mavros_installation.html
[10] Robot Operating System, "Geometry messages." [Online]. Available: http://docs.ros.org/lunar/api/geometry_msgs/html/index-msg.html
[11] Z. Li, "mocap_optitrack," October 2018. [Online]. Available: http://wiki.ros.org/mocap_optitrack
[12] Robot Operating System, "TF package." [Online]. Available: http://wiki.ros.org/tf
[13] Cisco Systems, "Introduction to IP Multicast," 2006.
[14] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, Massachusetts: The MIT Press, 2005.
BIOGRAPHIES
Gustavo Gargioni received his B.S. degree in Mechanical/Industrial Engineering from Instituto Maua de Tecnologia (Brazil) in 2002 and a business specialization from Fundacao Getulio Vargas (Brazil) in 2005. Since college, he has pursued a career as a software and industrial entrepreneur. While CEO of Ecoplasticos Industria de Reciclagem Ltda, he received the 2011 Environmental Award from Federacao das Industrias do Estado da Bahia (FIEB, Brazil). Since 2017, he has been an Aerospace Engineering Ph.D. student in the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech and a Graduate Research Assistant for the Hume Center for National Security and Technology and for the Center for Space Science and Engineering Research (Space@VT). His current research activities include on-orbit servicing focused on autonomous unmanned aerospace systems.
Marco Peterson received his B.S. and M.S. degrees in Computer Science from Virginia State University in 2015. He is also a graduate of the United States Army's Aviation Operations and Flight Schools and currently serves as a rated rotary-wing pilot. He is currently pursuing a Ph.D. in Aerospace Engineering in the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech with a research focus on unmanned aerial platforms.
J.B. Persons received his B.S. degree in Mechanical Engineering from the Massachusetts Institute of Technology in 2006. After graduation, he served in the US Marine Corps as an F/A-18 pilot and S&T Branch Head with the Marine Corps Warfighting Laboratory. Since 2018, he has been a direct-Ph.D. student in the Bradley Department of Electrical and Computer Engineering and a Graduate Research Assistant for the Hume Center for National Security and Technology at Virginia Tech.
Kevin Schroeder, Ph.D., is a research faculty member in the Kevin T. Crofton Department of Aerospace and Ocean Engineering at Virginia Tech. Kevin received his B.S. in Engineering from Oral Roberts University in 2014 and went on to receive his Ph.D. in Mechanical Engineering from Virginia Tech in 2017. As a Ph.D. candidate, Kevin studied Entry, Descent, and Landing (EDL) systems and became a NASA Innovative Advanced Concepts Fellow for his invention of TANDEM. Following graduation, Kevin was hired by Virginia Tech to work in the Center for Space Science and Engineering Research (Space@VT). Currently, Kevin serves as the technical lead on multiple projects with a focus on astrodynamics, optimal decision processes, and autonomy of dynamical systems.
Jonathan Black, PhD is a Professor in the Kevin
T. Crofton Department of Aerospace and Ocean
Engineering at Virginia Tech (VT), the Director
of the Aerospace Systems Lab of the Ted and
Karyn Hume Center for National Security and
Technology, a member of the Center for Space
Science and Engineering Research (Space@VT),
and the Northrop Grumman Senior Faculty Fel-
low in C4ISR. Prior to joining VT, Dr. Black
served as a faculty member in the Aeronautics
and Astronautics department at the Air Force
Institute of Technology (AFIT), Wright-Patterson Air Force Base, Ohio. Dr.
Black’s research interests include space and atmospheric vehicle dynamics,
linear and nonlinear control theory, autonomous vehicle design, struc-
tures, structural dynamics, advanced sensing technologies, space systems
engineering, and novel orbit analysis for a wide variety of military and
intelligence applications including large lightweight space structures, micro
UAV development, and taskable satellites.