TALOS: An Unmanned Cargo Delivery System
for Rotorcraft Landing to Unprepared Sites
J. Paduano, J. Wissler, G. Drozeski,
M. Piedmonte, N. Dadkhah, J. Francis, C.
Shortlidge, J. Bold, F. Langford,
M. Chaoui, C. J. Liu, E. Foster
jpaduano@aurora.aero
Aurora Flight Sciences, Inc.
Manassas, VA
S. Singh, L. Chamberlain, B. Hamner,
H. Cover, A. Stambler, A. Singh,
S. Nalbone, M. Bergerman
ssingh@nearearth.aero
Near Earth Autonomy
Pittsburgh, PA
S. Scherer, S. Choudhury, S. Maeta,
S. Arora, D. Althoff, D. Maturana
basti@cmu.edu
Carnegie Mellon University
Pittsburgh, PA
D. Limbaugh, J. Bona,
D. Barnhard, D. Chessar
dlimbaugh@kutta.com
Kutta Technologies, Inc.
Phoenix, AZ
D. Mindell
mindell@mit.edu
Massachusetts Institute
of Technology
Cambridge, MA
C. Dominguez1, B. Moon2
R. Strouse3, L. Papautsky
lpapautsky@ara.com
Applied Research Associates, Inc.
Dayton, OH
D. Cerchie, B. Chu, J. Graham, C.
Cameron, M. Hardesty, R. Hehr
christine.cameron@boeing.com
The Boeing Company
Phoenix, AZ
ABSTRACT
This paper describes the Tactical Autonomous Aerial LOgistics System (TALOS), developed and flight tested
during the first phase of the Office of Naval Research (ONR) Autonomous Aerial Cargo/Utility System (AACUS)
Innovative Naval Prototype (INP) program. The ONR vision for AACUS is to create a retrofit
perception/planning/human interface system that enables autonomous take-off, flight, and landing of a full-scale
rotary-wing aircraft to and from austere, possibly-hostile landing zones, in a tactical manner, with minimal human
supervision. The goal of the AACUS system is to enable any Marine to supervise this capability from an intuitive
field interface (e.g. a smart phone, tablet, or military radio/interface). A key feature of the TALOS team’s
realization of the AACUS goal is its portability to any rotorcraft of sufficient payload capacity. It uses mostly-COTS
hardware and next-generation autonomy software, and employs modular, open interfaces, so that (1) new software
and hardware components can be integrated relatively easily and (2) perception, planning, and human interface
capabilities are “transition-able” to other applications and aircraft. This paper gives an overview of the system
architecture, the interaction between the TALOS components, and the demonstration results.
1 Currently at MITRE Corporation, cinguez234@gmail.com
2 Currently at Perigean Technologies LLC, brian@perigeantechnologies.com
3 Currently at Clutch Corporation, rstrouse@meetclutch.com
ACRONYMS
AACUS Autonomous Aerial Cargo/Utility System
AES AACUS Enabled System
ASR Assault Support Request
AVO Air Vehicle Operator
CONOPS Concept of Operations
COP Combat OutPost
COTS Commercial Off-The-Shelf
CoT Cursor on Target
EO Electro-Optical
FCS Flight Control System
FO Field Operator
FTP Flight Test Period
GCS Ground Control Station
HSI Human-Systems Interface
ICD Interface Control Document
INP Innovative Naval Prototype
IR Infrared
LP Landing Point
LZ Landing Zone
MOB Main Operating Base
MOSA Modular Open Software Architecture
NFZ No Fly Zone
RASCAL Rotorcraft Aircrew Systems Concept
Airborne Laboratory
SAV Safe Air Volume
SME Subject Matter Expert
STANAG NATO STANdardization AGreement
TALOS Tactical Autonomous Aerial LOgistics System
TZ Touchdown Zone
UAS Unmanned Aircraft System
ULB Unmanned Little Bird
VDL Vehicle Dynamics Lookup
VMS Vehicle Management System
VSM Vehicle Specific Module
VTOL Vertical Take-Off and Landing
INTRODUCTION
The main objective for AACUS is to deliver cargo by rotorcraft in unmapped, unprepared and potentially hostile
areas. As opposed to current Unmanned Aircraft Systems (UAS) missions, the Concept of Operations (CONOPS)
requires flight close to the terrain and self-determination of suitable landing locations. Little can be taken for
granted—not only are terrain maps not guaranteed to be accurate, but even designated landing sites might not be
suitable, because the specified landing point is too sloped or too rough, or has been designated in error. One of
the most significant program requirements with far reaching implications for admissible solutions is that landing
(even to a previously unmapped site) must be done without overflight.
This paper describes the system architecture, component functionalities, flight test results, and way forward for
AACUS/TALOS. AACUS advances the state of the art in autonomous operations in the areas of mission/route/
trajectory planning, human-system interfaces, and perception, enhancing complex resupply missions.
Background
Autonomous helicopter research, development, and flight testing have been active since the early 1990s (Ref. 1), and methodologies to perceive, localize, and avoid obstacles using lidar have been studied since around 2000
(Refs. 2-6). Perception and motion planning algorithms have been undergoing detailed development ever since
(Refs. 7-14), culminating in demonstration experiments on aircraft such as the Boeing Unmanned Little Bird (ULB,
a modified MH-6), the Sikorsky MATRIX, and the Army AMRDEC RASCAL UH-60 (Ref. 16). Detailed
algorithm development for perception, landing zone evaluation, safe landing area determination, trajectory planning,
trajectory following, and obstacle detection and avoidance have all been performed, and reported in the cited
publications. Much of this technology was incorporated into TALOS, and is not described in detail here. Instead,
this paper focuses on describing an integrated system that (1) interacts effectively with human operators, (2) retrofits
onto current and future unmanned rotorcraft, (3) uses open interfaces to enable different algorithms and components
to be ‘swapped out’, and (4) performs faster, more aggressive missions with less input from ground users by pushing
the state of the art in human interfaces, architectures for independent and human-collaborative decision-making,
perception, trajectory planning, and route planning. In addition, this paper augments other AACUS publications,
which are cited throughout this document, by detailing specific results achieved at the official ONR
AACUS/TALOS demonstrations on February 26-27, 2014, at Marine Corps Base Quantico, Virginia.
Organization
This paper is organized around the architecture developed for TALOS (Figure 1), so we begin by briefly describing it. A modular architecture enables the participation of our widely-dispersed team of experts in Human-System Interfaces (HSIs), Perception, and Planning. Subsystems (yellow blocks) are tied together through formal interfaces (blue circles) that are described by Interface Control Documents (ICDs) governing the interaction between the various components. This serves two purposes: first, it conforms to the Modular Open Software Architecture (MOSA) concept of providing information about how each component of the system performs its function, as well as its inputs and outputs. Second, it allows technology providers to upgrade on a component-by-component basis, delivering higher value to the government over the long term. The functional breakdown created by the TALOS team is shown in Figure 1.

Figure 1 TALOS architecture, showing major components and open interfaces. HSIs exist at the Combat OutPost (COP) and the Main Operating Base (MOB); planning includes the Mission Manager, Route Planner, and Trajectory Planner; and Perception includes sensor hardware integration, sensor fusion, and perceptive processing. Vehicle Dynamics Lookup encapsulates vehicle-specific data. Typical UAS elements are also shown; the sum total of the modules in this diagram is called the AACUS-Enabled System (AES).
Human-System Interfaces (HSIs): For the AACUS mission, a Field Operator (FO) at the COP where supplies are
needed is provided with a simple interface to request supplies, monitor the mission, and manage (at a high level) the
final stages of the helicopter approach and landing. An Air Vehicle Operator (AVO) at the MOB has supervisory
control of the aircraft through a Ground Control Station (GCS), and is provided with special displays and controls to
enable the unique aspects of the AACUS mission.
Planning Systems: Planning consists of three levels, all of which are onboard the aircraft. The top-level planning
and decision-making is handled by a Mission Manager, which sequences the system through the mission phases,
communicates with the ground, and handles contingencies initiated either on the ground or by pre-existing vehicle
management functions. The Mission Manager communicates with the Route Planner, which provides onboard
waypoint planning to carry out the desired mission. Finally, the Trajectory Planner is responsible for following the
planned route while addressing dynamic feasibility and responding to obstacles detected by the Perception system.
Perception System: The Perception system is the component of TALOS that includes not only software, but also
integrated sensors. The Perception system on TALOS consists of a scanning lidar, electro-optical (EO) and infrared
(IR) cameras, tightly-integrated GPS-INS for precise and accurate registration of lidar returns in inertial space, and
software for real-time processing of lidar and image data to detect obstacles en route and characterize the Landing
Zone (LZ). The Perception system must find a suitable landing site at a location that has never been visited before –
the FO is not responsible for preparing or judging feasibility of the landing site, so the Perception system must
provide this capability. Because some reactive functions must happen quickly, the Perception system and Trajectory
Planner contain some of the key autonomous functionalities of TALOS, as discussed in the Perception section.
This broad categorization of elements has been used to organize the development of TALOS, which is designed to
be retrofitted onto any STANAG 4586-compliant unmanned helicopter. For this effort, the Boeing Unmanned Little
Bird (ULB) was chosen as the demonstration platform.
The remainder of the paper is organized as follows: first, an AACUS mission is described, providing terminology
and motivating system requirements. Next, the planning systems are described, as these naturally flow from the
mission description. Human interfaces are next described, emphasizing the unique aspects of the problem and the
TALOS solution. Perception requirements and approach are then described, with particular emphasis on the
requirements and autonomous behavior during an aggressive approach to land. Finally, integration and testing on
the ULB is described, and demonstration results are provided.
MISSION DESCRIPTION
An AACUS mission is a substantial extension of a typical UAS mission. It begins with a delivery request from an
FO at an austere location, entered using the FO handheld device (a smart phone or tablet, as is projected to be part of
the future Marine’s standard equipment), or by radio. The request initiates an Assault Support Request (ASR) which
is managed by the AVO at the MOB. The request conforms to the ASR standards, and consists of information about
the Landing Point (LP – the precise GPS location where the helicopter is requested to land), the conditions at the
landing site (wind, threat zones, etc.), and the desired supplies. The ASR is handled using Cursor on Target (CoT)
protocols, which allow both the FO and the AVO to maintain situation awareness about the status of the request.
The AVO uses the TALOS automated route planning capabilities to plan the mission, from take-off at the MOB to
landing at the COP. After the vehicle is loaded and cleared for take-off, the AVO commands the launch and
monitors take-off, climb-out, and cruise to the desired location. As the rotorcraft nears the beginning of the approach, it requests permission to land – at this point the FO switches roles, from monitoring mission progress to
interacting with the AACUS-Enabled System (AES) – that is, the aircraft and associated onboard autonomy
functions.
En route to the COP, the AES behaves much like a typical UAS, flying a waypoint-based mission that conforms to
current and future UAS operational requirements. However, additional decision-making capabilities are provided,
which enable the AES to handle contingencies, provide for re-planning, and respond to pop-up No Fly Zones
(NFZs) and other events. Handling these next-generation mission capabilities ensures that TALOS is a fully
functional, end-to-end autonomous system.
The final approach and landing phase of the mission is the most demanding, and motivates many of the specialized
capabilities of TALOS. Because an austere LZ may include areas that are less desirable for landing, TALOS is
responsible for scanning the Touchdown Zone (TZ) – the immediate (e.g. 30 ft. radius) area around the FO-
designated LP, to determine if it is suitable for landing. Rocks and other obstacles, as well as terrain deviations
(slopes, ditches, small ridges, etc.), may make the TZ unsuitable. In this case, TALOS evaluates the entire LZ to
find alternate TZs, and interacts with the FO to ensure his/her awareness and acceptance of a safe alternate.
TALOS is designed to enable a tactical approach in dangerous situations – as such, it will direct the UAS to fly low
and fast toward the LZ. This requires the system to be aware of towers, trees, buildings, and other obstructions
between the vehicle and the desired LP, and to maneuver around these obstacles. Further, the AES must avoid
threat zones designated by the FO and AVO, including those that ‘pop up’ after the mission is planned. Finally,
several contingency situations exist that require the AES to either ‘wave off’ from the approach, or to abort the
mission altogether. These capabilities require coordinated utilization of the Perception system (e.g. to steer the
sensor between LZ evaluation and obstacle avoidance), aggressive trajectory planning, and decision-making
functions to handle contingencies, all while maintaining AVO situation awareness through the MOB GCS, and FO
situation awareness at the COP through the handheld interface.
Once the AES, in concert with the FO, has chosen a suitable TZ, maneuvered to a LP somewhere within the TZ, and
landed, the AES alerts the FO that the rotorcraft is ready to be unloaded. After unloading and clearance to take-off
from the FO, the AES takes off and departs from the COP. The AVO maintains supervisory control at all times, and
may alter the parameters governing the return-to-base flight route if necessary.
We now describe the TALOS components that enable this AACUS mission.
PLANNING SYSTEMS
TALOS is designed to retrofit to a fully-capable Vertical Take-Off and Landing (VTOL) UAS that can perform
standard UAS missions using the STANAG-4586 message set.

Figure 2 AACUS mission. The Main Operating Base is on the horizon, and the Landing Zone contains various obstacles and terrain characteristics that must be evaluated during a high-speed tactical approach.

TALOS capitalizes on the UAS ability to perform pre-flight, to take off to a low hover (which could be as low as 1 ft. off the ground), and to land from a low hover on command4. However, during flight (between
these two low-hover states), TALOS replaces the
existing UAS waypoint-based outer control loops with
trajectory-based control, which provides more
aggressive, dynamically-feasible paths for the rotorcraft
to follow, enabling maneuvering near the LZ and in-
flight obstacle avoidance in response to detected
obstacles (trees, buildings, power lines, etc.).
Furthermore, TALOS provides mission-level autonomy
capabilities to support complex end-to-end missions
and next-generation route planning functionality.
The TALOS planning architecture is broken up into
three planning modules: the Mission Manager, which
provides high-level planning coordination and decision-
making; the Route Planner, which performs waypoint-
based planning; and the Trajectory Planner, which
plans trajectories for precise, aggressive maneuvering
and obstacle avoidance. Figure 3 shows the flow of information. High-level goals are defined by the AVO and
uplinked by the MOB GCS via STANAG-4586. The goal points are then provided to the Route Planner, which
generates the full set of waypoints describing the route. Routes can also be planned on the ground and uplinked, as
is common in current UAS. Waypoints are sent to the Trajectory Planner, which uses them as the basis for
perception-driven trajectory planning. Trajectories are transmitted to the UAS Flight Control System (FCS) as splines. More details on each stage of this process are provided below.
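As a concrete illustration of this top-down flow, the minimal Python sketch below passes AVO goal points through a stub Route Planner and Trajectory Planner to produce spline segments for the FCS. All type and function names are hypothetical simplifications; the real STANAG-4586 messages and spline ICD are far richer.

```python
# Illustrative sketch (not the TALOS ICDs): how goal points flow down
# through the planning stack. All names and message shapes here are
# hypothetical simplifications.
from dataclasses import dataclass
from typing import List

@dataclass
class GoalPoint:          # AVO-defined, uplinked via STANAG-4586
    lat: float
    lon: float
    alt_ft: float

@dataclass
class Waypoint:           # Route Planner output
    lat: float
    lon: float
    alt_ft: float

@dataclass
class SplineSegment:      # Trajectory Planner output, sent to the FCS
    coeffs: List[float]   # polynomial coefficients for one leg
    duration_s: float

def plan_route(goals: List[GoalPoint]) -> List[Waypoint]:
    # Placeholder: a real route planner inserts extra waypoints to
    # stay inside the SAV and outside NFZs.
    return [Waypoint(g.lat, g.lon, g.alt_ft) for g in goals]

def plan_trajectory(wps: List[Waypoint]) -> List[SplineSegment]:
    # Placeholder: a real trajectory planner fits dynamically feasible
    # splines through the waypoints, deviating around sensed obstacles.
    return [SplineSegment(coeffs=[0.0] * 6, duration_s=10.0) for _ in wps]

goals = [GoalPoint(38.52, -77.31, 500.0), GoalPoint(38.55, -77.35, 300.0)]
spline = plan_trajectory(plan_route(goals))
print(f"{len(spline)} spline segments ready for the FCS")
```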
Mission Definition and Route Planning. Most UAS follow waypoints that are planned on the ground, either by
hand or using a route planning tool. This prevents the system from intelligently re-planning in the case of
communication outage, or other contingencies, such as pop-up NFZs, which require quick on-line reaction. TALOS
overcomes these limitations by providing onboard route planning. Onboard route planning has been made viable for
current UAS by Kutta Inc., a member of our TALOS team that developed the onboard Route Planner used for
TALOS. The Kutta Route Planner and integration methodology were previously demonstrated on the Shadow UAS.
This Route Planner provides routes that keep the AES inside an AVO-defined ‘Safe Air Volume’ or SAV, which
could be a set of narrow corridors of operation, or, as in Figure 4, a more permissive planning volume. It
automatically plans within this volume, around NFZs,
and through pre-defined points in the mission, which
are called goal points. Figure 4 shows a typical plan,
which has several goal points (including a take-off
point, an approach point, and a landing point) and
several automatically-generated waypoints that are
incorporated to ensure that the aircraft does not violate
the SAV or NFZs. By automating waypoint
generation and integrating the Route Planner on board
the AES, the system can re-plan from any point along
the route, even if communications or time-to-react is
limited.
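The sketch below illustrates the underlying idea, assuming a simple grid world: a route is searched inside a bounded volume (standing in for the SAV) while keeping out of blocked cells (standing in for NFZs). This is generic A* search for illustration only, not the Kutta Route Planner's algorithm, which is not described here.

```python
# A minimal route-planning sketch: A* over a coarse grid, where the
# grid bounds stand in for the Safe Air Volume and blocked cells stand
# in for No Fly Zones.
import heapq

def astar(start, goal, blocked, size=20):
    frontier, came = [(0, start)], {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            break
        x, y = cur
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue  # outside the SAV
            if nxt in blocked:
                continue  # inside an NFZ
            new = cost[cur] + 1
            if new < cost.get(nxt, float("inf")):
                cost[nxt] = new
                h = abs(goal[0] - nxt[0]) + abs(goal[1] - nxt[1])
                heapq.heappush(frontier, (new + h, nxt))
                came[nxt] = cur
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came[node]
    return path[::-1]

nfz = {(10, y) for y in range(5, 15)}      # a wall-like NFZ
print(astar((0, 10), (19, 10), nfz))       # route detours around it
```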
Most currently fielded military UAS perform route-
planning on the ground, and many use STANAG-4586
to up-link the resulting missions to the UAS. After
receiving the waypoints, the onboard Vehicle Management System (VMS) sequences them to the flight control system.
4 The ultimate goal of the AACUS program is to command the rotorcraft all the way to touchdown, but this was not
implemented during Phase 1. Instead, the ULB FCS performed the final landing (no human input).
Figure 3 TALOS planning flow, emphasizing the top-
down message flow through the various components
discussed in the text.
Figure 4 TALOS framework for mission and route
planning contains goal points, waypoints, a Safe Air
Volume (SAV, green area), and any number of pre-
planned and/or pop-up No-Fly Zones (NFZs, red areas).
Goal points are cyan circles, waypoints are cyan dots.
TALOS is a retrofit system that interfaces
with the VMS and FCS as shown in Figure 1. Modest modifications to the VMS are required to enable it to pass
STANAG messages to TALOS, and to forego sending waypoints to the FCS, instead putting the FCS in ‘TALOS
mode’. In this mode the FCS follows trajectories (discussed below) instead of waypoints. This allows TALOS to
use perceived information to alter the path that the aircraft takes – instead of blindly traveling through the waypoints
that make up the route plan, it intelligently achieves the desired goal points while taking into account AVO-defined
constraints, perceived obstacles, and landing zone conditions.
Mission Manager. Note that the flow of information in Figure 3 does not precisely correspond to the
arrows shown in Figure 1. Instead, the Mission Manager acts as the data hub, communicating with all the major
systems and marshaling information as needed for the phase of flight, contingency scenario, and real-time input
from the Perception system, the FO through the handheld interface, and the AVO through the MOB GCS. Under
nominal operations, the Mission Manager sequences the system from take-off to en route cruise to landing, feeds waypoints to the Trajectory Planner as the mission proceeds, monitors vehicle location, requests permission to land from the FO at the COP at the appropriate time, etc. When contingencies and off-nominal situations arise5, the Mission Manager applies its mission-level decision-making logic and manages communications between the ground elements and, for instance, the Perception system, relaying LZ evaluation information to the user.
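A minimal sketch of phase sequencing of this kind is shown below. The states, events, and transitions are invented for illustration; the actual Mission Manager logic, including its contingency handling, is considerably richer.

```python
# Hypothetical sketch of Mission Manager phase sequencing; states and
# transitions here are illustrative only.
from enum import Enum, auto

class Phase(Enum):
    PREFLIGHT = auto()
    TAKEOFF = auto()
    ENROUTE = auto()
    REQUEST_LANDING = auto()   # ask the FO for permission to land
    APPROACH = auto()
    LANDED = auto()
    WAVE_OFF = auto()

def next_phase(phase: Phase, event: str) -> Phase:
    transitions = {
        (Phase.PREFLIGHT, "launch_cmd"): Phase.TAKEOFF,
        (Phase.TAKEOFF, "climb_out_done"): Phase.ENROUTE,
        (Phase.ENROUTE, "near_approach_point"): Phase.REQUEST_LANDING,
        (Phase.REQUEST_LANDING, "fo_consent"): Phase.APPROACH,
        (Phase.REQUEST_LANDING, "fo_wave_off"): Phase.WAVE_OFF,
        (Phase.APPROACH, "touchdown"): Phase.LANDED,
        (Phase.APPROACH, "fo_wave_off"): Phase.WAVE_OFF,
    }
    return transitions.get((phase, event), phase)  # ignore irrelevant events

p = Phase.PREFLIGHT
for ev in ["launch_cmd", "climb_out_done", "near_approach_point",
           "fo_consent", "touchdown"]:
    p = next_phase(p, ev)
print(p)  # Phase.LANDED
```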
Trajectory Planning. In virtually all current UAS, a route like the one shown in Figure 4 is the final result of the
planning process, and the waypoints are sent directly to the FCS e.g. in a ‘from-to-next’ format. The aircraft ‘fairs
in’ a pathway that smoothly transitions between the route legs, flying near, over, or around the waypoints as
determined by the waypoint type. This method works well when waypoints are many hundreds of feet apart, and the
route is at high altitude with no obstacles or sudden appearance of threat zones.
Flying at low altitudes in tactical situations and landing at unprepared sites in austere, perhaps mountainous or
cluttered (e.g. urban) terrain, where threats and obstacles may appear at any time, requires a next-generation,
reactive capability for path planning and following. TALOS provides an open architecture framework and
associated interfaces to provide this capability, and the Trajectory Planner developed by Carnegie Mellon University
(CMU) provides the integrated solution to the trajectory planning problem within the TALOS framework. The
inputs to the Trajectory Planner are similar to those of a typical route-following FCS, but the output is in the form of
a dynamically feasible spline. A spline is a general representation of a curve through space, which is easily adapted
to a variety of FCS tracking methods, such as those used by the ULB and the RASCAL. Aurora, CMU, and Boeing
developed an open ICD that allows for communication of dynamically-changing splines through a MIL-STD-1553
interface to the ULB FCS. The ULB follows points along this spline using a proprietary trajectory following
method that does not require changes to the inner, safety-critical loops of the FCS.
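The sketch below shows what following a spline means at the FCS interface, assuming a simple per-axis cubic polynomial representation: the FCS can sample the curve at whatever rate its tracking loops require. The coefficient layout and units are invented; the ULB's actual 1553 spline ICD and tracking method are proprietary.

```python
# A spline, as used here, is a piecewise-polynomial curve the FCS can
# sample at its own rate. This sketch evaluates one assumed cubic
# segment per axis.
def eval_cubic(c, t):
    """Evaluate c0 + c1*t + c2*t^2 + c3*t^3 (Horner form)."""
    return ((c[3] * t + c[2]) * t + c[1]) * t + c[0]

# One 10-second segment: north/east/down position coefficients (ft).
segment = {
    "duration_s": 10.0,
    "north": [0.0, 50.0, 0.0, -0.1],   # decelerating northward motion
    "east":  [0.0, 0.0, 1.0, 0.0],
    "down":  [-500.0, 2.0, 0.0, 0.0],  # shallow descent from 500 ft
}

for t in (0.0, 5.0, 10.0):  # FCS samples the curve along the segment
    pos = tuple(eval_cubic(segment[a], t) for a in ("north", "east", "down"))
    print(f"t={t:4.1f}s  NED position (ft): {pos}")
```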
As shown in Figure 1, the Trajectory Planner takes inputs from three components: (1) The Mission Manager, which
provides the route points from the Route Planner, as well as mission phase, contingency information, and relayed
information from the ground control systems; (2) The Perception system, which provides a 3-dimensional map of
detected obstacles in the environment, as well as LZ evaluation results; and (3) The Vehicle Dynamics Lookup
(VDL) module, which provides information about the rotorcraft’s system dynamics. The VDL is broken out as a
separate module for portability: only the VDL changes when TALOS is moved to a different rotorcraft.
The Trajectory Planner performs a set of parallel, on-line optimizations to determine the best dynamically feasible
path, around the obstacles and other constraints, that achieves the waypoints provided by the Route Planner. This
problem is complicated by several factors. First, the perceived environment is constantly changing as the aircraft
flies through the environment and new obstacles are discovered. Second, if an aggressive maneuver is required, it
must be planned quickly (within about 1 second) and be dynamically feasible. Finally, the aircraft cannot
instantaneously track a new path, so the Trajectory Planner must respect a ‘look-ahead’ or transition time; new paths
must smoothly transition or ‘branch’ from the path that has already been sent to the FCS, with sufficient advance
notice for FCS algorithms to adjust. Other Trajectory Planning features are detailed in References 17, 18, and 19, and include (1) the ability to apply parallel, alternate trajectory planning techniques in an ‘ensemble’ and choose the best solution, (2) guaranteed safety through the inclusion of a library of deterministic safety maneuvers, and (3) specialized techniques to enable landing into difficult environments with winds. Figure 5 shows a typical result.

5 Situations that require immediate reaction are handled by the Trajectory Planning and Perception systems, which work together to ensure the safety of the aircraft. Thus some lower-level decision-making resides with these systems, rather than with the Mission Manager.
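The sketch below illustrates, with invented planners and costs, the two ideas above that are easiest to show compactly: running alternate planners in parallel and keeping the best collision-free result, while honoring a commit horizon already promised to the FCS. It is not the planner of References 17-19.

```python
# Invented illustration of ensemble planning and the commit horizon;
# the real planners, costs, and clearance checks differ.
import random

COMMIT_HORIZON_S = 1.0  # time already promised to the FCS

def planner_a(seed):    # stand-ins for distinct planning techniques
    random.seed(seed); return [random.random() for _ in range(5)]

def planner_b(seed):
    random.seed(seed + 1); return [random.random() * 0.8 for _ in range(5)]

def cost(traj):         # invented cost: path-length proxy
    return sum(traj)

def collision_free(traj):
    return max(traj) < 0.95   # invented clearance check

def plan_ensemble(seed):
    candidates = [p(seed) for p in (planner_a, planner_b)]
    feasible = [t for t in candidates if collision_free(t)]
    return min(feasible, key=cost) if feasible else None

best = plan_ensemble(42)
print("branching after", COMMIT_HORIZON_S, "s onto:", best)
```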
PERCEPTION SYSTEM
The Perception system and Trajectory Planner work
hand-in-hand, at high bandwidth, and perform critical
decision-making functions without significant
interaction with the higher-level Mission Manager.
Near Earth Autonomy and CMU have described this
capability in References 4, 5, 8, and 11-15. Here we
focus on the ‘end game’ of approach and landing,
which is the most critical stage of perception and
trajectory planning, because the AES is most likely to
encounter obstacles, and must quickly evaluate the LZ
and decide where to land. For the AACUS program, the TALOS team was required to perform a high-speed
approach along a shallow flight path, similar to the way a pilot would approach to minimize exposure to enemy fire.
This further pushed the Perception system and associated autonomous interaction with the Trajectory Planner in this
critical phase of the resupply mission. For this reason, we describe the end game in detail, to highlight the
Perception system functionality and interaction with the rest of the TALOS components.
Figure 6 shows the details of an approach, for a 5-degree profile. Ground speed during most of the missions demonstrated by TALOS is 100-120 knots, with the aircraft decelerating as it reaches the final phase shown here, so that at 50 seconds from touchdown, velocity has decreased to ~30 knots and the system is decelerating at about 2.5 ft/s². At approximately 90 seconds from landing, the Perception system transitions from scanning the immediate
area in front of the aircraft (to sweep out a large safety region in the direction of flight) to focused scanning of the
LZ. The LZ is a large area (150-500 ft. in radius) near where the operator has requested that the system land; the
goal is to land precisely where requested, in the center of this region. However, TALOS scans over the wider LZ
region to maximize the opportunity for a successful mission, realizing that (1) the FO may not have had sufficient
opportunity to survey the LZ; (2) conditions in the LZ or within the specific TZ chosen may have changed; and (3)
landing within a few hundred feet of the requested Landing Point is often acceptable in an operational setting.
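For a rough sanity check of these figures, the arithmetic below (a back-of-the-envelope sketch, not flight data) propagates the quoted ~30 knots and ~2.5 ft/s² deceleration to a stopping time and distance, and the corresponding altitude on a 5-degree profile.

```python
# Back-of-the-envelope check of the end-game numbers quoted in the
# text; purely illustrative arithmetic.
import math

KN_TO_FPS = 1.68781          # knots to feet per second
v0 = 30 * KN_TO_FPS          # ~50.6 ft/s at 50 s from touchdown
decel = 2.5                  # ft/s^2

t_stop = v0 / decel                       # ~20 s to decelerate to hover
d_stop = v0**2 / (2 * decel)              # ~512 ft along track
alt = d_stop * math.tan(math.radians(5))  # altitude on a 5-deg slope

print(f"time to stop: {t_stop:.0f} s")
print(f"along-track distance: {d_stop:.0f} ft")
print(f"altitude at that range on a 5-deg profile: {alt:.0f} ft")
```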
Figure 5 Obstacle avoidance scenario. During a test flight on the ULB, the Trajectory Planner successfully planned a trajectory around an obstacle detected by the Perception system. Inset shows the real-time Perception system output and planned trajectory. The ULB received and tracked this maneuver in real time, avoiding the peak.

Figure 6 End-game scenario. In the case shown, the user-selected touchdown zone (red) is invalid (too rough or sloped, or too close to an obstacle), so the Perception system selects two other locations (green and blue). The FO chooses the blue alternate, and the trajectory deviates from the red dashed to the green trajectory.

The Perception system and Trajectory Planner consider a wide variety of requirements when selecting suitable TZs. The zones must be free of obstacles, and sufficiently smooth and of low enough slope to enable safe landing. The trajectory to the LP must also be feasible – that is, it must be not only feasible to reach, but also feasible to wave off from while approaching, and to take off from after the aircraft has landed. In addition, obstacles may exist along the straight-line path to the desired or alternate TZs. These complicate the trajectory planning problem and may cause occlusions that the Perception system must address. Thus a difficult perception, trajectory planning, and decision-making problem must be solved as part of the TZ selection process.
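A highly simplified sketch of this kind of evaluation is given below: each candidate touchdown cell is graded against slope, roughness, and obstacle-clearance thresholds. The threshold values and field names are invented; the actual Perception system criteria are vehicle-specific and more involved.

```python
# Simplified landing-zone evaluation sketch with invented thresholds;
# the real system fits planes to registered lidar points and applies
# vehicle-specific criteria.
MAX_SLOPE_DEG = 6.0
MAX_ROUGHNESS_FT = 0.5     # residual after plane fit
MIN_CLEARANCE_FT = 40.0    # clearance radius to nearest obstacle

def tz_suitable(cell):
    return (cell["slope_deg"] <= MAX_SLOPE_DEG
            and cell["roughness_ft"] <= MAX_ROUGHNESS_FT
            and cell["clearance_ft"] >= MIN_CLEARANCE_FT)

cells = [
    {"id": "requested", "slope_deg": 9.1, "roughness_ft": 0.3, "clearance_ft": 80},
    {"id": "alt_green", "slope_deg": 2.2, "roughness_ft": 0.2, "clearance_ft": 95},
    {"id": "alt_blue",  "slope_deg": 1.4, "roughness_ft": 0.4, "clearance_ft": 60},
]
alternates = [c["id"] for c in cells if tz_suitable(c)]
print("suitable touchdown zones:", alternates)  # requested fails on slope
```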
Further complicating this process is the need to communicate with the FO, who is monitoring the approach on a
handheld device. As demonstrated for the first phase of the AACUS program, the AES lands under a ‘management
by consent’ policy – that is, it must request and receive approval to land at an alternate TZ before proceeding to that
zone. The FO also maintains the authority to wave-off the AES until very close to touchdown– this is necessary to
avoid putting personnel on the ground (or the rotorcraft itself) in danger in difficult combat scenarios. To address
this requirement, a time line of communication and decision-making was developed, which places difficult
requirements on the Perception system. The range at which LZ evaluation can be performed at tactical approach
speeds directly impacts TALOS’ ability to communicate suitable TZs to the FO, so that those TZs can be approved.
Figure 7 shows examples of typical LZ evaluations – starting with registered lidar data in an obstacle map, proceeding to processed evaluation results (which are used by the Trajectory Planner), and finally as they appear on the Field Operator’s handheld device for approval.

Figure 7 Landing Zone evaluation and Field Operator negotiation. Left: precisely-registered, fine-scale obstacle map based on lidar scans (colorized by altitude); Center: color-coded LZ evaluation, where green locations are suitable for landing; Right: options as presented to the FO. The red location is the requested, infeasible location; green and blue locations are alternates; the blue location is the one chosen by the FO, to which the AES autonomously diverts on consent.
Referring now to both Figures 6 and 7, we can describe the interaction of components. The Perception system and
Trajectory Planner together determine whether the requested landing point is feasible. If it is, the landing proceeds
without the necessity for additional operator input – this location was requested and is thus pre-approved. The FO
can, of course, wave off TALOS if conditions have changed (the FO can also change the request while the AES is en
route to adjust for changing conditions). If the landing point is infeasible, alternates based on detailed LZ evaluation
are presented as shown at right in Figure 7. The FO has the option of (1) touching one of these alternates to give
consent to land there, (2) commanding a wave-off, or (3) doing nothing, in which case a wave-off automatically
occurs. In our field trials, an FO with 15 minutes of training was able to make this decision within about 3 seconds.
Feedback from the operator and others resulted in adding auditory and haptic alerts (sounds and vibration) to the
handheld device whenever consent is required, in recognition of the fact that the FOs in dangerous scenarios are
unlikely to devote their full attention to the interface. They also have a natural tendency to watch the aircraft and
not the handheld device.
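The consent timeout behaves roughly as in the sketch below, where a polling loop returns the FO's selection, a wave-off, or defaults to a wave-off when the window expires. The 10-second window is an assumed value for illustration; the paper only reports that trained FOs decided within about 3 seconds.

```python
# Sketch of the 'management by consent' timeout. The decision window
# length is an invented figure.
import time

DECISION_WINDOW_S = 10.0

def negotiate(poll_fo_input, deadline=DECISION_WINDOW_S):
    """poll_fo_input() returns 'consent:<tz_id>', 'wave_off', or None."""
    start = time.monotonic()
    while time.monotonic() - start < deadline:
        cmd = poll_fo_input()
        if cmd is not None:
            return cmd
        time.sleep(0.1)            # alert tones/vibration run meanwhile
    return "wave_off"              # silence defaults to a safe wave-off

# Simulated FO who taps the blue alternate after ~0.3 s:
taps = iter([None, None, "consent:alt_blue"])
print(negotiate(lambda: next(taps, None)))
```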
The M3 Sensor Suite. The requirements placed on the sensor suite derive from the scenario described above, as well as from a variety of other scenarios and conditions. Near Earth Autonomy developed the ‘M3’ sensor suite (Figure 8) specifically to meet these requirements, making it one of the most capable perception systems extant.
M3 employs an industrial grade surveying lidar in a nodding configuration that has programmable motion: nodding
can be ‘focused’ on an area of interest, e.g. a landing zone at long range, which encompasses only a narrow vertical
field of view. Deeply embedded, high-grade GPS/INS data provides the pointing and registration accuracy needed
to meet the extreme requirements for landing-zone evaluation from a moving platform at a shallow flight path angle;
the scan-and-evaluate accuracy is further enhanced
by onboard real-time registration correction
algorithms. Onboard algorithms also provide wire
detection, obscurant penetration, sensor steering, and
localization in GPS-denied settings. The M3
capabilities are further enhanced by fusing EO/IR
data that is co-registered with the lidar data; this
allows the AES to, for instance, land to a marker (a
20”x72” standard-issue nylon ‘VS-17’ emergency
signal panel) in cases where the FO does not have
accurate landing point location information. EO/IR
cameras also support GPS-denied operations.
As part of the open architecture development
process, ICDs were developed for communication
with the Perception system. These ICDs govern
communication for trajectory planning, sensor
steering, localization in GPS-denied settings, and
landing to visual landmarks such as VS-17 panels.
Grid definitions and parameters for obstacle mapping in either a ‘scrolling’ environment (roughly centered around
the vehicle) or a static setting (centered around the LZ) are provided. In both of these settings, two types of data are
provided: the occupancy at each grid location, and the position of the nearest obstacle to each location. Thus there
are four potential combinations of grid type (scrolling or static) and grid content (occupancy or nearest obstacle).
Only three of these four are actually supplied, since a static grid of nearest-obstacle distance provides all of the
necessary utility for trajectory planning. The static grid is centered at the LZ, providing terminal area planning
information, while the scrolling grids (both types) provide en route planning information. Information is published
using the Robot Operating System (ROS) publish-subscribe framework. This framework, however, is not well
suited to the high bandwidth and precision at which occupancy information changes. To mitigate this problem, only
the changes to each cell’s occupancy level are published. This, together with the ability to provide information at
various levels of obstacle grid resolution, allows for more efficient transfer of data.
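The change-only publication idea can be illustrated as below: rather than re-sending the full grid each sweep, only cells whose occupancy changed are published, and subscribers patch their local copies. The message structure here is invented; TALOS's actual grid ICDs are not reproduced.

```python
# Sketch of change-only occupancy publication; message names and grid
# layout are invented.
def occupancy_deltas(prev, curr):
    """Yield (index, new_value) for cells that changed between sweeps."""
    for i, (old, new) in enumerate(zip(prev, curr)):
        if old != new:
            yield i, new

prev_grid = [0, 0, 1, 0, 1, 0, 0, 0]
curr_grid = [0, 1, 1, 0, 0, 0, 0, 1]   # new sweep sets/clears three cells

msg = list(occupancy_deltas(prev_grid, curr_grid))
print(f"publishing {len(msg)} of {len(curr_grid)} cells: {msg}")

# Subscribers apply the deltas to their local copy of the grid:
for i, v in msg:
    prev_grid[i] = v
assert prev_grid == curr_grid
```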
HUMAN-SYSTEM INTERFACES
Human-system interfaces play a critical role in successfully carrying out the AACUS mission. The focus during the first phase of the AACUS program was on design and validation of the FO handheld, a new concept for UAS control in which a Marine who is untrained in UAS operations, flight vehicle characteristics, or LZ preparation submits an Assault Support Request (ASR) using an ‘app’ that he/she learns in a short (15-minute) training process (e.g. a training video, pamphlet, or in-app walk-through). Such a minimal level of training is enabled by two features of the TALOS FO human interface: first, it relies heavily on the commonality of the ‘look and feel’ of applications developed for smart phones and tablets. Touching icons or map locations to elicit effects,
scrolling and pinching a map, sending and receiving texts, and using menus to get to desired functions are performed
in common ways with which all Marines are familiar and/or can master very quickly. The second feature of the
human interface is that it minimizes the decision-making that the FO must do, relying instead on TALOS’ inherent
autonomy. The FO does not evaluate the feasibility of the landing points, steer the rotorcraft along a safe path,
evaluate the specific properties of the terrain, or consider approach direction, winds, or other features of the mission.
Neither does the FO consider any of the high-level mission parameters that a typical AVO must set; these are
handled by a combination of onboard autonomy (e.g. re-planning if a new threat region or NFZ appears) and, when
necessary, reach-back to the AVO at the MOB.
The TALOS team, led by Applied Research Associates (ARA) and MIT Professor David Mindell, performed a
formal human interface design process, which began with interviews with Subject Matter Experts (SMEs) and future
users (Marines) to inform a cognitive task analysis of the resupply mission, and to understand the roles at the MOB,
the COP, and in the cockpit (to inform the role of TALOS). This analysis led to the development of requirements
and design concepts, with additional input from SMEs, engineers, and designers in brain-storming sessions. The
initial design for the FO (the various screens in the application, the functionalities and actions, etc.) was then
mocked up on paper, forming a set of ‘storyboards’ (Figure 9) for the relevant AACUS mission use cases. These
Figure 8 M3 sensor suite. Foreground—lidar in nodding
enclosure. Background: EO and IR cameras. System is
mounted on a standard external mount, and includes
components internal to the unit and the aircraft: GPS/INS,
custom embedded components, and software.
storyboards were validated, and refinements
motivated, by additional design reviews (“walk-
throughs”) attended by SMEs. Refined requirements
drove detailed system integration and software
development, which was performed by Kutta, Inc. for
the iOS operating system on an iPad mini.
The process described here was made much more
effective through the use of well-known military
standards and protocols, and by leveraging Kutta
expertise in these operational UAS techniques.
Specifically, the request for supplies is submitted as
an ASR (also referred to as a “9 line”), which is a set
of information that Marines are routinely trained to
provide. The ASR is filled out on a request page in
the app that auto-supplies as much information as
possible, and provides map-, GPS-, and
magnetometer-aided features for determining the
coordinates of the desired delivery location. Aids are
also available to allow the FO to radio in the request, if operational communications limitations do not permit direct
connectivity to the MOB GCS. Once the request is submitted, it is managed using CoT protocols. CoT is common
in the military, and its use allows for integration of ASRs into the broader military planning environment. Finally,
FO handheld communication with the AES conforms to an abbreviated version of STANAG-4586 – although this is
transparent to the user, it is important to maintain an easily communicated, open architecture.
The final design stage involved validation testing with Air National Guard security forces, an accessible military
population who agreed to volunteer and who also shared characteristics with our intended Marine user. Validation
testing was intended to meet multiple goals:
1. Evaluate the feasibility of a new user (any Marine) performing the FO role after a 15-minute orientation to
the interface;
2. Determine the best method for performing the time-critical “Infeasible Touchdown Zone” negotiation
(described in the previous section), for which ARA had designed three options;
3. Identify components for re-design or additional training; and
4. Validate the overall interface.
To address these goals, ARA developed and executed a procedure that included an orientation session, a
performance measurement (13-question test), and a validation and usability evaluation. The training comprised a 15-
minute orientation to the interface, which used screenshots of the app on an iPad mini. The test consisted of a series
of questions, including how participants would
engage the interface across a variety of scenarios
(e.g., point to/push/simulate appropriate action on the
screenshot). The test was given both before and after
the training, to assess how intuitive the screens were
by themselves without orientation. ARA measured
participants’ accuracy of response (correct/incorrect)
in addition to participants’ time to completion.
Validation testing demonstrated the feasibility of the
brief orientation session. More importantly, the
testing provided valuable feedback on specific design
questions, and ideas for redesign, which the
participants generated from the testing experience.
ARA folded the feedback into our designs to create a
final, validated design. Figure 10 shows the results of
the performance testing.
Figure 9 Storyboard example. The FO handheld design
was refined by validating screenshots using a detailed
storyboard, which consisted of dozens of pages like the
above, covering the primary use cases.
Figure 10 Performance before and after 15 minutes of training. Data indicate that (a) the interface improved based on design changes, (b) the interface is intuitive (high pre-test scores), and (c) 15 minutes of training is sufficient.
It is notable that the cognitive task analysis and use case descriptions drove much of the TALOS design process, and the interface requirements drove software development details. This is in sharp contrast to most autonomy programs, where the autonomous operations are considered first and human interfaces are introduced after the fact by the engineers developing the autonomy. We believe that the efficacy of the “human plus autonomy” team is much higher if a “humans first” design process is used.
Although the full cognitive design process described above (and detailed in Reference 20) was only completed for the FO handheld, the process was begun for the interface within the MOB GCS, and made sufficient progress to allow us to understand the roles at the system level, to provide requirements for the autonomy software, and to begin laying out the initial MOB GCS interface design (see Reference 21). This design is being further evaluated and refined during the Phase 2 effort, which is currently underway.
UNMANNED LITTLE BIRD INTEGRATION AND DEMONSTRATION
The ONR AACUS Phase 1 INP was an aggressive technology development and demonstration program, executed over the 18-month period between the end of September 2012 and the end of March 2014, with a culminating flight
demonstration in February 2014. Our goal was to demonstrate retrofit of the system on an existing rotorcraft UAS.
The Boeing ULB provides a mature platform for such demonstrations, supported by an agile, experienced team of
flight control, integration, and flight test personnel.
The nominal AACUS mission starts with an Assault Support Request made by the FO using the handheld device or
radio. This initiates pre-flight planning and vehicle loading, during which the FO handheld device keeps the FO
informed of the status of the planning process. The delivery flight itself consists of take-off and climb-out, en route
flight, contact with the FO to request permission to proceed, approach, and landing. The autonomous system must
plan the route, perform sensor steering and perception (as governed by the necessity to clear a safe zone around the
vehicle and perform LZ evaluation), detect and adjust the trajectory to deviate around obstacles en route and on the
ground, evaluate the LZ and re-direct the vehicle to a suitable touchdown point, interact with the FO, and perform a
pilot-like landing without hover. Contingencies for lost link, operator wave-off, and autonomous wave-off must
also be addressed.
Details of the integration process, through the interfaces that cross the blue boundary between TALOS and the
VTOL UAS in Figure 1, are given in previous sections. Integration with the MOB GCS and Mission Manager
(upper right in Figure 1) was made seamless by ULB’s use of a STANAG-4586 compliant Vehicle Specific Module
(VSM) within the Vehicle Management System (VMS). Some details of the STANAG message sequencing were
adjusted, and an additional Ethernet port on which the Mission Manager could listen to ‘multicast’ STANAG
messages was created. The VMS software was also altered to send a ‘TALOS mode’ message to the ULB FCS.
In TALOS mode, the ULB’s FCS switched from
taking waypoint commands from the VMS to taking
spline commands from TALOS (lower right interface
in Figure 1). The ULB inner-loop control laws were
not changed; rather the source of the path to follow
was altered to come from the TALOS (as splines)
instead of the Vehicle Management System (VMS, as
waypoints). Integrated testing of this interface was
performed in March 2013, using pre-computed splines
loaded from a file – at this stage of the program, the
autonomy software would not be complete for several
months, but it was important to verify the ICD and
1553 interface, to validate that the ULB was able to
follow splines, and to characterize tracking
performance. Figure 11 shows typical results.
Six additional Flight Test Periods (FTPs), of
approximately two weeks duration each, were carried
out between September 2013 and February 2014. Each
Figure 11 ULB tracking performance test results.
FTP tested a more sophisticated combination of subsystems in the TALOS-ULB integrated system, and drove the
development process. During the later FTPs, testing became more focused on validating required performance on
eight mission scenarios, developed with ONR: a nominal mission, a mission requiring negotiation with the FO to
pick an alternate landing site, two missions in which pre-flight and pop-up (mid-mission) NFZs were introduced, a
communication outage contingency, an operator wave-off contingency, a landing to a location designated by a VS-
17 panel, and a scenario in which the FO does not respond when selection of an alternate LZ is requested. Required
behavior in all of these cases was specified in detail by ONR, with input from the TALOS team.
For the demonstration at Marine Corps Base Quantico on February 26-27, 2014, the missions performed were
modified versions of the AACUS mission, to allow a variety of landings and contingencies to be validated. The
ULB took off under manual control and was flown to a pre-specified location. The safety pilot engaged TALOS and was then
‘hands off’ until after landing (or after reaching an abort point), monitoring the system behavior for safety using
specialized displays. The FO’s initial request for supplies, and subsequent mission planning, were abbreviated since
some of the operational considerations for request and planning were not in play. A researcher playing the role of
the AVO serviced basic FO requests and uploaded the mission parameters to begin each test. Each mission covered
approximately 6 nautical miles; the last 3.4 nautical miles were timed for speed, with the goal of performing the
approach and landing in under 4 minutes. The FO role for all the demonstrations was performed by a Marine with
15 minutes of training on how to request supplies and designate the desired landing point, monitor the mission,
interact with the AES in the case of an infeasible touchdown zone, and perform wave-offs and other contingency
functions.
During the two-day demonstration period, the TALOS team flew 24 missions (from pilot engagement of TALOS to
pilot take-over at the end of the mission) during four separate flight events (periods when the helicopter was either
airborne or idling) of about 1.5 hours each. Each of the 24 missions was initiated en route to the first waypoint of
the mission. Fifteen of these missions resulted in successful autonomous landing to the ground without any pilot input, and seven more resulted in the system successfully redirecting to a wave-off or abort point due to either a
planned or unplanned contingency; the pilot took over as planned upon arrival at the contingency point. One
mission was interrupted because of a (non-TALOS related) barometric pressure problem, and one mission was
interrupted because the FO handheld ‘hung up’ and required a reboot. The TALOS system achieved times of 3:18,
3:21, and 3:26 in the three timed missions of the demonstration – well under the 4-minute goal.
The 15 landings performed were from three different approach directions, and many required a deviation from the
FO-chosen landing point to a nearby landing point, because the chosen point was too sloped, or contained obstacles
or terrain features that rendered that point infeasible for landing. In some of these cases, hay bales were specifically
placed at the requested landing coordinates, to cause the system to negotiate with the FO and go to a different
touchdown zone. The system was required to detect and avoid an obstacle no larger than 3 ft. in each dimension;
this was demonstrated in a previous FTP. Other landings included three landings for which the approach vector was
modified so that the landing was into the wind, four landings that deviated around No-Fly Zones (Figure 4), and two
landings to a visual landmark (a VS-17 marker panel). Some specific examples are given below.
Flight path deviations during approach to land. Figure 12, left, shows a typical landing case. The AES is
autonomously approaching the landing zone from the upper right of the picture. The approach angle is
approximately 5 degrees, which is shallow enough that the trees (rainbow colored because the picture is a height
map) are too close to the nominal flight path (yellow). The Trajectory Planner automatically re-plans the trajectory
to avoid the trees. Further evaluation of the requested touchdown zone (the 30 ft. area around the yellow dot) led the Perception system, Trajectory Planner, and FO, through the negotiation process and human interface described above, to choose an alternate touchdown zone and deviate to that location (magenta line). All of this happened
automatically, within the last 20 seconds of flight, and was repeated several times, under a variety of conditions,
during the demonstration (Figure 12, right).
No-Fly Zone (NFZ) replanning. NFZs added far ahead of the aircraft, or during the planning phase, are handled by
the Route Planner. This was demonstrated, yielding the flight test data shown in Figure 13. In this figure, the route
is initially planned without the NFZ in place, and then the NFZ is added. The automatic replan was conducted in the
MOB GCS in this case, but the capability has since been demonstrated using the onboard Route Planner.
Note in Figure 13 that the blue trajectory ‘fairs in’ the pathway described by the waypoints. This behavior requires that buffers around NFZs be incorporated to ensure that the faired-in trajectory does not violate the NFZ. During the demonstrations, this buffer was incorrectly set to 100 m (328 ft.) instead of 150 m (492 ft.), causing the aircraft’s path to briefly ‘clip the corner’ of the NFZ. This is unacceptable behavior; it resulted in changes to the operation of, and the interfaces between, the Route Planner and Trajectory Planner, so that they now automatically incorporate flight corridors that respect the NFZs and plan trajectories in real time that do not violate these corridors, preventing this type of error.
Figure 12 Trajectory re-plans due to obstacles and changed Landing Point: Left: typical approach through a
clearing in the trees, desired landing point is infeasible. Right: detail showing deviation through trees – green line
is original path, magenta line is re-planned path. This case landed to the requested landing point.
Figure 13 TALOS Route Planner auto-routing around a no-fly-zone. Purple area is accommodated by laying
in additional waypoints, separate from the ‘goal points’ provided by the AVO. Blue line is flight data.
Pop-up NFZs may be too close to the current vehicle position to allow for re-routing using waypoints, so the
Trajectory Planner (instead of the Route Planner6) is responsible for making the necessary adjustment and changing
the trajectory to avoid the NFZ. Figure 14 shows an example of this behavior. After a pop-up NFZ is added along
an existing leg of a route, a trajectory is planned around it; the new trajectory terminates at the next waypoint in the
route, which in this case is the landing point. As with obstacle avoidance, minimal interaction with other TALOS
components is required for such reactive planning, as long as the deviation required does not jeopardize other
aspects of the mission. For the demonstration, pop-up NFZs were placed and uplinked by the Marine on the ground;
therefore the NFZ location was not known in advance (the Marine was instructed on the general location and time to
enter the NFZ). This procedure demonstrated functionality, intuitiveness, and ease-of-use of the FO handheld
interface.
Contingencies. The behavior during contingencies was specified by ONR to be completely autonomous, under the
CONOPS that communication with the AVO at the MOB GCS may be unavailable during relatively long periods of
operation. Thus the goal is to enable delivery, including any contingencies that might occur, even if
communications with the MOB are unavailable. Communication with the FO at the COP, on the other hand, is
required during certain phases of the mission, to ensure that the FO is aware of the AES state and that both the AES
and the FO are able to communicate their intent7. Based on this CONOPS, two contingency points (corresponding
to Contingency Points A and B in a standard STANAG-4586 route) are created by the AVO for each mission. The
wave-off point is the point to which the AES diverts if the FO sends a wave-off command. The abort point is the
point to which the AES goes if the delivery must be aborted, e.g. if communications are lost to the FO, if the FO is
unresponsive, or no feasible touchdown zone can be found. Successful handling of contingencies was demonstrated
for all these cases.
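The dispatch rule just described can be summarized in a few lines, as in the hypothetical sketch below; event names and point labels are illustrative only.

```python
# Sketch of contingency dispatch: FO wave-off goes to the wave-off
# point, while lost comms, an unresponsive FO, or no feasible
# touchdown zone divert to the abort point. Names are invented.
WAVE_OFF_POINT = "contingency_point_A"
ABORT_POINT = "contingency_point_B"

def contingency_destination(event: str) -> str:
    if event == "fo_wave_off":
        return WAVE_OFF_POINT
    if event in ("fo_comms_lost", "fo_unresponsive", "no_feasible_tz"):
        return ABORT_POINT
    raise ValueError(f"not a contingency event: {event}")

for ev in ("fo_wave_off", "fo_unresponsive", "no_feasible_tz"):
    print(ev, "->", contingency_destination(ev))
```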
6 During Phase 1, the Trajectory Planner was responsible for all NFZs added after mission initiation. In Phase 2, an
approach that balances the use of the Route Planner and Trajectory Planner for addressing NFZs has been
implemented.
7 TALOS can also operate completely autonomously, i.e. without approval or monitoring by the Field Operator.
This CONOPS was not part of the Phase 1 effort, but will be tested during Phase 2.
Figure 14 TALOS Trajectory Planner performing on-line trajectory planning around a pop-up No-Fly
Zone. Red circle is accommodated by a change in the trajectory, which must avoid the NFZ while still
progressing to the next waypoint in the route. Dashed line is originally planned trajectory, green line is the
replanned trajectory, and blue line is flight data.
Because wave-offs and aborts can happen at any time during a mission, and the SAV and NFZ constraints must still
be satisfied as the rotorcraft makes its way to the wave-off or abort point, onboard route and trajectory planning are
required. Planning from various points in the mission to these contingency points, without violation of the
constraints, was demonstrated in 8 different missions during the 2-day demonstration period.
CONCLUSIONS
The AACUS program is pushing the performance boundaries of technologies for autonomous re-supply to austere
locations under the supervision of ground-based warfighters. To ensure smooth transition, the technologies under
development are being incorporated into an end-to-end architecture with carefully designed interfaces and command
and control mechanisms. The interfaces ensure that the TALOS architecture has a pathway to integration into future
DoD VTOL UAS, without requiring significant re-work. The command and control mechanisms are consistent with
current military standards, although they push those standards in new ways to enable new capabilities.
In this unique ‘technology for transition’ setting, the state of the art in perception, trajectory and route planning, autonomous mission management, and human-machine teaming is being pushed forward. By requiring these
technologies to work in a realistic, open architecture under demanding conditions driven by operational
requirements, the AACUS program is not only advancing the key technologies, but doing so in a realistic way,
providing a strong chance for transitioning to the warfighter.
The focus of this paper is work performed during the Phase 1 AACUS/TALOS effort. Since March 2014, the
TALOS team has been awarded a second Phase and has further improved the system architecture and the
performance, relevance, and readiness of the component technologies. Future phases will test TALOS on a second
platform to demonstrate the portability of the technology.
ACKNOWLEDGMENTS
This work was funded by the Office of Naval Research, under contract N00014-12-C-0671. The support of ONR is
gratefully acknowledged.
REFERENCES
1 Whalley, M., Freed, M., Takahashi, M., Christian, D., Patterson-Hine, A., Schulein, G., and Harris, R., “The NASA/Army Autonomous Rotorcraft Project,” American Helicopter Society 59th Annual Forum Proceedings, Volume 59, Issue 2, pp. 2040-2053, 2003.
2 Theodore, C., Rowley, D., Ansar, A., Matthies, L., Goldberg, S., Hubbard D., and Whalley, M., “Flight trials of a
rotorcraft unmanned aerial vehicle landing autonomously at unprepared sites,” American Helicopter Society 62nd
Annual Forum Proceedings, Volume 62, Issue 2, p. 1250, 2006.
3 Mettler, B., Kong Z., Goerzen C., and Whalley M., “Benchmarking of obstacle field navigation algorithms for
autonomous helicopters,” American Helicopter Society 66th Annual Forum Proceedings, Phoenix, AZ, 2010.
4 Scherer, S., Singh, S., Chamberlain, L. J., and Saripalli, S., “Flying Fast and Low Among Obstacles,” Proceedings
International Conference on Robotics and Automation, April, 2007.
5 Scherer, S., Singh, S., Chamberlain, L., and Elgersma, M., “Flying Fast and Low Among Obstacles: Methodology and Experiments,” The International Journal of Robotics Research, Vol. 27, No. 5, May 2008, pp. 549-574.
6 Whalley, M., Takahashi, M., Tsenkov, P., Schulein, G., and Goerzen, C., “Field-testing of a helicopter UAV obstacle field navigation and landing system,” American Helicopter Society 65th Annual Forum Proceedings, Grapevine, Texas, 2009.
7 Limbaugh, D., Higgins, R., Cates, P., and Ericsson, L., “Achieving Safe and Effective UAS Control for the U.S.
Army’s Bi-Directional Remote Video Terminal,” American Helicopter Society 65th Annual Forum Proceedings,
Grapevine, Texas, May 2009.
8 Scherer, S., Chamberlain, L. J., and Singh, S., “Online Assessment of Landing Sites,” AIAA Infotech@Aerospace
2010, April, 2010.
9 Goerzen, C., and Whalley, M., “Minimal risk motion planning: a new planner for autonomous UAVs in uncertain environments,” American Helicopter Society International Specialists’ Meeting on Unmanned Rotorcraft, Tempe, Arizona, 2011.
10 Takahashi, M. D., Abershitz, A., Rubinets, R., and Whalley, M., “Evaluation of safe landing area determination algorithms for autonomous rotorcraft using site benchmarking,” American Helicopter Society 67th Annual Forum Proceedings, Virginia Beach, March 2011.
11 Chamberlain, L. J., Scherer, S., and Singh, S., “Self-Aware Helicopters: Full-Scale Automated Landing and
Obstacle Avoidance in Unmapped Environments,” American Helicopter Society 67th Annual Forum Proceedings,
Virginia Beach, March 2011.
12 Scherer, S., “Low-Altitude Operation of Unmanned Rotorcraft,” doctoral dissertation, tech. report CMU-RI-TR-11-03, Robotics Institute, Carnegie Mellon University, May 2011.
13 Scherer, S., Chamberlain, L. J., and Singh, S., “Autonomous landing at unprepared sites by a full-scale helicopter,” Robotics and Autonomous Systems, September 2012.
14 Scherer, S., Chamberlain, L. J., and Singh, S., “First Results in Autonomous Landing and Obstacle Avoidance by a Full-Scale Helicopter,” International Conference on Robotics and Automation, May 2012.
15 Scherer, S., Chamberlain, L. J., and Singh, S., “Autonomous landing at unprepared sites by a full-scale helicopter,” Robotics and Autonomous Systems, Vol. 60, No. 12, 2012, pp. 1545-1562.
16 Whalley, M. S., Takahashi, M. D., Fletcher, J. W., Moralez, E., Ott, L. C. R., Olmstead, L. M. G., Savage, J. C.,
Goerzen, C. L., Schulein, G. J., Burns, H. N. and Conrad, B. “Autonomous Black Hawk in Flight: Obstacle Field
Navigation and Landing-site Selection on the RASCAL JUH-60,” Journal of Field Robotics, vol. 31, 2014, pp.
591–616.
17 Choudhury, S., Scherer, S., and Singh, S., “RRT*-AR: Sampling-Based Alternate Routes Planning with
Applications to Autonomous Emergency Landing of a Helicopter,” International Conference on Robotics and
Automation, May, 2013.
18 Arora, S., Choudhury, S., Scherer, S., and Althoff, D., “A Principled Approach to Enable Safe and High
Performance Maneuvers for Autonomous Rotorcraft,” American Helicopter Society 70th Annual Forum
Proceedings, Montreal, May 2014.
19 Choudhury, S., Arora, S., and Scherer, S., “The Planner Ensemble and Trajectory Executive: A High Performance
Motion Planning System with Guaranteed Safety,” American Helicopter Society 70th Annual Forum Proceedings,
Montreal, May 2014.
20 Dominguez, C., Strouse, R., Papautsky, L., and Moon, B., “Cognitive Design of an App Enabling Remote Bases to Receive Unmanned Helicopter Resupply,” Journal of Human-Robot Interaction, in press, 2015.
21 Papautsky, E., Dominguez, C., Strouse, R., and Moon, B., “Integration of CTA and Design Thinking for Autonomous Helicopter Displays,” manuscript submitted for publication, 2015.