TALOS: An Unmanned Cargo Delivery System
for Rotorcraft Landing to Unprepared Sites
J. Paduano, J. Wissler, G. Drozeski,
M. Piedmonte, N. Dadkhah, J. Francis, C.
Shortlidge, J. Bold, F. Langford,
M. Chaoui, C. J. Liu, E. Foster
Aurora Flight Sciences, Inc.
Manassas, VA
S. Singh, L. Chamberlain, B. Hamner,
H. Cover, A. Stambler, A. Singh,
S. Nalbone, M. Bergerman
Near Earth Autonomy
Pittsburgh, PA
S. Scherer, S. Choudhury, S. Maeta,
S. Arora, D. Althoff, D. Maturana
Carnegie Mellon University
Pittsburgh, PA
D. Limbaugh, J. Bona,
D. Barnhard, D. Chessar
Kutta Technologies, Inc.
Phoenix, AZ
D. Mindell
Massachusetts Institute
of Technology
Cambridge, MA
C. Dominguez1, B. Moon2
R. Strouse3, L. Papautsky
Applied Research Associates, Inc.
Dayton, OH
D. Cerchie, B. Chu, J. Graham,
C. Cameron, M. Hardesty, R. Hehr
The Boeing Company
Phoenix, AZ
This paper describes the Tactical Autonomous Aerial LOgistics System (TALOS), developed and flight tested
during the first phase of the Office of Naval Research (ONR) Autonomous Aerial Cargo/Utility System (AACUS)
Innovative Naval Prototype (INP) program. The ONR vision for AACUS is to create a retrofit
perception/planning/human interface system that enables autonomous take-off, flight, and landing of a full-scale
rotary-wing aircraft to and from austere, possibly-hostile landing zones, in a tactical manner, with minimal human
supervision. The goal of the AACUS system is to enable any Marine to supervise this capability from an intuitive
field interface (e.g. a smart phone, tablet, or military radio/interface). A key feature of the TALOS team’s
realization of the AACUS goal is its portability to any rotorcraft of sufficient payload capacity. It uses mostly-COTS
hardware and next-generation autonomy software, and employs modular, open interfaces, so that (1) new software
and hardware components can be integrated relatively easily and (2) perception, planning, and human interface
capabilities are “transition-able” to other applications and aircraft. This paper gives an overview of the system
architecture, the interaction between the TALOS components, and the demonstration results.
1 Currently at MITRE Corporation.
2 Currently at Perigean Technologies LLC.
3 Currently at Clutch Corporation.
AACUS Autonomous Aerial Cargo/Utility System
AES AACUS Enabled System
ASR Assault Support Request
AVO Air Vehicle Operator
CONOPS Concept of Operations
COP Combat OutPost
COTS Commercial Off-The-Shelf
CoT Cursor on Target
EO Electro-Optical
FCS Flight Control System
FO Field Operator
FTP Flight Test Period
GCS Ground Control Station
HSI Human-Systems Interface
ICD Interface Control Document
INP Innovative Naval Prototype
IR Infrared
LZ Landing Zone
MOB Main Operating Base
MOSA Modular Open Software Architecture
NFZ No Fly Zone
RASCAL Rotorcraft Aircrew Systems Concept
Airborne Laboratory
SAV Safe Air Volume
SME Subject Matter Expert
STANAG NATO STANdardization AGreement
TALOS Tactical Autonomous Aerial LOgistics System
TZ Touchdown Zone
UAS Unmanned Aircraft System
ULB Unmanned Little Bird
VDL Vehicle Dynamics Lookup
VMS Vehicle Management System
VSM Vehicle Specific Module
The main objective for AACUS is to deliver cargo by rotorcraft in unmapped, unprepared and potentially hostile
areas. As opposed to current Unmanned Aircraft Systems (UAS) missions, the Concept of Operations (CONOPS)
requires flight close to the terrain and self-determination of suitable landing locations. Little can be taken for granted: not only are terrain maps not guaranteed to be accurate, but even designated landing sites might not be suitable, whether because the specified landing point is too sloped or too rough, or because it was designated in error. One of the most significant program requirements, with far-reaching implications for admissible solutions, is that landing (even at a previously unmapped site) must be done without overflight.
This paper describes the system architecture, component functionalities, flight test results, and way forward for
AACUS/TALOS. AACUS advances the state of the art in autonomous operations in the areas of mission/route/
trajectory planning, human-system interfaces, and perception, enhancing complex resupply missions.
Autonomous helicopter research, development, and flight test have been active since the early 1990s (Ref. 1), and methodologies to perceive, localize, and avoid obstacles using lidar have been studied since around 2000
(Refs. 2-6). Perception and motion planning algorithms have been undergoing detailed development ever since
(Refs. 7-14), culminating in demonstration experiments on aircraft such as the Boeing Unmanned Little Bird (ULB,
a modified MH-6), the Sikorsky MATRIX, and the Army AMRDEC RASCAL UH-60 (Ref. 16). Detailed algorithm development for perception, landing zone evaluation, safe landing area determination, trajectory planning, trajectory following, and obstacle detection and avoidance has been performed and reported in the cited publications. Much of this technology was incorporated into TALOS, and is not described in detail here. Instead,
this paper focuses on describing an integrated system that (1) interacts effectively with human operators, (2) retrofits
onto current and future unmanned rotorcraft, (3) uses open interfaces to enable different algorithms and components
to be ‘swapped out’ and (4) performs faster, more aggressive missions with less input from ground users by pushing
the state of the art in human interfaces, architectures for independent and human-collaborative decision-making,
perception, trajectory planning, and route planning. In addition, this paper augments other AACUS publications,
which are cited throughout this document, by detailing specific results achieved at the official ONR
AACUS/TALOS demonstrations on February 26-27, 2014, at Marine Corps Base Quantico, Virginia.
This paper is organized around the
architecture developed for TALOS (Figure 1),
so we begin by briefly describing it. A
modular architecture enables the participation
of our widely-dispersed team of experts in
Human-System Interfaces (HSIs), Perception,
and Planning. Subsystems (yellow blocks)
are tied together through formal interfaces
(blue circles) that are described by Interface
Control Documents (ICDs) governing the
interaction between the various components.
This serves two purposes: first, it conforms to
the Modular Open Software Architecture
(MOSA) concept of providing information
about how each component of the system
performs its function, as well as its inputs and
outputs. Second, it allows technology
providers to upgrade on a component-by-
component basis, delivering higher value to
the government over the long term. The
functional breakdown created by the TALOS
team is shown in Figure 1.
Figure 1 TALOS architecture, showing major components and
open interfaces. HSIs exist at the Combat Out Post (COP) and
the Main Operating Base (MOB), planning includes the Mission
Manager, Route Planner, and Trajectory Planner, and Perception
includes sensor hardware integration, sensor fusion, and
perceptive processing. Vehicle Dynamics Lookup encapsulates
vehicle-specific data. Typical UAS elements are also shown; the
sum total of the modules in this diagram is called the AACUS-
Enabled System (AES).
Human-System Interfaces (HSIs): For the AACUS mission, a Field Operator (FO) at the COP where supplies are
needed is provided with a simple interface to request supplies, monitor the mission, and manage (at a high level) the
final stages of the helicopter approach and landing. An Air Vehicle Operator (AVO) at the MOB has supervisory
control of the aircraft through a Ground Control Station (GCS), and is provided with special displays and controls to
enable the unique aspects of the AACUS mission.
Planning Systems: Planning consists of three levels, all of which are onboard the aircraft. The top-level planning
and decision-making is handled by a Mission Manager, which sequences the system through the mission phases,
communicates with the ground, and handles contingencies initiated either on the ground or by pre-existing vehicle
management functions. The Mission Manager communicates with the Route Planner, which provides onboard
waypoint planning to carry out the desired mission. Finally, the Trajectory Planner is responsible for following the
planned route while addressing dynamic feasibility and responding to obstacles detected by the Perception system.
Perception System: The Perception system is the component of TALOS that includes not only software, but also
integrated sensors. The Perception system on TALOS consists of a scanning lidar, electro-optical (EO) and infrared
(IR) cameras, tightly-integrated GPS-INS for precise and accurate registration of lidar returns in inertial space, and
software for real-time processing of lidar and image data to detect obstacles en route and characterize the Landing
Zone (LZ). The Perception system must find a suitable landing site at a location that has never been visited before; the FO is not responsible for preparing or judging the feasibility of the landing site, so the Perception system must
provide this capability. Because some reactive functions must happen quickly, the Perception system and Trajectory
Planner contain some of the key autonomous functionalities of TALOS, as discussed in the Perception section.
This broad categorization of elements has been used to organize the development of TALOS, which is designed to
be retrofitted onto any STANAG 4586-compliant unmanned helicopter. For this effort, the Boeing Unmanned Little
Bird (ULB) was chosen as the demonstration platform.
The remainder of the paper is organized as follows: first, an AACUS mission is described, providing terminology
and motivating system requirements. Next, the planning systems are described, as these naturally flow from the
mission description. Human interfaces are next described, emphasizing the unique aspects of the problem and the
TALOS solution. Perception requirements and approach are then described, with particular emphasis on the
requirements and autonomous behavior during an aggressive approach to land. Finally, integration and testing on
the ULB is described, and demonstration results are provided.
An AACUS mission is a substantial extension of a typical UAS mission. It begins with a delivery request from an
FO at an austere location, entered using the FO handheld device (a smart phone or tablet, as is projected to be part of
the future Marine’s standard equipment), or by radio. The request initiates an Assault Support Request (ASR) which
is managed by the AVO at the MOB. The request conforms to the ASR standards, and consists of information about
the Landing Point (LP, the precise GPS location where the helicopter is requested to land), the conditions at the
landing site (wind, threat zones, etc.), and the desired supplies. The ASR is handled using Cursor on Target (CoT)
protocols, which allow both the FO and the AVO to maintain situation awareness about the status of the request.
The AVO uses the TALOS automated route planning capabilities to plan the mission, from take-off at the MOB to
landing at the COP. After the vehicle is loaded and cleared for take-off, the AVO commands the launch and
monitors take-off, climb-out, and cruise to the desired location. As the rotorcraft nears the beginning of the approach, it requests permission to land. At this point the FO switches roles, from monitoring mission progress to interacting with the AACUS-Enabled System (AES), that is, the aircraft and its associated onboard autonomy.
En route to the COP, the AES behaves much like a typical UAS, flying a waypoint-based mission that conforms to
current and future UAS operational requirements. However, additional decision-making capabilities are provided,
which enable the AES to handle contingencies, provide for re-planning, and respond to pop-up No Fly Zones
(NFZs) and other events. Handling these next-generation mission capabilities ensures that TALOS is a fully
functional, end-to-end autonomous system.
The final approach and landing phase of the mission is the most demanding, and motivates many of the specialized
capabilities of TALOS. Because an austere LZ may include areas that are less desirable for landing, TALOS is
responsible for scanning the Touchdown Zone (TZ), the immediate (e.g. 30-ft-radius) area around the FO-designated LP, to determine if it is suitable for landing. Rocks and other obstacles, as well as terrain deviations
(slopes, ditches, small ridges, etc.), may make the TZ unsuitable. In this case, TALOS evaluates the entire LZ to
find alternate TZs, and interacts with the FO to ensure his/her awareness and acceptance of a safe alternate.
TALOS is designed to enable a tactical approach in dangerous situations; as such, it will direct the UAS to fly low
and fast toward the LZ. This requires the system to be aware of towers, trees, buildings, and other obstructions
between the vehicle and the desired LP, and to maneuver around these obstacles. Further, the AES must avoid
threat zones designated by the FO and AVO, including those that ‘pop up’ after the mission is planned. Finally,
several contingency situations exist that require the AES to either ‘wave off’ from the approach, or to abort the
mission altogether. These capabilities require coordinated utilization of the Perception system (e.g. to steer the
sensor between LZ evaluation and obstacle avoidance), aggressive trajectory planning, and decision-making
functions to handle contingencies, all while maintaining AVO situation awareness through the MOB GCS, and FO
situation awareness at the COP through the handheld interface.
Once the AES, in concert with the FO, has chosen a suitable TZ, maneuvered to a LP somewhere within the TZ, and
landed, the AES alerts the FO that the rotorcraft is ready to be unloaded. After unloading and clearance to take-off
from the FO, the AES takes off and departs from the COP. The AVO maintains supervisory control at all times, and
may alter the parameters governing the return-to-base flight route if necessary.
We now describe the TALOS components that enable this AACUS mission.
TALOS is designed to retrofit to a fully-capable Vertical Take-Off and Landing (VTOL) UAS that can perform
standard UAS missions using the STANAG-4586 message set.
Figure 2 AACUS mission. The Main Operating Base is on the horizon, and the Landing Zone contains various obstacles and terrain characteristics that must be evaluated during a high-speed tactical approach.
TALOS capitalizes on the UAS ability to perform pre-flight, to take-off to a low hover (which could be as
low as 1 ft. off the ground), and to land from a low
hover on command4. However, during flight (between
these two low-hover states), TALOS replaces the
existing UAS waypoint-based outer control loops with
trajectory-based control, which provides more
aggressive, dynamically-feasible paths for the rotorcraft
to follow, enabling maneuvering near the LZ and in-
flight obstacle avoidance in response to detected
obstacles (trees, buildings, power lines, etc.).
Furthermore, TALOS provides mission-level autonomy
capabilities to support complex end-to-end missions
and next-generation route planning functionality.
The TALOS planning architecture is broken up into
three planning modules: the Mission Manager, which
provides high-level planning coordination and decision-
making; the Route Planner, which performs waypoint-
based planning; and the Trajectory Planner, which
plans trajectories for precise, aggressive maneuvering
and obstacle avoidance. Figure 3 shows the flow of information. High-level goals are defined by the AVO and
uplinked by the MOB GCS via STANAG-4586. The goal points are then provided to the Route Planner, which
generates the full set of waypoints describing the route. Routes can also be planned on the ground and uplinked, as
is common in current UAS. Waypoints are sent to the Trajectory Planner, which uses them as the basis for
perception-driven trajectory planning. Trajectories are transmitted to the UAS Flight Control System (FCS) using
splines. More details on each stage of this process are provided below.
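The top-down flow just described (goal points in, automatically generated waypoints, then time-parameterized trajectories to the FCS) can be sketched as a simple pipeline. The types, leg spacing, and cruise speed below are illustrative assumptions, not the TALOS ICDs:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative message types; the real STANAG-4586/TALOS messages differ.
@dataclass
class GoalPoint:
    x: float  # east, ft
    y: float  # north, ft

def plan_route(goals: List[GoalPoint], leg_spacing: float = 500.0) -> List[Tuple[float, float]]:
    """Route Planner stand-in: densify straight legs between goal points."""
    waypoints = [(goals[0].x, goals[0].y)]
    for a, b in zip(goals, goals[1:]):
        dx, dy = b.x - a.x, b.y - a.y
        n = max(1, int((dx * dx + dy * dy) ** 0.5 // leg_spacing))
        for i in range(1, n + 1):
            waypoints.append((a.x + dx * i / n, a.y + dy * i / n))
    return waypoints

def plan_trajectory(waypoints, speed_fps: float = 170.0):
    """Trajectory Planner stand-in: time-stamp waypoints at a constant
    ground speed (~100 kt), producing (t, x, y) samples for the FCS."""
    traj, t = [(0.0, waypoints[0][0], waypoints[0][1])], 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        t += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / speed_fps
        traj.append((t, x1, y1))
    return traj
```

In the actual system each stage also consumes perception and contingency inputs; this sketch only shows the nominal data hand-offs between stages.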
Mission Definition and Route Planning. Most UAS follow waypoints that are planned on the ground, either by
hand or using a route planning tool. This prevents the system from intelligently re-planning in the case of
communication outage, or other contingencies, such as pop-up NFZs, which require quick on-line reaction. TALOS
overcomes these limitations by providing onboard route planning. Onboard route planning has been made viable for
current UAS by Kutta Inc., a member of our TALOS team that developed the onboard Route Planner used for
TALOS. The Kutta Route Planner and integration methodology were previously demonstrated on the Shadow UAS.
This Route Planner provides routes that keep the AES inside an AVO-defined ‘Safe Air Volume’ or SAV, which
could be a set of narrow corridors of operation, or, as in Figure 4, a more permissive planning volume. It
automatically plans within this volume, around NFZs,
and through pre-defined points in the mission, which
are called goal points. Figure 4 shows a typical plan,
which has several goal points (including a take-off
point, an approach point, and a landing point) and
several automatically-generated waypoints that are
incorporated to ensure that the aircraft does not violate
the SAV or NFZs. By automating waypoint
generation and integrating the Route Planner on board
the AES, the system can re-plan from any point along the route, even if communications or time-to-react is limited.
Most currently fielded military UAS perform route-
planning on the ground, and many use STANAG-4586
to up-link the resulting missions to the UAS. After
receiving the waypoints, the onboard Vehicle Management System (VMS) sequences them to the flight control system.
4 The ultimate goal of the AACUS program is to command the rotorcraft all the way to touchdown, but this was not implemented during Phase 1. Instead, the ULB FCS performed the final landing (no human input).
Figure 3 TALOS planning flow, emphasizing the top-down message flow through the various components discussed in the text.
Figure 4 TALOS framework for mission and route planning contains goal points, waypoints, a Safe Air Volume (SAV, green area), and any number of pre-planned and/or pop-up No-Fly Zones (NFZs, red areas). Goal points are cyan circles, waypoints are cyan dots.
TALOS is a retrofit system that interfaces
with the VMS and FCS as shown in Figure 1. Modest modifications to the VMS are required to enable it to pass
STANAG messages to TALOS, and to forego sending waypoints to the FCS, instead putting the FCS in ‘TALOS
mode’. In this mode the FCS follows trajectories (discussed below) instead of waypoints. This allows TALOS to
use perceived information to alter the path that the aircraft takes: instead of blindly traveling through the waypoints that make up the route plan, it intelligently achieves the desired goal points while taking into account AVO-defined
constraints, perceived obstacles, and landing zone conditions.
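The Route Planner's core geometric obligation, as described earlier, is that every leg stay inside the Safe Air Volume and outside all No Fly Zones. A minimal sampling-based check, assuming a rectangular SAV and circular NFZs (the real AVO-defined volumes are richer than this), might look like:

```python
# Hypothetical geometric check; the fielded planner uses AVO-defined volumes.
def leg_is_legal(p0, p1, sav_bounds, nfzs, step_ft=50.0):
    """Sample a straight leg and verify it stays inside the SAV box and
    outside every circular NFZ. sav_bounds = (xmin, ymin, xmax, ymax);
    nfzs = [(cx, cy, radius_ft), ...]."""
    xmin, ymin, xmax, ymax = sav_bounds
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    n = max(1, int(((dx * dx + dy * dy) ** 0.5) / step_ft))
    for i in range(n + 1):
        x, y = p0[0] + dx * i / n, p0[1] + dy * i / n
        if not (xmin <= x <= xmax and ymin <= y <= ymax):
            return False  # leg leaves the Safe Air Volume
        if any((x - cx) ** 2 + (y - cy) ** 2 < r * r for cx, cy, r in nfzs):
            return False  # leg penetrates a No Fly Zone
    return True
```

A pop-up NFZ simply invalidates the affected legs, which is what triggers the onboard re-plan described above.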
Mission Manager. It will be noted that the flow of information in Figure 3 does not precisely correspond to the
arrows shown in Figure 1. Instead, the Mission Manager acts as the data hub, communicating with all the major
systems and marshaling information as needed for the phase of flight, contingency scenario, and real-time input
from the Perception system, the FO through the handheld interface, and the AVO through the MOB GCS. Under
nominal operations, the Mission Manager sequences the system from take-off to en route cruise to landing, feeds
waypoints to the Trajectory Planner as the mission proceeds, monitors vehicle location and requests permission to
land from the FO at the COP at the appropriate time, etc. When contingencies and off-nominal situations arise5, the
Mission Manager contains the mission level decision-making logic, and manages communications between ground
elements and, for instance, the Perception system to relay LZ evaluation information to the user.
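The nominal sequencing role of the Mission Manager can be pictured as a small state machine. The phase names, event names, and transition table below are illustrative assumptions, not the TALOS design; contingency handling is omitted:

```python
# Minimal sketch of nominal mission-phase sequencing (illustrative only).
class MissionManager:
    TRANSITIONS = {
        ("PREFLIGHT", "avo_launch"): "TAKEOFF",
        ("TAKEOFF", "airborne"): "ENROUTE",
        ("ENROUTE", "fo_permission_to_land"): "APPROACH",
        ("APPROACH", "wave_off"): "ENROUTE",   # FO or autonomy aborts approach
        ("APPROACH", "touchdown"): "UNLOAD",
        ("UNLOAD", "fo_clearance_to_takeoff"): "RETURN",
    }

    def __init__(self):
        self.phase = "PREFLIGHT"

    def advance(self, event: str) -> str:
        # Unknown (phase, event) pairs hold the current phase.
        self.phase = self.TRANSITIONS.get((self.phase, event), self.phase)
        return self.phase
```

Note that the wave-off transition returns the system to the en route phase, from which a new approach can be sequenced.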
Trajectory Planning. In virtually all current UAS, a route like the one shown in Figure 4 is the final result of the
planning process, and the waypoints are sent directly to the FCS, e.g. in a ‘from-to-next’ format. The aircraft ‘fairs
in’ a pathway that smoothly transitions between the route legs, flying near, over, or around the waypoints as
determined by the waypoint type. This method works well when waypoints are many hundreds of feet apart, and the
route is at high altitude with no obstacles or sudden appearance of threat zones.
Flying at low altitudes in tactical situations and landing at unprepared sites in austere, perhaps mountainous or
cluttered (e.g. urban) terrain, where threats and obstacles may appear at any time, requires a next-generation,
reactive capability for path planning and following. TALOS provides an open architecture framework and
associated interfaces to provide this capability, and the Trajectory Planner developed by Carnegie Mellon University
(CMU) provides the integrated solution to the trajectory planning problem within the TALOS framework. The
inputs to the Trajectory Planner are similar to those of a typical route-following FCS, but the output is in the form of
a dynamically feasible spline. A spline is a general representation of a curve through space, which is easily adapted
to a variety of FCS tracking methods, such as those used by the ULB and the RASCAL. Aurora, CMU, and Boeing
developed an open ICD that allows for communication of dynamically-changing splines through a MIL-STD-1553
interface to the ULB FCS. The ULB follows points along this spline using a proprietary trajectory following
method that does not require changes to the inner, safety-critical loops of the FCS.
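As a concrete illustration of trajectory hand-off by spline, the sketch below evaluates one cubic Hermite segment, a common parametric-curve form; the actual spline representation exchanged over the 1553 ICD is not public, so this is only a stand-in:

```python
# One cubic Hermite segment: endpoint positions p0, p1 and velocities
# v0, v1 over duration T. A sequence of such segments forms a smooth,
# dynamically parameterized curve an FCS can sample and track.
def hermite(p0, v0, p1, v1, T, t):
    """Position at time t in [0, T] along the segment (per-axis blend)."""
    s = t / T
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return tuple(h00 * a + h10 * T * va + h01 * b + h11 * T * vb
                 for a, va, b, vb in zip(p0, v0, p1, v1))
```

Because the endpoint velocities are explicit, consecutive segments can be joined with continuous velocity, which is what makes the curve followable without discontinuous commands to the inner loops.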
As shown in Figure 1, the Trajectory Planner takes inputs from three components: (1) The Mission Manager, which
provides the route points from the Route Planner, as well as mission phase, contingency information, and relayed
information from the ground control systems; (2) The Perception system, which provides a 3-dimensional map of
detected obstacles in the environment, as well as LZ evaluation results; and (3) The Vehicle Dynamics Lookup
(VDL) module, which provides information about the rotorcraft’s system dynamics. The VDL is broken out as a
separate module for portability: only the VDL changes when TALOS is moved to a different rotorcraft.
The Trajectory Planner performs a set of parallel, on-line optimizations to determine the best dynamically feasible
path, around the obstacles and other constraints, that achieves the waypoints provided by the Route Planner. This
problem is complicated by several factors. First, the perceived environment is constantly changing as the aircraft
flies through the environment and new obstacles are discovered. Second, if an aggressive maneuver is required, it
must be planned quickly (within about 1 second) and be dynamically feasible. Finally, the aircraft cannot
instantaneously track a new path, so the Trajectory Planner must respect a ‘look-ahead’ or transition time; new paths
must smoothly transition or ‘branch’ from the path that has already been sent to the FCS, with sufficient advance
notice for FCS algorithms to adjust. Other Trajectory Planning features are detailed in References 17, 18, and 19, and include (1) the ability to apply parallel, alternate trajectory planning techniques in an ‘ensemble’ and choose the best solution, (2) guaranteed safety through the inclusion of a library of deterministic safety maneuvers, and (3) specialized techniques to enable landing into difficult environments with winds.
5 Situations that require immediate reaction are handled by the Trajectory Planning and Perception systems, which work together to ensure the safety of the aircraft. Thus some lower-level decision-making resides with these systems, rather than with the Mission Manager.
Figure 5 shows a typical result.
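The branching behavior described above (new paths must coincide with the already-committed path until a look-ahead time has elapsed) can be expressed as a simple acceptance test. The 1-second look-ahead and 1-ft tolerance are illustrative values, and trajectories here are plain (t, x, y) sample lists:

```python
# Sketch of the look-ahead ('branch') constraint: a replacement trajectory
# must reproduce the committed trajectory from the current time until
# t_now + lookahead, so the FCS sees a smooth hand-off.
def branch_ok(committed, candidate, t_now, lookahead_s=1.0, tol_ft=1.0):
    t_branch = t_now + lookahead_s
    cand = {round(t, 3): (x, y) for t, x, y in candidate}
    for t, x, y in committed:
        if t_now <= t <= t_branch:
            if round(t, 3) not in cand:
                return False  # candidate drops a committed sample
            cx, cy = cand[round(t, 3)]
            if abs(cx - x) > tol_ft or abs(cy - y) > tol_ft:
                return False  # candidate diverges before the branch point
    return True
```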
The Perception system and Trajectory Planner work
hand-in-hand, at high bandwidth, and perform critical
decision-making functions without significant
interaction with the higher-level Mission Manager.
Near Earth Autonomy and CMU have described this
capability in references 4, 5, 8, and 11-15. Here we
focus on the ‘end game’ of approach and landing,
which is the most critical stage of perception and
trajectory planning, because the AES is most likely to
encounter obstacles, and must quickly evaluate the LZ
and decide where to land. For the AACUS program, the TALOS team was required to perform a high-speed
approach along a shallow flight path, similar to the way a pilot would approach to minimize exposure to enemy fire.
This further pushed the Perception system and associated autonomous interaction with the Trajectory Planner in this
critical phase of the resupply mission. For this reason, we describe the end game in detail, to highlight the
Perception system functionality and interaction with the rest of the TALOS components.
Figure 6 shows the details of an approach, for a 5-degree profile. Ground speed during most of the missions demonstrated by TALOS is 100-120 knots, with the aircraft decelerating as it reaches the final phase shown here, so that at 50 seconds from touchdown, velocity has decreased to ~30 knots, and the system is decelerating at about 2.5 ft/s². At approximately 90 seconds from landing, the perception system transitions from scanning the immediate
area in front of the aircraft (to sweep out a large safety region in the direction of flight) to focused scanning of the
LZ. The LZ is a large area (150-500 ft. in radius) near where the operator has requested that the system land; the
goal is to land precisely where requested, in the center of this region. However, TALOS scans over the wider LZ
region to maximize the opportunity for a successful mission, realizing that (1) the FO may not have had sufficient
opportunity to survey the LZ; (2) conditions in the LZ or within the specific TZ chosen may have changed; and (3)
landing within a few hundred feet of the requested Landing Point is often acceptable in an operational setting.
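As a back-of-the-envelope check on the approach numbers above, assuming a constant-deceleration final segment and a straight 5-degree glideslope (both simplifications of the demonstrated profile):

```python
import math

KT_TO_FPS = 1.68781  # one knot in feet per second

def stop_profile(speed_kt: float, decel_fps2: float):
    """Time and along-track distance to decelerate to a hover at a
    constant rate; a rough model of the final approach segment."""
    v = speed_kt * KT_TO_FPS
    return v / decel_fps2, v * v / (2.0 * decel_fps2)

def altitude_on_glideslope(range_ft: float, angle_deg: float = 5.0):
    """Height above the landing point at a given along-track range."""
    return range_ft * math.tan(math.radians(angle_deg))
```

For a 30-knot entry at 2.5 ft/s² this yields roughly a 20-second, 510-ft segment at about 45 ft of altitude on the glideslope; the demonstrated profile varies deceleration over a longer interval, so treat these outputs only as rough scale checks.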
The Perception system and Trajectory Planner consider a wide variety of requirements when selecting suitable TZs.
Figure 5 Obstacle avoidance scenario. During a test
flight on the ULB, the Trajectory Planner successfully
planned a trajectory around an obstacle detected by the
Perception System. Inset shows the real-time Perception
system output and planned trajectory. The ULB received
and tracked this maneuver in real time, avoiding the peak.
Figure 6 End-game scenario. In the case shown, the user-
selected touchdown zone (red) is invalid (too rough or
sloped, or too close to an obstacle), so the Perception system selects two other locations (green and blue). The FO
chooses the blue alternate, and the trajectory deviates from the red dashed to the green trajectory.
The zones must be free of obstacles, and sufficiently smooth and of low enough slope to enable safe landing. The trajectory to the LP must also be feasible; that is, the LP must be not only feasible to reach, but also feasible to wave off from while approaching, and to take off from after the aircraft has landed. In addition, obstacles may exist along the
straight-line path to the desired or alternate TZs. These complicate the trajectory planning problem, and may cause
occlusions that the Perception system must address. Thus a difficult perception, trajectory planning, and decision-
making problem must be solved as part of the TZ selection process.
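The geometric portion of TZ suitability (smooth enough, flat enough, free of protruding features) can be caricatured with a single pass over the lidar returns in a candidate zone. The real evaluator fits planes and reasons about discrete obstacles; the thresholds below are purely illustrative:

```python
import math

def tz_suitable(points, radius_ft=30.0, max_slope_deg=6.0, max_dev_ft=0.5):
    """Crude stand-in for TZ evaluation. points are (x, y, z) lidar
    returns (ft) inside one candidate zone. Slope is estimated from the
    elevation spread across the zone diameter; any return deviating more
    than max_dev_ft from the mean ground level is treated as a rock,
    ditch, or other disqualifying feature."""
    zs = [z for _, _, z in points]
    mean_z = sum(zs) / len(zs)
    slope_deg = math.degrees(math.atan2(max(zs) - min(zs), 2.0 * radius_ft))
    return slope_deg <= max_slope_deg and all(abs(z - mean_z) <= max_dev_ft for z in zs)
```

In the actual system this per-zone verdict is only one input; reachability, wave-off feasibility, and occlusions along the approach path are evaluated jointly, as discussed above.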
Further complicating this process is the need to communicate with the FO, who is monitoring the approach on a
handheld device. As demonstrated for the first phase of the AACUS program, the AES lands under a ‘management by consent’ policy; that is, it must request and receive approval to land at an alternate TZ before proceeding to that zone. The FO also maintains the authority to wave off the AES until very close to touchdown; this is necessary to
avoid putting personnel on the ground (or the rotorcraft itself) in danger in difficult combat scenarios. To address
this requirement, a time line of communication and decision-making was developed, which places difficult
requirements on the Perception system. The range at which LZ evaluation can be performed at tactical approach
speeds directly impacts TALOS’ ability to communicate suitable TZs to the FO, so that those TZs can be approved.
Figure 7 shows examples of typical LZ evaluations – starting with registered lidar data in an obstacle map,
proceeding to processed evaluation results (which are used by the Trajectory Planner), and finally as they appear on
the Field Operator’s handheld device for approval.
Referring now to both Figures 6 and 7, we can describe the interaction of components. The Perception system and
Trajectory Planner together determine whether the requested landing point is feasible. If it is, the landing proceeds
without the need for additional operator input: this location was requested and is thus pre-approved. The FO can, of course, wave off TALOS if conditions have changed (the FO can also change the request while the AES is en route to adjust for changing conditions). If the landing point is infeasible, alternates based on detailed LZ evaluation are presented as shown at right in Figure 7. The FO has the option of (1) touching one of these alternates to give consent to land there, (2) commanding a wave-off, or (3) doing nothing, in which case a wave-off automatically
occurs. In our field trials, an FO with 15 minutes of training was able to make this decision within about 3 seconds.
Feedback from the operator and others resulted in adding audible and haptic alerts (sounds and vibration) to the
handheld device whenever consent is required, in recognition of the fact that the FOs in dangerous scenarios are
unlikely to devote their full attention to the interface. They also have a natural tendency to watch the aircraft and
not the handheld device.
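The consent interaction described above (approve an alternate, command a wave-off, or let silence default to a wave-off at the decision deadline) reduces to a small decision rule. The deadline value here is an illustrative placeholder, not the fielded timeout:

```python
# Sketch of the 'management by consent' rule: silence defaults to the
# safe action (wave-off) once the decision deadline passes.
def consent_outcome(fo_input, elapsed_s, deadline_s=10.0):
    """fo_input: None, 'wave_off', or an alternate-TZ id (e.g. 'blue')."""
    if fo_input == "wave_off":
        return "WAVE_OFF"
    if fo_input is not None:
        return ("LAND", fo_input)   # consent given: divert to that TZ
    if elapsed_s >= deadline_s:
        return "WAVE_OFF"           # no response: wave off automatically
    return "WAIT"
```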
The M3 Sensor Suite. The requirements placed on the sensor suite derive from the scenario described above, as well as from a variety of other scenarios and conditions. Near Earth Autonomy developed the ‘M3’ sensor suite
(Figure 8) specifically to meet these requirements, making it one of the most capable perception systems extant.
M3 employs an industrial grade surveying lidar in a nodding configuration that has programmable motion: nodding
can be ‘focused’ on an area of interest, e.g. a landing zone at long range, which encompasses only a narrow vertical
field of view. Deeply embedded, high-grade GPS/INS data provides the pointing and registration accuracy needed
to meet the extreme requirements for landing-zone evaluation from a moving platform at a shallow flight path
angle; the scan-and-evaluate accuracy is further enhanced by onboard real-time registration-correction algorithms.
Onboard algorithms also provide wire detection, obscurant penetration, sensor steering, and localization in
GPS-denied settings.
Figure 7 Landing Zone evaluation and Field Operator negotiation. Left: precisely registered, fine-scale
obstacle map based on lidar scans (colorized by altitude); Center: color-coded LZ evaluation, where green
locations are suitable for landing; Right: options as presented to the FO. The red location is the requested,
infeasible location; the green and blue locations are alternates; the blue location is the one chosen by the FO, to
which the AES autonomously diverts on consent.
The M3
capabilities are further enhanced by fusing EO/IR
data that is co-registered with the lidar data; this
allows the AES to, for instance, land to a marker (a
20”x72” standard-issue nylon ‘VS-17’ emergency
signal panel) in cases where the FO does not have
accurate landing point location information. EO/IR
cameras also support GPS-denied operations.
As part of the open architecture development
process, ICDs were developed for communication
with the Perception system. These ICDs govern
communication for trajectory planning, sensor
steering, localization in GPS-denied settings, and
landing to visual landmarks such as VS-17 panels.
Grid definitions and parameters for obstacle mapping in either a ‘scrolling’ environment (roughly centered around
the vehicle) or a static setting (centered around the LZ) are provided. In both of these settings, two types of data are
provided: the occupancy at each grid location, and the position of the nearest obstacle to each location. Thus there
are four potential combinations of grid type (scrolling or static) and grid content (occupancy or nearest obstacle).
Only three of these four are actually supplied, since a static grid of nearest-obstacle distance provides all of the
necessary utility for trajectory planning. The static grid is centered at the LZ, providing terminal area planning
information, while the scrolling grids (both types) provide en route planning information. Information is published
using the Robot Operating System (ROS) publish-subscribe framework. This framework, however, is not well
suited to the high bandwidth and precision at which occupancy information changes. To mitigate this problem, only
the changes to each cell’s occupancy level are published. This, together with the ability to provide information at
various levels of obstacle grid resolution, allows for more efficient transfer of data.
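The change-only publishing scheme can be illustrated with a minimal sketch. The functions below are a pure-Python stand-in for the ROS publish-subscribe traffic; the dict-based cell indexing and the use of None to mark a vacated cell are illustrative conventions, not part of the ICD:

```python
def occupancy_deltas(prev, curr):
    """Return only the cells whose occupancy changed between two grid snapshots.

    Grids are dicts mapping (row, col) -> occupancy level. Publishing just the
    deltas avoids re-sending the full grid at every sensor update, as described
    above. Cells that left the grid (e.g. scrolled out) are reported as None.
    """
    changed = {}
    for cell, value in curr.items():
        if prev.get(cell) != value:
            changed[cell] = value
    for cell in prev:
        if cell not in curr:
            changed[cell] = None
    return changed

def apply_deltas(grid, deltas):
    """Subscriber side: patch a local copy of the grid with received deltas."""
    for cell, value in deltas.items():
        if value is None:
            grid.pop(cell, None)
        else:
            grid[cell] = value
    return grid
```

A subscriber that starts from the last full snapshot and applies each delta message in order reconstructs the publisher's grid exactly, at a fraction of the bandwidth of full-grid updates.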
Human-system interfaces play a critical role in successfully carrying out the AACUS mission. The focus during the
first phase of the AACUS program was on design and validation of the FO handheld, which is a new concept for
UAS control, in which a Marine who is untrained in UAS operations, flight vehicle characteristics, or LZ
preparation submits an Assault Support Request (ASR) using an ‘app,’ which he/she learns about in a short (15-
minute) training process (e.g. a training video, pamphlet, or in-app walk-through). Such a minimal level of training
is enabled by two features of the TALOS FO human interface: first, it relies heavily on the commonality of the ‘look
and feel’ of applications developed for smart phones and tablets. Touching icons or map locations to elicit effects,
scrolling and pinching a map, sending and receiving texts, and using menus to get to desired functions are performed
in common ways with which all Marines are familiar and/or can master very quickly. The second feature of the
human interface is that it minimizes the decision-making that the FO must do, relying instead on TALOS’ inherent
autonomy. The FO does not evaluate the feasibility of the landing points, steer the rotorcraft along a safe path,
evaluate the specific properties of the terrain, or consider approach direction, winds, or other features of the mission.
Neither does the FO consider any of the high-level mission parameters that a typical AVO must set; these are
handled by a combination of onboard autonomy (e.g. re-planning if a new threat region or NFZ appears) and, when
necessary, reach-back to the AVO at the MOB.
The TALOS team, led by Applied Research Associates (ARA) and MIT Professor David Mindell, performed a
formal human interface design process, which began with interviews with Subject Matter Experts (SMEs) and future
users (Marines) to inform a cognitive task analysis of the resupply mission, and to understand the roles at the MOB,
the COP, and in the cockpit (to inform the role of TALOS). This analysis led to the development of requirements
and design concepts, with additional input from SMEs, engineers, and designers in brainstorming sessions. The
initial design for the FO (the various screens in the application, the functionalities and actions, etc.) was then
mocked up on paper, forming a set of ‘storyboards’ (Figure 9) for the relevant AACUS mission use cases.
Figure 8 M3 sensor suite. Foreground: lidar in nodding enclosure. Background: EO and IR cameras. The
system is mounted on a standard external mount, and includes components internal to the unit and the aircraft:
GPS/INS, custom embedded components, and software.
These storyboards were validated, and refinements
motivated, by additional design reviews (“walk-
throughs”) attended by SMEs. Refined requirements
drove detailed system integration and software
development, which was performed by Kutta, Inc. for
the iOS operating system on an iPad mini.
The process described here was made much more
effective through the use of well-known military
standards and protocols, and by leveraging Kutta
expertise in these operational UAS techniques.
Specifically, the request for supplies is submitted as
an ASR (also referred to as a “9 line”), which is a set
of information that Marines are routinely trained to
provide. The ASR is filled out on a request page in
the app that auto-supplies as much information as
possible, and provides map-, GPS-, and
magnetometer-aided features for determining the
coordinates of the desired delivery location. Aids are
also available to allow the FO to radio in the request, if operational communications limitations do not permit direct
connectivity to the MOB GCS. Once the request is submitted, it is managed using Cursor-on-Target (CoT)
protocols. CoT is common
in the military, and its use allows for integration of ASRs into the broader military planning environment. Finally,
FO handheld communication with the AES conforms to an abbreviated version of STANAG-4586; although this is
transparent to the user, it is important to maintain an easily communicated, open architecture.
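Because CoT messages are simple XML events carrying a single point, the shape of a request as it enters the planning environment can be sketched briefly. This is an illustrative sketch of the generic CoT ‘event’ element, not the AACUS message set; the uid, type, and how strings below are hypothetical placeholders:

```python
import xml.etree.ElementTree as ET

def asr_to_cot(uid, lat, lon, time_iso):
    """Wrap a delivery request's location in a minimal Cursor-on-Target event.

    Sketch only: the type and how codes are hypothetical, and a fielded
    message would carry distinct time/start/stale values and request details.
    """
    event = ET.Element("event", {
        "version": "2.0",
        "uid": uid,
        "type": "b-r-f-h-c",   # hypothetical request type code
        "time": time_iso,
        "start": time_iso,
        "stale": time_iso,
        "how": "h-g-i-g-o",    # illustrative 'human-entered, GPS-aided' code
    })
    # CoT events carry their location in a 'point' child element.
    ET.SubElement(event, "point", {
        "lat": f"{lat:.6f}", "lon": f"{lon:.6f}",
        "hae": "0.0", "ce": "10.0", "le": "10.0",
    })
    return ET.tostring(event, encoding="unicode")
```

Keeping the request in this common event form is what allows an ASR to flow into the broader military planning environment alongside other CoT traffic.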
The final design stage involved validation testing with Air National Guard security forces, an accessible military
population who agreed to volunteer and who also shared characteristics with our intended Marine user. Validation
testing was intended to meet multiple goals:
1. Evaluate the feasibility of a new user (any Marine) performing the FO role after a 15-minute orientation to
the interface;
2. Determine the best method for performing the time-critical “Infeasible Touchdown Zone” negotiation
(described in the previous section), for which ARA had designed three options;
3. Identify components for re-design or additional training; and
4. Validate the overall interface.
To address these goals, ARA developed and executed a procedure that included an orientation session, a
performance measurement (13-question test), and a validation and usability evaluation. The training comprised a 15-
minute orientation to the interface, which used screenshots of the app on an iPad mini. The test consisted of a series
of questions, including how participants would
engage the interface across a variety of scenarios
(e.g., point to/push/simulate appropriate action on the
screenshot). The test was given both before and after
the training, to assess how intuitive the screens were
by themselves without orientation. ARA measured
participants’ accuracy of response (correct/incorrect)
in addition to participants’ time to completion.
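The two measures ARA recorded can be summarized with a simple scoring sketch (hypothetical: the actual 13-question instrument and its scoring protocol are described only at this level of detail here):

```python
def score_session(responses):
    """Summarize one test session: fraction correct and mean time to completion.

    responses: list of (is_correct, seconds) tuples, one per question,
    mirroring the accuracy and timing measures described above. Illustrative.
    """
    if not responses:
        raise ValueError("no responses recorded")
    n_correct = sum(1 for ok, _ in responses if ok)
    mean_time = sum(t for _, t in responses) / len(responses)
    return n_correct / len(responses), mean_time
```

Comparing these summaries before and after the orientation is what separates how intuitive the screens are on their own from what the 15-minute training adds.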
Validation testing demonstrated the feasibility of the
brief orientation session. More importantly, the
testing provided valuable feedback on specific design
questions, and ideas for redesign, which the
participants generated from the testing experience.
ARA folded the feedback into our designs to create a
final, validated design. Figure 10 shows the results of
the performance testing.
Figure 9 Storyboard example. The FO handheld design
was refined by validating screenshots using a detailed
storyboard, which consisted of dozens of pages like the
above, covering the primary use cases.
Figure 10 Performance before and after 15 minutes of
training. Data indicates that (a) the interface improved based
on design changes, (b) the interface is intuitive (high pre-test
scores), and (c) 15 minutes of training is sufficient.
It is notable that, unlike many autonomy programs, the cognitive task analysis and use case descriptions drove much
of the TALOS design process, and the interface requirements drove software development details. This is in sharp
contrast to most autonomy programs, where the autonomous operations are considered first, and human interfaces
are introduced after-the-fact by the engineers developing the autonomy. We believe that the efficacy of the “human
plus autonomy” team is much higher if a “humans first” design process is used.
Although the full cognitive design process described above (and detailed in Reference 18) was only completed for
the FO handheld, the process was begun for the interface within the MOB GCS, and made sufficient progress to
allow us to understand the roles at the system level, to provide requirements for the autonomy software, and to begin
laying out the initial MOB GCS interface design (see Reference 19). This design is being further evaluated and
refined during the Phase 2 effort, which is currently underway.
The ONR AACUS Phase 1 INP was an aggressive technology development and demonstration program, executed
over the 18-month period between the end of September 2012 and the end of March 2014, with a culminating flight
demonstration in February 2014. Our goal was to demonstrate retrofit of the system on an existing rotorcraft UAS.
The Boeing ULB provides a mature platform for such demonstrations, supported by an agile, experienced team of
flight control, integration, and flight test personnel.
The nominal AACUS mission starts with an Assault Support Request made by the FO using the handheld device or
radio. This initiates pre-flight planning and vehicle loading, during which the FO handheld device keeps the FO
informed of the status of the planning process. The delivery flight itself consists of take-off and climb-out, en route
flight, contact with the FO to request permission to proceed, approach, and landing. The autonomous system must
plan the route, perform sensor steering and perception (as governed by the necessity to clear a safe zone around the
vehicle and perform LZ evaluation), detect and adjust the trajectory to deviate around obstacles en route and on the
ground, evaluate the LZ and re-direct the vehicle to a suitable touchdown point, interact with the FO, and perform a
pilot-like landing without hover. Contingencies for lost link, operator wave-off, and autonomous wave-off must
also be addressed.
Details of the integration process, through the interfaces that cross the blue boundary between TALOS and the
VTOL UAS in Figure 1, are given in previous sections. Integration with the MOB GCS and Mission Manager
(upper right in Figure 1) was made seamless by ULB’s use of a STANAG-4586 compliant Vehicle Specific Module
(VSM) within the Vehicle Management System (VMS). Some details of the STANAG message sequencing were
adjusted, and an additional Ethernet port on which the Mission Manager could listen to ‘multicast’ STANAG
messages was created. The VMS software was also altered to send a ‘TALOS mode’ message to the ULB FCS.
In TALOS mode, the ULB’s FCS switched from
taking waypoint commands from the VMS to taking
spline commands from TALOS (lower right interface
in Figure 1). The ULB inner-loop control laws were
not changed; rather the source of the path to follow
was altered to come from the TALOS (as splines)
instead of the Vehicle Management System (VMS, as
waypoints). Integrated testing of this interface was
performed in March 2013, using pre-computed splines
loaded from a file at this stage of the program, the
autonomy software would not be complete for several
months, but it was important to verify the ICD and
1553 interface, to validate that the ULB was able to
follow splines, and to characterize tracking
performance. Figure 11 shows typical results.
Six additional Flight Test Periods (FTPs), of
approximately two weeks’ duration each, were carried
out between September 2013 and February 2014.
Figure 11 ULB tracking performance test results.
Each FTP tested a more sophisticated combination of subsystems in the TALOS-ULB integrated system, and drove the
development process. During the later FTPs, testing became more focused on validating required performance on
eight mission scenarios, developed with ONR: a nominal mission, a mission requiring negotiation with the FO to
pick an alternate landing site, two missions in which pre-flight and a pop-up (mid-mission) NFZs were introduced, a
communication outage contingency, an operator wave-off contingency, a landing to a location designated by a VS-
17 panel, and a scenario in which the FO does not respond when selection of an alternate LZ is requested. Required
behavior in all of these cases was specified in detail by ONR, with input from the TALOS team.
For the demonstration at Marine Corps Base Quantico on February 26-27, 2014, the missions performed were
modified versions of the AACUS mission, to allow a variety of landings and contingencies to be validated. The
ULB took off under manual control and was flown to a pre-specified location. The safety pilot engaged TALOS and was then
‘hands off’ until after landing (or after reaching an abort point), monitoring the system behavior for safety using
specialized displays. The FO’s initial request for supplies, and subsequent mission planning, were abbreviated since
some of the operational considerations for request and planning were not in play. A researcher playing the role of
the AVO serviced basic FO requests and uploaded the mission parameters to begin each test. Each mission covered
approximately 6 nautical miles; the last 3.4 nautical miles were timed for speed, with the goal of performing the
approach and landing in under 4 minutes. The FO role for all the demonstrations was performed by a Marine with
15 minutes of training on how to request supplies and designate the desired landing point, monitor the mission,
interact with the AES in the case of an infeasible touchdown zone, and perform wave-offs and other contingency
procedures.
During the two-day demonstration period, the TALOS team flew 24 missions (from pilot engagement of TALOS to
pilot take-over at the end of the mission) during four separate flight events (periods when the helicopter was either
airborne or idling) of about 1.5 hours each. Each of the 24 missions was initiated en route to the first waypoint of
the mission. 15 of these missions resulted in successful autonomous landing to the ground without any pilot input
and 7 more missions resulted in the system successfully redirecting to a wave-off or abort point due to either a
planned or unplanned contingency; the pilot took over as planned upon arrival at the contingency point. One
mission was interrupted because of a (non-TALOS related) barometric pressure problem, and one mission was
interrupted because the FO handheld ‘hung up’ and required a reboot. The TALOS system achieved times of 3:18,
3:21, and 3:26 in the three timed missions of the demonstration, well under the 4-minute goal.
The 15 landings performed were from three different approach directions, and many required a deviation from the
FO-chosen landing point to a nearby landing point, because the chosen point was too sloped, or contained obstacles
or terrain features that rendered that point infeasible for landing. In some of these cases, hay bales were specifically
placed at the requested landing coordinates, to cause the system to negotiate with the FO and go to a different
touchdown zone. The system was required to detect and avoid an obstacle no larger than 3 ft. in each dimension;
this was demonstrated in a previous FTP. Other landings included three landings for which the approach vector was
modified so that the landing was into the wind, four landings that deviated around No-Fly Zones (Figure 4), and two
landings to a visual landmark (a VS-17 marker panel). Some specific examples are given below.
Flight path deviations during approach to land. Figure 12, left, shows a typical landing case. The AES is
autonomously approaching the landing zone from the upper right of the picture. The approach angle is
approximately 5 degrees, which is shallow enough that the trees (rainbow colored because the picture is a height
map) are too close to the nominal flight path (yellow). The Trajectory Planner automatically re-plans the trajectory
to avoid the trees. Further evaluation of the requested touchdown zone (the 30 ft. area around the yellow dot) led the
Perception system, Trajectory Planner, and FO, through the negotiation process and human interface described
above, to choose an alternate touchdown zone and deviate to that location (magenta line). All of this happened
automatically, within the last 20 seconds of flight, and was repeated several times, under a variety of conditions,
during the demonstration (Figure 12, right).
No-Fly Zone (NFZ) replanning. NFZs added far ahead of the aircraft, or during the planning phase, are handled by
the Route Planner. This was demonstrated, yielding the flight test data shown in Figure 13. In this figure, the route
is initially planned without the NFZ in place, and then the NFZ is added. The automatic replan was conducted in the
MOB GCS in this case, but the capability has since been demonstrated using the onboard Route Planner.
Note in Figure 13 that the blue trajectory ‘fairs in’ the pathway described by the waypoints. This behavior requires
that buffers around NFZs be incorporated to ensure that the faired-in trajectory does not violate the NFZ. During the
demonstrations, this buffer was incorrectly set to 100 m (328 ft.) instead of 150 m (492 ft.), causing the aircraft’s
path to briefly ‘clip the corner’ of the NFZ. This is unacceptable behavior; it resulted in changes to the operation of,
and interfaces between, the Route Planner and Trajectory Planner, which now automatically incorporate flight
corridors that respect the NFZs and plan trajectories in real time that do not violate these corridors, preventing this
type of error.
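The lesson of the corner-clipping incident can be illustrated with a small buffer-validation sketch (hypothetical: a circular NFZ and a path sampled at discrete points; the real NFZ geometry and trajectory representation are not specified here):

```python
import math

def min_clearance(path, nfz_center, nfz_radius):
    """Smallest distance from any sampled path point to the NFZ boundary.

    path is a list of (x, y) points in meters; a negative result means the
    path penetrates the NFZ itself.
    """
    cx, cy = nfz_center
    return min(math.hypot(x - cx, y - cy) for x, y in path) - nfz_radius

def respects_buffer(path, nfz_center, nfz_radius, buffer_m):
    """True if every sampled point clears the NFZ by at least buffer_m.

    The faired-in trajectory, not just the waypoints, must pass this check:
    with the buffer mistakenly set to 100 m instead of 150 m, a path that
    passed the smaller check could still clip the corner of the buffered NFZ.
    """
    return min_clearance(path, nfz_center, nfz_radius) >= buffer_m
```

Validating the continuous trajectory against the buffered NFZ in real time is exactly the corridor check that the revised Route Planner / Trajectory Planner interface enforces.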
Figure 12 Trajectory re-plans due to obstacles and changed Landing Point: Left: typical approach through a
clearing in the trees; the desired landing point is infeasible. Right: detail showing deviation through trees; the green
line is the original path, the magenta line is the re-planned path. This case landed to the requested landing point.
Figure 13 TALOS Route Planner auto-routing around a no-fly-zone. Purple area is accommodated by laying
in additional waypoints, separate from the ‘goal points’ provided by the AVO. Blue line is flight data.
Pop-up NFZs may be too close to the current vehicle position to allow for re-routing using waypoints, so the
Trajectory Planner (instead of the Route Planner⁶) is responsible for making the necessary adjustment and changing
the trajectory to avoid the NFZ. Figure 14 shows an example of this behavior. After a pop-up NFZ is added along
an existing leg of a route, a trajectory is planned around it; the new trajectory terminates at the next waypoint in the
route, which in this case is the landing point. As with obstacle avoidance, minimal interaction with other TALOS
components is required for such reactive planning, as long as the deviation required does not jeopardize other
aspects of the mission. For the demonstration, pop-up NFZs were placed and uplinked by the Marine on the ground;
therefore the NFZ location was not known in advance (the Marine was instructed on the general location and time to
enter the NFZ). This procedure demonstrated the functionality, intuitiveness, and ease of use of the FO handheld
interface.
Contingencies. The behavior during contingencies was specified by ONR to be completely autonomous, under the
CONOPS that communication with the AVO at the MOB GCS may be unavailable during relatively long periods of
operation. Thus the goal is to enable delivery, including any contingencies that might occur, even if
communications with the MOB are unavailable. Communication with the FO at the COP, on the other hand, is
required during certain phases of the mission, to ensure that the FO is aware of the AES state and that both the AES
and the FO are able to communicate their intent⁷. Based on this CONOPS, two contingency points (corresponding
to Contingency Points A and B in a standard STANAG-4586 route) are created by the AVO for each mission. The
wave-off point is the point to which the AES diverts if the FO sends a wave-off command. The abort point is the
point to which the AES goes if the delivery must be aborted, e.g. if communications are lost to the FO, if the FO is
unresponsive, or no feasible touchdown zone can be found. Successful handling of contingencies was demonstrated
for all these cases.
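The contingency dispatch described above can be sketched as a simple mapping from event to diversion point (the event names are illustrative, not the TALOS message set; which STANAG contingency point serves which role is as stated in the text):

```python
def contingency_target(event):
    """Map a contingency event to the point the AES autonomously diverts to.

    Per the CONOPS above: an FO wave-off command sends the AES to the
    wave-off point, while lost FO communications, an unresponsive FO, or the
    absence of any feasible touchdown zone abort the delivery. Event names
    here are hypothetical placeholders.
    """
    if event == "fo_wave_off":
        return "wave_off_point"
    if event in ("fo_link_lost", "fo_unresponsive", "no_feasible_tdz"):
        return "abort_point"
    return None  # not a contingency; the mission continues
```

Because both targets are loaded as part of the mission route before launch, this dispatch needs no link to the MOB to execute, which is the point of the CONOPS.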
6 During Phase 1, the Trajectory Planner was responsible for all NFZs added after mission initiation. In Phase 2, an
approach that balances the use of the Route Planner and Trajectory Planner for addressing NFZs has been developed.
7 TALOS can also operate completely autonomously, i.e. without approval or monitoring by the Field Operator.
This CONOPS was not part of the Phase 1 effort, but will be tested during Phase 2.
Figure 14 TALOS Trajectory Planner performing on-line trajectory planning around a pop-up No-Fly
Zone. Red circle is accommodated by a change in the trajectory, which must avoid the NFZ while still
progressing to the next waypoint in the route. Dashed line is originally planned trajectory, green line is the
replanned trajectory, and blue line is flight data.
Because wave-offs and aborts can happen at any time during a mission, and the SAV and NFZ constraints must still
be satisfied as the rotorcraft makes its way to the wave-off or abort point, onboard route and trajectory planning are
required. Planning from various points in the mission to these contingency points, without violation of the
constraints, was demonstrated in 8 different missions during the 2-day demonstration period.
The AACUS program is pushing the performance boundaries of technologies for autonomous re-supply to austere
locations under the supervision of ground-based warfighters. To ensure smooth transition, the technologies under
development are being incorporated into an end-to-end architecture with carefully designed interfaces and command
and control mechanisms. The interfaces ensure that the TALOS architecture has a pathway to integration into future
DoD VTOL UAS, without requiring significant re-work. The command and control mechanisms are consistent with
current military standards, although they push those standards in new ways to enable new capabilities.
In this unique ‘technology for transition’ setting, the state of the art in perception, trajectory and route planning,
autonomous mission management, and human-machine teaming are all being pushed forward. By requiring these
technologies to work in a realistic, open architecture under demanding conditions driven by operational
requirements, the AACUS program is not only advancing the key technologies, but doing so in a realistic way,
providing a strong chance for transitioning to the warfighter.
The focus of this paper is work performed during the Phase 1 AACUS/TALOS effort. Since March 2014, the
TALOS team has been awarded a second Phase and has further improved the system architecture and the
performance, relevance, and readiness of the component technologies. Future phases will test TALOS on a second
platform to demonstrate the portability of the technology.
This work was funded by the Office of Naval Research, under contract N00014-12-C-0671. The support of ONR is
gratefully acknowledged.
1 Whalley, M., Freed, M., Takahashi, M., Christian, D., Patterson-Hine, A., Schulein, G., and Harris, R., “The
NASA/Army Autonomous Rotorcraft Project,” American Helicopter Society 59th Annual Forum Proceedings,
Volume 59, Issue 2, pp. 2040-2053, 2003.
2 Theodore, C., Rowley, D., Ansar, A., Matthies, L., Goldberg, S., Hubbard D., and Whalley, M., “Flight trials of a
rotorcraft unmanned aerial vehicle landing autonomously at unprepared sites,” American Helicopter Society 62nd
Annual Forum Proceedings, Volume 62, Issue 2, p. 1250, 2006.
3 Mettler, B., Kong, Z., Goerzen, C., and Whalley, M., “Benchmarking of obstacle field navigation algorithms for
autonomous helicopters,” American Helicopter Society 66th Annual Forum Proceedings, Phoenix, AZ, 2010.
4 Scherer, S., Singh, S., Chamberlain, L. J., and Saripalli, S., “Flying Fast and Low Among Obstacles,” Proceedings
of the International Conference on Robotics and Automation, April 2007.
5 Scherer, S., Singh, S., Chamberlain, L., and Elgersma, M., “Flying Fast and Low Among Obstacles: Methodology
and Experiments,” The International Journal of Robotics Research, Vol. 27, No. 5, May 2008, pp. 549-574.
6 Whalley, M., Takahashi, M., Tsenkov, P., Schulein, G., and Goerzen, C., “Field-testing of a helicopter UAV
obstacle field navigation and landing system,” Proceedings of the 65th Annual forum of the American Helicopter
Society, Grapevine, Texas, 2009.
7 Limbaugh, D., Higgins, R., Cates, P., and Ericsson, L., “Achieving Safe and Effective UAS Control for the U.S.
Army’s Bi-Directional Remote Video Terminal,” American Helicopter Society 65th Annual Forum Proceedings,
Grapevine, Texas, May 2009.
8 Scherer, S., Chamberlain, L. J., and Singh, S., “Online Assessment of Landing Sites,” AIAA Infotech@Aerospace
2010, April 2010.
9 Goerzen, C., and Whalley, M., “Minimal risk motion planning: a new planner for autonomous UAVs in uncertain
environment,” American Helicopter Society International Specialists’ Meeting on Unmanned Rotorcraft, Tempe,
Arizona, 2011.
10 Takahashi, M. D., Abershitz, A., Rubinets, R., and Whalley, M., “Evaluation of safe landing area determination
algorithms for autonomous rotorcraft using site benchmarking,” American Helicopter Society 67th Annual Forum
Proceedings, Virginia Beach, March 2011.
11 Chamberlain, L. J., Scherer, S., and Singh, S., “Self-Aware Helicopters: Full-Scale Automated Landing and
Obstacle Avoidance in Unmapped Environments,” American Helicopter Society 67th Annual Forum Proceedings,
Virginia Beach, March 2011.
12 Scherer, S., “Low-Altitude Operation of Unmanned Rotorcraft,” doctoral dissertation, Tech. Report CMU-RI-TR-
11-03, Robotics Institute, Carnegie Mellon University, May 2011.
13 Scherer, S., Chamberlain, L. J., and Singh, S., “Autonomous landing at unprepared sites by a full-scale
helicopter,” Robotics and Autonomous Systems, September 2012.
14 Scherer, S., Chamberlain, L. J., and Singh, S., “First Results in Autonomous Landing and Obstacle Avoidance by
a Full-Scale Helicopter,” International Conference on Robotics and Automation (ICRA), May 2012.
15 Scherer, S., Chamberlain, L. J., and Singh, S., “Autonomous landing at unprepared sites by a full-scale
helicopter,” Robotics and Autonomous Systems, Vol. 60, No. 12, 2012, pp. 1545-1562.
16 Whalley, M. S., Takahashi, M. D., Fletcher, J. W., Moralez, E., Ott, L. C. R., Olmstead, L. M. G., Savage, J. C.,
Goerzen, C. L., Schulein, G. J., Burns, H. N., and Conrad, B., “Autonomous Black Hawk in Flight: Obstacle Field
Navigation and Landing-site Selection on the RASCAL JUH-60,” Journal of Field Robotics, Vol. 31, 2014.
17 Choudhury, S., Scherer, S., and Singh, S., “RRT*-AR: Sampling-Based Alternate Routes Planning with
Applications to Autonomous Emergency Landing of a Helicopter,” International Conference on Robotics and
Automation, May 2013.
18 Arora, S., Choudhury, S., Scherer, S., and Althoff, D., “A Principled Approach to Enable Safe and High
Performance Maneuvers for Autonomous Rotorcraft,” American Helicopter Society 70th Annual Forum
Proceedings, Montreal, May 2014.
19 Choudhury, S., Arora, S., and Scherer, S., “The Planner Ensemble and Trajectory Executive: A High Performance
Motion Planning System with Guaranteed Safety,” American Helicopter Society 70th Annual Forum Proceedings,
Montreal, May 2014.
18 Dominguez, C., Strouse, R., Papautsky, L., and Moon, B., “Cognitive Design of an App Enabling Remote Bases
to Receive Unmanned Helicopter Resupply,” Journal of Human-Robot Interaction, in press, 2015.
19 Papautsky, E., Dominguez, C., Strouse, R., and Moon, B., “Integration of CTA and Design Thinking for
Autonomous Helicopter Displays,” manuscript submitted for publication, 2015.
... Shortly thereafter, another example of reactive autonomy was first flown on the full-authority Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Fly-by-Wire (FBW) Black Hawk helicopter in mountainous terrain [11,12]. The Office of Naval Research (ONR) Autonomous Aerial Cargo/Utility System (AACUS) program also addressed autonomous obstacle avoidance and landing at unprepared sites [13]. An emphasis of this program was portability, so the autonomy system was tested on a variety of partial-authority aircraft including the Boeing AH-6 Unmanned Little Bird (ULB), Bell 206 variants, and Aurora's Bell UH-1. ...
... An example of autonomy on a partial-authority system was that used on the Boeing ULB [13], which had a mode to follow waypoint trajectories generated by the autonomy. This system was based on the previously mentioned autonomy work in Ref. [10], and relied on commanding dual electromechanical actuators composed of a low and high bandwidth unit [18]. ...
A comparison of a guidance and flight control system performance on a partial-authority helicopter was made against a previously flown version on a full-authority helicopter. This control system was integrated and flight-tested on a partial-authority helicopter to autonomously navigate and reactively avoid obstacles enroute and on final landing approach. Because both systems used virtually the same control system and aircraft, a comparison was performed to investigate how much of the autonomous maneuvering capability could be recovered using a partial-authority aircraft. This paper describes the control system, its performance, and how it was adapted to a partial-authority EH-60L Black Hawk helicopter. A partial-authority mixing method is described and it was used to integrate the autonomy system using frequency allocation to distribute the control commands to the high- and low-frequency actuation. Flight tests results are presented of the integrated system flying predefined maneuvers that ranged from precision hover maneuvers to moderately aggressive forward flight maneuvers. Results are also presented of the system navigating through mountainous terrain using a reactive obstacle-avoidance algorithm. Tracking performance, actuator usage, and stability results for both systems are shown. The comparison of the partial- and full-authority results showed that the partial-authority mixing system was effective by allowing the partial-authority system to achieve most of the full-authority performance as measured by path tracking error and actuator usage for several autonomous maneuvers.
... Eventually, LADAR (aka LiDAR or LIDAR)-based autonomous landing algorithms progressed onto full-scale rotorcraft, as tested on the Boeing Unmanned Little Bird (ULB) in Ref. 7 and on the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Black Hawk helicopter in Ref. 8. More recently, full-scale SLAD work was conducted in Ref. 9 as part of the Office of Naval Research Autonomous Aerial Cargo/Utility System (AACUS) prototype program. This program also used the ULB with a landing system using a scanning LADAR, electro-optical and infrared cameras, and integrated GPS-INS to georectify the LADAR returns. ...
... Follow-on work in perception was presented in Ref. 10, where improvements were made to the algorithm of Ref. 9 related to wire detection, GPS-free navigation, and LZ categorization. ...
An important element in autonomous full-scale rotorcraft operations is safe landing area determination (SLAD) to find landing points at unprepared sites. This capability is also critical for optionally piloted rotorcraft where landing aides are essential for operations in severely degraded visual environments. This paper presents flight-test results from a new SLAD algorithm flown on the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Black Hawk helicopter equipped with a nodding line-scanning LADAR. Eighty-six autonomous and manned approaches were flown to landing zones, many in rugged mountainous terrain. A detailed description of the algorithm and flight hardware is given, and sample results are shown for seven landing zones. Pilot interaction with the algorithm during approach and final landing point selection is also discussed.
... The Office of Naval Research (ONR) Autonomous Aerial Cargo/Utility System (AACUS) program addressed autonomous obstacle avoidance and landing in unprepared sites (Ref. 6). The U.S. Army Aviation Development Directorate autonomy work in Refs. ...
Autonomous flight of unmanned full-size rotorcraft has the potential to enable many new applications. However, the dynamics of these aircraft, prevailing wind conditions, the need to operate over a variety of speeds, and stringent safety requirements make it difficult to generate safe plans for these systems. Prior work has shown results for only parts of the problem. Here we present the first comprehensive approach to planning safe trajectories for autonomous helicopters from takeoff to landing. Our approach is based on two key insights. First, we compose an approximate solution by cascading various modules that can efficiently solve different relaxations of the planning problem. Our framework invokes a long-term route optimizer, which feeds a receding-horizon planner, which in turn feeds a high-fidelity safety executive. Second, to deal with the diverse planning scenarios that may arise, we hedge our bets with an ensemble of planners. We use a data-driven approach that maps a planning context to a diverse list of planning algorithms that maximize the likelihood of success. Our approach was extensively evaluated in simulation and in real-world flight tests on three different helicopter systems over more than 109 autonomous hours and 590 pilot-in-the-loop hours. We provide an in-depth analysis and discuss the various tradeoffs of decoupling the problem, using approximations, and leveraging statistical techniques. We summarize the insights with the hope that they generalize to other platforms and applications.
Express delivery by micro aerial vehicle (MAV) is becoming increasingly popular because MAVs are unaffected by ground terrain and require little space for takeoff and landing. At present, quadrotors are widely used in the express industry owing to their flexibility and ease of operation, and the flight trajectory plays an important role in the efficiency and safety of the express service. In this paper, the trajectory planning problem is studied for a quadrotor delivering goods in an urban environment with the purpose of avoiding heavy ground traffic, and a cuckoo search (CS)-based trajectory planning method is proposed to solve it. First, a conceptual model containing all the key elements of the delivery task is developed, which presents a general approach to the problem. Characteristics of the urban environment and the delivery task, such as the wind field, dense buildings, and the inclination of the shipped goods, are taken into account in the trajectory planning model. The goal of the delivery task is to make the goods reach the destination accurately. In designing the CS-based trajectory planning algorithm, the basics of the CS algorithm are explained and then integrated into the trajectory planning problem. Comparative experiments are carried out to investigate the merits of the proposed method, and the influence of the CS algorithm's parameters on trajectory-planning performance is also discussed.
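As a generic illustration of how cuckoo search can be applied to a delivery-trajectory problem, the sketch below optimizes a single intermediate waypoint between a start and a goal so as to minimize path length plus a penalty for entering a circular no-fly disc. This is a toy instance, not the paper's full urban model (wind field, dense buildings, and goods inclination are omitted), and all names and parameters are assumptions.

```python
import math, random

# Toy delivery scenario: fly from START to GOAL around a circular obstacle.
START, GOAL = (0.0, 0.0), (10.0, 0.0)
OBST, R = (5.0, 0.0), 2.0               # obstacle center and radius

def cost(w):
    d = math.dist(START, w) + math.dist(w, GOAL)
    gap = math.dist(OBST, w)
    return d + (1000.0 * (R - gap) if gap < R else 0.0)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(n_nests=15, iters=200, pa=0.25):
    nests = [(random.uniform(0, 10), random.uniform(-5, 5))
             for _ in range(n_nests)]
    for _ in range(iters):
        # Lay a new egg via a Levy flight from a random nest; replace a
        # randomly chosen nest if the new solution is better.
        i = random.randrange(n_nests)
        cand = tuple(x + 0.1 * levy_step() for x in nests[i])
        j = random.randrange(n_nests)
        if cost(cand) < cost(nests[j]):
            nests[j] = cand
        # Abandon the worst fraction pa of nests and rebuild them randomly.
        nests.sort(key=cost)
        for k in range(int((1 - pa) * n_nests), n_nests):
            nests[k] = (random.uniform(0, 10), random.uniform(-5, 5))
    return min(nests, key=cost)

random.seed(1)
best = cuckoo_search()                  # waypoint that detours around the disc
```

The best waypoint should settle near the edge of the disc, giving a total path length slightly above the obstacle-free straight-line distance.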
Conference Paper
An improvement to autonomous Safe Landing Area Determination (SLAD) algorithms is proposed that takes into account aircraft motion and agility. This kinematic weight function is calculated over all points of the landing area and is based on current position, minimum hover position, and minimum time to hover. This paper presents the mathematical basis behind the kinematic weight function, its implementation as an additional layer in the Multi-Layer Surface Map Safe Landing Area Determination (MLSM-SLAD), and simulation results using a virtual UH-60 helicopter with a scanning LADAR terrain sensor. Using the kinematic weight function is found to reduce landing approach duration by over 18 seconds by eliminating overshoot of the landing point chosen from data scanned during the approach.
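A minimal sketch of the kinematic weighting idea, under illustrative assumptions: candidate landing points the aircraft cannot reach without overshooting (i.e., closer than its minimum distance to decelerate to a hover) are down-weighted. The weight shape, parameter values, and names below are assumptions, not the MLSM-SLAD implementation.

```python
import math

# Hypothetical kinematic weight layer for SLAD: penalize landing points the
# aircraft would overshoot given its current speed and deceleration limit.

def kinematic_weight(p_ac, v, p_lz, a_max=1.5, margin=20.0):
    """Weight in [0, 1] for landing point p_lz.

    p_ac   -- aircraft position (x, y) in m
    v      -- ground speed along the approach, m/s
    p_lz   -- candidate landing point (x, y) in m
    a_max  -- comfortable deceleration, m/s^2 (assumed value)
    margin -- distance over which the weight ramps from 0 to 1, m
    """
    d_stop = v * v / (2.0 * a_max)     # minimum distance to brake to hover
    d = math.dist(p_ac, p_lz)
    if d <= d_stop:
        return 0.0                      # would overshoot this point
    return min(1.0, (d - d_stop) / margin)

# Example: at 20 m/s with 1.5 m/s^2 deceleration, stopping needs ~133 m, so
# a point 100 m ahead is rejected while one 200 m ahead is fully acceptable.
w_near = kinematic_weight((0, 0), 20.0, (100, 0))
w_far = kinematic_weight((0, 0), 20.0, (200, 0))
```

Multiplying this weight into an existing SLAD score would bias selection toward points the aircraft can reach without a go-around, which is the mechanism by which overshoot is eliminated.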
Conference Paper
This paper describes the development and flight testing of an autonomous flight control system that allows for different levels of pilot interaction in both hover and forward flight. Three modes of operation are described: fully coupled autonomy, additive control, and piloted decoupled attitude command. These modes are demonstrated on the Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) JUH-60A Black Hawk integrated with Obstacle Field Navigation (OFN) and Safe Landing Area Determination (SLAD) algorithms, both of which use a line-scanning LADAR as their primary sensor. The paper describes the control system and its performance, and demonstrates how the system transitioned into these modes while flying mission scenarios through mountainous terrain at speeds from hover to 100 kts.
Conference Paper
Unmanned cargo delivery to combat outposts will inevitably involve operations in degraded visual environments (DVE). When DVE occurs, the aircraft autonomy system needs to function regardless of the obscurant level. In 2014, Near Earth Autonomy established a baseline perception system for autonomous rotorcraft operating in clear air conditions, when its m3 sensor suite and perception software enabled autonomous, no-hover landings onto unprepared sites populated with obstacles. The m3's long-range lidar scanned the helicopter's path, and the perception software detected obstacles and found safe locations for the helicopter to land. This paper presents the results of initial tests with the Near Earth perception system in a variety of DVE conditions and analyzes them from the perspective of mission performance and risk. Tests were conducted with the m3's lidar and a lightweight synthetic aperture radar in rain, smoke, snow, and controlled brownout experiments. These experiments demonstrated the capability to penetrate mild DVE, but perception degraded in the densest brownouts. The results highlight the need not only for an improved ability to see through DVE, but also for improved algorithms to monitor and report DVE conditions.
Conference Paper
An important element of rotorcraft UAV operations is Safe Landing Area Determination (SLAD) to select desirable sites for landing or load placement. Effectively and reliably accomplishing this task would greatly enhance high-level autonomous capabilities in many operations such as search and rescue and resupply. This paper presents the results of quantitatively evaluating two SLAD algorithms using a new test method that incorporates a detailed survey of the test sites. These survey sites act as benchmarks against which the SLAD methods are compared. One SLAD algorithm is a new approach that uses laser range data to detect a set of potential landing points and uses fuzzy logic to rank them based on surface roughness, size, and terrain slope metrics. The second SLAD method was developed previously under the Precision Autonomous Landing Adaptive Control Experiment (PALACE) Army Technology Objective. Flight-test data were collected at seven sites ranging from simple to complex with multiple runs at each site. Both methods are evaluated based on their true-positive and false-positive rates and the consistency of their landing site selection.
Ensuring that unmanned aerial systems' (UAS) control stations include a tight coupling of systems engineering with human factors, cognitive analysis, and design is key to their success. We describe a combined cognitive task analysis (CTA) and design thinking effort to develop interfaces for an operator controlling an autonomous helicopter, a prototype system that the Office of Naval Research is developing. We first conducted CTA interviews with subject-matter experts having expertise in UAS flight operations, helicopter resupply, military ground forces, and marine airspace control. Data informed the development of analysis products, including human-system interface requirements, which drove the creation of design concepts through ideation sessions using design thinking methods. We validated and refined the design concepts with UAS pilots. We provide an overview of our process, illustrated by details of a timeline display development. Significant aspects of our work include close integration of CTA and design thinking efforts, designing for an "envisioned world" of interaction with highly autonomous helicopter systems, and the importance of knowledge elicitation early in system design. This effort represents a successful demonstration of an innovative design process in developing UAS interfaces.
This paper reports on a research project that combined cognitive task analysis (CTA) methods with innovative design processes to develop a handheld device application enabling a non-aviator to interact with a highly autonomous resupply helicopter. In recent military operations, unmanned helicopters have been used to resupply U.S. Marines at remote forward operating bases (FOBs) and combat outposts (COPs). This use of unmanned systems saves lives by eliminating the need to drive through high-risk areas for routine resupply. The U.S. Navy is investing in research to improve the autonomy of these systems and the design of interfaces to enable a non-aviator Marine to safely and successfully interact with an incoming resupply helicopter using a simple, intuitive handheld device application. In this research, we collected data from multiple stakeholders to develop requirements, use cases, and design storyboards that have been implemented and demonstrated during flight tests in early 2014.
An important element of rotorcraft UAV operations is safe landing area determination (SLAD), which is the ability to select desirable landing or load placement areas at unprepared sites. Effectively and reliably accomplishing this task would greatly enhance high-level autonomous capabilities in many operations such as search and rescue and resupply. This paper presents the results of quantitatively evaluating two SLAD algorithms using a new test method that incorporates a detailed survey of the test sites. These survey sites act as benchmarks against which the SLAD methods are compared. One SLAD algorithm is a new approach that uses laser range data to detect a set of potential landing points and uses fuzzy logic to rank them based on surface roughness, size, and terrain slope metrics. The second algorithm uses laser range data to optimize a performance index, based on sliding window statistics of surface slope and roughness over the landing zone, to select potential landing points. Flight-test data were collected at six sites ranging from simple to complex with multiple runs at each site. Both methods are evaluated based on their true-positive and false-positive rates and the consistency of their landing site selection.
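The fuzzy ranking described above can be sketched in a few lines: each candidate landing point gets membership values for roughness, clear-area size, and slope, combined with a conservative minimum. The membership shapes, thresholds, and candidate values below are illustrative assumptions, not the flight-tested algorithm.

```python
# Illustrative fuzzy-logic ranking of candidate landing points by surface
# roughness, clear-area size, and slope. All thresholds are assumptions.

def ramp(x, bad, good):
    """Linear membership: 0 at 'bad', 1 at 'good' (works either direction)."""
    if bad == good:
        return 1.0 if x == good else 0.0
    t = (x - bad) / (good - bad)
    return max(0.0, min(1.0, t))

def lz_score(roughness_m, radius_m, slope_deg):
    mu_rough = ramp(roughness_m, 0.30, 0.05)   # smoother is better
    mu_size = ramp(radius_m, 5.0, 15.0)        # larger clear radius is better
    mu_slope = ramp(slope_deg, 15.0, 3.0)      # flatter is better
    return min(mu_rough, mu_size, mu_slope)    # conservative AND (t-norm)

# Rank hypothetical candidates: (roughness m, clear radius m, slope deg).
candidates = {"A": (0.04, 20.0, 2.0),
              "B": (0.10, 12.0, 6.0),
              "C": (0.25, 18.0, 1.0)}
ranked = sorted(candidates, key=lambda k: lz_score(*candidates[k]),
                reverse=True)
```

Using the minimum as the combiner means a single bad metric (e.g., C's roughness) vetoes an otherwise attractive site, which matches the conservative intent of SLAD.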
Conference Paper
Assessing a landing zone (LZ) reliably is essential for safe operation of vertical takeoff and landing (VTOL) aerial vehicles that land at unimproved locations. Currently an operator has to rely on visual assessment to make an approach decision; however, visual information from afar is insufficient to judge slope and detect small obstacles. Prior work has modeled LZ quality based on plane fitting, which only partly represents the interaction between vehicle and ground. Our approach consists of a coarse evaluation based on slope and roughness criteria, a fine evaluation for skid contact, and the body clearance of a location. We investigated whether the evaluation is correct using terrain maps collected from a helicopter. This paper defines the evaluation problem, describes our incremental real-time algorithm, and discusses the effectiveness of our approach. In results from urban and natural environments, we were able to successfully classify LZs from point cloud maps collected on a helicopter. The presented method enables detailed assessment of LZs without a landing approach, thereby improving safety. Still, the method assumes low-noise point cloud data. We intend to increase robustness to outliers while still detecting small obstacles in future work.
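The coarse evaluation stage described above can be sketched as a least-squares plane fit over a terrain patch, scoring slope (plane tilt) and roughness (residual from the plane). The thresholds are illustrative assumptions, and the fine skid-contact and body-clearance checks are omitted.

```python
import math

# Sketch of a coarse LZ check: fit z = a*x + b*y + c to patch points, then
# threshold slope and residual roughness. Thresholds are assumed values.

def fit_plane(pts):
    """Least-squares z = a*x + b*y + c over pts = [(x, y, z), ...]."""
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sz = sum(p[2] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); syy = sum(p[1] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    sxz = sum(p[0] * p[2] for p in pts); syz = sum(p[1] * p[2] for p in pts)
    # Normal equations solved by Cramer's rule on the 3x3 system.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    sol = []
    for k in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][k] = rhs[r]
        sol.append(det(M) / d)
    return sol  # a, b, c

def coarse_lz_ok(pts, max_slope_deg=6.0, max_rough_m=0.10):
    a, b, c = fit_plane(pts)
    slope = math.degrees(math.atan(math.hypot(a, b)))
    rough = max(abs(p[2] - (a * p[0] + b * p[1] + c)) for p in pts)
    return slope <= max_slope_deg and rough <= max_rough_m

# A flat 5 m x 5 m patch passes; tilting it 10 degrees fails the slope test.
flat = [(x, y, 0.0) for x in range(6) for y in range(6)]
tilted = [(x, y, x * math.tan(math.radians(10.0)))
          for x in range(6) for y in range(6)]
```

As the abstract notes, a plane fit alone only partly captures the vehicle-ground interaction, which is why the full method adds the fine skid-contact and clearance stages.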
This paper describes the development and flight test of autonomous obstacle field navigation and safe landing area selection on the U.S. Army Aeroflightdynamics Directorate RASCAL JUH-60A research helicopter. Using laser detection and ranging (LADAR) as the primary terrain sensor, the autonomous flight system is able to avoid obstacles, including wires, and select safe landing sites. An autonomous integrated landing zone approach profile was developed and validated that integrates cruise flight, low-level terrain flight, and approach to a safe landing spot determined on the fly. Results are presented for a range of sites and conditions. Approximately 750 km of autonomous flight was performed, 230 km of which was at low altitude in mountainous terrain using the obstacle field navigation system. This is the first time a full-scale helicopter has been flown fully autonomously a significant distance in low-level flight over complex terrain, basing its planning solely on sensor data gathered from an onboard sensor. These flights demonstrate tight integration between terrain avoidance, control, and autonomous landing.
Conference Paper
Engine malfunctions during helicopter flight pose a large risk to pilot and crew. Without a quick and coordinated reaction, such situations lead to a complete loss of control. An autonomous landing system could react more quickly to regain control; however, current emergency landing methods only generate dynamically feasible trajectories without considering obstacles. We address the problem of autonomously landing a helicopter while considering a realistic context: multiple potential landing zones, geographical terrain, sensor limitations, and pilot contextual knowledge. We designed a planning system to generate alternate routes (AR) that respect these factors until touchdown, exploiting the human in the loop to make the choice. This paper presents an algorithm, RRT*-AR, building upon the optimal sampling-based algorithm RRT* to generate AR in real time while maintaining optimality guarantees, and examines its performance for simulated failures occurring in mountainous terrain. After over 4500 trials, RRT*-AR outperformed RRT* by providing the human with 280% more options, 67% faster on average. As a result, it provides a much wider safety margin for unaccounted disturbances and a more secure environment for the pilot. Using AR, the focus can now shift to delivering safety guarantees and handling uncertainties in these situations.
Autonomous rotorcraft are required to operate in cluttered, unknown, and unstructured environments. Guaranteeing the safety of these systems is critical for their successful deployment. Current methodologies for evaluating or ensuring safety either do not guarantee safety or severely limit the performance of the rotorcraft. To design a guaranteed-safe rotorcraft, we have defined safety for an autonomous rotorcraft flying in unknown environments given sensory and dynamic constraints. We have developed an approach that ensures the vehicle's safety while pushing the limits of safe operation of the vehicle. Furthermore, the presented safety definition and approach are independent of the vehicle and the planning algorithm used on the rotorcraft. In this paper we present a real-time algorithm to guarantee the safety of the rotorcraft through a diverse set of emergency maneuvers. We prove that the related trajectory set diversity problem is monotonic and submodular, which enables us to develop an efficient, bounded-suboptimal trajectory set generation algorithm. We present safety results for the autonomous Unmanned Little Bird helicopter flying at speeds of up to 56 m/s in partially known environments. Through months of flight testing, the helicopter has avoided trees and mountains and performed autonomous landings while remaining guaranteed safe. We also present simulation results of the helicopter flying in the Grand Canyon with no prior map of the environment. Copyright © 2014 by the American Helicopter Society International, Inc. All rights reserved.
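Because the trajectory-set diversity objective is monotone and submodular, a greedy selector is a natural bounded-suboptimal solver. The toy sketch below illustrates that pattern: each candidate emergency maneuver is "safe" in some set of world configurations, and maneuvers are picked greedily by marginal coverage. The maneuvers and world sets are invented for illustration, not taken from the paper.

```python
# Greedy selection of a diverse emergency-maneuver set, illustrating the
# submodular coverage pattern. Candidate maneuvers and worlds are toy data.

def greedy_diverse_set(candidates, k):
    """candidates: {name: set of world ids in which the maneuver is safe}.
    Returns up to k names maximizing the number of covered worlds. Because
    coverage is monotone and submodular, greedy selection is within a
    (1 - 1/e) factor of the optimal coverage."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(candidates, key=lambda m: len(candidates[m] - covered))
        if not candidates[best] - covered:
            break                      # no maneuver adds new coverage
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Toy example: a second climb variant adds nothing once "climb" is chosen,
# so the greedy set favors maneuvers that are diverse, not merely good.
maneuvers = {
    "climb": {1, 2, 3, 4},
    "climb_steep": {1, 2, 3},
    "turn_left": {4, 5, 6},
    "turn_right": {6, 7},
}
chosen, covered = greedy_diverse_set(maneuvers, k=3)
```

Note how "climb_steep" is never selected: its coverage is subsumed by "climb", which is exactly the redundancy a diversity objective is meant to avoid.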
In this paper we present a perception and autonomy package that for the first time allows a full-scale unmanned helicopter (the Boeing Unmanned Little Bird) to automatically fly through unmapped, obstacle-laden terrain, find a landing zone, and perform a safe landing near a casualty, all with no human control or input. The system also demonstrates the ability to avoid obstacles while in low-altitude flight. The perception system consists of a 3D LADAR mapping unit with sufficient range, accuracy, and bandwidth to bring autonomous flight into the realm of full-scale aircraft. Efficient evaluation of this data and fast planning algorithms provide the aircraft with safe flight trajectories in real-time. We show the results of several fully autonomous landing and obstacle avoidance missions. Copyright © 2011, American Helicopter Society International, Inc. All rights reserved.
Autonomous helicopters are required to fly at a wide range of speeds close to the ground and eventually land in an unprepared, cluttered area. Existing planning systems for unmanned rotorcraft are capable of flying in unmapped environments; however, they are restricted to a specific operating regime dictated by the underlying planning algorithm. We address the problem of planning a trajectory that is computed in real time, respects the dynamics of the helicopter, and keeps the vehicle safe in an unmapped environment with a finite-horizon sensor. We have developed a planning system that is capable of doing this by running competing planners in parallel. This paper presents a planning architecture consisting of a trajectory executive (a low-latency, verifiable component) that selects plans from a planner ensemble and ensures safety by maintaining emergency maneuvers. Here we report results with an autonomous helicopter that flies missions several kilometers long through unmapped terrain at speeds of up to 56 m/s and lands in clutter. In over 6 months of flight testing, the system has avoided unmapped mountains and pop-up no-fly zones, and has come in to land while avoiding trees and buildings in a cluttered landing zone. We also present results from simulation where the same system is flown in challenging obstacle regions; in all cases the system remains safe and accomplishes the mission. As a result, the system showcases the ability to achieve high performance in all environments while guaranteeing safety. © 2014 by the American Helicopter Society International, Inc. All rights reserved.
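The trajectory-executive pattern described above can be sketched as a low-latency selector that polls an ensemble of planners, takes the lowest-cost plan that passes validation, and always keeps a verified emergency maneuver as a fallback. The planner interface and names here are assumptions for illustration, not the flight code.

```python
# Hypothetical trajectory-executive sketch: select from a planner ensemble,
# fall back to a stored emergency maneuver when no safe plan is available.

def trajectory_executive(planners, validate, emergency_maneuver):
    """planners: list of callables returning (trajectory, cost) or None
    (None models a planner that missed its deadline this cycle).
    validate: callable(trajectory) -> bool (collision/dynamics check).
    Returns the trajectory to execute this cycle."""
    best, best_cost = None, float("inf")
    for plan in planners:
        result = plan()
        if result is None:
            continue                   # planner missed its deadline
        traj, cost = result
        if cost < best_cost and validate(traj):
            best, best_cost = traj, cost
    # If no planner produced a safe plan, fly the emergency maneuver.
    return best if best is not None else emergency_maneuver

# Toy cycle: a fast-but-coarse planner succeeds while a slow-but-optimal
# planner times out; the executive still returns a usable plan.
coarse = lambda: (["wp1", "wp2"], 12.0)
timed_out = lambda: None
always_safe = lambda traj: True
chosen = trajectory_executive([coarse, timed_out], always_safe, ["climb_out"])
```

Keeping the emergency maneuver outside the ensemble is what makes the executive's worst-case behavior verifiable: it never depends on any planner finishing in time.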