We present the motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultra-reliability, high-speed operation, complex inter-vehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically-feasible actions with two higher-level planners for generating long range plans in both on-road and unstructured areas of the environment. In this Part II of a two-part paper, we describe the unstructured planning component of this system used for navigating through parking lots and recovering from anomalous on-road scenarios. We provide examples and results from "Boss", an autonomous SUV that has driven itself over 3000 kilometers and competed in, and won, the Urban Challenge.
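To make the model-predictive trajectory generation idea concrete, the following is a minimal sketch assuming a kinematic unicycle model and a quadratic curvature spiral; the parameterization, function names, and the simple shooting solver are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of model-predictive trajectory generation: forward-simulate
# a kinematic unicycle under a parameterized curvature profile and "shoot"
# for a target pose. The quadratic curvature spiral (k0, k1, k2) and arc
# length sf are assumed parameters, not the paper's exact ones.
import numpy as np

def simulate(params, n=100):
    """Integrate x' = cos(th), y' = sin(th), th' = kappa(s) over [0, sf]."""
    k0, k1, k2, sf = params
    ds = sf / (n - 1)
    x = y = th = 0.0
    for si in np.linspace(0.0, sf, n)[:-1]:
        kappa = k0 + k1 * si + k2 * si**2   # quadratic curvature spiral
        th += kappa * ds
        x += np.cos(th) * ds
        y += np.sin(th) * ds
    return np.array([x, y, th])

def solve_boundary_problem(target, params, iters=50):
    """Newton-style shooting: adjust (k1, k2, sf) so the endpoint hits target."""
    for _ in range(iters):
        err = target - simulate(params)
        if np.linalg.norm(err) < 1e-4:
            break
        # Finite-difference Jacobian with respect to the free parameters.
        J = np.zeros((3, 3))
        for j, idx in enumerate((1, 2, 3)):
            p = params.copy()
            p[idx] += 1e-4
            J[:, j] = (simulate(p) - simulate(params)) / 1e-4
        params[[1, 2, 3]] += 0.7 * np.linalg.solve(J, err)  # damped step
    return params

target = np.array([10.0, 2.0, 0.1])   # desired (x, y, heading)
params = solve_boundary_problem(target, np.array([0.0, 0.0, 0.0, 10.5]))
```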
It is generally accepted that systems composed of multiple aerial robots with autonomous cooperation capabilities can assist responders in many search and rescue (SAR) scenarios. In most previous research work, aerial robots have been considered mainly as platforms for environmental sensing and have not been used to assist victims. In this paper, outdoor field experiments on the transportation and accurate deployment of loads with single and multiple autonomous aerial vehicles are presented. This novel capability opens the possibility of using aerial robots to assist victims during rescue-phase operations. Accuracy in the deployment location is a critical issue in SAR scenarios, where injured people may have very limited mobility.
The presented system is composed of up to three small-size helicopters and features cooperative sensing using several different sensor types. The system also supports several forms of cooperative actuation, ranging from the cooperative deployment of small sensors/objects to the coupled transportation of slung loads.
This paper describes the complete system, outlining the hardware and software framework as well as the approaches used for modeling and control. Additionally, the results of several field flight experiments are presented, including a description of the first successful autonomous load transportation experiment worldwide using three coupled small-size helicopters (conducted in December 2007). Strong steady winds and wind gusts were present during these experiments. Various solutions and lessons learned from the design and operation of the system are also provided.
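As a rough illustration of the coupled slung-load setting, the sketch below solves the static force-allocation problem for a load suspended from three helicopters. The cable geometry and the least-squares formulation are illustrative assumptions and do not reproduce the authors' modeling or control design.

```python
# Hedged sketch: static force allocation for a load slung from three
# helicopters. Given the cable directions, solve for the tensions whose
# sum balances the load's weight. Geometry is assumed, not the paper's.
import numpy as np

m_load, g = 5.0, 9.81                        # load mass [kg], gravity [m/s^2]
# Unit vectors from the load toward each helicopter (assumed geometry:
# three cables splayed symmetrically, 30 degrees from vertical).
angles = np.deg2rad([0.0, 120.0, 240.0])
tilt = np.deg2rad(30.0)
dirs = np.array([[np.sin(tilt) * np.cos(a),
                  np.sin(tilt) * np.sin(a),
                  np.cos(tilt)] for a in angles]).T   # 3x3, columns = cables

# Equilibrium: sum_i t_i * dirs[:, i] = [0, 0, m*g].
wrench = np.array([0.0, 0.0, m_load * g])
tensions, *_ = np.linalg.lstsq(dirs, wrench, rcond=None)
assert np.all(tensions > 0), "cables can only pull"
print(tensions)   # by symmetry, each ~ m*g / (3*cos(tilt))
```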
Boss is an autonomous vehicle that uses on-board sensors (GPS, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes, handles precedence at intersections, and performs error-recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals.

The system was developed from the ground up to address the requirements of the DARPA Urban Challenge, using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85 km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge.
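The following skeleton illustrates one way the mission/behavioral/motion layering described above could be expressed in code; all class and method names are hypothetical, and the sketch omits everything that makes the real system work.

```python
# Hedged skeleton of a three-layer planning stack: mission planning picks
# the route, the behavioral layer picks tactical actions, and motion
# planning produces obstacle-avoiding actions. Names are illustrative.
class MissionPlanner:
    def next_goal(self, road_network, destination):
        """Pick which street to take to achieve the mission goal."""
        ...

class BehavioralLayer:
    def decide(self, goal, traffic, intersections):
        """Choose lane changes, intersection precedence, or error recovery."""
        ...

class MotionPlanner:
    def plan(self, local_goal, obstacles):
        """Select a dynamically feasible action toward the local goal."""
        ...

def drive_step(mission, behavior, motion, world):
    # One planning cycle: each layer refines the layer above it.
    goal = mission.next_goal(world.roads, world.destination)
    local_goal = behavior.decide(goal, world.traffic, world.intersections)
    return motion.plan(local_goal, world.obstacles)
```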
This paper describes the architecture and implementation of an autonomous passenger vehicle designed to navigate using locally perceived information in preference to potentially inaccurate or incomplete map data. The vehicle architecture was designed to handle the original DARPA Urban Challenge requirements of perceiving and navigating a road network with segments defined by sparse waypoints. The vehicle implementation includes many heterogeneous sensors with significant communications and computation bandwidth to capture and process high-resolution, high-rate sensor data. The output of the comprehensive environmental sensing subsystem is fed into a kinodynamic motion planning algorithm to generate all vehicle motion. The maneuvers required for driving in lanes, executing three-point turns, parking, and negotiating obstacle fields are all generated by a unified planner. A key aspect of the planner is its use of closed-loop simulation in a Rapidly-exploring Randomized Trees (RRT) algorithm, which can randomly explore the space while efficiently generating smooth trajectories in a dynamic and uncertain environment. The overall system was realized through the creation of a powerful new suite of software tools for message passing, logging, and visualization. These innovations provide a strong platform for future research in autonomous driving in GPS-denied and highly dynamic environments with poor a priori information.
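The closed-loop RRT idea, simulating the controller together with the vehicle model for each tree extension so that every branch is a trajectory the closed-loop vehicle could actually follow, can be sketched as below. The pure-pursuit controller, kinematic car model, and disc obstacles are simplified stand-ins for the paper's models, and all names are illustrative.

```python
# Hedged sketch of closed-loop RRT: each extension samples a reference
# point, forward-simulates the *closed-loop* system (pure-pursuit steering
# of a kinematic car here), and keeps the branch only if collision-free.
import math, random

def simulate_closed_loop(state, ref, dt=0.1, steps=20, speed=2.0, L=2.5):
    """Track ref from state with pure pursuit; return visited states."""
    x, y, th = state
    path = []
    for _ in range(steps):
        d = math.hypot(ref[0] - x, ref[1] - y)          # lookahead distance
        alpha = math.atan2(ref[1] - y, ref[0] - x) - th
        steer = math.atan2(2.0 * L * math.sin(alpha), max(d, 1e-6))
        th += speed / L * math.tan(steer) * dt           # kinematic bicycle
        x += speed * math.cos(th) * dt
        y += speed * math.sin(th) * dt
        path.append((x, y, th))
    return path

def collision_free(path, obstacles, radius=1.0):
    return all(math.hypot(x - ox, y - oy) > radius
               for x, y, _ in path for ox, oy in obstacles)

def cl_rrt(start, goal, obstacles, iters=500):
    tree = {start: None}                                 # node -> parent
    for _ in range(iters):
        ref = goal if random.random() < 0.1 else (
            random.uniform(0, 50), random.uniform(-20, 20))
        near = min(tree, key=lambda n: math.hypot(n[0]-ref[0], n[1]-ref[1]))
        path = simulate_closed_loop(near, ref)
        if collision_free(path, obstacles):
            tree[path[-1]] = near                        # feasible branch
            if math.hypot(path[-1][0]-goal[0], path[-1][1]-goal[1]) < 2.0:
                return tree, path[-1]
    return tree, None

tree, reached = cl_rrt((0.0, 0.0, 0.0), (40.0, 5.0),
                       obstacles=[(15.0, 0.0), (25.0, 5.0)])
```

Because each edge is produced by the simulated controller rather than by steering the model directly, the resulting branches are smooth and trackable by construction, which is the property the paragraph above attributes to the closed-loop design.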
This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without manual intervention. The robot’s software system relied predominantly on state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning. This article describes the major components of this architecture and discusses the results of the Grand Challenge race.
[Figure 1: (a) At approximately 1:40 pm on October 8, 2005, Stanley becomes the first robot to complete the DARPA Grand Challenge. (b) The robot is honored by DARPA Director Dr. Tony Tether.]
Soldiers are often asked to perform missions that last many hours and are extremely stressful. After a mission is complete, the soldiers are typically asked to provide a report describing the most important things that happened during the mission. Due to the various stresses associated with military missions, there are undoubtedly many instances in which important information is missed or not reported and, therefore, not available for use when planning future missions. The ASSIST (Advanced Soldier Sensor Information System and Sensors Technology) program is addressing this challenge by instrumenting soldiers with sensors that they can wear directly on their uniforms. During the mission, the sensors continuously record what is going on around the soldier. With this information, soldiers are able to give more accurate reports without relying solely on their memory. In order for systems like this (often termed autonomous or intelligent systems) to be successful, they must be comprehensively and quantitatively evaluated to ensure that they will function appropriately and as expected in a wartime environment. The primary contribution of this paper is to introduce and define a framework and approach to performance evaluation called SCORE (System, Component, and Operationally Relevant Evaluation) and describe the results of applying it to evaluate the ASSIST technology. As the name implies, SCORE is built around the premise that, in order to get a true picture of how a system performs in the field, it must be evaluated at the component level, the system level, and in operationally relevant environments. The SCORE framework provides proven techniques to aid in the performance evaluation of many types of intelligent systems. To date, SCORE has only been applied to technologies under development (formative evaluation), but the authors believe that this approach would lend itself equally well to the evaluation of technologies ready to be fielded (summative evaluation).
In January 2004, NASA's twin Mars Exploration Rovers (MERs), Spirit and Opportunity, began searching the surface of Mars for evidence of past water activity. To localize and approach scientifically interesting targets, the rovers employ an onboard navigation ...