Journal of Field Robotics

Published by Wiley
Online ISSN: 1556-4967
Print ISSN: 1556-4959
Publications
We present a motion planning framework for an autonomous vehicle navigating through urban environments. Such environments present a number of motion planning challenges, including ultra-reliability, high-speed operation, complex inter-vehicle interaction, parking in large unstructured lots, and constrained maneuvers. Our approach combines a model-predictive trajectory generation algorithm for computing dynamically feasible actions with two higher-level planners for generating long-range plans in both on-road and unstructured areas of the environment. In this Part II of a two-part paper, we describe the unstructured planning component of this system, used for navigating through parking lots and recovering from anomalous on-road scenarios. We provide examples and results from “Boss,” an autonomous SUV that has driven itself over 3,000 kilometers and competed in, and won, the Urban Challenge.
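
To make the model-predictive trajectory generation concrete, the sketch below shoots for a boundary pose by optimizing a quadratic curvature polynomial over a fixed arc length. The unicycle model, the function names, and the use of SciPy's least-squares solver are our own illustrative choices under stated assumptions, not Boss's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def rollout(params, s_f, n=200):
    """Integrate a unicycle model along arc length with a quadratic
    curvature polynomial k(s) = a + b*s + c*s^2 (Euler integration)."""
    a, b, c = params
    s = np.linspace(0.0, s_f, n)
    ds = s[1] - s[0]
    x = y = theta = 0.0
    for si in s:
        k = a + b * si + c * si**2
        theta += k * ds
        x += np.cos(theta) * ds
        y += np.sin(theta) * ds
    return np.array([x, y, theta])

def solve_boundary_pose(target, s_f=10.0):
    """Shoot for a target pose (x, y, heading) by optimizing the
    curvature-polynomial coefficients; s_f is an assumed arc length."""
    res = least_squares(lambda p: rollout(p, s_f) - target, x0=[0.0, 0.0, 0.0])
    return res.x

# Illustrative target pose 8 m ahead, 2 m left, heading 0.3 rad:
print(solve_boundary_pose(np.array([8.0, 2.0, 0.3])))
```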
 
It is generally accepted that systems composed of multiple aerial robots with autonomous cooperation capabilities can assist responders in many search and rescue (SAR) scenarios. In most previous research work, the aerial robots have been considered mainly as platforms for environmental sensing and have not been used to assist victims. In this paper, outdoor field experiments of transportation and accurate deployment of loads with single/multiple autonomous aerial vehicles are presented. This is a novel feature that opens the possibility of using aerial robots to assist victims during rescue-phase operations. Accuracy in the deployment location is a critical issue in SAR scenarios, where injured people may have very limited mobility. The presented system is composed of up to three small-size helicopters and features cooperative sensing using several different sensor types. The system also supports several forms of cooperative actuation, ranging from the cooperative deployment of small sensors/objects to the coupled transportation of slung loads. Within this paper the complete system is described, outlining the hardware and software framework used, as well as the approaches adopted for modeling and control. Additionally, the results of several flight field experiments are presented, including a description of the first successful autonomous load transportation experiment worldwide using three coupled small-size helicopters (conducted in December 2007). Strong steady winds and wind gusts were present during these experiments. Various solutions and lessons learned from the design and operation of the system are also provided.
 
Boss is an autonomous vehicle that uses on-board sensors (GPS, lasers, radars, and cameras) to track other vehicles, detect static obstacles, and localize itself relative to a road model. A three-layer planning system combines mission, behavioral, and motion planning to drive in urban environments. The mission planning layer considers which street to take to achieve a mission goal. The behavioral layer determines when to change lanes, handles precedence at intersections, and performs error-recovery maneuvers. The motion planning layer selects actions to avoid obstacles while making progress toward local goals. The system was developed from the ground up to address the requirements of the DARPA Urban Challenge using a spiral system development process with a heavy emphasis on regular, regressive system testing. During the National Qualification Event and the 85-km Urban Challenge Final Event, Boss demonstrated some of its capabilities, qualifying first and winning the challenge.
 
This article describes the robot Stanley, which won the 2005 DARPA Grand Challenge. Stanley was developed for high-speed desert driving without manual intervention. The robot’s software system relied predominantly on state-of-the-art artificial intelligence technologies, such as machine learning and probabilistic reasoning. This article describes the major components of this architecture and discusses the results of the Grand Challenge race. [Figure 1: (a) At approximately 1:40 p.m. on October 8, 2005, Stanley becomes the first robot to complete the DARPA Grand Challenge. (b) The robot being honored by DARPA Director Dr. Tony Tether.]
 
This paper describes “Little Ben,” an autonomous ground vehicle constructed by the Ben Franklin Racing Team for the 2007 DARPA Urban Challenge in under a year and for less than $250,000. The sensing, planning, navigation, and actuation systems for Little Ben were carefully designed to meet the performance demands required of an autonomous vehicle traveling in an uncertain urban environment. We incorporated an array of sensors, comprising a global positioning system (GPS)/inertial navigation system, LIDARs, and stereo cameras, to provide timely information about the surrounding environment at the appropriate ranges. This sensor information was integrated into a dynamic map that could robustly handle GPS dropouts and errors. Our planning algorithms consisted of a high-level mission planner that used information from the provided route network definition and mission data files to select routes, whereas the lower-level planner used the latest dynamic map information to optimize a feasible trajectory to the next waypoint. The vehicle was actuated by a cost-based controller that efficiently handled steering, throttle, and braking maneuvers in both forward and reverse directions. Our software modules were integrated within a hierarchical architecture that allowed rapid development and testing of the system performance. The resulting vehicle was one of six to successfully finish the Urban Challenge. © 2008 Wiley Periodicals, Inc.
 
This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that successfully entered the finals of the 2007 DARPA Urban Challenge competition. After describing the main challenges imposed and the major hardware components, we outline the underlying software structure and focus on selected algorithms. Environmental perception mainly relies on a recently developed laser scanner that delivers both range and reflectivity measurements. Whereas range measurements are used to provide three-dimensional scene geometry, measuring reflectivity allows for robust lane marker detection. Mission and maneuver planning is conducted using a hierarchical state machine that generates behavior in accordance with California traffic laws. We conclude with a report of the results achieved during the competition. © 2008 Wiley Periodicals, Inc.
 
This paper presents results from the first two Space Shuttle test flights of the TriDAR vision system. TriDAR was developed as a proximity operations sensor for autonomous rendezvous and docking (AR&D) missions to noncooperative targets in space. The system does not require the use of cooperative markers, such as retro-reflectors, on the target spacecraft. TriDAR includes a hybrid three-dimensional (3D) sensor along with embedded model based tracking algorithms to provide six-degree-of-freedom (6 DOF) relative pose information in real time. A thermal imager is also included to provide range and bearing information for far-range rendezvous operations. In partnership with the Canadian Space Agency (CSA) and NASA, Neptec has space-qualified the TriDAR vision system and integrated it on board Space Shuttle Discovery to fly as a detailed test objective (DTO) on the STS-128 and STS-131 missions to the International Space Station (ISS). The objective of the TriDAR DTO missions was to demonstrate the system's ability to perform acquisition and tracking of a known target in space autonomously and provide real-time relative navigation cues. Knowledge (reference 3D model) about the target can be obtained on the ground or in orbit. Autonomous operations involved automatic acquisition of the ISS and real-time tracking, as well as detection and recovery from system malfunctions and/or loss of tracking. © 2012 Wiley Periodicals, Inc.
 
The Defense Advanced Research Projects Agency (DARPA) Learning Applied to Ground Vehicles (LAGR) program aims to develop algorithms for autonomous vehicle navigation that learn how to operate in complex terrain. Over many years, the National Institute of Standards and Technology (NIST) has developed a reference model control system architecture called 4D/RCS that has been applied to many kinds of robot control, including autonomous vehicle control. For the LAGR program, NIST has embedded learning into a 4D/RCS controller to enable the small robot used in the program to learn to navigate through a range of terrain types. The vehicle learns in several ways. These include learning by example, learning by experience, and learning how to optimize traversal. Learning takes place in the sensory processing, world modeling, and behavior generation parts of the control system. The 4D/RCS architecture is explained in the paper, its application to LAGR is described, and the learning algorithms are discussed. Results are shown of the performance of the NIST control system on independently conducted tests. Further work on the system and its learning capabilities is discussed. © 2007 Wiley Periodicals, Inc.
 
This paper reports the development and deployment of a synchronous-clock acoustic navigation system suitable for the simultaneous navigation of multiple underwater vehicles. Our navigation system is composed of an acoustic modem–based communication and navigation system that allows for onboard navigational data to be broadcast as a data packet by a source node and for all passively receiving nodes to be able to decode the data packet to obtain a one-way-travel-time (OWTT) pseudo-range measurement and navigational ephemeris data. The navigation method reported herein uses a surface ship acting as a single moving reference beacon to a fleet of passively listening underwater vehicles. All vehicles within acoustic range are able to concurrently measure their slant range to the reference beacon using the OWTT measurement methodology and additionally receive transmission of reference beacon position using the modem data packet. The advantages of this type of navigation system are that it can (i) concurrently navigate multiple underwater vehicles within the vicinity of the surface ship and (ii) provide a bounded-error XY position measure that is commensurate with conventional moored long-baseline (LBL) navigation systems but, unlike LBL, is not geographically restricted to a fixed-beacon network. We present results for two different field experiments using a two-node configuration consisting of a global positioning system–equipped surface ship acting as a global navigation aid to a Doppler-aided autonomous underwater vehicle. In each experiment, vehicle position was independently corroborated by other standard navigation means. Results for a maximum likelihood sensor fusion framework are reported. © 2010 Wiley Periodicals, Inc.
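
As a rough illustration of the OWTT pseudo-range idea described above, the snippet below converts a one-way travel time into a slant range and projects it into the horizontal (XY) plane using the vehicle's own depth sensor. The nominal sound speed, the function names, and the example numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

SOUND_SPEED = 1500.0  # m/s, nominal; varies with temperature, salinity, depth

def owtt_slant_range(t_arrival, t_launch):
    """One-way-travel-time pseudo-range; assumes synchronized clocks."""
    return SOUND_SPEED * (t_arrival - t_launch)

def horizontal_range(slant, depth_vehicle, depth_ship=0.0):
    """Project the slant range into the horizontal plane using depth."""
    dz = depth_vehicle - depth_ship
    return np.sqrt(max(slant**2 - dz**2, 0.0))

# Example: a packet launched at t=100.000 s arrives at t=101.333 s,
# giving a ~2,000 m slant range to a vehicle at 500 m depth.
r = owtt_slant_range(101.333, 100.000)
print(horizontal_range(r, depth_vehicle=500.0))
```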
 
In this paper, we report on the integration challenges of the various component technologies developed toward the establishment of a framework for deploying an adaptive system of heterogeneous robots for urban surveillance. In our integrated experiment and demonstration, aerial robots generate maps that are used to design navigation controllers and plan missions for the team. A team of ground robots constructs a radio-signal strength map that is used as an aid for planning missions. Multiple robots establish a mobile ad hoc communication network that is aware of the radio-signal strength between nodes, and can adapt to changing conditions to maintain connectivity. Finally, the team of aerial and ground robots is able to monitor a small village, and search for and localize human targets by the color of their uniforms, while ensuring that the information from the team is available to a remotely located human operator. The key component technologies and contributions include (a) mission specification and planning software; (b) exploration and mapping of radio-signal strengths in an urban environment; (c) programming abstractions and composition of controllers for multirobot deployment; (d) cooperative control strategies for search, identification, and localization of targets; and (e) three-dimensional mapping in an urban setting. © 2007 Wiley Periodicals, Inc.
 
Recently, there has been growing interest in developing unmanned aircraft systems (UAS) with advanced onboard autonomous capabilities. This paper describes the current state of the art in autonomous rotorcraft UAS (RUAS) and provides a detailed literature review of the last two decades of active research on RUAS. Three functional technology areas are identified as the core components of an autonomous RUAS. Guidance, navigation, and control (GNC) have received much attention from the research community, and have dominated the UAS literature from the nineties until now. This paper first presents the main research groups involved in the development of GNC systems for RUAS. Then it describes the development of a framework that provides standard definitions and metrics characterizing and measuring the autonomy level of a RUAS using GNC aspects. This framework is intended to facilitate the understanding and the organization of this survey paper, but it can also serve as a common reference for the UAS community. The main objective of this paper is to present a comprehensive survey of RUAS research that captures all seminal works and milestones in each GNC area, with a particular focus on practical methods and technologies that have been demonstrated in flight tests. These algorithms and systems have been classified into different categories and classes based on the autonomy level they provide and the algorithmic approach used. Finally, the paper discusses the RUAS literature in general and highlights challenges that need to be addressed in developing autonomous systems for unmanned rotorcraft. © 2012 Wiley Periodicals, Inc.
 
This paper presents the architecture developed in the framework of the AWARE project for the autonomous distributed cooperation between unmanned aerial vehicles (UAVs), wireless sensor/actuator networks, and ground camera networks. One of the main goals was the demonstration of useful actuation capabilities involving multiple ground and aerial robots in the context of civil applications. A novel characteristic is the demonstration in field experiments of the transportation and deployment of the same load with single/multiple autonomous aerial vehicles. The architecture is endowed with different modules that solve the usual problems that arise during the execution of multipurpose missions, such as task allocation, conflict resolution, task decomposition, and sensor data fusion. The approach had to satisfy two main requirements: robustness for operation in disaster management scenarios and easy integration of different autonomous vehicles. The former specification led to a distributed design, and the latter was tackled by imposing several requirements on the execution capabilities of the vehicles to be integrated in the platform. The full approach was validated in field experiments with different autonomous helicopters equipped with heterogeneous devices onboard, such as visual/infrared cameras and instruments to transport loads and to deploy sensors. Four different missions are presented in this paper: sensor deployment and fire confirmation with UAVs, surveillance with multiple UAVs, tracking of firemen with ground and aerial sensors/cameras, and load transportation with multiple UAVs. © 2011 Wiley Periodicals, Inc.
 
In this paper, we present a novel approach to planetary rover localization that incorporates sun sensor and inclinometer data directly into a stereo visual odometry pipeline. Utilizing the absolute orientation information provided by the sun sensor and inclinometer significantly reduces the error growth of the visual odometry path estimate. The measurements have very low computation, power, and mass requirements, providing localization improvement at nearly negligible cost. We describe the mathematical formulation of error terms for the stereo camera, sun sensor, and inclinometer measurements, as well as the bundle adjustment framework for determining the maximum likelihood vehicle transformation. Extensive results are presented from experimental trials utilizing data collected during a 10-km traversal of a Mars analogue site on Devon Island in the Canadian high Arctic. We also illustrate how our approach can be used to reduce the computational burden of visual odometry for planetary exploration missions. © 2012 Wiley Periodicals, Inc.
 
Machine vision represents a particularly attractive solution for sensing and detecting potential collision-course targets due to the relatively low cost, size, weight, and power requirements of vision sensors (as opposed to radar and Traffic Alert and Collision Avoidance System). This paper describes the development and evaluation of a real-time, vision-based collision-detection system suitable for fixed-wing aerial robotics. Using two fixed-wing unmanned aerial vehicles (UAVs) to recreate various collision-course scenarios, we were able to capture highly realistic vision (from an onboard camera perspective) of the moments leading up to a collision. This type of image data is extremely scarce and was invaluable in evaluating the detection performance of two candidate target detection approaches. Based on the collected data, our detection approaches were able to detect targets at distances ranging from 400 to about 900 m. These distances (with some assumptions about closing speeds and aircraft trajectories) translate to an advance warning of between 8 and 10 s ahead of impact, which approaches the 12.5-s response time recommended for human pilots. We overcame the challenge of achieving real-time computational speeds by exploiting the parallel processing architectures of graphics processing units (GPUs) found on commercial-off-the-shelf graphics devices. Our chosen GPU device suitable for integration onto UAV platforms can be expected to handle real-time processing of 1,024 × 768 pixel image frames at a rate of approximately 30 Hz. Flight trials using manned Cessna aircraft in which all processing is performed onboard will be conducted in the near future, followed by further experiments with fully autonomous UAV platforms. © 2010 Wiley Periodicals, Inc.
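
The advance-warning figures quoted above follow from simple arithmetic: warning time is detection range divided by closing speed. The minimal worked example below uses assumed head-on closing speeds; the numbers are ours, chosen only to be consistent with the reported 8–10 s band.

```python
def advance_warning_s(detection_range_m, closing_speed_mps):
    """Time to impact at a constant closing speed."""
    return detection_range_m / closing_speed_mps

# E.g., two light aircraft head-on at ~50 m/s each (~100 m/s closing),
# and a slower geometry at ~50 m/s closing:
print(advance_warning_s(900.0, 100.0))  # 9.0 s
print(advance_warning_s(400.0, 50.0))   # 8.0 s
```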
 
Project AURORA aims at the development of unmanned robotic airships capable of autonomous flight over user-defined locations for aerial inspection and environmental monitoring missions. In this article, the authors report a successful control and navigation scheme for robotic airship flight path following. First, the AURORA airship, software environment, onboard system, and ground station infrastructures are described. Then, two main approaches for the automatic control and navigation system of the airship are presented. The first one shows the design of dedicated controllers based on the linearized dynamics of the vehicle. Following this methodology, experimental results for airship flight path following through a set of predefined points in latitude/longitude, along with automatic altitude control, are presented. A second approach considers the design of a single global nonlinear control scheme, covering the entire aerodynamic operational range in a single formulation. Nonlinear control solutions under investigation for the AURORA airship are briefly described, along with some preliminary simulation results. © 2006 Wiley Periodicals, Inc.
 
Soldiers are often asked to perform missions that last many hours and are extremely stressful. After a mission is complete, the soldiers are typically asked to provide a report describing the most important things that happened during the mission. Due to the various stresses associated with military missions, there are undoubtedly many instances in which important information is missed or not reported and, therefore, not available for use when planning future missions. The ASSIST (Advanced Soldier Sensor Information System and Sensors Technology) program is addressing this challenge by instrumenting soldiers with sensors that they can wear directly on their uniforms. During the mission, the sensors continuously record what is going on around the soldier. With this information, soldiers are able to give more accurate reports without relying solely on their memory. In order for systems like this (often termed autonomous or intelligent systems) to be successful, they must be comprehensively and quantitatively evaluated to ensure that they will function appropriately and as expected in a wartime environment. The primary contribution of this paper is to introduce and define a framework and approach to performance evaluation called SCORE (System, Component, and Operationally Relevant Evaluation) and describe the results of applying it to evaluate the ASSIST technology. As the name implies, SCORE is built around the premise that, in order to get a true picture of how a system performs in the field, it must be evaluated at the component level, the system level, and in operationally relevant environments. The SCORE framework provides proven techniques to aid in the performance evaluation of many types of intelligent systems. To date, SCORE has only been applied to technologies under development (formative evaluation), but the authors believe that this approach would lend itself equally well to the evaluation of technologies ready to be fielded (summative evaluation).
 
Large-scale environmental sensing, e.g., understanding microbial processes in an aquatic ecosystem, requires coordination across a multidisciplinary team of experts working closely with a robotic sensing and sampling system. We describe a human-robot team that conducted an aquatic sampling campaign in Lake Fulmor, San Jacinto Mountains Reserve, California during three consecutive site visits (May 9–11, June 19–22, and August 28–31, 2006). The goal of the campaign was to study the behavior of phytoplankton in the lake and their relationship to the underlying physical, chemical, and biological parameters. Phytoplankton form the largest source of oxygen and the foundation of the food web in most aquatic ecosystems. The reported campaign consisted of three system deployments spanning four months. The robotic system consisted of two subsystems—NAMOS (networked aquatic microbial observing systems), comprised of a robotic boat and static buoys, and NIMS-RD (rapidly deployable networked infomechanical systems), comprised of an infrastructure-supported tethered robotic system capable of high-resolution sampling in a two-dimensional cross section (vertical plane) of the lake. The multidisciplinary human team consisted of 25 investigators from robotics, computer science, engineering, biology, and statistics. We describe the lake profiling campaign requirements, the robotic systems assisted by a human team to perform high-fidelity sampling, and the sensing devices used during the campaign to observe several environmental parameters. We discuss measures taken to ensure system robustness and quality of the collected data. Finally, we present an analysis of the data collected by iteratively adapting our experiment design to the observations in the sampled environment. We conclude with the plans for future deployments. © 2007 Wiley Periodicals, Inc.
 
The architecture of an advanced fault detection and diagnosis (FDD) system is described and applied to an Autonomous Underwater Vehicle (AUV). The architecture aims to provide a more capable system that does not require dedicated sensors for each fault and can diagnose previously unforeseen failures, as well as failures with cause-effect patterns that span different subsystems. It also lays the foundations for incipient fault detection and condition-based maintenance schemes. A model of relationships is used as an ontology to describe the connected set of electrical, mechanical, hydraulic, and computing components that make up the vehicle, down to the level of least replaceable unit in the field. The architecture uses a variety of domain-dependent diagnostic tools (rulebase, model-based methods) and domain-independent tools (correlator, topology analyzer, watcher) to first detect and then diagnose the location of faults. Tools nominate components, so that a rank order of most likely candidates can be generated. This modular approach allows existing proven FDD methods (e.g., vibration analysis, FMEA) to be incorporated and to add confidence to the conclusions. Illustrative performance is provided from real-time operation during deployments of the hover-capable RAUVER AUV, as an example of the class of automated system to which this approach is applicable. © 2007 Wiley Periodicals, Inc.
 
This paper describes the design and use of two new autonomous underwater vehicles, Jaguar and Puma, which were deployed in the summer of 2007 at sites at 85°N latitude in the ice-covered Arctic Ocean to search for hydrothermal vents. These robots are the first to be deployed and recovered through ice to the deep ocean (>3,500 m) for scientific research. We examine the mechanical design, software architecture, navigation considerations, sensor suite, and issues with deployment and recovery in the ice based on the missions they carried out. Successful recoveries of vehicles deployed under the ice require two-way acoustic communication, flexible navigation strategies, redundant localization hardware, and software that can cope with several different kinds of failure. The ability to direct an autonomous underwater vehicle via the low-bandwidth and intermittently functional acoustic channel is of particular importance. On the basis of our experiences, we also discuss the applicability of the technology and operational approaches of this expedition to the exploration of Jupiter's ice-covered moon Europa. © 2009 Wiley Periodicals, Inc.
 
Wilderness Search and Rescue (WiSAR) entails searching over large regions in often rugged remote areas. Because of the large regions and potentially limited mobility of ground searchers, WiSAR is an ideal application for using small (human-packable) unmanned aerial vehicles (UAVs) to provide aerial imagery of the search region. This paper presents a brief analysis of the WiSAR problem with emphasis on practical aspects of visual-based aerial search. As part of this analysis, we present and analyze a generalized contour search algorithm, and relate this search to existing coverage searches. Extending beyond laboratory analysis, lessons from field trials with search and rescue personnel indicated the immediate need to improve two aspects of UAV-enabled search: How video information is presented to searchers and how UAV technology is integrated into existing WiSAR teams. In response to the first need, three computer vision algorithms for improving video display presentation are compared; results indicate that constructing temporally localized image mosaics is more useful than stabilizing video imagery. In response to the second need, a goal-directed task analysis of the WiSAR domain was conducted and combined with field observations to identify operational paradigms and field tactics for coordinating the UAV operator, the payload operator, the mission manager, and ground searchers. © 2008 Wiley Periodicals, Inc.
 
This paper presents a fully autonomous navigation solution for urban, pedestrian environments. The task at hand, undertaken within the context of the European project URUS, was to enable two urban service robots, based on Segway RMP200 platforms and using planar lasers as primary sensors, to navigate around a known, large (10,000 m²), pedestrian-only environment with poor global positioning system coverage. Special consideration is given to the nature of our robots, highly mobile but two-wheeled, self-balancing, and inherently unstable. Our approach allows us to tackle locations with large variations in height, featuring ramps and staircases, thanks to a three-dimensional, map-based particle filter for localization and to surface traversability inference for low-level navigation. This solution was tested in two different urban settings, the experimental zone devised for the project, a university campus, and a very crowded public avenue, both located in the city of Barcelona, Spain. Our results total more than 6 km of autonomous navigation, with a success rate on go-to requests of nearly 99%. The paper presents our system, examines its overall performance, and discusses the lessons learned throughout development. © 2011 Wiley Periodicals, Inc.
 
This paper describes the architecture and implementation of an autonomous passenger vehicle designed to navigate using locally perceived information in preference to potentially inaccurate or incomplete map data. The vehicle architecture was designed to handle the original DARPA Urban Challenge requirements of perceiving and navigating a road network with segments defined by sparse waypoints. The vehicle implementation includes many heterogeneous sensors with significant communications and computation bandwidth to capture and process high-resolution, high-rate sensor data. The output of the comprehensive environmental sensing subsystem is fed into a kinodynamic motion planning algorithm to generate all vehicle motion. The requirements of driving in lanes, three-point turns, parking, and maneuvering through obstacle fields are all generated with a unified planner. A key aspect of the planner is its use of closed-loop simulation in a rapidly exploring randomized trees algorithm, which can randomly explore the space while efficiently generating smooth trajectories in a dynamic and uncertain environment. The overall system was realized through the creation of a powerful new suite of software tools for message passing, logging, and visualization. These innovations provide a strong platform for future research in autonomous driving in global positioning system–denied and highly dynamic environments with poor a priori information. © 2008 Wiley Periodicals, Inc.
 
Autonomous robot navigation in unstructured outdoor environments is a challenging area of active research and is currently unsolved. The navigation task requires identifying safe, traversable paths that allow the robot to progress toward a goal while avoiding obstacles. Stereo is an effective tool in the near field, but used alone leads to a common failure mode in autonomous navigation in which suboptimal trajectories are followed due to nearsightedness, or the robot's inability to distinguish obstacles and safe terrain in the far field. This can be addressed through the use of machine learning methods to accomplish near-to-far learning, in which near-field terrain appearance and stereo readings are used to train models able to predict far-field terrain. This paper proposes to enhance existing, memoryless near-to-far learning approaches through the use of classifier ensembles that allow terrain models trained on data seen at different points in time to be preserved and referenced later. These stored models serve as memory, and we show that they can be leveraged for more effective far-field terrain classification on future images seen by the robot. A five-factor, full-factorial, repeated-measures experimental evaluation is performed on hand-labeled data sets taken directly from the problem domain. The experiments result in many statistically significant findings, the most important being that the proposed near-to-far Best-K Ensemble Algorithm, with appropriate parameter selection, outperforms the single-model, nonensemble baseline approach in far-field terrain classification. Several other findings that inform the use of near-to-far ensemble methods are also presented. © 2009 Wiley Periodicals, Inc.
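
A minimal sketch of the ensemble idea described above follows: every near-field terrain model ever trained is retained as memory, and the K models that currently score best on labeled near-field data vote on far-field terrain. The base learner (logistic regression), the selection criterion, and all names are our illustrative assumptions, not the paper's Best-K Ensemble Algorithm in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class BestKEnsemble:
    """Near-to-far learning with memory: keep all past near-field models,
    vote with the K that fit the current near field best."""
    def __init__(self, k=5):
        self.k, self.models = k, []

    def add_model(self, X_near, y_near):
        """Train a terrain classifier on this frame's near-field data
        (stereo-labeled appearance features) and store it."""
        self.models.append(LogisticRegression().fit(X_near, y_near))

    def predict_far(self, X_near, y_near, X_far):
        """Rank stored models by accuracy on the current near field,
        then majority-vote the best K on far-field feature patches."""
        best = sorted(self.models, key=lambda m: m.score(X_near, y_near),
                      reverse=True)[: self.k]
        votes = np.stack([m.predict(X_far) for m in best])
        return (votes.mean(axis=0) > 0.5).astype(int)
```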
 
In this paper, we describe a framework for the autonomous capture and servicing of satellites. The work is based on laboratory experiments that illustrate the autonomy and remote-operation aspects. The satellite-capture problem is representative of most on-orbit robotic manipulation tasks where the environment is known and structured, but it is dynamic since the satellite to be captured is in free flight. Bandwidth limitations and communication dropouts dominate the quality of the communication link. The satellite-servicing scenario is implemented on a robotic test-bed in laboratory settings. The communication aspects were validated in transatlantic tests. © 2007 Canadian Space Agency
 
The 2007 DARPA Urban Challenge afforded the Technische Universität Braunschweig a golden opportunity to demonstrate its ability to develop an autonomously driving vehicle that could compete with the world's best. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only 11 teams, from an initial field of 89 competitors, to compete in the final. We were able to work together in a large group of experts, each contributing expertise in his or her discipline, with significant organizational, financial, and technical support from local sponsors, who helped us to become the best non-U.S. team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution, “Caroline,” the technology and algorithms, and her performance in the DARPA Urban Challenge Final Event on November 3, 2007. © 2008 Wiley Periodicals, Inc.
 
This article describes a simple monocular navigation system for a mobile robot based on the map-and-replay technique. The presented method is robust and easy to implement, does not require sensor calibration or a structured environment, and its computational complexity is independent of the environment size. The method can navigate a robot while sensing only one landmark at a time, making it more robust than other monocular approaches. These properties allow even low-cost robots to act effectively in large outdoor and indoor environments with natural landmarks only. The basic idea is to utilize monocular vision to correct only the robot's heading, leaving distance measurements to the odometry. The heading correction itself can suppress the odometric error and prevent the overall position error from diverging. The influence of map-based heading estimation and odometric errors on the overall position uncertainty is examined. We claim that, for closed polygonal trajectories, the position error of this type of navigation does not diverge; the claim is defended mathematically and experimentally. The method has been tested in a set of indoor and outdoor experiments, during which the average position errors have been lower than 0.3 m for paths more than 1 km long. © 2010 Wiley Periodicals, Inc.
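
The core idea above, correcting only the heading from vision while trusting odometry for distance, can be sketched as a simple proportional bearing controller. The gain, the names, and the replay-loop structure below are illustrative assumptions rather than the paper's exact method.

```python
def heading_correction(expected_bearing_rad, observed_bearing_rad, gain=0.5):
    """Steer so the currently tracked landmark drifts toward the bearing
    recorded during the mapping run; distance along the segment is left
    entirely to odometry."""
    error = observed_bearing_rad - expected_bearing_rad
    return gain * error  # steering-rate command

# Replay-loop skeleton (one landmark tracked at a time; all names are
# hypothetical placeholders for the robot's own interfaces):
# while odometry_distance() < segment_length:
#     omega = heading_correction(map_bearing(), camera_bearing())
#     drive(v_nominal, omega)
```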
 
Broad-leaved dock is a common and troublesome grassland weed with a wide geographic distribution. In conventional farming the weed is normally controlled by using a selective herbicide, but in organic farming manual removal is the best option to control this weed. The objective of our work was to develop a robot that can navigate a pasture, detect broad-leaved dock, and remove any weeds found. A prototype robot was constructed that navigates by following a predefined path using centimeter-precision global positioning system (GPS). Broad-leaved dock is detected using a camera and image processing. Once detected, weeds are destroyed by a cutting device. Tests of aspects of the system showed that path following accuracy is adequate but could be improved through tuning of the controller or adoption of a dynamic vehicle model, that the success rate of weed detection is highest when the grass is short and when the broad-leaved dock plants are in rosette form, and that 75% of weeds removed did not grow back. An on-farm field test of the complete system resulted in detection of 124 weeds of 134 encountered (93%), while a weed removal action was performed eight times without a weed being present. Effective weed control is considered to be achieved when the center of the weeder is positioned within 0.1 m of the taproot of the weed—this occurred in 73% of the cases. We conclude that the robot is an effective instrument to detect and control broad-leaved dock under the conditions encountered on a commercial farm. © 2010 Wiley Periodicals, Inc.
 
This article investigates the use of time-of-flight (ToF) cameras in mapping tasks for autonomous mobile robots, in particular in simultaneous localization and mapping (SLAM) tasks. Although ToF cameras are in principle an attractive type of sensor for three-dimensional (3D) mapping owing to their high rate of frames of 3D data, two features make them difficult as mapping sensors, namely, their restricted field of view and the influence of high dynamics in object reflectivity on the quality of range measurements; in addition, currently available models suffer from poor data quality in a number of aspects. The paper first summarizes calibration and filtering approaches for improving the accuracy, precision, and robustness of ToF cameras independent of their intended usage. Then, several ego motion estimation approaches are applied or adapted, respectively, in order to provide a performance benchmark for registering ToF camera data. As a part of this, an extension to the iterative closest point algorithm has been developed that increases the robustness under restricted field of view and under larger displacements. Using an indoor environment, the paper provides results from SLAM experiments using these approaches in comparison. It turns out that the application of ToF cameras to SLAM tasks is feasible, although this type of sensor has a complex error characteristic. © 2009 Wiley Periodicals, Inc.
 
This article presents the architecture of Junior, a robotic vehicle capable of navigating urban environments autonomously. In doing so, the vehicle is able to select its own routes, perceive and interact with other traffic, and execute various urban driving skills including lane changes, U-turns, parking, and merging into moving traffic. The vehicle successfully finished and won second place in the DARPA Urban Challenge, a robot competition organized by the U.S. Government. © 2008 Wiley Periodicals, Inc.
 
Midway through the 2007 DARPA Urban Challenge, MIT's robot “Talos” and Team Cornell's robot “Skynet” collided in a low-speed accident. This accident was one of the first collisions between full-sized autonomous road vehicles. Fortunately, both vehicles went on to finish the race and the collision was thoroughly documented in the vehicle logs. This collaborative study between MIT and Cornell traces the confluence of events that preceded the collision and examines its root causes. A summary of robot–robot interactions during the race is presented. The logs from both vehicles are used to show the gulf between robot and human-driver behavior at close vehicle proximities. Contributing factors are shown to be (1) difficulties in sensor data association leading to an inability to detect slow-moving vehicles and phantom obstacles, (2) failure to anticipate vehicle intent, and (3) an overemphasis on lane constraints versus vehicle proximity in motion planning. Finally, we discuss approaches that could address these issues in future systems, such as intervehicle communication, vehicle detection, and prioritized motion planning. © 2008 Wiley Periodicals, Inc.
 
Team Cornell's Skynet is an autonomous Chevrolet Tahoe built to compete in the 2007 DARPA Urban Challenge. Skynet consists of many unique subsystems, including actuation and power distribution designed in-house, a tightly coupled attitude and position estimator, a novel obstacle detection and tracking system, a system for augmenting position estimates with vision-based detection algorithms, a path planner based on physical vehicle constraints and a nonlinear optimization routine, and a state-based reasoning agent for obeying traffic laws. This paper describes these subsystems in detail before discussing the system's overall performance in the National Qualifying Event and the Urban Challenge. Logged data recorded at the National Qualifying Event and the Urban Challenge are presented and used to analyze the system's performance. © 2008 Wiley Periodicals, Inc.
 
This article presents a robust approach to navigating at high speed across desert terrain. A central theme of this approach is the combination of simple ideas and components to build a capable and robust system. A pair of robots were developed, which completed a 212-km Grand Challenge desert race in approximately 7 h. A path-centric navigation system uses a combination of LIDAR- and RADAR-based perception sensors to traverse trails and avoid obstacles at speeds up to 15 m/s. The onboard navigation system leverages a human-based preplanning system to improve reliability and robustness. The robots have been extensively tested, traversing over 3,500 km of desert trails prior to completing the challenge. This article describes the mechanisms, algorithms, and testing methods used to achieve this performance. © 2006 Wiley Periodicals, Inc.
 
Robotic systems exhibit remarkable capability for exploring and mapping subterranean voids. Information about subterranean spaces has immense value for civil, security, and commercial applications, where problems such as encroachment, collapse, flooding, and subsidence can occur. Contemporary methods for underground mapping, such as human surveys and geophysical techniques, can provide estimates of void location, but cannot achieve the coverage, quality, or economy of robotic approaches. This article presents the challenges, mechanisms, sensing, and software of subterranean robots. Results obtained from operations in active, abandoned, and submerged subterranean spaces are also shown. © 2006 Wiley Periodicals, Inc.
 
In this article we present the complete details of the architecture and implementation of Leaving Flatland, an exploratory project that attempts to surmount the challenges of closing the loop between autonomous perception and action on challenging terrain. The proposed system includes comprehensive localization, mapping, path planning, and visualization techniques for a mobile robot to operate autonomously in complex three-dimensional (3D) indoor and outdoor environments. In doing so we integrate robust visual odometry localization techniques with real-time 3D mapping methods from stereo data to obtain consistent global models annotated with semantic labels. These models are used by a multiregion motion planner that adapts existing two-dimensional planning techniques to operate in 3D terrain. All the system components are evaluated on a variety of real-world data sets, and their computational performance is shown to be favorable for high-speed autonomous navigation. © 2009 Wiley Periodicals, Inc.
 
Urban Search and Rescue is a growing area of robotic research. The RoboCup Federation has recognized this, and has created the new Virtual Robots competition to complement its existing physical robot and agent competitions. In order to successfully compete in this competition, teams need to field multi-robot solutions that cooperatively explore and map an environment while searching for victims. This paper presents the results of the first annual RoboCup Rescue Virtual competition. It provides details on the metrics used to judge the contestants as well as summaries of the algorithms used by the top four teams. This allows readers to compare and contrast these effective approaches. Furthermore, the simulation engine itself is examined and real-world validation results on the engine and algorithms are offered. © 2007 Wiley Periodicals, Inc.
 
In this paper we describe a LIDAR-based navigation approach applied at both the C-Elrob (European Land Robot Trial) 2007 and the 2007 DARPA Urban Challenge. At the C-Elrob 2007 the approach was used without any prior knowledge about the terrain and without global positioning system (GPS). At the Urban Challenge the approach was combined with a GPS-based path follower. At the core of the method is a set of “tentacles” that represent precalculated trajectories defined in the ego-centered coordinate space of the vehicle. Similar to an insect's antennae or feelers, they fan out with different curvatures discretizing the basic driving options of the vehicle. We detail how the approach can be used for exploration of unknown environments and how it can be extended to combined GPS path following and obstacle avoidance allowing safe road following in case of GPS offsets. © 2008 Wiley Periodicals, Inc.
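
A minimal sketch of the tentacle idea follows: constant-curvature arcs are precomputed in the vehicle frame, and the arc that stays collision-free the longest is selected. The curvature range, the discretization, and the selection rule are illustrative assumptions, not the published parameterization.

```python
import numpy as np

def make_tentacles(n=15, k_max=0.2, length=10.0, step=0.5):
    """Precompute ego-frame arcs of varying curvature ('tentacles').
    For constant curvature k, after arc length s:
    x = sin(k*s)/k, y = (1 - cos(k*s))/k."""
    tentacles = []
    s = np.arange(step, length + step, step)
    for k in np.linspace(-k_max, k_max, n):
        if abs(k) < 1e-9:                   # straight-ahead tentacle
            x, y = s, np.zeros_like(s)
        else:
            x = np.sin(k * s) / k
            y = (1.0 - np.cos(k * s)) / k
        tentacles.append((k, np.column_stack([x, y])))
    return tentacles

def pick_tentacle(tentacles, occupied):
    """Choose the tentacle whose points stay clear of obstacles the
    longest; `occupied(p)` tests one ego-frame point against the
    local obstacle map (supplied by the caller)."""
    best_k, best_free = 0.0, -1
    for k, pts in tentacles:
        free = 0
        for p in pts:
            if occupied(p):
                break
            free += 1
        if free > best_free:
            best_k, best_free = k, free
    return best_k
```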
 
This paper defines the requirements for the development of successful visualization sensors for use in open-cut and underground mines. It examines the mine environment and considers both the reflectivity of the rock and the attenuation effects of dust and water droplets. Millimeter wave technology, as an alternative to the more commonly used laser and sonar implementations, is selected due to its superior penetration through adverse atmospheric conditions. Of the available radar techniques, frequency modulated continuous wave (FMCW) is selected as being the most robust. The theoretical performance of a number of 77 and 94 GHz FMCW millimeter wave radar systems is determined, and the results confirm the capability of these sensors in the mining environment. Implementations of FMCW radar sensors for simple ranging and three-dimensional surface profiling are discussed before data obtained during field trials in mines are presented to justify the selection of this technology. © 2007 Wiley Periodicals, Inc.
 
In January 2004, NASA's twin Mars Exploration Rovers (MERs), Spirit and Opportunity, began searching the surface of Mars for evidence of past water activity. To localize and approach scientifically interesting targets, the rovers employ an onboard navigation ...
 
Current rover localization techniques such as visual odometry have proven to be very effective on short- to medium-length traverses (e.g., up to a few kilometers). This paper deals with the problem of long-range rover localization (e.g., 10 km and up) by developing an algorithm named MOGA (Multi-frame Odometry-compensated Global Alignment). This algorithm is designed to globally localize a rover by matching features detected from a three-dimensional (3D) orbital elevation map to features from rover-based, 3D LIDAR scans. The accuracy and efficiency of MOGA are enhanced with visual odometry and inclinometer/sun-sensor orientation measurements. The methodology was tested with real data, including 37 LIDAR scans of terrain from a Mars–Moon analog site on Devon Island, Nunavut. When a scan contained a sufficient number of good topographic features, localization produced position errors of no more than 100 m, of which most were less than 50 m and some even as low as a few meters. Results were compared to and shown to outperform VIPER, a competing global localization algorithm that was given the same initial conditions as MOGA. On a 10-km traverse, MOGA's localization estimates were shown to significantly outperform visual odometry estimates. This paper shows how the developed algorithm can be used to accurately and autonomously localize a rover over long-range traverses. © 2010 Wiley Periodicals, Inc.
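
One building block of such global alignment, computing the rigid transform that best maps matched rover-frame features onto orbital-map features, can be sketched with the standard Kabsch/Umeyama procedure below. This is a generic least-squares alignment step under the assumption that feature correspondences are already known; it is not the MOGA algorithm itself.

```python
import numpy as np

def rigid_transform_2d(A, B):
    """Least-squares rotation + translation aligning rover-frame feature
    positions A (N,2) to orbital-map positions B (N,2)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

# Illustrative check: rotate/translate three points and recover the pose.
A = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = A @ R_true.T + np.array([100.0, -40.0])
print(rigid_transform_2d(A, B))
```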
 
This paper describes a navigation system for autonomous underwater vehicles (AUVs) in partially structured environments, such as dams, harbors, marinas, and marine platforms. A mechanically scanned imaging sonar is used to obtain information about the location of vertical planar structures present in such environments. A robust voting algorithm has been developed to extract line features, together with their uncertainty, from the continuous sonar data flow. The obtained information is incorporated into a feature-based simultaneous localization and mapping (SLAM) algorithm running an extended Kalman filter. Simultaneously, the AUV's position estimate is provided to the feature extraction algorithm to correct the distortions that the vehicle motion produces in the acoustic images. Moreover, a procedure to build and maintain a sequence of local maps and to posteriorly recover the full global map has been adapted for the application presented. Experiments carried out in a marina located in the Costa Brava (Spain) with the Ictineu AUV show the viability of the proposed approach. © 2008 Wiley Periodicals, Inc.
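
A minimal sketch of a voting scheme for line extraction is given below, using a standard Hough accumulator over (theta, rho); each sonar return votes for every line it could lie on, and the peak identifies the dominant planar structure. The resolutions and the single-peak readout are illustrative simplifications of the paper's uncertainty-aware algorithm.

```python
import numpy as np

def hough_strongest_line(points, rho_res=0.1, theta_res=np.deg2rad(1.0),
                         rho_max=50.0):
    """Vote sonar returns (x, y) into a (theta, rho) accumulator; the peak
    corresponds to the planar walls common in harbors and marinas."""
    thetas = np.arange(0.0, np.pi, theta_res)
    rhos = np.arange(-rho_max, rho_max, rho_res)
    acc = np.zeros((len(thetas), len(rhos)), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
        idx = np.round((r + rho_max) / rho_res).astype(int)
        ok = (idx >= 0) & (idx < len(rhos))
        acc[np.arange(len(thetas))[ok], idx[ok]] += 1
    ti, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return thetas[ti], rhos[ri]

# E.g., noiseless returns from the wall x = 20 m:
pts = [(20.0, y) for y in np.linspace(-5.0, 5.0, 50)]
print(hough_strongest_line(pts))   # theta ~ 0, rho ~ 20
```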
 
In this paper, we present a multi-pronged approach to the “Learning from Example” problem. In particular, we present a framework for integrating learning into a standard, hybrid navigation strategy, composed of both plan-based and reactive controllers. Based on the classification of colors and textures as either good or bad, a global map is populated with estimates of preferability in conjunction with the standard obstacle information. Moreover, individual feedback mappings from learned features to learned control actions are introduced as additional behaviors in the behavioral suite. A number of real-world experiments are discussed that illustrate the viability of the proposed method. © 2007 Wiley Periodicals, Inc.
 
Multiple-wheel all-terrain vehicles without a steering system must expend great amounts of power when skid steering. Skid steering is modeled with emphasis put on the ground contact forces of the wheels according to the mass distribution of the vehicle. To increase steering efficiency, it is possible to modify the distribution of the normal contact forces on the wheels. This paper focuses on two aspects: first, it provides a model and an experimental study of skid steering on an all-road 6 × 6 electric wheelchair, the Kokoon mobile platform. Second, it studies two configurations of the distribution of the normal forces on the six wheels, obtained via suspension adjustments. This was both modeled and tested experimentally. Contact forces were measured with a six-component force plate. The first results show that skid steering can be substantially improved by only minor adjustments to the suspensions. This setting decreases the required longitudinal forces applied by the engines and improves the steering ability of the vehicle or robot. Skid-steering characteristic parameters, such as the position of the center of rotation and absorbed skid power, are also dealt with in this paper. © 2010 Wiley Periodicals, Inc.
 
Long-duration robotic missions on lunar and planetary surfaces (for example, the Mars Exploration Rovers have operated continuously on the Martian surface for close to 3 years) provide the opportunity to acquire scientifically interesting information from a diverse set of surface and subsurface sites and to explore multiple sites in greater detail. Exploring a wide range of terrain types, including plains, cliffs, sand dunes, and lava tubes, requires the development of robotic systems with mobility enhanced beyond that which is currently fielded. These systems include single robots as well as teams of robots. TRESSA (Teamed Robots for Exploration and Science on Steep Areas) is a closely coupled three-robot team developed at the Jet Propulsion Laboratory (JPL) that previously demonstrated the ability to drive on soil-covered slopes of up to 70 deg. In this paper, we present results from field demonstrations of the TRESSA system in even more challenging terrain: rough rocky slopes of up to 85 deg. In addition, the integration of a robotic arm and instrument suite has allowed TRESSA to demonstrate semi-autonomous science investigation of the cliffs and science sample collection. TRESSA successfully traversed cliffs and collected samples at three Mars analog sites in Svalbard, Norway, as part of a recent geological and astrobiological field investigation, the Arctic Mars Analog Svalbard Expedition (AMASE), under the NASA ASTEP (Astrobiology Science and Technology for Exploring Planets) program. © 2007 Wiley Periodicals, Inc.
 
Visual inspection and nondestructive evaluation (NDE) of natural gas distribution mains is an important future maintenance cost-planning step for the nation's gas utilities. These data need to be gathered at an affordable cost with the fewest excavations and maximum linear feet inspected for each deployment, with minimal to no disruption in service. Current methods (sniffing, direct assessment) are either postleak reactive or too unreliable to offer a viable and Department of Transportation–acceptable approach as a whole. Toward achieving the above goal, a consortium of federal and commercial sponsors funded the development of Explorer™. Explorer™ is a long-range, untethered, self-powered, and wirelessly controlled modular inspection robot for the visual inspection and NDE of 6- and 8-in. natural gas distribution pipelines/mains. The robot is launched into the pipeline under live (pressurized flow) conditions and can negotiate diameter changes, 45- and 90-deg bends and tees, as well as inclined and vertical sections of the piping network. The modular design of the system allows it to be expanded to include additional inspection and/or repair tools. The range of the robot is an order of magnitude higher (thousands of feet) than present state-of-the-art inspection systems and will improve the way gas utilities maintain and manage their systems. Two prototypes, Explorer-I and -II (X-I and X-II), were developed and field-tested over a 3-year period. X-I is capable of visual inspection only and was field-tested in 2004 and 2005. The next-generation X-II, capable of visual and NDE inspection [remote field eddy current (RFEC) and magnetic flux leakage (MFL)], was developed thereafter and had field trials in 2006 and late 2007. It was successfully deployed into low-pressure (<125 psig) and high-pressure (>500 psig) distribution and transmission natural gas mains, with multi-1,000-ft inspection runs under live conditions from a single excavation. This paper will describe the overall engineering design and functionality of the Explorer™ family of robots, as well as the results of the field trials for both platforms. It will highlight the importance of the various design and safety features of the in-pipe crawler and showcase the value of data types and position-tagged visual/NDE data collected in working pipelines under live flow conditions. © 2010 Wiley Periodicals, Inc.
 
To design high-level control structures efficiently, reasonable mathematical model parameters of the vessel have to be known. Because the sensors and equipment mounted onboard marine vessels can change during a mission, it is important to have an identification procedure that is easy to implement and time efficient and that yields model parameters accurate enough to perform controller design. This paper introduces one such method, identification based on self-oscillations (IS-O). The described methodology can be used to identify single-degree-of-freedom nonlinear model parameters of underwater and surface marine vessels. Extensive experiments have been carried out on the VideoRay remotely operated vehicle and the Charlie unmanned surface vehicle to prove that the method gives consistent results. A comparison with least-squares identification and thorough validation tests have been performed, proving the quality of the obtained parameters. The proposed method can also be used to draw conclusions on the model that describes the dynamics of the vessel. The paper also includes results of autopilot design in which the controllers are tuned according to the proposed method based on self-oscillations, proving the applicability of the proposed method. © 2010 Wiley Periodicals, Inc.
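
The IS-O method is related to classic relay-feedback autotuning, in which a measured self-oscillation yields ultimate-gain information through the relay's describing function. The sketch below shows those classic Åström–Hägglund relations as a stand-in; the exact IS-O formulation for nonlinear marine-vessel models differs, and all numbers here are illustrative.

```python
import numpy as np

def relay_ultimate_params(relay_amp, osc_amp, osc_period):
    """Classic relay-feedback relations: an ideal relay of amplitude C
    driving a self-oscillation of amplitude X has describing function
    N(X) = 4C / (pi * X), giving the ultimate gain K_u = N(X) and the
    ultimate period T_u = the measured oscillation period."""
    k_u = 4.0 * relay_amp / (np.pi * osc_amp)
    return k_u, osc_period

# E.g., a 10 N*m relay produces 0.2 rad heading oscillations, period 4 s:
K_u, T_u = relay_ultimate_params(10.0, 0.2, 4.0)
print(K_u, T_u)   # can feed Ziegler-Nichols-style tuning rules
```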
 
In an ideal case, telepresence achieves a state in which a human operator can no longer differentiate between an interaction with a real environment and a technically mediated one. This state is called transparent telepresence. The applicability of telepresence to on-orbit servicing (OOS), i.e., an unmanned servicing operation in space, teleoperated from the ground in real time, is verified in this paper. For this purpose, a communication test environment was set up on the ground, which involved the Institute of Astronautics (LRT) ground station in Garching, Germany, and the European Space Agency (ESA) ground station in Redu, Belgium. Both were connected via the geostationary ESA data relay satellite ARTEMIS. Utilizing the data relay satellite, a teleoperation was accomplished in which the human operator as well as the (space) teleoperator was located on the ground. The feasibility of telepresent OOS was evaluated using an OOS test bed at the Institute of Mechatronics and Robotics at the German Aerospace Center (DLR). The manipulation task was representative of OOS and supported real-time feedback from the haptic-visual workspace. The tests showed that complex manipulation tasks can be fulfilled by utilizing geostationary data relay satellites. For verifying the feasibility of telepresent OOS, different evaluation methods were used. The properties of the space link were measured and related to the subjective perceptions of participants, who had to fulfill manipulation tasks. An evaluation of the transparency of the system, including the data relay satellite, was accomplished as well. © 2009 Wiley Periodicals, Inc.
 
Building a model of large-scale terrain that can adequately handle uncertainty and incompleteness in a statistically sound way is a challenging problem. This work proposes the use of Gaussian processes as models of large-scale terrain. The proposed model naturally provides a multiresolution representation of space, incorporates and handles uncertainties aptly, and copes with incompleteness of sensory information. Gaussian process regression techniques are applied to estimate and interpolate (to fill gaps in occluded areas) elevation information across the field. The estimates obtained are the best linear unbiased estimates for the data under consideration. A single nonstationary (neural network) Gaussian process is shown to be powerful enough to model large and complex terrain, effectively handling issues relating to discontinuous data. A local approximation method based on a “moving window” methodology and implemented using k-dimensional (KD) trees is also proposed. This enables the approach to handle extremely large data sets, thereby completely addressing its scalability issues. Experiments are performed on large-scale data sets taken from real mining applications. These data sets include sparse mine planning data, which are representative of a global positioning system–based survey, as well as dense laser scanner data taken at different mine sites. Further, extensive statistical performance evaluation and benchmarking of the technique has been performed through cross-validation experiments. The results show that for dense and/or flat data, the proposed approach performs very competitively with grid-based approaches using standard interpolation techniques and with triangulated irregular networks using triangle-based interpolation techniques; for sparse and/or complex data, however, it significantly outperforms them. © 2009 Wiley Periodicals, Inc.
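
The “moving window” local approximation described above can be sketched as follows: a KD-tree returns the k nearest training points to each query, and a small Gaussian process is fitted to just that neighborhood instead of all N points. The stationary RBF kernel here is a stand-in for the paper's nonstationary neural-network kernel, and all names and parameters are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def local_gp_elevation(xy_train, z_train, xy_query, k=100):
    """Predict elevation at each query point from a GP fitted only to
    its k nearest training points (assumes len(xy_train) >= k)."""
    tree = cKDTree(xy_train)
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        _, idx = tree.query(q, k=k)
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0))
        gp.fit(xy_train[idx], z_train[idx])
        out[i] = gp.predict(q[None, :])[0]
    return out
```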
 
Next-generation autonomous underwater vehicles (AUVs) will be required to robustly identify underwater targets for tasks such as inspection, localization, and docking. Given their often unstructured operating environments, vision offers enormous potential in underwater navigation over more traditional methods; however, reliable target segmentation often plagues these systems. This paper addresses robust vision-based target recognition by presenting a novel scale and rotationally invariant target design and recognition routine based on self-similar landmarks that enables robust target pose estimation with respect to a single camera. These algorithms are applied to an AUV with controllers developed for vision-based docking with the target. Experimental results show that the system performs exceptionally on limited processing power and demonstrates how the combined vision and controller system enables robust target identification and docking in a variety of operating conditions. © 2008 Wiley Periodicals, Inc.
 