Computer, December 2006. Published by the IEEE Computer Society.

Guna Seetharaman, Air Force Institute of Technology
Arun Lakhotia, University of Louisiana at Lafayette
Erik Philip Blasch, Air Force Research Laboratory

While the DARPA Grand Challenge has revitalized interest in intelligent highway systems, autonomous vehicles, and sensing technology, a host of other novel issues afford interesting design and computer-engineering challenges for the future.
Getting a driver’s license marks a milestone for most teenagers on their journey into adulthood. In a manner of speaking, robotics technology has also matured over the past three decades to the point where it too is ready to claim a driver’s license.
Significant recent advances in information processing, machine vision, control theory, and signal processing—in both hardware and software—have increased the capability to represent, analyze, perceive, and respond to dynamic road conditions. In this issue, we feature the latest developments in ground-based unmanned autonomous vehicles as seen in the highly publicized DARPA Grand Challenge. While the Grand Challenge focuses attention on today’s unmanned ground vehicles, this issue also devotes special attention to the future of unmanned vehicles and the computational paradigms that might be used as part of a system of intelligent vehicles.
Driving is a demanding task requiring much more
than precise, reliable, repetitive robotic behavior. As
Herbert Simon noted, “What information consumes is rather obvious: It consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”
Successful driving requires attention, alertness, and instinctive responses to varying road conditions, obstacles, and hazards. A car traveling at 96 kph (60 mph) covers 26.5 meters per second. Most human drivers have a reaction time of three seconds, and the vehicle braking distance at that speed is typically 100 meters. A typical driver is alert to road conditions about 11 seconds in advance of the vehicle. During this interval, the driver makes a stream of decisions based on roughly 291 meters’ worth of data spread across an arbitrary number of lanes. This does not include processing the sensory data for rearview image analysis, which also bears on safety. All these issues call for a steady stream of decisions made without abandoning previous tactical choices for route navigation.
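These figures are easy to check mechanically. A quick sketch of the arithmetic (the helper name is ours, not from the article):

```python
def kph_to_mps(kph: float) -> float:
    """Convert kilometers per hour to meters per second."""
    return kph * 1000.0 / 3600.0

speed = kph_to_mps(96)       # about 26.7 m/s; the article rounds to 26.5
reaction_dist = speed * 3    # ground covered during a 3 s reaction time
lookahead = speed * 11       # data window for an 11 s alert horizon

print(f"{speed:.1f} m/s, {reaction_dist:.0f} m reaction, {lookahead:.0f} m look-ahead")
```

At 26.7 m/s the 11-second horizon spans roughly 293 meters, consistent with the 291-meter figure above, which uses the rounded 26.5 m/s.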
To enhance driver capabilities, autonomous comput-
ing can significantly aid in routine decision making:
Manufacturers have successfully
integrated automatic cruise control,
automatic transmissions, and fully
automatic parking in commercial
cars. Remotely operated vehicles
have performed successfully in space
missions and hostile environments
for more than two decades. Industry
has used robotic systems for moving
materials between locations on pro-
duction lines for half a century. These
accomplishments were mostly due to
precise measurement, controlled response, and precise
and reliable actuators with a manual override feature to
ensure safety.
Are these robotics systems ready to become personal
chauffeurs? Only recently has the scientific community
taken up the challenge of autonomous driving. The
recent DARPA Grand Challenge thrilled us with machines driving in fully autonomous mode. More
exciting still is the realistic possibility that autonomous
vehicles might be able to navigate urban settings in the
near future.
The articles in this issue highlight advances in the field
of autonomous vehicles as demonstrated by the 2005
DARPA Grand Challenge, but also in the context of the
automotive industry in general. While the DARPA
Challenge is pushing for fully autonomous solutions,
there is a host of computer technology that could help
minimize traffic accidents due to driver fatigue, road
congestion, environmental effects such as snow and ice,
and unavoidable machine failures such as tire blowouts.
These technologies include precision GPS for naviga-
tion, coordinated control of LCD displays to monitor
and report traffic conditions, and increased processor/
sensor capabilities on a single vehicle. Technology can
aid a driver, but it requires system-wide planning that
mirrors the developments of the airline industry.
In “VisLab and the Evolution of Vision-Based UGVs,”
Massimo Bertozzi and coauthors provide a brief history
of the evolution of autonomous vehicles during the past
three decades. Early research focused on providing
advanced assistance for drivers, the success of which has
led to the bolder vision of complete autonomy. This arti-
cle describes both the history of autonomous vehicle
research worldwide and the history of research at
VisLab at the University of Parma, which partnered with
Oshkosh Truck Corporation to build TerraMax—one
of five robots that finished the DARPA Grand Challenge.
“Perception and Planning Architecture for Autonomous Ground Vehicles” by Bob Touchton and colleagues describes the systems-level integration issues that
the Grand Challenge poses. Team CIMAR of the
University of Florida, now known as Team Gator
Nation, consisted of multiple orga-
nizations working on different
aspects of the problem. They were
able to prevent the integration chaos
typical of large, disjoint teams by
using the standardized Joint Architecture for Unmanned Systems (JAUS) component and messaging
framework, an architecture devel-
oped by the working group char-
tered by the United States Office of
the Secretary of Defense. JAUS aims
at creating plug-and-play unmanned systems, where
sensors from one vendor can seamlessly be swapped
with those from another.
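JAUS itself defines a detailed component and message set; the plug-and-play idea can be sketched abstractly as planner code that depends only on a vendor-neutral message, never on a vendor (the class and field names below are illustrative, not actual JAUS definitions):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class RangeReading:
    """A vendor-neutral message: bearing (degrees) and range (meters)."""
    bearing_deg: float
    range_m: float

class RangeSensor(Protocol):
    """Any conforming sensor component, regardless of vendor."""
    def read(self) -> RangeReading: ...

class VendorALidar:
    def read(self) -> RangeReading:
        return RangeReading(bearing_deg=0.0, range_m=42.0)  # stubbed hardware

class VendorBRadar:
    def read(self) -> RangeReading:
        return RangeReading(bearing_deg=5.0, range_m=40.5)  # stubbed hardware

def obstacle_ahead(sensor: RangeSensor, threshold_m: float = 50.0) -> bool:
    """Planner code depends only on the message format, not the vendor."""
    return sensor.read().range_m < threshold_m

# Swapping vendors requires no change to the planner:
assert obstacle_ahead(VendorALidar())
assert obstacle_ahead(VendorBRadar())
```

The design choice mirrors the JAUS goal: the planner is written once against the message contract, and conforming components from different vendors slot in interchangeably.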
In “Testing Driver Skill for High-Speed Autonomous
Vehicles,” Chris Urmson and coauthors outline a set of
tracking and planning tests for autonomous vehicles that
match the industry standard for driver skill. The CMU
team used these tests to evaluate the performance of
their two entries in the Grand Challenge—both of which
finished—and determine the readiness of the technol-
ogy, control solution, and robot’s sensing capability to
impact driving behavior.
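A path-tracking test of the sort described ultimately reduces to comparing a driven trajectory against a reference path; a minimal sketch, with an invented metric and data rather than the CMU team's actual test suite:

```python
import math

def max_cross_track_error(reference, driven):
    """Worst-case distance from each driven point to the nearest reference point."""
    return max(min(math.dist(p, q) for q in reference) for p in driven)

# A straight 2 m reference path and a slightly wavering driven path (meters):
reference = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
driven = [(0.0, 0.1), (1.0, -0.2), (2.0, 0.05)]

error = max_cross_track_error(reference, driven)  # worst deviation, here 0.2 m
```

A skill threshold (say, staying within half a lane width) then turns the metric into a pass/fail test that can be applied identically to human and robot drivers.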
“To Drive Is Human” by Isaac Miller and colleagues
provides a deeper insight into the technical challenges
of developing an autonomous ground vehicle. Based
on Team Cornell’s experience in the Grand Challenge,
the authors decompose the overall problem into three
parts—localization, sensing, and path planning—and
then use this decomposition to discuss the sensor needs
and the underlying algorithms. More importantly, the
article articulates the challenges in developing an
autonomous ground vehicle by contrasting it with
human experience. An AGV operates in a discrete
world, whereas humans operate in a continuous world. Discretization, performed for computational efficiency, introduces approximations that can lead to anomalous or unsafe behavior that developers must anticipate and guard against.
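The discrete-versus-continuous gap is easy to illustrate: snapping a continuous position onto a grid discards information, and a planner reasoning only in grid cells can misjudge clearances near cell boundaries (the grid size and coordinates below are invented for illustration):

```python
GRID = 0.5  # cell size in meters (an arbitrary choice for illustration)

def to_cell(x: float) -> int:
    """Discretize a continuous coordinate into a grid index."""
    return round(x / GRID)

def quantization_error(x: float) -> float:
    """Distance between the true position and the center of its cell."""
    return abs(x - to_cell(x) * GRID)

# Two positions 0.2 m apart collapse into the same cell and become
# indistinguishable to a grid-based planner. The error is bounded by
# half a cell, but near an obstacle that bound can matter.
same_cell = to_cell(0.9) == to_cell(1.1)
```

Finer grids shrink the error but raise the computational cost, which is exactly the trade-off the article discusses.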
In “On the Importance of Being Contextual,” Paolo
Lombardi and colleagues describe a framework for fac-
toring “context” into processing continuous sensor data
and in making decisions to keep a vehicle on its course.
Instead of seeking one unified algorithm that works for all cases by varying its parameters, the authors suggest that the system must continuously evaluate its assumptions and choose from among many prescribed behaviors. They capture a dichotomy seen in human drivers: deciding what to do and how, which leans heavily on sensory data, versus knowing what not to do, which is largely reflex-driven. The authors introduce a framework for analyzing video sensor data so that unmanned vehicles can navigate in a context-driven fashion. This
approach emphasizes multimodal awareness as the key
to improved vehicle control performance across a broad
spectrum of future mission spaces.
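Choosing among prescribed behaviors as assumptions change can be sketched as a simple dispatch over context; the contexts and behaviors here are invented placeholders, not the authors' actual framework:

```python
def lane_follow(frame): return "steer to lane center"
def wall_follow(frame): return "hold offset from canyon wall"
def slow_probe(frame): return "reduce speed, widen search"

# Each entry pairs a validity check on the current sensor frame with the
# behavior that is appropriate when that assumption holds.
BEHAVIORS = [
    (lambda f: f["lane_markings_visible"], lane_follow),
    (lambda f: f["wall_detected"], wall_follow),
    (lambda f: True, slow_probe),  # fallback when no assumption holds
]

def select_behavior(frame):
    """Re-evaluate assumptions on every frame; never trust one globally."""
    for assumption_holds, behavior in BEHAVIORS:
        if assumption_holds(frame):
            return behavior(frame)

assert select_behavior({"lane_markings_visible": True, "wall_detected": False}) == "steer to lane center"
```

The point of the structure is that no single behavior is trusted unconditionally: validity is re-checked on every frame, so the system degrades to a cautious fallback when its assumptions fail.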
It is also desirable that unmanned vehicles receive cues
from other vehicles on the road and factor those cues
into their decisions. Humans do this in traffic intuitively,
and birds demonstrate such a collective behavior as well.
In “Memory-Based In Situ Learning for Unmanned
Vehicles,” Patrick McDowell and coauthors describe
preliminary research based on acoustic sensors and
learning algorithms that seeks to demonstrate similar
emergent behavior in unmanned underwater vehicles.
Their work also points to an important issue for the
urbanization of unmanned ground vehicles.
Finally, “A Vision for Supporting Autonomous
Navigation in Urban Environments” by Vason P. Srini
envisions a future infrastructure in which sensors placed
on the road gather information and communicate with
autonomous vehicles. The infrastructure would enable
a vehicle to sense distant environments and dynamically replan its route while coordinating and negotiating with multiple other vehicles, allowing traffic to move faster through intersections and merges.
Srini’s vision—Web-inspired infrastructure—adds a new
twist to current research into intelligent highways.
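A Web-inspired vehicle-to-infrastructure exchange of this kind might look like the following sketch; the message fields and thresholds are entirely invented, as Srini's article does not prescribe a wire format:

```python
import json

# A roadside sensor node publishes what it sees; a vehicle merges that
# report into its world model before replanning. Field names are invented.
roadside_report = json.dumps({
    "node_id": "intersection-7",
    "congestion": 0.8,  # fraction of capacity in use
    "hazards": ["stalled vehicle, north lane"],
})

def should_replan(report_json: str, congestion_threshold: float = 0.5) -> bool:
    """A vehicle replans when remote conditions exceed its tolerance."""
    report = json.loads(report_json)
    return report["congestion"] > congestion_threshold or bool(report["hazards"])

assert should_replan(roadside_report)
```

The appeal of the Web analogy is exactly this loose coupling: vehicles consume structured reports from infrastructure they have never seen before, much as a browser consumes pages from unknown servers.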
What can we expect next? The articles in this spe-
cial issue give us a good sense of how far we have
come as well as how far we have yet to go. As the
“The DARPA Grand Challenge: Past and Future” side-
bar describes, we are witnessing history in the making.
The upcoming 2007 DARPA Urban Challenge may pro-
duce more innovations in an urban setting. We need to
wait and see.
We hope that readers share the excitement of the robotics community in bringing together the next great social personal robot, the intelligent vehicle. While the DARPA Grand Challenge has revitalized interest in intelligent highway systems, autonomous vehicles, and sensing technology, a host of other novel issues afford interesting design and computer-engineering challenges for the future.

We hope you enjoy this issue dedicated to advances in robotics as seen from the application of solutions in the DARPA Grand Challenge.

The authors thank Col. Jack E. McCrae Jr., PhD, USAF, for coordinating our efforts with DARPA. We also thank the editorial staff for their outstanding support, understanding, and patience.

The authors’ affiliation with the US Air Force does not imply endorsement of the contents, nor does this article represent stated or implied direction of technology emphases within the Air Force, the Department of Defense, or the US government.

Guna Seetharaman is an associate professor of computer science and engineering at the Air Force Institute of Technology, Wright Patterson AFB, Ohio. He is a cofounder of Team CajunBot and led its obstacle-detection algorithms development. His research interests include integrated micro-optoelectronic mechanical systems, computer vision, sensor networks, and high-performance algorithms for intelligent systems. He received a PhD in electrical and computer engineering from the University of Miami. He is a member of the IEEE Computer Society and the ACM. Contact him at

Arun Lakhotia is a professor of computer science with the Center for Advanced Computer Studies at the University of Louisiana at Lafayette. He is a founding member and team leader of Team CajunBot, a contestant and finalist in the 2004 and 2005 DARPA Grand Challenges. His other research interests include the analysis of malicious programs such as computer viruses. He received a PhD in computer science from Case Western Reserve University. He is a member of the IEEE Computer Society and the ACM. Contact him at

Erik Philip Blasch is the Fusion Evaluation Tech Lead for the Air Force Research Laboratory, Sensors Directorate, Dayton, Ohio; an adjunct professor at Wright State University; and a reserve major with the Air Force Office of Scientific Research in Arlington, Va. His research interests include information fusion, automatic target detection, and intelligent systems. Blasch received a PhD in electrical engineering from WSU. Contact him at Erik.Blasch@wpafb.

The DARPA Grand Challenge: Past and Future

Saturday, 8 October 2005; 6:30 a.m.; Primm, Nevada—the time and place for the start of one of the most interesting races in history.

At that moment, 23 ground vehicles, each different in appearance and yet all strangely similar, began the final leg of a long journey. They were in Nevada to try to win the DARPA Grand Challenge—a $2 million cash prize for the fastest vehicle to finish a challenging desert course in less than 10 hours.

These were no ordinary ground vehicles. Each was custom made and completely autonomous. Each needed only one human command to behave like any car on the road: Run. But this command didn’t come from the driver, because these were driverless vehicles.

The First Two Grand Challenges

In 2003, DARPA announced the Grand Challenge—a project to accelerate unmanned ground vehicle technology that could free drivers from operating vehicles during dangerous military missions, such as running supply convoys in unfriendly areas.

A total of 106 teams applied to compete in the race for the $1 million prize. On 13 March 2004, after a series of qualifying tests, 17 teams attempted the 140-mile course from Barstow, Calif., to Primm, Nev. The farthest any vehicle traveled was 7 miles, or 5 percent of the route. Even though none of the vehicles finished, the race was a tremendous success.

In a very short time, innovators from around the country had made remarkable progress in sensors, algorithms, and autonomous ground vehicle systems integration. The Grand Challenge succeeded in inspiring young and old to find new solutions to a tough technical problem.

DARPA announced a second race and increased the prize to $2 million. The response: 197 teams applied to compete, almost twice as many as the 2004 race drew.

In the 18 months between the two events, hundreds of engineers, students, hobbyists, scientists, and backyard inventors worked at their own expense to build and test their vehicles and ready them for the increasingly difficult qualification trials.

Twenty-three teams emerged as finalists. The qualifying tests demonstrated that they all had a good chance of finishing the route. The 2005 Grand Challenge route would test the vehicles’ autonomous capabilities to travel along narrow roads, over dry and featureless lake beds, through narrow passes and long tunnels, over railroad crossings, through intersections, and along canyon walls. To complete the route, they would have to operate autonomously for hours, something never previously done.

In the end, five teams finished the course, four of them in under the 10-hour limit. Stanford University’s Stanley won the race in 6 hours and 53 minutes, an average of 19.2 mph. Remarkably, the next three finishers were within 37 minutes of Stanley’s time. TerraMax finished the course on the second day, after remaining parked overnight in autonomous mode. The vehicles that did not finish experienced mechanical, system, or software problems. All but one of the 23 finalist teams traveled farther than the best vehicle in 2004.

The articles in this issue of Computer represent the innovative ideas that teams came up with to win the Grand Challenges. Technical achievement relies on new ideas, and the Grand Challenges proved to be the catalyst for many such ideas, particularly in computer science. All of the participants deserve appreciation for the many hours of hard work they put into their vehicles. While DARPA awarded only one cash prize, every participant was a winner. The experience enriched their lives, and they developed skills that will be a lifelong benefit.

What’s Next?

The 2005 Grand Challenge demonstrated that autonomous vehicles can travel long distances across difficult, obstacle-laden roads. The Urban Challenge, scheduled for 3 November 2007, calls on autonomous vehicles to operate in traffic, where reaction times are significantly shorter but vehicles must still avoid obstacles and collisions. The technology to do this does not yet exist, but I have every confidence that those touched by the DARPA Grand Challenge experience will work tirelessly until they find the right combination of technologies to be successful. They will create a new future, one that includes autonomous ground vehicles.

—Ron Kurjanowicz
2005 DARPA Grand Challenge Program Manager