3D Data Collection at Disaster City at the 2008
NIST Response Robot Evaluation Exercise (RREE)
Andreas Birk, Sören Schwertfeger, Kaustubh Pathak, and Narunas Vaskevicius
Jacobs University Bremen
Campus Ring 1, 28759 Bremen, Germany
a.birk@jacobs-university.de
http://robotics.jacobs-university.de
Abstract — A collection of 3D data sets gathered at the 2008 NIST
Response Robot Evaluation Exercise (RREE) in Disaster City, Texas,
is described. The data sets consist of 3D point clouds collected with
an actuated laser range finder in different disaster scenarios. The data
sets can be used for performance evaluation of robotics algorithms,
especially for 3D mapping. An example is discussed where a 3D
model is generated from scans taken in a collapsed car parking.
Keywords: performance evaluation, 3D simultaneous localization and mapping (SLAM), 3D laser range finder, point cloud
FINAL VERSION:
IEEE International Workshop on Safety, Security, and Rescue
Robotics (SSRR), 2009
@inproceedings{3Ddata-SARrobot-SSRR09,
  author    = {Birk, Andreas and Schwertfeger, Soeren and Pathak, Kaustubh},
  title     = {3D Data Collection at Disaster City at the 2008 NIST Response Robot Evaluation Exercise (RREE)},
  booktitle = {IEEE International Workshop on Safety, Security, and Rescue Robotics (SSRR)},
  publisher = {IEEE Press},
  year      = {2009},
  type      = {Conference Proceedings}
}
I. INTRODUCTION
Safety, Security, and Rescue Robotics (SSRR) deals with
the most challenging unstructured domains. 3D perception
and modeling are hence core tasks as basis for autonomous
or semi-autonomous operations. Also, 3D models and mea-
surements are interesting mission deliverables for end users.
Recent advances in 3D mapping [1], [2], [3], [4], [5], [6]
show promising potential, but it is also clear that substantial
research remains to be done in this area.
To foster this research and to provide meaningful bases for
performance comparisons, data collections based on commonly
used sensors are of interest. Examples of initiatives of this
type include the Robotics Data Set Repository (RADISH) [7] and
the Rawseeds website [8]. Further options for the generation
of reference data are standardized test scenarios in simulated
as well as real-world environments [9], [10].
In this paper, a collection of 3D data sets in a variety of
SSRR scenarios is described. The data was collected by a
robot with an actuated laser range finder during the 2008
Response Robot Evaluation Exercise (RREE) [11] at Disaster
City in College Station, Texas [12], an annual event organized
by the Intelligent Systems Division (ISD) of the National
Institute of Standards and Technology (NIST). In addition to
the presentation of the data sets, section III shows by example
that the data is indeed suited to generate 3D maps.
II. THE 3D DATASETS FROM DISASTER CITY
The robot used to collect data at RREE 2008 is a Jacobs
response robot, an in-house development with significant sen-
sor payload and processing power [13]. It is equipped with an
actuated Laser Range Finder (aLRF). Concretely, the aLRF
is based on a SICK S 300. It has a horizontal field of view
of 270° with 541 beams. The sensor is rotated (pitched) by an
additional servo from −90° to +90° at a spacing of 0.5°. This
leads to a 3D point cloud of 541 × 361 = 195,301 points per
sample.
sample. Both min/max scan angle and the angular resolution
can be adapted. The concrete values are given on the website
for each particular data set in case they differ from the standard
settings. The maximum range of the sensor is about 20 meters.
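For illustration, the sketch below shows how one such sample maps to Cartesian points. The axis conventions and the treatment of each reading as a plain spherical coordinate are simplifying assumptions; the actual mount geometry of the servo and scan plane is defined in the dataset documentation.

```python
import numpy as np

def alrf_to_pointcloud(ranges_m):
    """Convert one aLRF sample to a 3D point cloud.

    ranges_m: (361, 541) array of range readings in meters, one row per
    servo pitch step, one column per beam. Treating each reading as a
    plain spherical coordinate (x forward, y left, z up) ignores the
    mount offsets between the servo axis and the scan plane.
    """
    n_pitch, n_beams = ranges_m.shape
    yaw = np.radians(np.linspace(-135.0, 135.0, n_beams))   # 270 deg FoV, 541 beams
    pitch = np.radians(np.linspace(-90.0, 90.0, n_pitch))   # servo sweep, 0.5 deg steps
    P, Y = np.meshgrid(pitch, yaw, indexing="ij")
    x = ranges_m * np.cos(P) * np.cos(Y)
    y = ranges_m * np.cos(P) * np.sin(Y)
    z = ranges_m * np.sin(P)
    return np.stack((x, y, z), axis=-1).reshape(-1, 3)      # 541 * 361 = 195301 points
```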
The time to take one full scan is about 32 seconds. This
is quite slow, mainly owing to the fact that this is a
rather low-cost solution. By using a much faster but also
more costly 3D LRF like a Velodyne HDL-64E [14], this data
acquisition time could be reduced by two orders of magnitude
[15]. Independent of the time it takes for a single scan,
the 2008 RREE data sets are collections of quite prototypical
3D range data, as aLRFs are widely used [2]. Other 3D range
sensors that may be of interest in this context also exist,
such as stereo [16] and time-of-flight cameras [17], [18].
The robot operated under conditions with high amounts of
rubble and dust on slippery surfaces. This rendered odometry
data completely useless; it has just been recorded for the sake
of completeness. As shown in section III, it is possible
to generate 3D maps by directly registering consecutive
scans.
Fig. 1. The collapsed parking lot at Disaster City in College Station, Texas: (a) front overview of the collapsed car parking lot, note the rubble to the right; (b) close-up front view corresponding to the lower-right part of the façade in Fig. 1(a); (c) left view of the crushed car; (d) the robot collecting data under the collapsed ceiling of the "ground floor" under the car.

Fig. 2. An example point cloud showing a human "victim" in the collapsed car parking scenario.
The data sets are provided for free for academic
and non-commercial use as long as proper credit is given.
They can be downloaded from
http://robotics.jacobs-university.de/datasets/DisasterCity2008,
where additional information is also provided. Each data set
consists of several dozen 3D scans (table I). An example of a
scan is shown in figure 2. The 3D data is provided in X3D, a
simple ASCII-based format for which free as well as commercial
viewers are available.
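Since X3D is plain XML, the point clouds are straightforward to read with standard tools. The following is a minimal loading sketch; the assumption that the points sit in Coordinate nodes, as well as the file name, are illustrative, so the actual node layout of the dataset files should be checked on the website.

```python
import numpy as np
import xml.etree.ElementTree as ET

def load_x3d_points(path):
    """Collect 3D points from an X3D file.

    Assumes the coordinates are stored as <Coordinate point="x y z ..."/>
    attributes, the usual X3D carrier for point data; the exact node
    layout of the dataset files may differ.
    """
    points = []
    for coord in ET.parse(path).getroot().iter("Coordinate"):
        flat = coord.get("point").replace(",", " ").split()  # tolerate commas
        points.append(np.array(flat, dtype=float).reshape(-1, 3))
    return np.vstack(points)

scan = load_x3d_points("scan_000.x3d")   # hypothetical file name
print(scan.shape)                        # e.g., (195301, 3)
```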
The scans are supplemented by images shot by the front
camera of the robot while it moves through the scenarios
and collects the scans. The images were acquired at about
10 Hz; the total number of images per set hence ranges from
several thousand to tens of thousands (table I). Each image is
512×288 pixels. A few examples are shown in figure 3. The
sequence of images gives a good overview of the scenarios;
they can for example be animated as short movies to follow
the path the robot has taken. An example movie can be
downloaded from
http://robotics.jacobs-university.de/datasets/DisasterCity2008/CollapsedCarParking.
The spots where the robot takes a scan can be clearly
recognized in the sequences, as the aLRF, and especially its
pitching motion, is partially visible in the images.

TABLE I
AN OVERVIEW OF THE NUMBER OF POINT CLOUDS (#PC) AND FRONT CAMERA PICTURES (#PIC) IN THE 3D DATASETS

set  name                 #PC   #pic
1    collapsed car park    35   12657
2    house of pancakes     62   72355
3    dwelling              96   13751
4    freight train        118   31317
5    maze                 158   11845
6    forest1               46    6268
7    forest2              131   12657
Fig. 3. A few examples of the front camera view of the robot.
The first data set covers a collapsed car parking (figure 1).
The scenario consists of a large structure with several floors
lying on top of each other, with rubble and cars in between;
it also involves a large rubble pile. The robot mainly moved
in open space, collecting data from the side of the structure
and the rubble pile, but it also moved underneath the lowest
floor of the collapsed building.
The second data set is from a scenario known as the house
of pancakes (figure 4), as it features a collection of pancake
collapses. The robot moved through the house, covering most
of its inside, and also moved once along the outside of the
house, especially at the part where the main collapses are
located.
The dwelling is a set taken in a house where a major
flooding has taken place (figure 5). There is a considerable
amount of rubble in the scenario, and parts of the house are
destroyed; in particular, the ceiling has collapsed in some rooms.
The freight train disaster (figure 6) is covered in the fourth
dataset. It consists of several different types of derailed wagons
and a train engine that ran into a truck.
Fig. 4. House of pancakes (set 2)
Fig. 5. The dwelling (set 3)
The previous data sets are all based on realistic disaster
scenarios. The following ones deal with special artificial
setups developed by NIST to test localization and mapping
capabilities. The maze (figure 7) is literally a maze made up
of regular cells and orthogonal walls. It features inclined
floors with different orientations and is particularly
interesting for end-user tests of robot operations, especially
as a localization and orientation challenge.

The "forest" in the "little red riding hood" scenario (figure 8)
consists of round tubes on inclined floors of changing
orientations. The scenario is especially designed to test
mapping capabilities.

Fig. 6. Freight train (set 4)
Fig. 7. The maze (set 5)
Fig. 8. The "forest" of "little red riding hood" (sets 6 and 7)
III. BENCHMARK SOLUTION: 3D PLANE SLAM
The datasets are very challenging for several reasons:
1) no proper motion estimates are available
2) the robot makes rather large motions between scans
3) some scenarios feature real 6 Degrees of Freedom pose
changes, i.e., not only the yaw but also the roll and the
pitch of the robot change significantly
Fig. 9. Passenger train wreck

It is hence not evident that the data can be used for 3D
mapping. So far, 3D mapping experiments were conducted
with two datasets, namely the collapsed car park and the
dwelling scenario. As discussed in detail in [19], the standard
Iterative Closest Point (ICP) approach has severe difficulties
with registering many of the consecutive scan pairs; figure 10
shows a typical failed example.
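To make the baseline concrete, the sketch below implements bare-bones point-to-point ICP. It is not the implementation evaluated in [19], and it omits the outlier rejection and sampling that production systems add, which is exactly where rubble-heavy scans with large inter-scan motion cause trouble.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, iters=30):
    """Align src (Nx3) to dst (Mx3) with bare-bones point-to-point ICP.

    Returns R, t such that src @ R.T + t approximates dst. No outlier
    rejection: large inter-scan motion easily traps this in the kind of
    local minimum shown in figure 10.
    """
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)              # nearest-neighbor correspondences
        corr = dst[idx]
        # Closed-form rigid alignment of the matched sets (Arun/Horn).
        mu_m, mu_c = moved.mean(axis=0), corr.mean(axis=0)
        H = (moved - mu_m).T @ (corr - mu_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mu_c - dR @ mu_m
        R, t = dR @ R, dR @ t + dt              # compose with running estimate
    return R, t
```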
It is of considerable interest to provide a first solution on the
datasets, which can serve as a benchmark. In the following, an
overview of an approach dubbed 3D Plane SLAM is given; a more
detailed presentation can be found in [19]. As mentioned, the
robot operated under conditions like those in real disaster
scenarios, i.e., with high amounts of rubble and dust, moving on
non-flat, partially unstable surfaces. This rendered odometry
data completely useless; it is hence necessary to rely solely
on the registration of the scans for generating 3D maps.
This approach is based on the postulation that large surface
patches are an ideal representation for 3D mapping in general,
for multiple reasons: they are compact, well suited for
visualization, and an ideal basis for actually making use of
the maps through intelligent autonomous functions. But the
first and foremost reason is that they allow a fast and robust
generation of maps through SLAM. Concretely, 3D Plane SLAM was
developed for this purpose. It consists of the following steps
(the core registration step is sketched in code after the list):
1) consecutive acquisition of 3D range scans
2) extraction of planes including uncertainties
3) registration of scans based on plane sets
a) determination of the correspondence set maximiz-
ing the global rigid body motion constraint
b) computation of the optimal decoupled rotation
(Wahba's problem) and translation (closed-form
least squares) with related uncertainties
4) embedding of the registrations in a pose graph
5) loop detection and relaxation, i.e., SLAM proper
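To make step 3b concrete, the sketch below shows the closed-form core under the plane convention n·x = d (an assumption): the rotation from matched plane normals as an instance of Wahba's problem, and the translation from the plane offsets by linear least squares. The uncertainty weighting of the actual method [21] is omitted.

```python
import numpy as np

def rotation_from_normals(n_src, n_dst, w=None):
    """Closed-form rotation aligning matched unit plane normals, i.e.,
    Wahba's problem: find R maximizing sum_i w_i * n_dst_i . (R n_src_i).

    n_src, n_dst: (K, 3) arrays of corresponding normals; w: optional
    per-pair weights (derived from plane uncertainties in [21]).
    """
    w = np.ones(len(n_src)) if w is None else w
    B = (w[:, None] * n_dst).T @ n_src           # 3x3 attitude profile matrix
    U, _, Vt = np.linalg.svd(B)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    return U @ D @ Vt                            # proper rotation, det = +1

def translation_from_planes(n_dst, d_src, d_dst):
    """Least-squares translation from matched plane offsets: after the
    rotation is applied, each pair satisfies d_dst = d_src + n_dst . t."""
    t, *_ = np.linalg.lstsq(n_dst, d_dst - d_src, rcond=None)
    return t
```

At least three pairwise non-parallel normals are needed to fix the translation uniquely; degenerate scenes leave some directions unconstrained, which is where the uncertainty treatment of [21] matters.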
The different steps are illustrated in figure 11. The range of
runtimes is mainly caused by differences in the sensors that
can be used for acquiring the 3D range scans; the highest
runtimes correspond to the high-resolution 3D aLRF scans
presented in this paper. A proper introduction to the plane
extraction with uncertainties can be found in [20]; a more
detailed discussion of the plane registration is presented in [21].

Fig. 10. An example of an unsuccessful pairwise registration by ICP on the collapsed car park dataset. The misaligned façade, indicated by dashed red lines, is clearly recognizable in this top view.

Fig. 11. An overview of the different steps in 3D Plane SLAM. The span of runtimes reflects the usage of different 3D range sensors, from low-resolution time-of-flight cameras to 3D laser range finders with high-density data.
The novel plane registration can be embedded in a pose-graph
implementation [22], [23], [24] for Simultaneous Localization
and Mapping (SLAM). The plane registration is inherently
very robust with respect to rotations, thus allowing an
extremely fast pose-graph relaxation in a closed-form solution
that concentrates on the translational errors.
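With the rotations fixed by the robust plane-based estimates, relaxing the translations reduces to a single sparse linear least-squares problem. The following is a minimal unweighted sketch; the actual solver in [19] incorporates the registration covariances.

```python
import numpy as np

def relax_translations(n_nodes, edges):
    """Pose-graph relaxation of node translations with rotations held
    fixed; node 0 is pinned to the origin.

    edges: list of (i, j, t_ij) where t_ij is the measured global-frame
    translation from node i to node j (a registration or loop closure).
    """
    A = np.zeros((3 * len(edges), 3 * (n_nodes - 1)))
    b = np.zeros(3 * len(edges))
    for k, (i, j, t_ij) in enumerate(edges):
        rows = slice(3 * k, 3 * k + 3)
        if i > 0:                                # unknowns exclude fixed node 0
            A[rows, 3 * (i - 1):3 * i] = -np.eye(3)
        if j > 0:
            A[rows, 3 * (j - 1):3 * j] = np.eye(3)
        b[rows] = t_ij                           # constraint: t_j - t_i = t_ij
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.vstack([np.zeros(3), x.reshape(-1, 3)])   # (n_nodes, 3)
```

For the car park graph discussed below (26 nodes, 33 edges), this amounts to a linear solve with 75 unknowns, consistent with the millisecond-scale relaxation time reported in table II.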
The exact mean runtimes for the different steps of 3D Plane SLAM
applied to the collapsed car parking are shown in table II. The
main bottleneck in the overall time is the actuated laser range
finder, which takes about 32 seconds per scan. But as mentioned
before, this can be significantly sped up by using high-end
devices if necessary. The runtimes to process scans, especially
to register two consecutive ones, are so fast that an online
generation of a 3D map is possible. The total time for plane
extraction and registration of two consecutive scans is on the
order of 10 seconds. This is less than the time the robot needs
for locomotion between the scans, which is typically some 20 to
30 seconds. The whole generation of the map, including pose-graph
SLAM, which just takes on the order of milliseconds, can hence
be done while the robot moves along.

Fig. 12. Overview of a map generated by 3D Plane SLAM, which can serve as a benchmark solution for this dataset. It shows the collapsed car park at Disaster City, Texas. The red dots indicate the robot positions where scans were taken.

TABLE II
MEAN RUNTIMES IN SECONDS FOR 3D PLANE SLAM IN THE EXAMPLE OF THE COLLAPSED CAR PARKING AT DISASTER CITY.

Planes extraction per scan                       2.68
Plane matching and registration per scan-pair    5.42
Polygonization per scan                          2.52
Relaxation of pose graph (26 nodes, 33 edges)    0.01
It has to be noted that the plane-based representation is
actually also well suited for non-planar objects; see for
example figure 13, where a human victim is shown that can be
clearly recognized. At the least, it has several advantages
over the standard representation of point clouds. The surface
representation is much more compact and hence very well suited
for narrow-bandwidth communication to an operator station.
Also, the visualization of point clouds is non-trivial: there
is the risk that they either cover an object too densely, so
that there is only one "big blob" of points, or too sparsely,
so that it is barely visible. These restrictions do not apply
to polygonal patches.

Fig. 13. A human "victim" in the scenario: (a) the robot control GUI; (b) polygon representation of the victim. The polygon representation is much more efficient than the point cloud and it is well suited to support the recognition of this object in the scene.
The Disaster City data sets do not come with exact ground
truth information. But there are many clearly distinguishable
structures that can be used as ground truth references across
scans and in 3D maps generated from them. Figures 14 and 15
show some examples that are used to demonstrate the positive
effects of using pose-graph SLAM instead of using registration
only.

Fig. 14. Some ground-truth structures can be used to assess the improvements through pose-graph SLAM: (a) close-up view showing the transition of the robot from outside the structure to underneath the collapsed floor; (b) several ground-truth structures that can be easily identified in the planar models before and after relaxation, as shown in figure 15.

Fig. 15. Several easily identifiable ground-truth structures (see also figure 14(b)) show the clear improvements in the representation of details after the pose-graph relaxation: (a) zoomed-in top view before relaxation; (b) zoomed-in top view after relaxation.
IV. CONCLUSION
A collection of 3D data sets from SSRR scenarios was
presented. The 3D point clouds from an actuated laser range
finder were collected at the 2008 NIST Response Robot
Evaluation Exercise (RREE) in Disaster City, Texas. The
data sets can be used for performance evaluation of robotics
algorithms, especially for 3D mapping. The datasets are very
challenging in several respects, especially due to the lack of
motion estimates and the presence of full 6 DOF pose changes
in some scenarios. An example is presented where a 3D map
is generated with 3D Plane SLAM in the collapsed car parking
scenario, which can hence serve as a first benchmark solution.
ACKNOWLEDGMENTS
The research on 3D Mapping presented here was supported
by the German Research Foundation (DFG). Participation of
the Jacobs Team at the 2008 NIST Response Robot Evaluation
Exercise (RREE) was supported by the US National Institute
of Standards and Technology (NIST).
REFERENCES
[1] D. Fischer and P. Kohlhepp, “3D geometry reconstruction from multiple
segmented surface descriptions using neuro-fuzzy similarity measures,”
Journal of Intelligent and Robotic Systems, vol. 29, pp. 389–431, 2000.
[2] H. Surmann, A. Nuechter, and J. Hertzberg, "An autonomous
mobile robot with a 3d laser range finder for 3d exploration and
digitalization of indoor environments," Robotics and Autonomous
Systems, vol. 45, no. 3-4, pp. 181–198, 2003.
[3] S. Thrun, D. Haehnel, D. Ferguson, M. Montemerlo, R. Triebel, W. Bur-
gard, C. Baker, Z. Omohundro, S. Thayer, and W. Whittaker, A System
for Volumetric Robotic Mapping of Abandoned Mines, Taipei, Taiwan,
2003.
[4] J. Weingarten and R. Siegwart, “3D SLAM using planar segments,” in
IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), Beijing, 2006.
[5] M. Magnusson, A. Lilienthal, and T. Duckett, “Scan registration for
autonomous mining vehicles using 3D-NDT,” Journal of Field Robotics,
vol. 24, no. 10, pp. 803–827, 2007.
[6] A. Nüchter, K. Lingemann, and J. Hertzberg, "6D SLAM - 3D mapping
outdoor environments," Journal of Field Robotics, vol. 24, no. 8/9, pp.
699–722, 2007.
[7] A. Howard and N. Roy, “The robotics data set repository (radish),”
2003. [Online]. Available: http://radish.sourceforge.net/
[8] Rawseeds, “Rawseeds website,” 2008. [Online]. Available: http:
//rawseeds.elet.polimi.it/home/
[9] C. Scrapper, R. Madhavan, and S. Balakirsky, “Stable navigation solu-
tions for robots in complex environments,” in IEEE International Work-
shop on Safety, Security and Rescue Robotics (SSRR), 2007, Conference
Proceedings, pp. 1–6.
[10] ——, “Performance analysis for stable mobile robot navigation so-
lutions,” in Proceedings of SPIE. International Society for Optical
Engineering, 2008, Conference Proceedings.
[11] TEEX, "NIST response robot evaluation exercise," 2008. [Online].
Available: http://www.teex.com/teex.cfm?pageid=USARprog&area=USAR&templateid=1538
[12] ——, "Disaster city," 2008. [Online]. Available:
http://www.teex.com/teex.cfm?pageid=USARprog&area=USAR&templateid=1117
[13] A. Birk, K. Pathak, S. Schwertfeger, and W. Chonnaparamutt, The
IUB Rugbot: an intelligent, rugged mobile robot for search and rescue
operations. IEEE Press, 2006.
[14] “Hdl-64e user manual.” [Online]. Available: http://www.velodyne.com/
lidar/ManualList.aspx
[15] “Hdl-64e data sheet.” [Online]. Available: http://www.velodyne.com/
lidar/products/specifications.aspx
[16] S. T. Barnard and M. A. Fischler, “Computational stereo,” ACM Com-
puting Surveys (CSUR), vol. 14, no. 4, pp. 553–572, 1982.
[17] R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” Quan-
tum Electronics, IEEE Journal of, vol. 37, no. 3, pp. 390–397, 2001.
[18] J. Weingarten, G. Gruener, and R. Siegwart, “A state-of-the-art 3d
sensor for robot navigation,” in IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), vol. 3. IEEE Press, 2004,
Conference Proceedings, pp. 2155–2160 vol.3.
[19] K. Pathak, A. Birk, N. Vaskevicius, M. Pfingsthorn, S. Schwertfeger,
and J. Poppinga, “Online 3d slam by registration of large planar
surface segments and closed form pose-graph relaxation,” Journal of
Field Robotics, Special Issue on 3D Mapping, vol. 27, no. 1, pp.
52–84, 2010. [Online]. Available:
http://robotics.jacobs-university.de/publications/JFR-3D-PlaneSLAM.pdf
[20] K. Pathak, N. Vaskevicius, and A. Birk, “Revisiting uncertainty analysis
for optimum planes extracted from 3d range sensor point-clouds,” in
International Conference on Robotics and Automation (ICRA). IEEE
press, 2009, Conference Proceedings, pp. 1631 – 1636.
[21] K. Pathak, N. Vaskevicius, J. Poppinga, M. Pfingsthorn, S. Schwertfeger,
and A. Birk, “Fast 3d mapping by matching planes extracted from range
sensor point-clouds,” in International Conference on Intelligent Robots
and Systems (IROS). IEEE Press, 2009, Conference Proceedings.
[22] M. Pfingsthorn and A. Birk, “Efficiently communicating map updates
with the pose graph,” in Proceedings of the International Conference on
Intelligent Robots and Systems (IROS), 2008.
[23] M. Pfingsthorn, B. Slamet, and A. Visser, “A scalable hybrid multi-robot
slam method for highly detailed maps,” in RoboCup 2007: Proceedings
of the International Symposium, ser. LNAI. Springer, 2007.
[24] E. Olson, J. Leonard, and S. Teller, “Fast iterative alignment of pose
graphs with poor initial estimates,” in Robotics and Automation, 2006.
ICRA 2006. Proceedings 2006 IEEE International Conference on,
J. Leonard, Ed., 2006, Conference Proceedings, pp. 2262–2269.