3D Data Collection at Disaster City at the 2008
NIST Response Robot Evaluation Exercise (RREE)
Andreas Birk, Sören Schwertfeger, Kaustubh Pathak, and Narunas Vaskevicius
Jacobs University Bremen
Campus Ring 1, 28759 Bremen, Germany
a.birk@jacobs-university.de
http://robotics.jacobs-university.de
Abstract: A collection of 3D data sets gathered at the 2008 NIST
Response Robot Evaluation Exercise (RREE) in Disaster City, Texas
is described. The data sets consist of 3D point clouds collected with
an actuated laser range finder in different disaster scenarios. The data
sets can be used for performance evaluation of robotics algorithms,
especially for 3D mapping. An example is discussed where a 3D
model is generated from scans taken in a collapsed car parking.
Keywords: performance evaluation, 3D simultaneous localization and mapping (SLAM), 3D laser range finder, point cloud
FINAL VERSION:
IEEE International Workshop on Safety, Security, and Rescue
Robotics (SSRR), 2009
@inproceedings{3Ddata-SARrobot-SSRR09,
author = {Birk, Andreas and Schwertfeger,
Soeren and Pathak, Kaustubh},
title = {3D Data Collection at Disaster
City at the 2008 NIST Response Robot
Evaluation Exercise (RREE)},
booktitle = {IEEE International Workshop
on Safety, Security, and Rescue Robotics
(SSRR)},
publisher = {IEEE Press},
year = {2009},
type = {Conference Proceedings}
}
I. INTRODUCTION
Safety, Security, and Rescue Robotics (SSRR) deals with
the most challenging unstructured domains. 3D perception
and modeling are hence core tasks as basis for autonomous
or semi-autonomous operations. Also, 3D models and mea-
surements are interesting mission deliverables for end users.
Recent advances in 3D mapping [1], [2], [3], [4], [5], [6] indicate promising potential, but it is also clear that substantial research remains to be done in this area. To foster this research and to provide meaningful comparison bases for performance evaluations, data collections based on commonly used sensors are of interest. Examples of initiatives of this type include the Robotics Data Set Repository (RADISH) [7] and the Rawseeds website [8]. Further options for the generation of reference data are standardized test scenarios in simulated as well as real-world environments [9], [10].
In this paper, a collection of 3D data sets in a variety of
SSRR scenarios is described. The data was collected by a
robot with an actuated laser range finder during the 2008
Response Robot Evaluation Exercise (RREE) [11] at Disaster
City in College Station, Texas [12], an annual event organized
by the Intelligent Systems Division (ISD) of the National
Institute of Standards and Technology (NIST). In addition to
the presentation of the data sets, it is shown in an example in section III that the data is indeed suited to generate 3D maps.
II. THE 3D DATASETS FROM DISASTER CITY
The robot used to collect data at RREE 2008 is a Jacobs
response robot, an in-house development with significant sen-
sor payload and processing power [13]. It is equipped with an
actuated Laser Range Finder (aLRF). Concretely, the aLRF is based on a SICK S 300. It has a horizontal field of view of 270° covered by 541 beams. The sensor is rotated (pitched) by an additional servo from −90° to +90° at a spacing of 0.5°. This leads to a 3D point cloud of 541 × 361 = 195,301 points per sample. Both the min/max scan angle and the angular resolution can be adapted; the concrete values are given on the website for each particular data set in case they differ from the standard settings. The maximum range of the sensor is about 20 meters.
The time to take one full scan is about 32 seconds. This is quite slow, which is mainly owed to the fact that this is a rather low-cost solution. By using a much faster but also more costly 3D LRF like a Velodyne HDL-64E [14], the data acquisition time could be reduced by two orders of magnitude [15]. Independent of the time it takes to acquire a single scan, the 2008 RREE data sets are collections of quite prototypical 3D range data, as aLRFs are widely used [2]. Other 3D range sensors, such as stereo [16] and time-of-flight cameras [17], [18], may also be of interest in this context.
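To make the scan geometry concrete, the following sketch converts one raw range matrix into a Cartesian point cloud. It assumes exactly the standard settings stated above (541 beams evenly spaced over the 270° horizontal field of view, pitch from −90° to +90° in 0.5° steps), ignores any mounting offsets of the servo, and the pitched-about-the-y-axis convention is an illustrative assumption rather than the dataset's documented one:

```python
import numpy as np

H_BEAMS = 541      # beams per 2D scan line, spread over the 270 deg field of view
PITCH_STEPS = 361  # servo pitch from -90 deg to +90 deg at 0.5 deg spacing

def scan_to_points(ranges):
    """Convert a (PITCH_STEPS, H_BEAMS) array of range readings in meters
    into an (N, 3) Cartesian point cloud. The 2D scan plane of the LRF is
    pitched about the sensor's y-axis; servo mounting offsets are ignored
    in this sketch."""
    bearing = np.deg2rad(np.linspace(-135.0, 135.0, H_BEAMS))
    pitch = np.deg2rad(np.linspace(-90.0, 90.0, PITCH_STEPS))
    b, p = np.meshgrid(bearing, pitch)  # both (361, 541)
    # beam direction (cos b, sin b, 0) in the scan plane, rotated by pitch p
    x = ranges * np.cos(p) * np.cos(b)
    y = ranges * np.sin(b)
    z = -ranges * np.sin(p) * np.cos(b)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# a dummy scan where every beam returns 5 m
points = scan_to_points(np.full((PITCH_STEPS, H_BEAMS), 5.0))
print(points.shape)  # (195301, 3), i.e. 541 x 361 points per sample
```

For a scan at the standard resolution this yields exactly the 541 × 361 = 195,301 points per sample mentioned above.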
The robot operated under conditions with high amounts of rubble and dust on slippery surfaces. This rendered the odometry data completely useless; it was recorded only for the sake of completeness. As shown later on in section III, it is nevertheless possible to generate 3D maps by directly registering consecutive scans.
Fig. 1. The collapsed parking lot at Disaster City in College Station, Texas: (a) front overview of the collapsed car parking lot, note the rubble to the right; (b) close-up front view corresponding to the lower-right part of the façade in Fig. 1(a); (c) left view of the crushed car; (d) the robot collecting data under the collapsed ceiling of the “ground floor” under the car.
Fig. 2. An example point cloud showing a human “victim” in the collapsed car parking scenario.
The data sets are provided for free for academic and non-commercial use as long as proper credits are given. They can be downloaded from http://robotics.jacobs-university.de/datasets/DisasterCity2008, where additional information is also provided. Each data set consists of several dozen 3D scans (table I). An example of a scan is shown in figure 2. The 3D data is provided in X3D, a simple ASCII-based format, for which free as well as commercial viewers are available.
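Since X3D is a simple ASCII/XML format, a scan can be loaded with a few lines of standard-library code. The exact file layout assumed here (points stored in the `point` attribute of `Coordinate` nodes, as in plain X3D point sets) is an assumption, not verified against the actual dataset files; consult the dataset website for the real structure:

```python
import xml.etree.ElementTree as ET
import numpy as np

def load_x3d_points(path):
    """Load all 3D points of one scan from an X3D file. Assumption (check
    against the dataset): the coordinates are stored as whitespace- or
    comma-separated x y z triples in the 'point' attribute of Coordinate
    nodes."""
    root = ET.parse(path).getroot()
    chunks = []
    for coord in root.iter("Coordinate"):
        vals = np.array(coord.get("point").replace(",", " ").split(),
                        dtype=float)
        chunks.append(vals.reshape(-1, 3))
    return np.vstack(chunks)
```

Each `Coordinate` node contributes a block of points; the function stacks them into one (N, 3) array.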
The scans are supplemented by images shot by the front camera of the robot while it moves through the scenarios and collects the scans. The images were acquired at about 10 Hz; the total number of images per set hence ranges from several thousand to tens of thousands (table I). Each image is 512 × 288 pixels. A few examples are shown in figure 3. The sequence of images gives a good overview of the scenarios; they can, for example, be animated as short movies to follow the path the robot has taken. An example movie can be downloaded from http://robotics.jacobs-university.de/datasets/DisasterCity2008/CollapsedCarParking. The spots where the robot takes a scan can be clearly recognized in the sequences, as the aLRF, and especially its pitching motion, is partially visible in the images.

TABLE I
AN OVERVIEW OF THE NUMBER OF POINT CLOUDS (#PC) AND FRONT CAMERA PICTURES (#pic) IN THE 3D DATASETS

set  name                 #PC   #pic
1    collapsed car park    35   12657
2    house of pancakes     62   72355
3    dwelling              96   13751
4    freight train        118   31317
5    maze                 158   11845
6    forest1               46    6268
7    forest2              131   12657
Fig. 3. A few examples of the front camera view of the robot.
The first data set covers a collapsed car parking (figure 1).
The scenario consists of a large structure with several floors
that are lying on top of each other with rubble and cars in
between. It also involves a large rubble pile. The robot mainly
moved in open space collecting data from the side of the
structure and the rubble pile, but it also moved underneath
the lowest floor of the collapsed building.
The second data set is in a scenario known as the house of pancakes (figure 4), as it features a collection of pancake collapses. The robot moved through the house, covering most of its inside, and also moved once along the outside of the house, especially at the part where the main collapses are located.
The dwelling is a set taken in a house where a major flooding has taken place (figure 5). There are considerable amounts of rubble in the scenario, and parts of the house are destroyed; in particular, the ceiling has collapsed in some rooms.
The freight train disaster (figure 6) is covered in the fourth dataset. It consists of several different types of derailed wagons and a train engine that ran into a truck.
Fig. 4. House of pancakes (set 2)
Fig. 5. The dwelling (set 3)
The previous data sets are all based on realistic disaster scenarios. The following ones deal with special artificial set-ups developed by NIST to test localization and mapping capabilities. The maze (figure 7) is literally a maze made up of regular cells and orthogonal walls. The maze features inclined floors with different orientations. It is particularly interesting for end-user tests of robot operations, especially as a localization and orientation challenge.
The “forest” in the “little red riding hood” scenario (figure 8) consists of round tubes on inclined floors of changing orientations. The scenario is especially designed to test mapping capabilities.

Fig. 6. Freight train (set 4)
Fig. 7. The maze (set 5)
Fig. 8. The “forest” of “little red riding hood” (sets 6 and 7)
III. BENCHMARK SOLUTION: 3D PLANE SLAM
The datasets are very challenging for several reasons:
1) no proper motion estimates are available,
2) the robot makes rather large motions between scans,
3) some scenarios feature full 6 degrees-of-freedom pose changes, i.e., not only the yaw but also the roll and the pitch of the robot change significantly.
It is hence not evident that the data can be used for 3D
mapping. So far, 3D mapping experiments were conducted
with two datasets, namely the collapsed car park and the
dwelling scenario. As discussed in detail in [19], the standard Iterative Closest Point (ICP) approach has severe difficulties with registering many of the consecutive scan pairs; figure 10 shows a typical failed example.

Fig. 9. Passenger train wreck
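This failure mode is easy to see in a minimal point-to-point ICP sketch (illustrative only, not the exact ICP variant evaluated in [19]): correspondences are chosen as nearest neighbours under the current pose estimate, so without a motion estimate and with large inter-scan motion the iteration readily converges to a wrong local minimum such as the misaligned facade in figure 10.

```python
import numpy as np

def icp(source, target, iters=30):
    """Minimal point-to-point ICP sketch. Returns R, t such that
    R @ s + t approximately maps source points onto the target cloud.
    Correspondences are nearest neighbours under the current estimate,
    so a poor initial guess easily leads to a wrong local minimum."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = source @ R.T + t
        # brute-force nearest-neighbour correspondences
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        # closed-form rigid alignment of the matched pairs (SVD/Kabsch)
        mu_m, mu_t = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_m).T @ (matched - mu_t))
        dR = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
        R, t = dR @ R, dR @ (t - mu_m) + mu_t
    return R, t
```

With a small true displacement the nearest-neighbour correspondences are mostly correct and the iteration converges; with the large inter-scan motions of these datasets, they frequently are not.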
It is of quite some interest to provide a first solution for the datasets, which can serve as a benchmark. In the following, an overview of an approach dubbed Plane-SLAM is given; a more detailed presentation can be found in [19]. As mentioned, the robot operated under conditions that are like those in real disaster scenarios, i.e., with high amounts of rubble and dust, moving on non-flat, partially unstable surfaces. This rendered odometry data completely useless. It is hence necessary to rely solely on the registration of the scans for generating 3D maps.
This approach is based on the postulation that large surface
patches are an ideal representation for 3D mapping in general
for multiple reasons. They are compact, well suited for visu-
alization, and an ideal basis to actually make use of the maps
through intelligent autonomous functions. But the first and
foremost reason is that they allow a fast and robust generation
of maps through SLAM. Concretely, 3D Plane SLAM was
developed for this purpose. It consists of the following steps:
1) consecutive acquisition of 3D range scans,
2) extraction of planes including uncertainties,
3) registration of scans based on plane sets by
   a) determining the correspondence set maximizing the global rigid-body motion constraint, and
   b) finding the optimal decoupled rotations (Wahba's problem) and translations (closed-form least squares) with related uncertainties,
4) embedding of the registrations in a pose graph,
5) loop detection and relaxation, i.e., SLAM proper.
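The registration step can be sketched in closed form. Writing each matched plane as n · x = d, the motion x_b = R x_a + t transforms planes as n_b = R n_a and d_b = d_a + n_b · t, which decouples rotation (an instance of Wahba's problem, solvable by SVD) from translation (linear least squares). The sketch below is unweighted; the actual method additionally propagates the plane uncertainties [21].

```python
import numpy as np

def register_planes(n_a, d_a, n_b, d_b):
    """Closed-form registration from k matched planes n . x = d in two
    scan frames. Under x_b = R x_a + t the planes transform as
    n_b = R n_a and d_b = d_a + n_b . t, so rotation and translation
    decouple. Unweighted sketch; plane uncertainties are ignored."""
    # rotation: Wahba's problem over the unit normals, closed form via SVD
    U, _, Vt = np.linalg.svd(n_a.T @ n_b)
    R = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
    # translation: linear least squares on the plane offsets; needs at
    # least three linearly independent normals to be fully determined
    t, *_ = np.linalg.lstsq(n_b, d_b - d_a, rcond=None)
    return R, t
```

Note that the plane offsets only constrain the translation along the plane normals, which is why at least three linearly independent normals are required.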
The different steps are illustrated in figure 11. The range of runtimes is mainly caused by differences in the sensors that can be used for acquiring the 3D range scans; the highest runtimes correspond to the high-resolution 3D aLRF scans that are presented in this paper. A proper introduction to the plane extraction with uncertainties can be found in [20]; a more detailed discussion of the plane registration is presented in [21]. The novel plane registration can be embedded in a pose-graph implementation [22], [23], [24] for Simultaneous Localization and Mapping (SLAM). The plane registration is inherently very robust for rotations, thus allowing an extremely fast pose-graph relaxation in a closed-form solution concentrating on the translational errors.

Fig. 10. An example of an unsuccessful pairwise registration by ICP on the collapsed car park dataset. The misaligned facade, indicated by dashed red lines, is clearly recognizable in this top view.

Fig. 11. An overview of the different steps in 3D Plane SLAM. The span of runtimes reflects the usage of different 3D range sensors, from low-resolution time-of-flight cameras to 3D laser range finders with high-density data.
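The closed-form relaxation concentrating on the translational errors can be sketched as a single least-squares solve: with all rotations held fixed, each pose-graph edge becomes a linear constraint on the positions. The function below is a dense, unweighted illustration of this idea, not the implementation of [19].

```python
import numpy as np

def relax_translations(rotations, edges, n_poses):
    """Translation-only pose-graph relaxation, assuming the rotations are
    already fixed. edges is a list of (i, j, t_ij), where t_ij is the
    measured translation from pose i to pose j expressed in frame i.
    Each edge yields the linear constraint p_j - p_i = R_i t_ij, so all
    positions follow from one least-squares solve; pose 0 is anchored at
    the origin. A dense, unweighted sketch (a real implementation would
    exploit sparsity and the edge covariances)."""
    A = np.zeros((3 * len(edges) + 3, 3 * n_poses))
    b = np.zeros(3 * len(edges) + 3)
    for k, (i, j, t_ij) in enumerate(edges):
        A[3 * k:3 * k + 3, 3 * i:3 * i + 3] = -np.eye(3)
        A[3 * k:3 * k + 3, 3 * j:3 * j + 3] = np.eye(3)
        b[3 * k:3 * k + 3] = rotations[i] @ t_ij
    A[-3:, 0:3] = np.eye(3)  # anchor constraint p_0 = 0
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(n_poses, 3)
```

Because the system is linear, loop-closure edges simply add rows; inconsistent loops are averaged out in the least-squares sense instead of requiring iterative optimization.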
The exact mean runtimes for the different steps of 3D Plane SLAM applied to the collapsed car parking are shown in table II. The main bottleneck in the overall time is the actuated laser range finder, which takes about 32 seconds per scan. But as mentioned before, this can be significantly sped up by using high-end devices if necessary. The runtimes to process scans, especially to register two consecutive ones, are so fast that an online generation of a 3D map is possible. The total time for plane extraction and registration of two consecutive scans is on the order of 10 seconds. This is less time than the robot needs for locomotion between the scans, which is typically some 20 to 30 seconds. The whole generation of the map, including pose-graph SLAM that takes only on the order of milliseconds, can hence be done while the robot moves along.

Fig. 12. Overview of a map generated by 3D Plane SLAM, which can serve as a benchmark solution for this dataset. It shows a collapsed car park at Disaster City, Texas. The red dots indicate robot positions where a scan is taken.

TABLE II
MEAN RUNTIMES IN SECONDS FOR THE 3D PLANE SLAM IN THE EXAMPLE OF THE COLLAPSED CAR PARKING AT DISASTER CITY.

Planes-extraction per scan                      2.68
Plane-matching and registration per scan-pair   5.42
Polygonization per scan                         2.52
Relaxation of pose-graph                        0.01
(with 26 nodes and 33 edges)
It has to be noted that the plane-based representation is actually also well suited for non-planar objects; see for example figure 13, where a human victim is shown that can be clearly recognized. At the least, it has several advantages over the standard representation as point clouds. The surface representation is much more compact and hence very well suited for narrow-bandwidth communications to an operator station. Also, the visualization of point clouds is non-trivial, as there is the risk that they cover an object either too densely, so that there is only one “big blob” of points, or too sparsely, so that it is barely visible. These restrictions do not apply to polygonal patches.

Fig. 13. A human “victim” in the scenario: (a) the robot control GUI; (b) polygon representation of the victim. The polygon representation is much more efficient than the point cloud and it is well suited to support the recognition of this object in the scene.
The Disaster City data sets do not come with exact ground
truth information. But there are many clearly distinguishable
structures that can be used as ground truth references across
scans and in 3D maps generated from them. Figures 14 and 15
show some examples that are used to demonstrate the positive
effects of using pose graph SLAM instead of using registration
only.
IV. CONCLUSION
A collection of 3D data sets from SSRR scenarios was presented. The 3D point clouds from an actuated laser range finder were collected at the 2008 NIST Response Robot Evaluation Exercise (RREE) in Disaster City, Texas. The data sets can be used for performance evaluation of robotics algorithms, especially for 3D mapping. The datasets are very challenging in several respects, especially due to the lack of motion estimates and the presence of full 6 DOF pose changes in some scenarios. An example is presented where a 3D map is generated with 3D Plane SLAM in the collapsed car parking scenario, which can hence serve as a first benchmark solution.

Fig. 14. Some ground truth structures can be used to assess the improvements through pose-graph SLAM: (a) close-up view showing the transition of the robot from outside the structure to underneath the collapsed floor; (b) several ground truth structures can be easily identified in the planar models before and after relaxation, as shown in figure 15.

Fig. 15. Several easily identifiable ground truth structures (see also figure 14(b)) show the clear improvements in the representation of details after the pose-graph relaxation: (a) zoomed-in top view before relaxation; (b) zoomed-in top view after relaxation.
ACKNOWLEDGMENTS
The research on 3D Mapping presented here was supported
by the German Research Foundation (DFG). Participation of
the Jacobs Team at the 2008 NIST Response Robot Evaluation
Exercise (RREE) was supported by the US National Institute
of Standards and Technology (NIST).
REFERENCES
[1] D. Fischer and P. Kohlhepp, “3D geometry reconstruction from multiple segmented surface descriptions using neuro-fuzzy similarity measures,” Journal of Intelligent and Robotic Systems, vol. 29, pp. 389–431, 2000.
[2] H. Surmann, A. Nuechter, and J. Hertzberg, “An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments,” Robotics and Autonomous Systems, vol. 45, no. 3-4, pp. 181–198, 2003.
[3] S. Thrun, D. Haehnel, D. Ferguson, M. Montemerlo, R. Triebel, W. Burgard, C. Baker, Z. Omohundro, S. Thayer, and W. Whittaker, “A system for volumetric robotic mapping of abandoned mines,” Taipei, Taiwan, 2003.
[4] J. Weingarten and R. Siegwart, “3D SLAM using planar segments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, 2006.
[5] M. Magnusson, A. Lilienthal, and T. Duckett, “Scan registration for autonomous mining vehicles using 3D-NDT,” Journal of Field Robotics, vol. 24, no. 10, pp. 803–827, 2007.
[6] A. Nüchter, K. Lingemann, and J. Hertzberg, “6D SLAM – 3D mapping outdoor environments,” Journal of Field Robotics, vol. 24, no. 8/9, pp. 699–722, 2007.
[7] A. Howard and N. Roy, “The robotics data set repository (RADISH),” 2003. [Online]. Available: http://radish.sourceforge.net/
[8] Rawseeds, “Rawseeds website,” 2008. [Online]. Available: http://rawseeds.elet.polimi.it/home/
[9] C. Scrapper, R. Madhavan, and S. Balakirsky, “Stable navigation solutions for robots in complex environments,” in IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), 2007, pp. 1–6.
[10] ——, “Performance analysis for stable mobile robot navigation solutions,” in Proceedings of SPIE. International Society for Optical Engineering, 2008.
[11] TEEX, “NIST Response Robot Evaluation Exercise,” 2008. [Online]. Available: http://www.teex.com/teex.cfm?pageid=USARprog&area=USAR&templateid=1538
[12] ——, “Disaster City,” 2008. [Online]. Available: http://www.teex.com/teex.cfm?pageid=USARprog&area=USAR&templateid=1117
[13] A. Birk, K. Pathak, S. Schwertfeger, and W. Chonnaparamutt, “The IUB Rugbot: an intelligent, rugged mobile robot for search and rescue operations.” IEEE Press, 2006.
[14] “HDL-64E user manual.” [Online]. Available: http://www.velodyne.com/lidar/ManualList.aspx
[15] “HDL-64E data sheet.” [Online]. Available: http://www.velodyne.com/lidar/products/specifications.aspx
[16] S. T. Barnard and M. A. Fischler, “Computational stereo,” ACM Computing Surveys (CSUR), vol. 14, no. 4, pp. 553–572, 1982.
[17] R. Lange and P. Seitz, “Solid-state time-of-flight range camera,” IEEE Journal of Quantum Electronics, vol. 37, no. 3, pp. 390–397, 2001.
[18] J. Weingarten, G. Gruener, and R. Siegwart, “A state-of-the-art 3D sensor for robot navigation,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), vol. 3. IEEE Press, 2004, pp. 2155–2160.
[19] K. Pathak, A. Birk, N. Vaskevicius, M. Pfingsthorn, S. Schwertfeger, and J. Poppinga, “Online 3D SLAM by registration of large planar surface segments and closed form pose-graph relaxation,” Journal of Field Robotics, Special Issue on 3D Mapping, vol. 27, no. 1, pp. 52–84, 2010. [Online]. Available: http://robotics.jacobs-university.de/publications/JFR-3D-PlaneSLAM.pdf
[20] K. Pathak, N. Vaskevicius, and A. Birk, “Revisiting uncertainty analysis for optimum planes extracted from 3D range sensor point-clouds,” in International Conference on Robotics and Automation (ICRA). IEEE Press, 2009, pp. 1631–1636.
[21] K. Pathak, N. Vaskevicius, J. Poppinga, M. Pfingsthorn, S. Schwertfeger, and A. Birk, “Fast 3D mapping by matching planes extracted from range sensor point-clouds,” in International Conference on Intelligent Robots and Systems (IROS). IEEE Press, 2009.
[22] M. Pfingsthorn and A. Birk, “Efficiently communicating map updates with the pose graph,” in Proceedings of the International Conference on Intelligent Robots and Systems (IROS), 2008.
[23] M. Pfingsthorn, B. Slamet, and A. Visser, “A scalable hybrid multi-robot SLAM method for highly detailed maps,” in RoboCup 2007: Proceedings of the International Symposium, ser. LNAI. Springer, 2007.
[24] E. Olson, J. Leonard, and S. Teller, “Fast iterative alignment of pose graphs with poor initial estimates,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006, pp. 2262–2269.
... The major motivations for the first datasets were simultaneous localization and mapping (SLAM) and 3D modeling of disaster environments, which are fundamental capabilities for SAR UGVs (Droeschel et al. 2017) . Several works used Disaster City (Texas), a training facility with a variety of realistic mock-up SAR scenarios (e.g., a collapsed parking building and a train accident), to collect UGV-based datasets with physical fidelity, such as 3D lidar scans for mapping (Ohno et al. 2010) and combinations of lidar and RGB images for terrain classification and SLAM (Pellenz et al. 2010) and 3D representations with a simulated victim (Birk et al. 2009) (Pathak et al. 2010). This facility was also used to test the RESPOND-R data management framework during exercises with a UAV-UGV team, where data from heterogeneous sources, such as communications or humanrobot interactions, were collected in addition to video and scans (Shrewsbury et al. 2013)(Duncan andMurphy 2014). ...
Article
Full-text available
This article presents a collection of multimodal raw data captured from a manned all-terrain vehicle in the course of two realistic outdoor search and rescue (SAR) exercises for actual emergency responders conducted in Málaga (Spain) in 2018 and 2019: the UMA-SAR dataset. The sensor suite, applicable to unmanned ground vehicles (UGVs), consisted of overlapping visible light (RGB) and thermal infrared (TIR) forward-looking monocular cameras, a Velodyne HDL-32 three-dimensional (3D) lidar, as well as an inertial measurement unit (IMU) and two global positioning system (GPS) receivers as ground truth. Our mission was to collect a wide range of data from the SAR domain, including persons, vehicles, debris, and SAR activity on unstructured terrain. In particular, four data sequences were collected following closed-loop routes during the exercises, with a total path length of 5.2 km and a total time of 77 min. In addition, we provide three more sequences of the empty site for comparison purposes (an extra 4.9 km and 46 min). Furthermore, the data is offered both in human-readable format and as rosbag files, and two specific software tools are provided for extracting and adapting this dataset to the users’ preference. The review of previously published disaster robotics repositories indicates that this dataset can contribute to fill a gap regarding visual and thermal datasets and can serve as a research tool for cross-cutting areas such as multispectral image fusion, machine learning for scene understanding, person and object detection, and localization and mapping in unstructured environments. The full dataset is publicly available at: www.uma.es/robotics-and-mechatronics/sar-datasets .
... The datasets can be classified by the environment in which they have been collected: 1) Indoor: [16], [17], [4], provide indoor datasets with annotations. 2) Outdoor: [18], [19], [20], [21], [22] provide outdoor datasets of outdoor environments. 3) Simulation: [23] provides a simulation indoor scene dataset. ...
Preprint
Full-text available
This paper presents a fully hardware synchronized mapping robot with support for a hardware synchronized external tracking system, for super-precise timing and localization. Nine high-resolution cameras and two 32-beam 3D Lidars were used along with a professional, static 3D scanner for ground truth map collection. With all the sensors calibrated on the mapping robot, three datasets are collected to evaluate the performance of mapping algorithms within a room and between rooms. Based on these datasets we generate maps and trajectory data, which is then fed into evaluation algorithms. We provide the datasets for download and the mapping and evaluation procedures are made in a very easily reproducible manner for maximum comparability. We have also conducted a survey on available robotics-related datasets and compiled a big table with those datasets and a number of properties of them.
... The data collected with those 2D LRF sensors is often sufficient for robot to navigate in structured environments such as offices or homes. But research in outdoor robotics, for example also in the area of Safety, Security and Rescue Robotics (SSRR), has to deal with highly unstructured terrain for which we need information about distances to obstacles not just on a 2D plane but in the 3D volume surrounding the robot [2]. ...
Preprint
For mobile robots range sensors are important to perceive the environment. Sensors that can measure in a 3D volume are especially significant for outdoor robotics, because this environment is often highly unstructured. The quality of the data gathered by those sensors influences all algorithms relying on it. In this paper thus the precision of several 2D and 2.5D sensors is measured at different ranges and different incidence angles. The results of all tests are presented and analyzed.
... In other words, in USAR domain only those data transmission methods are preferable which transmit needed data as compared to the methods which passes as much data as possible [20] . Birk et al. partially address this issue by presenting an approach which takes three dimensional point data and compresses those surface points to corresponding plane patches, i.e. instead of representing the data as costly point clouds they calculate plane patches of the underling surfaces [21]. This approach takes in consideration the need of weak communication skills by minimizing the data volume, nevertheless it does not utilize color information and the plane patches optimized for human rescue workers perception. ...
Conference Paper
Full-text available
For the autonomous navigation of the robots in unknown environments, generation of environmental maps and 3D scene reconstruction play a significant role. Simultaneous localization and mapping (SLAM) helps the robots to perceive, plan and navigate autonomously whereas scene reconstruction helps the human supervisors to understand the scene and act accordingly during joint activities with the robots. For successful completion of these joint activities, a detailed understanding of the environment is required for human and robots to interact with each other. Generally, the robots are equipped with multiple sensors and acquire a large amount of data which is challenging to handle. In this paper we propose an efficient 3D scene reconstruction approach for such scenarios using vision and graphics based techniques. This approach can be applied to indoor, outdoor, small and large scale environments. The ultimate goal of this paper is to apply this system to joint rescue operations executed by human and robot teams by reducing a large amount of point cloud data to a smaller amount without compromising on the visual quality of the scene. From thorough experimentation, we show that the proposed system is memory and time efficient and capable to run on the processing unit mounted on the autonomous vehicle. For experimentation purposes, we use standard RGB-D benchmark dataset.
... IEEE Press. 2009 [Birk et al., 2009a] • Birk, A., Pathak, K., Poppinga, J., Schwertfeger, S., Pfingsthorn, M., and Bülow, H. The jacobs test arena for safety, security, and rescue robotics (ssrr). ...
Thesis
Full-text available
Being able to generate maps is a significant capability for mobile robots. Measuring the performance of robotic systems in general, but also particularly of their mapping, is important in different as- pects. Performance metrics help to assess the quality of developed solutions, thus driving the research towards more capable systems. During the procurement and safety testing of robots, performance metrics ensure comparability of different robots and allow for the definition of standards. In this thesis, evaluation methods for the maps produced by robotic systems are developed. Those maps always contain errors, but measuring and classifying those errors is a non trivial task. The algorithm has to analyze and evaluate the maps in a systematic, repeatable and reproducible way. The problem is approached systematically: First the different terms and concepts are introduced and the state of the art in map evaluation is presented. Then a special type of mapping using video data is introduced and a path-based evaluation of the performance of this mapping approach is made. This evaluation does not work on the produced map, but on the localization estimates of the mapping algorithm. The rest of the thesis then works on classical two-dimensional grid maps. A number of algorithms to process those maps are presented. An Image Thresholder extracts informations about occupied and free cells, while a Nearest Neighbor Remover or an Alpha Shape Remover are used to filter out noise from the maps. This all is needed to automatically process the maps. Then the first novel map evaluation method, the Fiducial algorithm, is developed. In this place- based method, artificial markers that are distributed in the environment are detected in the map. The errors of the positions of those markers with respect to the known ground truth positions are used to calculate a number of attributes of the map. 
Those attributes can then be weighted according to the needs of the application to generate a single number map metric. The main contribution of this thesis is the second novel map evaluation algorithm, that uses a graph that is representing the environment topologically. This structure-based approach abstracts from all other information in the map and just uses the topological information about which areas are directly connected to asses the quality of the map. The different steps needed to generate this topological graph are extensively described. Then different ways to compare the similarity of two vertices from two graphs are presented and compared. This is needed to match two graphs to each other - the graph from the map to be evaluated and the graph of a known ground truth map. Using this match, the same map attributes as those from the Fiducial algorithm can be computed. Additionally, another interesting attribute, the brokenness value, can be determined. It counts the large broken parts in the map that internally contain few errors but that have, relative to the rest of the map, an error in the orientation due to a singular error during the mapping process. Experiments made on many maps from different environments are then performed for both map metrics. Those experiments show the usefulness of said algorithms and compare their results among each other and against the human judgment of maps.
Article
Full-text available
BACKGROUND: Conducting exercises in the health sector is one of the important steps in implementing disaster risk management programs, especially in the preparedness phase. The present study aimed to identify indexes and factors affecting the successful evaluation of disaster preparedness exercises in the hot-wash stage. MATERIALS AND METHODS: This study was a qualitative content analysis. Data were collected by purposeful sampling through in-depth, semi-structured individual interviews with 25 health professionals in the field of disasters. The data were analyzed using the directed content analysis method, by which the initial codes were extracted after transcribing the recorded interviews and immersion in the data. The initial codes were reviewed, classified, and subdivided in several stages to determine the main classes. RESULTS: The data analysis resulted in 24 initial codes, 5 subcategories, and 2 main categories, “evaluation and exercise debriefing” and “modification of programs and promotion of exercise operational functions,” under the overall theme of “exercise immediate feedback.” CONCLUSION: This study can serve as a standard guide for health care organizations to successfully evaluate disaster exercises in the hot-wash stage, maintain and promote their preparedness, and respond properly to disasters.
Article
This paper presents a fully hardware-synchronized mapping robot with support for a hardware-synchronized external tracking system, for super-precise timing and localization. Nine high-resolution cameras and two 32-beam 3D lidars were used, along with a professional static 3D scanner for ground-truth map collection. With all the sensors calibrated on the mapping robot, three datasets were collected to evaluate the performance of mapping algorithms within a room and between rooms. Based on these datasets we generate maps and trajectory data, which are then fed into evaluation algorithms. We provide the datasets for download, and the mapping and evaluation procedures are documented in an easily reproducible manner for maximum comparability. We also conducted a survey of available robotics-related datasets and compiled a large table of those datasets and their properties.
Conference Paper
Robots play an increasingly important role in social life, especially front-desk robots. However, front-desk robots seldom handle business in practice unless they have proper functionality for Human-Robot Interaction (HRI). To enhance the sense of immersion in the interactive process, we use the 3D image of the upper body of the operator in the remote control room as the upper body of the front-desk robot. However, transmitting the 3D point cloud data for this kind of remote interaction is a great challenge. This paper uses a simple method to deal with the transmission problem; the idea is to reduce the network data volume. To do so, only the 3D point cloud of interest is transmitted: filters remove the noise and background of the 3D image, and a segmentation algorithm extracts the 3D point cloud of interest. The experimental results demonstrate that the method reduces the network data volume while preserving high-quality image information, and thereby reduces transmission time.
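One simple way to keep only the point cloud of interest, in the spirit of the filtering step described above, is an axis-aligned crop box around the operator. This sketch is an assumption for illustration; the paper's actual filters and segmentation are more elaborate:

```python
def crop_point_cloud(points, bounds):
    """Keep only the points inside an axis-aligned box (the region of
    interest); everything outside is dropped before transmission."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    return [(x, y, z) for x, y, z in points
            if xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax]

cloud = [(0.1, 0.2, 1.0), (2.5, 0.0, 1.0), (0.3, 0.1, 5.0)]
roi = ((-1.0, 1.0), (-1.0, 1.0), (0.0, 2.0))   # box around the operator
print(crop_point_cloud(cloud, roi))   # → [(0.1, 0.2, 1.0)]
```

Cropping alone already removes most background points; segmentation then refines what remains.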
Conference Paper
In this paper, we propose a security robot system and its software architecture. The software architecture consists of two parts: an autonomous navigation part and a semantic perception part. The autonomous navigation software drives the robot autonomously, while the semantic perception software performs the security missions. The security patrol scenario comprises three steps. In the first step, the robot collects information about the environment. Then the perceived structures are compared with previous information. If something has changed or appears strange, the robot alerts a supervisor. The proposed system and software architecture enable a robot to perform security missions.
Conference Paper
Full-text available
This paper describes two robotic systems developed for acquiring accurate volumetric maps of underground mines. One system is based on a cart instrumented by laser range finders, pushed through a mine by people. Another is a remotely controlled mobile robot equipped with laser range finders. To build consistent maps of large mines with many cycles, we describe an algorithm for estimating global correspondences and aligning robot paths. This algorithm enables us to recover consistent maps several hundreds of meters in diameter, without odometric information. We report results obtained in two mines, a research mine in Bruceton, PA, and an abandoned coal mine in Burgettstown, PA.
Conference Paper
Full-text available
The paper describes the IUB Rugbot, a rugged mobile robot featuring substantial on-board intelligence. The robot and its software are the latest development of the IUB rescue robot team, which has been active in this research area since 2001. IUB robotics takes an integrated approach to rescue robots, meaning that the development of the corresponding systems ranges from the basic mechatronics to the high-level functionalities for intelligent behavior.
Article
Full-text available
Robot navigation in complex, dynamic and unstructured environments demands robust mapping and localization solutions. One of the most popular methods in recent years has been the use of scan-matching schemes where temporally correlated sensor data sets are registered for obtaining a Simultaneous Localization and Mapping (SLAM) navigation solution. The primary bottleneck of such scan-matching schemes is correspondence determination, i.e. associating a feature (structure) in one dataset to its counterpart in the other. Outliers, occlusions, and sensor noise complicate the determination of reliable correspondences. This paper describes testing scenarios being developed at NIST to analyze the performance of scan-matching algorithms. This analysis is critical for the development of practical SLAM algorithms in various application domains where sensor payload, wheel slippage, and power constraints impose severe restrictions. We will present results using a high-fidelity simulation testbed, the Unified System for Automation and Robot Simulation (USARSim).
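The correspondence-determination bottleneck described in this abstract is usually attacked with nearest-neighbor association plus a distance gate to reject outliers. A minimal sketch of that standard scheme (not the NIST testbed's code; the brute-force search and the gating threshold are illustrative assumptions):

```python
import math

def nearest_neighbor_correspondences(scan_a, scan_b, max_dist=1.0):
    """Associate each point of scan_a with its closest point in scan_b;
    pairs farther apart than max_dist are rejected as likely outliers."""
    pairs = []
    for i, p in enumerate(scan_a):
        j, d = min(((j, math.dist(p, q)) for j, q in enumerate(scan_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            pairs.append((i, j))
    return pairs

a = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0)]   # last point has no real match
b = [(0.1, 0.0), (1.1, 0.1)]
print(nearest_neighbor_correspondences(a, b, max_dist=0.5))
# → [(0, 0), (1, 1)]
```

Practical scan matchers replace the brute-force inner loop with a k-d tree, but the gating idea is the same: occluded or noisy points simply produce no correspondence.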
Conference Paper
Full-text available
In this work, we utilize a recently studied more accurate range noise model for 3D sensors to derive from scratch the expressions for the optimum plane which best fits a point-cloud and for the combined covariance matrix of the plane's parameters. The parameters in question are the plane's normal and its distance to the origin. The range standard-deviation model used by us is a quadratic function of the true range and is a function of the incidence angle as well. We show that for this model, the maximum-likelihood plane is biased, whereas the least-squares plane is not. The plane-parameters' covariance matrix for the least-squares plane is shown to possess a number of desirable properties, e.g., the optimal solution forms its null-space and its components are functions of easily understood terms like the planar-patch's center and scatter. We verify our covariance expression with that obtained by the eigenvector perturbation method. We further compare our method to that of renormalization with respect to the theoretically best covariance matrix in simulation. The application of our approach to real-time range-image registration and plane fusion is shown by an example using a commercially available 3D range sensor. Results show that our method has good accuracy, is fast to compute, and is easy to interpret intuitively.
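The paper derives the total-least-squares plane (normal plus distance, with a full covariance) from the scatter matrix. As a much simpler, hedged illustration of plane fitting, the sketch below does an ordinary least-squares fit of z = a·x + b·y + c via hand-rolled normal equations; it handles only non-vertical planes and carries no covariance, so it is a simplification rather than the paper's method:

```python
def fit_plane_lsq(points):
    """Ordinary least-squares fit of z = a*x + b*y + c to a point cloud.
    Simplified stand-in for the total-least-squares plane of the paper."""
    # Accumulate the 3x3 normal equations, augmented with the RHS.
    S = [[0.0] * 4 for _ in range(3)]
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                S[i][j] += row[i] * row[j]
            S[i][3] += row[i] * z
    # Solve by Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        for r in range(3):
            if r != col:
                f = S[r][col] / S[col][col]
                for c in range(col, 4):
                    S[r][c] -= f * S[col][c]
    return [S[i][3] / S[i][i] for i in range(3)]

# Points lying exactly on z = 2x - y + 3.
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 2), (1, 1, 4), (2, 1, 6)]
a, b, c = fit_plane_lsq(pts)
print(round(a, 3), round(b, 3), round(c, 3))   # → 2.0 -1.0 3.0
```

The eigenvector formulation in the paper avoids the non-vertical-plane restriction and, as the abstract notes, is unbiased under the quadratic range-noise model.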
Conference Paper
Full-text available
Robot mapping is a task that can benefit greatly from cooperative multi-robot systems. In multi-robot simultaneous localization and mapping (SLAM), how efficiently a given map can be shared among the robot team becomes very important. To this end, the recently proposed pose graph map representation is used, adapted for a particle-filter-based mapping algorithm, and compared to the standard occupancy grid representation. Through analysis of corner cases as well as experiments with real robot data, the two map representations are thoroughly compared. It is shown that the pose graph representation allows for much more efficient communication of map updates than the standard occupancy grid.
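For context, the occupancy-grid representation that this abstract compares against accumulates evidence per cell, so a map update touches many cells. The textbook log-odds update (a standard formulation, not code from the paper) looks like this:

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(l, p_meas):
    """Standard log-odds occupancy update for one grid cell: the sensor
    model's log-odds is simply added to the cell's accumulated value."""
    return l + logodds(p_meas)

def prob(l):
    """Convert accumulated log-odds back to a probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                       # prior p = 0.5, i.e. unknown
for _ in range(3):
    l = update_cell(l, 0.7)   # three independent "occupied" readings
print(round(prob(l), 3))      # → 0.927
```

A pose-graph update, by contrast, transmits only a new pose node plus its raw scan, which is the intuition behind the communication savings reported in the abstract.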
Conference Paper
Full-text available
This article addresses fast 3D mapping by a mobile robot in a predominantly planar environment. It is based on a novel pose registration algorithm that works entirely by matching features composed of plane segments extracted from point clouds sampled from a 3D sensor. The approach has advantages in terms of robustness, speed, and storage compared to voxel-based approaches. Unlike previous approaches, the uncertainty in the plane parameters is utilized to compute the uncertainty in the pose obtained by scan registration. The algorithm is illustrated by creating a full 3D model of a multi-level robot testing arena.
Data
6D SLAM (Simultaneous Localization and Mapping) of mobile robots considers six dimensions for the robot pose, namely the x, y, and z coordinates and the roll, yaw, and pitch angles. Robot motion and localization on natural surfaces, e.g., when driving a mobile robot outdoors, must account for these degrees of freedom. This paper presents a robotic mapping method based on locally consistent 3D laser range scans. Scan matching, combined with a heuristic for closed-loop detection and a global relaxation method, results in a highly precise mapping system for outdoor environments. The mobile robot Kurt3D was used to acquire data of the Schloss Birlinghoven campus. The resulting 3D map is compared with ground truth, given by an aerial photograph.
Conference Paper
A robot exploring an environment can estimate its own motion and the relative positions of features in the environment. Simultaneous localization and mapping (SLAM) algorithms attempt to fuse these estimates to produce a map and a robot trajectory. The constraints are generally non-linear, thus SLAM can be viewed as a non-linear optimization problem. The optimization can be difficult, due to poor initial estimates arising from odometry data and due to the size of the state space. We present a fast non-linear optimization algorithm that rapidly recovers the robot trajectory, even when given a poor initial estimate. Our approach uses a variant of stochastic gradient descent on an alternative state-space representation that has good stability and computational properties. We compare our algorithm to several others, using both real and synthetic data sets.
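The idea of relaxing randomly sampled constraints with a decaying step size can be shown on a deliberately tiny 1-D pose graph. This is a drastic simplification for intuition only; the paper's contribution is precisely the alternative state-space representation that this sketch omits:

```python
import random

def sgd_pose_graph_1d(n_poses, constraints, iters=3000, lr0=0.5, seed=1):
    """Toy 1-D pose-graph optimization by stochastic gradient descent.
    Each constraint (i, j, d) demands pose_j - pose_i == d; one randomly
    sampled constraint is relaxed per iteration with a decaying step."""
    random.seed(seed)
    x = [0.0] * n_poses                  # deliberately poor initial guess
    for t in range(iters):
        i, j, d = random.choice(constraints)
        r = (x[j] - x[i]) - d            # residual of this constraint
        lr = lr0 / (1.0 + 0.01 * t)      # decaying learning rate
        x[i] += 0.5 * lr * r             # move both endpoints toward
        x[j] -= 0.5 * lr * r             # satisfying the constraint
    return [v - x[0] for v in x]         # anchor pose 0 at the origin

# A consistent chain with one loop closure: poses should approach 0,1,2,3.
constraints = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 3.0)]
poses = sgd_pose_graph_1d(4, constraints)
print([round(p, 2) for p in poses])
```

Because the constraints here are consistent, the residuals shrink toward zero; the interesting cases in the paper involve large, noisy graphs and bad odometry-based initializations.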
Article
Digital 3D models of the environment are needed in rescue and inspection robotics, facility management, and architecture. This paper presents an automatic system for gauging and digitalizing 3D indoor environments. It consists of an autonomous mobile robot, a reliable 3D laser range finder, and three elaborate software modules. The first module, a fast variant of the Iterative Closest Points algorithm, registers the 3D scans in a common coordinate system and relocalizes the robot. The second module, a next-best-view planner, computes the next nominal pose based on the acquired 3D data while avoiding complicated obstacles. The third module, a closed-loop and globally stable motor controller, navigates the mobile robot to a nominal pose based on odometry and avoids collisions with dynamic obstacles. The 3D laser range finder acquires a 3D scan at this pose. The proposed method allows one to digitalize large indoor environments quickly and reliably without any intervention, and solves the SLAM problem. The results of two 3D digitalization experiments are presented, using a fast octree-based visualization method.
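The Iterative Closest Points loop at the heart of scan registration can be sketched in a stripped-down, translation-only 2-D form: match each source point to its nearest destination point, shift by the mean offset, and repeat. This is an illustrative toy, not the paper's fast ICP variant, which also estimates rotation and uses accelerated nearest-neighbor search:

```python
import math

def icp_translation_2d(src, dst, iters=10):
    """Minimal translation-only 2-D ICP sketch: repeatedly pair each
    source point with its nearest destination point and shift the source
    cloud by the mean offset between the paired points."""
    pts = [list(p) for p in src]
    tx = ty = 0.0
    for _ in range(iters):
        dx = dy = 0.0
        for p in pts:
            q = min(dst, key=lambda q: math.dist(p, q))
            dx += q[0] - p[0]
            dy += q[1] - p[1]
        dx /= len(pts)
        dy /= len(pts)
        for p in pts:          # apply the estimated shift
            p[0] += dx
            p[1] += dy
        tx += dx
        ty += dy
    return tx, ty

dst = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
src = [(0.2, 0.1), (1.2, 0.1), (0.2, 1.1)]   # dst shifted by (0.2, 0.1)
tx, ty = icp_translation_2d(src, dst)
print(round(tx, 3), round(ty, 3))   # → -0.2 -0.1
```

With the correct correspondences found on the first pass, the sketch recovers the inverse of the applied shift exactly; real scans need the rotation estimate and outlier rejection that full ICP provides.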