Hector Open Source Modules for Autonomous
Mapping and Navigation with Rescue Robots
Stefan Kohlbrecher1, Johannes Meyer2, Thorsten Graber 2, Karen Petersen1,
Uwe Klingauf2, and Oskar von Stryk1
1Department of Computer Science, TU Darmstadt, Germany
2Department of Mechanical Engineering, TU Darmstadt, Germany
http://www.gkmm.tu-darmstadt.de/rescue
Abstract. Key abilities for robots deployed in urban search and rescue tasks include autonomous exploration of disaster sites and recognition of victims and other objects of interest. In this paper, we present related open source software modules for the development of such complex capabilities, including hector_slam for self-localization and mapping in a degraded urban environment. All modules were originally applied and tested successfully in the RoboCup Rescue competition. They have since been re-used and adopted by numerous international research groups for a wide variety of tasks. Recently, they have also become part of the basis of a broader initiative for key open source software modules for urban search and rescue robots.
1 Introduction
While robots used for Urban Search and Rescue (USAR) tasks will remain mainly tele-operated for the immediate future when deployed at real disaster sites, increasing their level of autonomy is an important area of research that has the potential to vastly improve the capabilities of robots used for disaster response.
The RoboCup Rescue project aims at advancing research towards more capable rescue robots [1]. Rescue robotics incorporates a vast range of capabilities needed to address the challenges involved, e.g. those resulting from a degraded environment. The availability of re-usable and adaptable open source software can significantly reduce development time and increase robot capabilities while simultaneously freeing resources and thus accelerating progress in the field.
In this paper, we present open source modules that provide the building
blocks for a system capable of autonomous exploration in USAR environments.
Different modules have been applied with great success in RoboCup Rescue and
other applications, both by Team Hector (Heterogeneous Cooperating Team of
Robots) of TU Darmstadt and numerous other international research groups.
Robot Operating System (ROS) [2] is used as the robot middleware for the
software modules. It has been widely adopted in robotics research and can be
considered a de-facto standard. The provided modules have also become part of
a recently established, broader initiative of the RoboCup Rescue community for
providing standard software modules useful for USAR tasks [3].
At the RoboCup competition, we mainly use the Ackermann-steered Hector UGV vehicle (Figure 1, left) [4]. While this steering method is in many ways more challenging than differential steering, we do not focus on these challenges in this paper; instead, we provide a simulated skid-steered vehicle based on the Hector Lightweight UGV (Figure 1, right) that bears more similarity to the differential-drive vehicles commonly used for USAR tasks.
Fig. 1. Robots used by Team Hector. Left: Hector UGV based on Kyosho Twin Force
chassis. Right: Hector Lightweight UGV based on “Wild Thumper” robot kit.
1.1 Related Work
Research on Simultaneous Localization and Mapping (SLAM) and on the exploration of unknown environments has received a lot of attention in recent years, with impressive results being demonstrated. Many of these results cannot be reproduced, however, for several reasons, such as a lack of standardized interfaces, closed source software and limited robustness to different (e.g. environmental) conditions.
Evaluation of state-of-the-art visual SLAM approaches [5], [6] in the standardized RoboCup Rescue setting showed promising results, but localization and mapping as consistent as with the system described in this paper could not be achieved so far, as ramps and other obstacles lead to jerky vehicle motion and pose significant challenges to any SLAM system.
The RoboCup Rescue Robot League competition provides especially challenging scenarios, as the competition setting enforces strict constraints on the time and environment for robot operation.
2 System Overview
This paper covers many of the higher-level nodes originally developed and tested
for the Hector UGV system, which can be used and adapted for other platforms
without or with only slight modifications (Fig. 2). Hardware-dependent modules like camera and motor drivers or low-level controllers are not within the scope of this work.

Fig. 2. System overview schematic. ROS nodes are represented by rectangles, topics by arrow-headed lines and services by diamond-headed lines. Services originate at the service caller.

It is assumed that robots intended to use the described modules
provide the necessary sensor data according to existing ROS standards and are
steerable by publishing velocity commands. All nodes holding some sort of state information subscribe to a command topic, which is primarily used to reset the system whenever necessary.
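The velocity-command interface follows the ROS convention of publishing geometry_msgs/Twist messages. As a minimal illustration (using plain Python stand-ins for the ROS message types rather than the real rospy API, so names and defaults here are illustrative), a forward-and-turn command could be built like this:

```python
from dataclasses import dataclass, field

@dataclass
class Vector3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

@dataclass
class Twist:
    # Mirrors the two fields of ROS geometry_msgs/Twist
    linear: Vector3 = field(default_factory=Vector3)
    angular: Vector3 = field(default_factory=Vector3)

def drive_forward_and_turn(v: float, omega: float) -> Twist:
    """Build a velocity command: v m/s forward, omega rad/s yaw rate."""
    cmd = Twist()
    cmd.linear.x = v       # forward velocity
    cmd.angular.z = omega  # yaw rate (positive = counter-clockwise)
    return cmd

cmd = drive_forward_and_turn(0.5, 0.2)
```

In a real system, such a message would be published on a topic like cmd_vel, which any of the modules below can consume without knowing the robot's drive hardware.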
The following sections describe the provided ROS nodes1. Section 3 presents the open source software for 2D and 3D mapping, for the perception of objects of interest and for the generation of GeoTIFF maps that visualize the relevant information according to the RoboCup Rescue rules. The subsequent Section 4 introduces the modules required for planning and autonomous exploration. While not directly related to autonomy, being able to test individual modules and the robot's overall behavior in simulation in a close-to-reality scenario is crucial in order to detect bugs and possible failure cases early, and allows for shorter development cycles. We present our simulation environment in Section 5.
3 Localization and Mapping
Creating maps of the environment is important for two reasons: it allows first responders to perform situation assessment and to localize themselves inside buildings, and it enables path planning and high-level autonomous behaviors of robot systems.
While purely geometric maps such as occupancy grid maps are useful for navigation and obstacle avoidance, additional semantic information like the location of objects of interest is very important for first responders and required for intelligent high-level autonomous behavior control.

1 For details see http://www.ros.org/wiki/tu-darmstadt-ros-pkg
3.1 Simultaneous Localization and Mapping (SLAM)
As disasters can significantly alter the environment compared to its pre-disaster state, USAR robots have to be considered as operating in unknown environments in order to be robust against such changes. This means the SLAM problem has to be solved to generate metric maps that are sufficiently accurate for the navigation of first responders or a robot system.
For this task we provide hector_slam, consisting of the hector_mapping, hector_map_server, hector_geotiff and hector_trajectory_server modules. As odometry is notoriously unreliable in USAR scenarios, the system is designed to not require odometry data, instead relying purely on fast scan matching of LIDAR data at the full LIDAR update rate. Combined with an attitude estimation system and an optional pitch/roll unit to stabilize the laser scanner, the system can provide environment maps even if the ground is non-flat, as encountered in the RoboCup Rescue arena. A comprehensive discussion of hector_slam is available in [7].
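To illustrate the mapping side of such a system, the sketch below performs a minimal log-odds occupancy grid update from a single 2D laser scan. It is not the hector_slam implementation (which performs Gauss-Newton scan matching against an interpolated multi-resolution map); the sparse dictionary grid, the Bresenham ray tracing and the log-odds increments are simplified, illustrative choices:

```python
import math

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1), endpoint included."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

def integrate_scan(grid, pose, ranges, angle_min, angle_inc, res,
                   l_occ=0.9, l_free=-0.4):
    """Update a log-odds grid (dict: cell -> log-odds) from one 2D laser scan."""
    px, py, ptheta = pose
    x0, y0 = int(px / res), int(py / res)
    for i, r in enumerate(ranges):
        beam = ptheta + angle_min + i * angle_inc
        hx = int((px + r * math.cos(beam)) / res)
        hy = int((py + r * math.sin(beam)) / res)
        ray = bresenham(x0, y0, hx, hy)
        for cell in ray[:-1]:                            # free space along the beam
            grid[cell] = grid.get(cell, 0.0) + l_free
        grid[ray[-1]] = grid.get(ray[-1], 0.0) + l_occ   # beam endpoint: a hit
    return grid
```

Repeated scans accumulate evidence per cell; a cell with positive log-odds is treated as occupied, negative as free, which is exactly the representation a planner thresholds later.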
3.2 Pose Estimation
The estimation of the full six-degrees-of-freedom robot pose and twist is realized in the hector_pose_estimation node, which implements an Extended Kalman Filter (EKF) and fuses measurements from an inertial measurement unit (IMU), the 2D pose estimate from the laser scan matcher and, if available, additional localization sensors such as satellite navigation receivers, magnetometers and barometric pressure sensors. The filter is based on a generic motion model for ground vehicles and is primarily driven by the IMU, without using the control inputs or wheel odometry, as these are typically unreliable due to wheel spin or side drift on uneven or slippery ground.
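The fusion principle can be illustrated on a single state variable. The sketch below is a deliberately reduced one-state Kalman filter that integrates an IMU yaw rate in the prediction step and corrects with the yaw estimate from scan matching; the noise parameters and the scalar formulation are illustrative, not those of hector_pose_estimation:

```python
import math

class YawFilter:
    """Minimal 1-state Kalman filter: integrate IMU yaw rate in the predict
    step, correct with the yaw estimate from laser scan matching. All noise
    values are illustrative."""
    def __init__(self, q_gyro=1e-3, r_scan=1e-2):
        self.yaw = 0.0    # state estimate (rad)
        self.P = 1.0      # state variance
        self.q = q_gyro   # process noise growth per second
        self.r = r_scan   # scan-matcher measurement variance

    def predict(self, yaw_rate, dt):
        self.yaw += yaw_rate * dt   # IMU-driven motion model
        self.P += self.q * dt       # uncertainty grows without measurements

    def update(self, yaw_measured):
        # Wrap the innovation to (-pi, pi] so angle jumps are handled correctly.
        innovation = math.atan2(math.sin(yaw_measured - self.yaw),
                                math.cos(yaw_measured - self.yaw))
        K = self.P / (self.P + self.r)   # Kalman gain
        self.yaw += K * innovation
        self.P *= (1.0 - K)
```

The real filter maintains the full 6-DoF state with cross-covariances, but the predict/update rhythm, IMU-driven prediction and measurement-gated correction, is the same.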
3.3 Elevation and Cost Mapping
In addition to the two-dimensional world representation obtained by the hector_slam package, USAR robots have to take the traversability of the environment into account. To this end we developed hector_elevation_mapping. This package fuses point cloud measurements obtained by an RGB-D camera such as the Microsoft Kinect into an elevation map. The elevation map is represented by a 2D grid map storing a height value and a corresponding variance for each cell. The cell measurement update is based on a local Kalman filter and adapted from the approach described in [8].
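The per-cell measurement update reduces to a one-dimensional Kalman filter. A minimal sketch (the function and variable names are illustrative, and the variance model is simplified to a single scalar per cell):

```python
def fuse_height(h_cell, var_cell, h_meas, var_meas):
    """One-dimensional Kalman update of a single elevation-map cell:
    fuse the stored height estimate with a new point-cloud measurement.
    Returns the updated (height, variance) pair."""
    k = var_cell / (var_cell + var_meas)   # gain: how much to trust the new point
    h_new = h_cell + k * (h_meas - h_cell)
    var_new = (1.0 - k) * var_cell         # variance shrinks with every fusion
    return h_new, var_new
```

Each incoming point cloud thus refines the height estimate of the cells it falls into while tracking how certain that estimate is, which the cost mapping step can exploit.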
Finally, hector_costmap fuses the 2.5D elevation map with the 2D occupancy grid map provided by hector_mapping and computes a two-dimensional cost map for the exploration task.
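A simple way to picture this fusion step: a cell becomes untraversable if it is occupied in the 2D map or if the elevation map shows a step to a neighbour that the robot cannot climb. The sketch below uses dictionaries as sparse grids; the cost scale and the step threshold are illustrative choices, not the hector_costmap implementation:

```python
LETHAL = 100  # cost scale borrowed from common ROS costmap conventions

def fuse_costs(occupancy, heights, max_step=0.15):
    """Combine a 2D occupancy grid with elevation data: a cell is lethal if
    it is occupied OR the height difference to a 4-neighbour exceeds
    max_step (metres). occupancy: dict (x, y) -> occupancy probability;
    heights: dict (x, y) -> height. Returns dict (x, y) -> cost."""
    cost = {}
    for (x, y), occ in occupancy.items():
        c = LETHAL if occ > 0.5 else 0
        h = heights.get((x, y))
        if h is not None:
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                hn = heights.get(n)
                if hn is not None and abs(hn - h) > max_step:
                    c = LETHAL   # step edge the robot cannot traverse
        cost[(x, y)] = c
    return cost
```

The exploration planner then treats this fused grid exactly like an ordinary 2D cost map, so the 2.5D information is transparent to the planning layer.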
Fig. 3. Examples for autonomous exploration. Left: Simulated Thailand Rescue Robot
Championship 2012 arena. Right: Simulated random maze.
3.4 Objects of Interest
Plain occupancy grid maps provide information about the environment geometry, but do not contain semantic information. We track information about objects of interest in a separate module, using a Gaussian representation for their positions. The hector_object_tracker package is based on an approach described comprehensively in [9]. It subscribes to percept messages from victim, QR code or other object detectors, projects them into the map frame based on the robot's pose, the camera view angle and calibration information, and solves the association and tracking problem for subsequent detections.
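The association step can be sketched as a nearest-neighbour search with a gate on a scalar Mahalanobis-style distance: a new percept either refines an existing Gaussian hypothesis or spawns a new one. The isotropic variance model and the gate value below are illustrative simplifications of the approach in [9]:

```python
import math

class TrackedObject:
    """Object hypothesis with a Gaussian 2D position (isotropic variance)."""
    def __init__(self, x, y, var):
        self.x, self.y, self.var = x, y, var

    def update(self, mx, my, meas_var):
        k = self.var / (self.var + meas_var)   # scalar Kalman gain
        self.x += k * (mx - self.x)
        self.y += k * (my - self.y)
        self.var *= (1.0 - k)

def associate(tracks, mx, my, meas_var, gate=3.0):
    """Return the track whose normalized distance to the percept is below
    the gate, or None (then the caller starts a new track)."""
    best, best_d = None, gate
    for t in tracks:
        sigma = math.sqrt(t.var + meas_var)    # combined position uncertainty
        d = math.hypot(mx - t.x, my - t.y) / sigma
        if d < best_d:
            best, best_d = t, d
    return best
```

Gating by uncertainty rather than by a fixed radius means a confidently localized object rejects far-away detections, while a fresh, uncertain hypothesis still absorbs them.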
3.5 GeoTIFF Maps
To achieve comparability between environment maps generated by different approaches, the GeoTIFF format is used as the standard map format in the RoboCup Rescue League competition. Using geo-reference and scale information, maps can be overlaid on each other using existing tools and their accuracy can be compared. The hector_geotiff package allows generating GeoTIFF maps compliant with the RoboCup Rescue rules, which can be annotated through a plugin interface. Plugins for adding the path travelled by the robot as well as victim and QR code locations are provided. The node can run onboard a robot system and save maps to permanent storage based on a timer, reducing the likelihood of map loss in case of connectivity problems. All maps shown in figures in this paper have been generated using the hector_geotiff node.
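The geo-referencing that makes such overlays possible boils down to an affine transform between world coordinates and image pixels. A minimal sketch of the world-to-pixel direction (parameter names are illustrative; actual GeoTIFF files store this transform in metadata tags rather than computing it per call):

```python
def world_to_pixel(wx, wy, origin_x, origin_y, resolution, height_px):
    """Map world coordinates (metres) to pixel coordinates, given the map
    origin, the resolution (metres per pixel) and the image height.
    Image rows grow downwards, so the y axis is flipped."""
    px = int((wx - origin_x) / resolution)
    py = height_px - 1 - int((wy - origin_y) / resolution)
    return px, py
```

Because two maps sharing origin and resolution land on the same pixel grid, trajectories or victim markers from different runs can be overlaid and compared directly.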
4 Planning and Exploration
While a plethora of research results is available for exploration using autonomous robots, very few methods are readily available for re-use as open source software. We provide the hector_exploration_planner, which is based on the exploration transform approach presented in [10]. In our exploration planner, frontiers towards the front of the robot are weighted favorably to prevent frequent, costly turning of the robot. Inspired by wall-following techniques used by firefighters [11], a "follow wall" trajectory can also be generated using the exploration planner. The planned trajectory is generated based on map data and thus does not exhibit the weaknesses associated with reactive approaches that only consider raw sensor data [12]. High-level behaviors can thus switch between the exploration transform and the wall-follow approach at any time. For the case that the environment has been completely explored, the planner has been extended with an "inner exploration" mode. Here, the traversed path of the robot, containing a discrete set of past robot poses, is retrieved from the hector_trajectory_server node. These positions are sampled based on their distance from each other and added to a list, which is passed to the exploration transform algorithm as a list of goal points. An exhaustive search for the exploration transform cell with the highest value then yields a point that is farthest away from the previous path and safe for the robot to reach.
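The core of the exploration transform can be sketched as a multi-source wavefront propagation from all frontier (or goal) cells, after which the robot simply descends the resulting cost field. The sketch below uses unit step costs over a sparse set of free cells and omits the obstacle-proximity penalty and the directional frontier weighting of the full algorithm in [10]:

```python
from collections import deque

def exploration_transform(free_cells, goal_cells):
    """Wavefront (BFS) variant of the exploration transform: propagate travel
    cost outwards from all goal cells simultaneously over the free cells.
    Returns dict: cell -> cost to the nearest goal."""
    dist = {c: 0 for c in goal_cells if c in free_cells}
    queue = deque(dist)
    while queue:
        x, y = queue.popleft()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if n in free_cells and n not in dist:
                dist[n] = dist[(x, y)] + 1
                queue.append(n)
    return dist

def next_step(dist, cell):
    """Greedy descent: move to the 4-neighbour with the lowest cost."""
    x, y = cell
    options = [n for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
               if n in dist]
    return min(options, key=dist.get, default=cell)
```

In the normal mode the goal cells are the frontier cells; in the "inner exploration" mode they are the sampled past poses, and the cell with the maximum transform value is chosen as the new goal.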
5 Simulation
Experiments using real robots are time-consuming and costly, as the availability of appropriate scenarios and the wear and tear of robot systems have to be considered. This holds especially for USAR environments (like the RoboCup Rescue arena), as those also put high strain on robot hardware, and lab space is often limited.
5.1 Environments
To conveniently create simulated environments for experiments, the hector_nist_arenas_gazebo stack provides the tools that allow the creation of scenarios by composing the provided NIST standard test arena elements. Users can also easily add further elements. The hector_nist_arena_worlds package provides example arenas, including models of both the RoboCup German Open 2011 and the Thailand Rescue Robot Championship 2012 arenas. Gazebo does not support multispectral sensor simulation out of the box. To enable the simulation of thermal images, which are often used for the detection of victims that emit body heat, the hector_gazebo_thermal_camera package provides a Gazebo camera plugin that can be used for this task.
5.2 Ground Vehicles
The hlugv_gazebo package provides a model of the Hector Lightweight UGV system (Fig. 1, right). The robot uses differential drive for its six wheels and thus behaves similarly to the tracked robot systems commonly used in USAR scenarios.
6 Application and Impact
6.1 RoboCup
Within less than two years, hector_slam has become the de-facto standard SLAM system used with great success by many teams in RoboCup competitions. With Team Hector winning the Best in Class Autonomy award both at the RoboCup German Open 2012 and at RoboCup Mexico 2012, and Team BARTlab winning the award at the Thailand Rescue Robot Championship 2012, the applicability and adaptability of the system to challenging environments and different robot platforms has clearly been demonstrated. Fig. 4 (left) shows a real-world map learned using the presented modules with the Hector UGV system.

Fig. 4. Maps learned using the provided Hector modules. Left: Map learned using hector_slam and hector_exploration_planner during the RoboCup 2012 final mission with the Hector UGV robot. The robot started at the middle right position and autonomously explored the majority of the arena, finding 3 victims (red markers). The fourth victim was found using tele-operation for the last 4 meters of the travelled path. Blue markers indicate the positions of 35 QR codes that were detected autonomously by the robot. Right: Application of hector_slam to the ccny quadrotor lobby dataset [13].
6.2 Other Applications
Hector open source modules have been re-used for both research and commercial purposes2. hector_mapping was successfully deployed in different applications such as mapping littoral areas using an unmanned surface vehicle, mapping different environments using a handheld mapping system and building radio maps for wireless sensor networks [14]. Fig. 4 (right) shows the results when applied to the quadrotor dataset provided in [13]. The resulting map is consistent and comparable to the results in the original paper, showing the flexibility of the system.
7 Conclusion
A collection of open source modules has been presented for providing urban
search and rescue robots with abilities like mapping and exploration of disaster
sites and tracking of objects of interest. Many of the presented modules have
already been adopted by other research groups for RoboCup Rescue and beyond.
2 http://www.youtube.com/playlist?list=PL0E462904E5D35E29
Acknowledgments. This work was supported by the DFG Research Training
Group 1362. We thank contributing past and present team members, notably
Florian Berz, Florian Kunz, Mark Sollweck, Johannes Simon, Georg Stoll and
Laura Strickland.
References
1. Jacoff, A., Sheh, R., Virts, A.M., Kimura, T., Pellenz, J., Schwertfeger, S.,
Suthakorn, J.: Using competitions to advance the development of standard test
methods for response robots. In: Proc. Workshop on Performance Metrics for
Intelligent Systems. PerMIS ’12, New York, NY, USA, ACM (2012) 182–189
2. Quigley, M., Gerkey, B., Conley, K., Faust, J., Foote, T., Leibs, J., Berger, E.,
Wheeler, R., Ng, A.: ROS: an open-source Robot Operating System. In: ICRA
workshop on open source software. Volume 3. (2009)
3. Kohlbrecher, S., Petersen, K., Steinbauer, G., Maurer, J., Lepej, P., Uran, S., Ventura, R., Dornhege, C., Hertle, A., Sheh, R., Pellenz, J.: Community-Driven Development of Standard Software Modules for Search and Rescue Robots. In: IEEE Intern. Symposium on Safety, Security and Rescue Robotics (SSRR). (2012)
4. Graber, T., Kohlbrecher, S., Meyer, J., Petersen, K., von Stryk, O., Klingauf, U.: RoboCupRescue 2013 - Robot League Team Hector Darmstadt (Germany). Technical report, Technische Universität Darmstadt (2013)
5. Geiger, A., Ziegler, J., Stiller, C.: StereoScan: Dense 3d Reconstruction in Real-
time. In: IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany (2011)
6. Huang, A.S., Bachrach, A., Henry, P., Krainin, M., Maturana, D., Fox, D., Roy,
N.: Visual odometry and mapping for autonomous flight using an RGB-D camera.
In: International Symposium on Robotics Research (ISRR). (2011)
7. Kohlbrecher, S., Meyer, J., von Stryk, O., Klingauf, U.: A Flexible and Scalable
SLAM System with Full 3D Motion Estimation. In: IEEE International Symposium
on Safety, Security and Rescue Robotics. (2011)
8. Kleiner, A., Dornhege, C.: Real-time Localization and Elevation Mapping within Urban Search and Rescue Scenarios. Journal of Field Robotics 24(8-9) (2007) 723–745
9. Meyer, J., Schnitzspan, P., Kohlbrecher, S., Petersen, K., Schwahn, O., Andriluka,
M., Klingauf, U., Roth, S., Schiele, B., von Stryk, O.: A Semantic World Model
for Urban Search and Rescue Based on Heterogeneous Sensors. In: RoboCup 2010:
Robot Soccer World Cup XIV. Lecture Notes in Computer Science (2011) 180–193
10. Wirth, S., Pellenz, J.: Exploration transform: A stable exploring algorithm for
robots in rescue environments. In: IEEE International Workshop on Safety, Secu-
rity and Rescue Robotics (SSRR). (2007) 1–5
11. International Association of Fire Chiefs and National Fire Protection Association:
Fundamentals of fire fighter skills. Jones & Bartlett Learning (2008)
12. Van Turennout, P., Honderd, G., Van Schelven, L.: Wall-following control of a
mobile robot. In: IEEE International Conference on Robotics and Automation
(ICRA). (1992) 280–285
13. Dryanovski, I., Morris, W., Xiao, J.: An open-source pose estimation system for
micro-air vehicles. In: IEEE International Conference on Robotics and Automation
(ICRA). (2011) 4449–4454
14. Scholl, P.M., Kohlbrecher, S., Sachidananda, V., van Laerhoven, K.: Fast Indoor
Radio-Map Building for RSSI-based Localization Systems. In: Demo Paper, Inter-
national Conference on Networked Sensing Systems. (2012)