Conference Paper
Hector Open Source Modules for Autonomous
Mapping and Navigation with Rescue Robots
Stefan Kohlbrecher1, Johannes Meyer2, Thorsten Graber 2, Karen Petersen1,
Uwe Klingauf2, and Oskar von Stryk1
1Department of Computer Science, TU Darmstadt, Germany
2Department of Mechanical Engineering, TU Darmstadt, Germany
Abstract. Key abilities for robots deployed in urban search and rescue tasks include autonomous exploration of disaster sites and recognition of victims and other objects of interest. In this paper, we present related open source software modules for the development of such complex capabilities, which include hector_slam for self-localization and mapping in a degraded urban environment. All modules have originally been applied and tested successfully in the RoboCup Rescue competition. They have already been re-used and adopted by numerous international research groups for a wide variety of tasks. Recently, they have also become part of the basis of a broader initiative for key open source software modules for urban search and rescue robots.
1 Introduction
While robots used for Urban Search and Rescue (USAR) tasks in real disaster sites will remain mainly tele-operated for the immediate future, increasing their level of autonomy is an important area of research that has the potential to vastly improve the capabilities of robots used for disaster response.
The RoboCup Rescue project aims at advancing research towards more capable rescue robots [1]. Rescue robotics incorporates a vast range of capabilities needed to address the challenges involved, e.g. those resulting from a degraded environment. The availability of re-usable and adaptable open source software can significantly reduce development time and increase robot capabilities while simultaneously freeing resources and, thus, accelerating progress in the field.
In this paper, we present open source modules that provide the building
blocks for a system capable of autonomous exploration in USAR environments.
Different modules have been applied with great success in RoboCup Rescue and
other applications, both by Team Hector (Heterogeneous Cooperating Team of
Robots) of TU Darmstadt and numerous other international research groups.
Robot Operating System (ROS) [2] is used as the robot middleware for the
software modules. It has been widely adopted in robotics research and can be
considered a de-facto standard. The provided modules have also become part of
a recently established, broader initiative of the RoboCup Rescue community for
providing standard software modules useful for USAR tasks [3].
At the RoboCup competition, we mainly use the Ackermann-steered Hector UGV vehicle (Figure 1) [4]. While this steering method is in many ways more challenging than differential steering, we do not focus on these challenges in this paper; instead, we provide a simulated skid-steered vehicle based on the Hector Lightweight UGV (Figure 1) that bears more similarity to the differential drive vehicles commonly used for USAR tasks.
Fig. 1. Robots used by Team Hector. Left: Hector UGV based on Kyosho Twin Force
chassis. Right: Hector Lightweight UGV based on “Wild Thumper” robot kit.
1.1 Related Work
Research in Simultaneous Localization and Mapping (SLAM) and in the exploration of unknown environments has received much attention in recent years, with impressive results being demonstrated. Many of these results cannot be reproduced, however, due to several reasons, such as a lack of standardized interfaces, closed source software, and limited robustness to different (e.g. environmental) conditions.

Evaluation of state-of-the-art visual SLAM approaches [5], [6] in the standardized RoboCup Rescue setting showed promising results, but localization and mapping as consistent as with the system described in this paper could not be achieved so far, as ramps and other obstacles lead to jerky vehicle motion and pose significant challenges to any SLAM system.
The RoboCup Rescue Robot League competition provides especially challenging scenarios, as the competition setting enforces strict constraints on the time and environment for robot operation.
2 System Overview
This paper covers many of the higher-level nodes originally developed and tested
for the Hector UGV system, which can be used and adapted for other platforms
without or with only slight modifications (Fig. 2).

Fig. 2. System overview schematic. ROS nodes are represented by rectangles, topics by arrow-headed lines, and services by diamond-headed lines. Services originate at the service caller.

Hardware-dependent modules like camera and motor drivers or low-level controllers are not within the scope of this work. It is assumed that robots intended to use the described modules provide the necessary sensor data according to existing ROS standards and are steerable by publishing velocity commands. All nodes holding some sort of state information subscribe to the command topic, which is primarily used to reset the system whenever necessary.
The following sections describe the ROS nodes provided. Section 3 presents the open source software for 2D and 3D mapping, perception of objects of interest, and the generation of GeoTIFF maps to visualize the relevant information according to the RoboCup Rescue rules. The subsequent Section 4 introduces the modules required for planning and autonomous exploration. While not directly related to autonomous operation, being able to test individual modules and the robot's overall behavior in simulation in a close-to-reality scenario is crucial in order to detect bugs and possible failure cases early and allows shorter development cycles. We present our simulation environment in Section 5.
3 Localization and Mapping
Creating maps of the environment is important for two reasons: it allows first responders to perform situation assessment and to localize themselves inside buildings, and it enables path planning and high-level autonomous behaviors of the robot.
While purely geometric maps such as occupancy grid maps are useful for navigation and obstacle avoidance, additional semantic information like the location of objects of interest is very important for first responders and required for intelligent high-level autonomous behavior control.
3.1 Simultaneous Localization and Mapping (SLAM)
As disasters can significantly alter the environment compared to its pre-disaster state, USAR robots have to be considered as operating in unknown environments so as to be maximally robust against changes. This means the SLAM problem has to be solved to generate sufficiently accurate metric maps useful for navigation by first responders or a robot system.

For this task we provide hector_slam, consisting of the hector_mapping, hector_map_server, hector_geotiff and hector_trajectory_server modules. As odometry is notoriously unreliable in USAR scenarios, the system is designed to not require odometry data, instead relying purely on fast scan matching of LIDAR data at the full LIDAR update rate. Combined with an attitude estimation system and an optional pitch/roll unit to stabilize the laser scanner, the system can provide environment maps even if the ground is non-flat, as encountered in the RoboCup Rescue arena. A comprehensive discussion of hector_slam is available in [7].
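As a small illustration of why grid maps lend themselves to this kind of scan matching, the sketch below interpolates an occupancy grid bilinearly, yielding both a sub-cell map value and an analytic spatial gradient, the ingredients a Gauss-Newton alignment as in [7] relies on. Function name and layout are illustrative assumptions of ours, not the hector_mapping API:

```python
import numpy as np

def interp_map_value(grid, x, y):
    """Bilinearly interpolate an occupancy grid at a continuous
    position (x, y), returning the value and its spatial gradient.
    Sub-cell interpolation makes the map differentiable, which is
    what gradient-based scan alignment needs."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    u, v = x - x0, y - y0
    # Four surrounding cell values
    p00 = grid[y0, x0]
    p10 = grid[y0, x0 + 1]
    p01 = grid[y0 + 1, x0]
    p11 = grid[y0 + 1, x0 + 1]
    # Interpolated occupancy value
    val = (p00 * (1 - u) * (1 - v) + p10 * u * (1 - v)
           + p01 * (1 - u) * v + p11 * u * v)
    # Analytic gradient w.r.t. x and y
    dx = (p10 - p00) * (1 - v) + (p11 - p01) * v
    dy = (p01 - p00) * (1 - u) + (p11 - p10) * u
    return val, dx, dy
```

On a grid whose values form a ramp in x, the interpolated gradient is constant (1, 0), as expected.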
3.2 Pose Estimation
The estimation of the full 6 degrees of freedom robot pose and twist is realized in the hector_pose_estimation node, which implements an Extended Kalman Filter (EKF) and fuses measurements from an inertial measurement unit (IMU), the 2D pose error from the laser scan matcher and, optionally, additional localization sensors like satellite navigation receivers, magnetometers and barometric pressure sensors if available. The filter is based on a generic motion model for ground vehicles and is primarily driven by the IMU, without using the control inputs or wheel odometry, as these are typically unreliable due to wheel spin or side drift on uneven or slippery ground.
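The fusion principle can be illustrated with a deliberately reduced scalar example: the yaw angle is predicted by integrating the gyro rate and corrected with the scan matcher's heading estimate. This sketches only the predict/update idea; the actual hector_pose_estimation EKF estimates the full 6-DoF state with many more measurement models:

```python
def fuse_yaw(yaw, var, gyro_rate, dt, q, z, r):
    """One predict/update cycle of a scalar Kalman filter for yaw:
    predict with the IMU gyro rate (process noise q), correct with
    the scan matcher's heading z (measurement variance r)."""
    # Predict: integrate the gyro, inflate uncertainty
    yaw_pred = yaw + gyro_rate * dt
    var_pred = var + q
    # Update: blend in the scan-matcher measurement
    k = var_pred / (var_pred + r)          # Kalman gain
    yaw_new = yaw_pred + k * (z - yaw_pred)
    var_new = (1.0 - k) * var_pred
    return yaw_new, var_new
```

With equal prediction and measurement variances, the result lies halfway between the two estimates, and the variance halves.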
3.3 Elevation and Cost Mapping
In addition to the two-dimensional world representation obtained by the hector_slam package, USAR robots have to take the traversability of the environment into account. To this end we developed hector_elevation_mapping. This package fuses point cloud measurements obtained by an RGB-D camera such as the Microsoft Kinect into an elevation map. The elevation map is represented by a 2D grid map storing a height value with a corresponding variance for each cell. The cell measurement update is based on a local Kalman filter and adapted from the approach described in [8].

Finally, hector_costmap fuses the 2.5D elevation map with the 2D occupancy grid map provided by hector_mapping and computes a two-dimensional cost map for the exploration task.
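A much simplified sketch of such a fusion step follows: a cell becomes an obstacle if the occupancy map marks it occupied or if the height step to a neighbour exceeds a threshold. The binary cost values and the single step-threshold rule are illustrative assumptions of ours, not the hector_costmap implementation:

```python
import numpy as np

def build_costmap(occupancy, elevation, step_thresh):
    """Fuse a 2D occupancy grid with a 2.5D elevation map: a cell is
    an obstacle (1) if it is occupied OR if the height difference to
    any 4-neighbour exceeds step_thresh; otherwise it is free (0)."""
    h, w = elevation.shape
    cost = (occupancy > 0.5).astype(int)
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if abs(elevation[ny, nx] - elevation[y, x]) > step_thresh:
                        cost[y, x] = 1  # untraversable height step
    return cost
```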
Fig. 3. Examples for autonomous exploration. Left: Simulated Thailand Rescue Robot
Championship 2012 arena. Right: Simulated random maze.
3.4 Objects of Interest
Plain occupancy grid maps provide information about the environment geometry, but do not contain semantic information. We track information about objects of interest in a separate module, using a Gaussian representation for their position. The hector_object_tracker package is based on an approach described comprehensively in [9]. It subscribes to percept messages from victim, QR code or other object detectors, projects them into the map frame based on the robot's pose, camera view angle and calibration information, and solves the association and tracking problem for subsequent detections.
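The association step can be sketched as greedy nearest-neighbour matching with a distance gate: a percept either updates the closest existing hypothesis or spawns a new one. The real hector_object_tracker uses full Gaussian models [9]; the gate value and the plain averaging update below are simplifications of ours:

```python
import math

def associate(objects, px, py, gate=1.0):
    """Associate a map-frame percept (px, py) with the nearest
    tracked object within the gate distance; otherwise create a
    new object hypothesis. Returns the index of the object."""
    best, best_d = None, gate
    for i, (ox, oy) in enumerate(objects):
        d = math.hypot(px - ox, py - oy)
        if d < best_d:
            best, best_d = i, d
    if best is None:
        objects.append((px, py))      # new object of interest
        return len(objects) - 1
    ox, oy = objects[best]
    # Plain averaging stands in for a full Gaussian fusion step
    objects[best] = ((ox + px) / 2.0, (oy + py) / 2.0)
    return best
```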
3.5 GeoTIFF Maps
To achieve comparability between environment maps generated by different approaches, the GeoTIFF format is used as the standard map format in the RoboCup Rescue League competition. Using geo-reference and scale information, maps can be overlaid on each other using existing tools and their accuracy compared. The hector_geotiff package allows generating GeoTIFF maps compliant with the RoboCup Rescue rules, which can be annotated through a plugin interface. Plugins for adding the path travelled by the robot as well as victim and QR code locations are provided. The node can run onboard a robot system and save maps to permanent storage based on a timer, reducing the likelihood of map loss in case of connectivity problems. All maps shown in figures in this paper have been generated using the hector_geotiff node.
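The geo-referencing boils down to an affine mapping between world coordinates and image pixels, with the y axis flipped because image rows grow downwards. A minimal sketch (parameter names are ours, not the hector_geotiff interface):

```python
def world_to_pixel(x, y, origin_x, origin_y, resolution, height_px):
    """Map a world coordinate (metres) to a pixel in a geo-referenced
    image, given the map origin, the resolution (metres per pixel)
    and the image height in pixels."""
    px = int((x - origin_x) / resolution)
    py = height_px - 1 - int((y - origin_y) / resolution)  # flip y
    return px, py
```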
4 Planning and Exploration
While a plethora of research results are available for exploration using autonomous robots, there are very few methods readily available for re-use as open source software. We provide the hector_exploration_planner, which is based on the exploration transform approach presented in [10]. In our exploration planner, frontiers towards the front of the robot are weighted favorably to prevent frequent, costly turning of the robot. Inspired by wall-following techniques used by firefighters [11], a "follow wall" trajectory can also be generated using the exploration planner. The planned trajectory is generated based on map data and thus does not exhibit the weaknesses associated with reactive approaches that only consider raw sensor data [12]. High-level behaviors can thus switch between the exploration transform and the wall-following approach at any time. For the case that the environment has been completely explored, the planner has been extended with an "inner exploration" mode. Here, the traversed path of the robot, containing a discrete set of past robot poses, is retrieved from the hector_trajectory_server node. These positions are sampled based on their distance from each other and added to a list, which is passed to the exploration transform algorithm as a list of goal points. An exhaustive search for the exploration transform cell with the highest value then yields a point that is farthest away from the previous path and safe for the robot to reach.
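The core of the exploration transform can be sketched as a breadth-first cost propagation from the goal cells (frontiers, or past-trajectory points in inner-exploration mode) over free space; following the cost gradient downhill then yields an exploration path. The version in [10] additionally weights cells by obstacle proximity, which this sketch omits:

```python
from collections import deque

def exploration_transform(free, frontiers):
    """Breadth-first propagation of path costs from frontier cells
    over the set of free cells: each reachable free cell receives
    the distance (in cells) to the nearest frontier."""
    dist = {c: 0 for c in frontiers}
    queue = deque(frontiers)
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) in free and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist
```

On this transform, the "inner exploration" step described above corresponds to picking the free cell with the highest value, i.e. the cell farthest from all goal points.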
5 Simulation
Experiments using real robots are time-consuming and costly, as the availability of appropriate scenarios and the wear and tear of robot systems have to be considered. This holds especially for USAR environments (like the RoboCup Rescue arena), as these put high strain on robot hardware, and lab space is often limited.
5.1 Environments
To conveniently create simulated environments for experiments, the hector_nist_arenas_gazebo stack provides the necessary tools that allow the creation of scenarios by composing the provided NIST standard test arena elements; users can also easily add further elements. The hector_nist_arena_worlds package provides example arenas, including models of both the RoboCup German Open 2011 and the Thailand Robot Championship 2012 Rescue arenas. Gazebo does not natively support multispectral sensor simulation. To enable the simulation of thermal images, often used for the detection of victims that emit body heat, the hector_gazebo_thermal_camera package provides a Gazebo camera plugin that can be used for this task.
5.2 Ground Vehicles
The hlugv_gazebo package provides a model of the Hector Lightweight UGV system (Fig. 1, right). The robot uses differential drive for its six wheels and thus behaves similarly to the tracked robot systems commonly applied in USAR scenarios.
6 Application and Impact
6.1 RoboCup
Within less than two years, hector_slam has become the de-facto standard SLAM system used with great success by many teams in RoboCup competitions. With Team Hector winning the Best in Class Autonomy award both at the RoboCup German Open 2012 and at RoboCup Mexico 2012, and Team BARTlab winning the award at the Thai Rescue Robot Championship 2012, the applicability and adaptability of the system to challenging environments and different robot platforms has clearly been demonstrated. Fig. 4 (left) shows a real-world map learned using the presented modules with the Hector UGV system.

Fig. 4. Maps learned using the provided Hector modules. Left: Map learned using hector_slam and hector_exploration_planner at the RoboCup 2012 final mission with the Hector UGV robot. The robot started at the middle right position and autonomously explored the majority of the arena, finding 3 victims (red markers). The fourth victim was found using tele-operation for the last 4 meters of the travelled path. Blue markers indicate the positions of 35 QR codes that were detected autonomously by the robot. Right: Application of hector_slam to the ccny quadrotor lobby dataset [13].
6.2 Other Applications
Hector open source modules have been re-used for both research and commercial purposes. hector_mapping was successfully deployed in different applications such as mapping of littoral areas using an unmanned surface vehicle, mapping different environments using a handheld mapping system, and building radio maps for wireless sensor networks [14]. Fig. 4 (right) shows results when applied to the quadrotor datasets provided in [13]. The resulting map is consistent and comparable to the results in the original paper, showing the flexibility of the system.
7 Conclusion
A collection of open source modules has been presented for providing urban
search and rescue robots with abilities like mapping and exploration of disaster
sites and tracking of objects of interest. Many of the presented modules have
already been adopted by other research groups for RoboCup Rescue and beyond.
Acknowledgments. This work was supported by the DFG Research Training
Group 1362. We thank contributing past and present team members, notably
Florian Berz, Florian Kunz, Mark Sollweck, Johannes Simon, Georg Stoll and
Laura Strickland.
References

1. Jacoff, A., Sheh, R., Virts, A.M., Kimura, T., Pellenz, J., Schwertfeger, S., Suthakorn, J.: Using competitions to advance the development of standard test methods for response robots. In: Proc. Workshop on Performance Metrics for Intelligent Systems. PerMIS '12, New York, NY, USA, ACM (2012) 182–189
2. Quigley, M., Gerkey, B., Conley, K., Faust, J., Foote, T., Leibs, J., Berger, E.,
Wheeler, R., Ng, A.: ROS: an open-source Robot Operating System. In: ICRA
workshop on open source software. Volume 3. (2009)
3. Kohlbrecher, S., Petersen, K., Steinbauer, G., Maurer, J., Lepej, P., Uran, S., Ventura, R., Dornhege, C., Hertle, A., Sheh, R., Pellenz, J.: Community-Driven Development of Standard Software Modules for Search and Rescue Robots. In: IEEE Intern. Symposium on Safety, Security and Rescue Robotics (SSRR). (2012)
4. Graber, T., Kohlbrecher, S., Meyer, J., Petersen, K., von Stryk, O., Klingauf,
U.: RoboCupRescue 2013 - Robot League Team Hector Darmstadt (Germany).
Technical report, Technische Universität Darmstadt (2013)
5. Geiger, A., Ziegler, J., Stiller, C.: StereoScan: Dense 3d Reconstruction in Real-
time. In: IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany (2011)
6. Huang, A.S., Bachrach, A., Henry, P., Krainin, M., Maturana, D., Fox, D., Roy,
N.: Visual odometry and mapping for autonomous flight using an RGB-D camera.
In: International Symposium on Robotics Research (ISRR). (2011)
7. Kohlbrecher, S., Meyer, J., von Stryk, O., Klingauf, U.: A Flexible and Scalable
SLAM System with Full 3D Motion Estimation. In: IEEE International Symposium
on Safety, Security and Rescue Robotics. (2011)
8. Kleiner, A., Dornhege, C.: Real-time Localization and Elevation Mapping within
Urban Search and Rescue Scenarios. Journal of Field Robotics (8-9) (2007) 723–745
9. Meyer, J., Schnitzspan, P., Kohlbrecher, S., Petersen, K., Schwahn, O., Andriluka,
M., Klingauf, U., Roth, S., Schiele, B., von Stryk, O.: A Semantic World Model
for Urban Search and Rescue Based on Heterogeneous Sensors. In: RoboCup 2010:
Robot Soccer World Cup XIV. Lecture Notes in Computer Science (2011) 180–193
10. Wirth, S., Pellenz, J.: Exploration transform: A stable exploring algorithm for
robots in rescue environments. In: IEEE International Workshop on Safety, Secu-
rity and Rescue Robotics (SSRR). (2007) 1–5
11. International Association of Fire Chiefs and National Fire Protection Association:
Fundamentals of fire fighter skills. Jones & Bartlett Learning (2008)
12. Van Turennout, P., Honderd, G., Van Schelven, L.: Wall-following control of a
mobile robot. In: IEEE International Conference on Robotics and Automation
(ICRA). (1992) 280–285
13. Dryanovski, I., Morris, W., Xiao, J.: An open-source pose estimation system for
micro-air vehicles. In: IEEE International Conference on Robotics and Automation
(ICRA). (2011) 4449–4454
14. Scholl, P.M., Kohlbrecher, S., Sachidananda, V., van Laerhoven, K.: Fast Indoor
Radio-Map Building for RSSI-based Localization Systems. In: Demo Paper, Inter-
national Conference on Networked Sensing Systems. (2012)