Conference Paper

Uncertain map making in natural environments

Laboratoire d'Automatique et d'Analyse des Systèmes (LAAS), CNRS, Toulouse
DOI: 10.1109/ROBOT.1996.506847. Conference: Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Volume 2
Source: IEEE Xplore


Building on previous work on incremental natural scene modelling for mobile
robot navigation, we focus in this paper on the problem of representing and
managing uncertainties. The environment is composed of ground regions and
objects. Objects (e.g., rocks) are represented by an uncertain state vector
(location) and a variance-covariance matrix, and their shapes are approximated
by ellipsoids. Landmarks are defined as objects with specific properties
(discrimination, accuracy) that allow them to be used for robot localization
and for anchoring the environment model. Model updating is based on an
extended Kalman filter. Experimental results show the construction of a
consistent model over tens of meters.
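
The abstract states that object locations are maintained as uncertain state vectors with variance-covariance matrices and refined with an extended Kalman filter. As a minimal, hypothetical sketch of that kind of update (not the paper's implementation; the direct observation model and all numeric values below are invented), the following applies one EKF measurement step to a single landmark location:

```python
import numpy as np

def ekf_update(x, P, z, R, h, H):
    """One extended Kalman filter measurement update.

    x : (n,) prior state estimate (here, a landmark location)
    P : (n, n) prior variance-covariance matrix
    z : (m,) new observation of the landmark
    R : (m, m) observation noise covariance
    h : callable giving the predicted observation h(x)
    H : (m, n) Jacobian of h evaluated at x
    """
    y = z - h(x)                           # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y                      # corrected location estimate
    P_new = (np.eye(len(x)) - K @ H) @ P   # reduced uncertainty
    return x_new, P_new

# Hypothetical example: a rock's 3D location observed directly in the map frame.
x = np.array([4.0, 1.5, 0.2])              # prior location estimate (m)
P = np.diag([0.20, 0.20, 0.05])            # prior variance-covariance
z = np.array([4.3, 1.4, 0.25])             # new measurement (m)
R = np.diag([0.10, 0.10, 0.02])            # sensor noise covariance
x, P = ekf_update(x, P, z, R, lambda s: s, np.eye(3))
```

The same update form is reused each time a landmark is re-observed; in the paper's model, the ellipsoidal shape approximation of each object sits alongside such a location estimate.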

Citing articles (context excerpts and abstracts):
    • "Work by Leonard [13], Manyika [14], and others demonstrated increasingly sophisticated robot mapping and localization using related EKF techniques, but the single state vector and " full covariance " approach of Smith et al. did not receive widespread attention until the mid to late 1990s, perhaps when computing power reached the point where it could be practically tested. Several early implementations [15], [16], [17], [18], [19] proved the single EKF approach for building modest-sized maps in real robot systems and demonstrated convincingly the importance of maintaining estimate correlations. These successes gradually saw very widespread adoption of the EKF as the core estimation technique in SLAM and its generality as a Bayesian solution was understood across a variety of different platforms and sensors. "
    ABSTRACT: We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
    IEEE Transactions on Pattern Analysis and Machine Intelligence 07/2007; 29(6):1052-67. DOI: 10.1109/TPAMI.2007.1049
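
The context excerpt from the MonoSLAM paper above stresses the single state vector, full-covariance EKF formulation and the importance of maintaining estimate correlations. The sketch below (placeholder numbers, tied to none of the cited systems) illustrates the point: when robot pose and landmarks share one state vector and a dense joint covariance, observing one landmark also tightens the estimates it is correlated with.

```python
import numpy as np

# Joint SLAM state: robot pose (x, y, theta) followed by two 2D landmarks.
x = np.array([0.0, 0.0, 0.0, 2.0, 1.0, 5.0, -1.0])

# A dense, positive-definite joint covariance (placeholder values); its
# off-diagonal blocks carry robot-landmark and landmark-landmark correlations.
rng = np.random.default_rng(0)
A = rng.normal(size=(7, 7))
P = A @ A.T + 0.5 * np.eye(7)

# Hypothetical direct observation of the first landmark in the map frame.
H = np.zeros((2, 7))
H[0, 3] = H[1, 4] = 1.0
z = np.array([2.1, 0.9])
R = 0.05 * np.eye(2)

S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)       # gain has non-zero rows for all states
x = x + K @ (z - H @ x)
P = (np.eye(7) - K @ H) @ P

# Variances of the robot pose and of the second (unobserved) landmark also
# shrink, which is exactly what dropping the cross-covariances would forfeit.
print(np.round(np.diag(P), 3))
```
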
    • "Different platforms and methodologies have been used for the terrain mapping task. In [13] an incremental mapping algorithm using a 3D laser range finder is proposed focusing on the representation of the uncertainties in the pose of landmarks. The approach presented by [14] builds terrain maps from sequences of unregistered low altitude stereo vision image pairs. "
    ABSTRACT: This paper presents a new approach for terrain mapping and classification using mobile robots with 2D laser range finders. Our algorithm generates 3D terrain maps and classifies navigable and non-navigable regions on those maps using Hidden Markov models. The maps generated by our approach can be used for path planning, navigation, local obstacle avoidance, detection of changes in the terrain, and object recognition. We propose a map segmentation algorithm based on Markov Random Fields, which removes small errors in the classification. In order to validate our algorithms, we present experimental results using two robotic platforms.
    Proceedings of the 2005 IEEE International Conference on Robotics and Automation, ICRA 2005, April 18-22, 2005, Barcelona, Spain; 01/2005
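
The abstract above mentions a Markov Random Field based segmentation step that removes small errors in the navigable/non-navigable classification. As a generic, hypothetical illustration of that idea (an ICM-style relaxation of a Potts prior, not the authors' algorithm, with made-up weights), a smoothing pass over a binary traversability grid might look like this:

```python
import numpy as np

def smooth_labels(labels, unary_weight=1.0, pairwise_weight=0.8, iters=3):
    """ICM-style smoothing of a binary navigable (0) / non-navigable (1) grid.

    Each cell keeps its observed label unless disagreeing neighbours outweigh
    it, which removes small isolated classification errors. The weights are
    placeholders, not values from the cited work.
    """
    obs = labels.copy()
    cur = labels.copy()
    h, w = labels.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                neigh = []
                if i > 0:
                    neigh.append(cur[i - 1, j])
                if i < h - 1:
                    neigh.append(cur[i + 1, j])
                if j > 0:
                    neigh.append(cur[i, j - 1])
                if j < w - 1:
                    neigh.append(cur[i, j + 1])
                costs = []
                for lab in (0, 1):
                    data = unary_weight * (lab != obs[i, j])
                    prior = pairwise_weight * sum(lab != n for n in neigh)
                    costs.append(data + prior)
                cur[i, j] = int(np.argmin(costs))
    return cur

# Example: a lone "non-navigable" cell inside a navigable area gets flipped.
grid = np.zeros((5, 5), dtype=int)
grid[2, 2] = 1
print(smooth_labels(grid))
```
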
    • "Indeed, if the consideration of 3D rover motions (i.e., six state parameters) is not a problem, the landmark extraction and modeling from the acquired data is much more challenging; unstructured environments do not contain features that can easily be represented by geometric equations (e.g., lines or planes), with a covariance matrix associated to their parameters. One of our first works in rover navigation has been devoted to this problem, using obstacle peaks as landmarks (a landmark is here a single 3D point), detected thanks to the application of gradient operators in depth images, and an extended Kalman filter to refine the state vector (Betge-Brezetz, Chatila, and Devy 1995; Betge-Brezetz et al. 1996). The results were satisfactory, but not applicable in every kind of terrain, as the peaks had to be clearly separated from the ground to be faithfully detected. "
    ABSTRACT: Autonomous long-range navigation in partially known planetary-like terrains is still an open challenge for robotics. Navigating hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to control its motions and to localize itself as it moves. All these activities have to be scheduled, triggered, controlled and interrupted according to the rover context. In this paper, we briefly review some functionalities that have been developed in our laboratory, and implemented on board the Marsokhod model robot, Lama. We then present how the various concurrent instances of the perception, localization and motion generation functionalities are integrated. Experimental results illustrate the functionalities throughout the paper.
    The International Journal of Robotics Research 10/2002; 21(10). DOI: 10.1177/027836402128964152
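
The context excerpt from the Lama rover paper above recalls the approach of the present paper's authors: obstacle peaks, detected with gradient operators in depth images, serve as point landmarks whose state vectors are then refined with an extended Kalman filter. A rough, hypothetical sketch of such gradient-based candidate detection (threshold and synthetic data invented for illustration, not the original detector) is:

```python
import numpy as np

def candidate_peak_landmarks(depth, grad_thresh=0.4):
    """Return (row, col, depth) candidates where the depth image changes sharply.

    Obstacle peaks that stand out clearly from the ground produce strong
    depth gradients; the threshold here is a placeholder, not a value
    taken from the cited work.
    """
    grad_row, grad_col = np.gradient(depth.astype(float))
    magnitude = np.hypot(grad_row, grad_col)
    rows, cols = np.nonzero(magnitude > grad_thresh)
    return np.stack([rows, cols, depth[rows, cols]], axis=1)

# Synthetic depth image: flat ground at 5 m with a nearer "rock" at 3 m.
depth = np.full((20, 20), 5.0)
depth[8:12, 8:12] = 3.0
print(candidate_peak_landmarks(depth)[:5])
```

Each such candidate could then be tracked over time with an EKF update of the form sketched after the abstract above.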