Conference Paper

Natural scene understanding for mobile robot navigation

Lab. d'Autom. et d'Anal. des Syst., CNRS, Toulouse
DOI: 10.1109/ROBOT.1994.351400 Conference: Proceedings of the 1994 IEEE International Conference on Robotics and Automation
Source: IEEE Xplore

ABSTRACT In this paper, we focus on some perceptual functions required by a generic "GOTO" task in natural environments. In previous works, only geometrical modeling was used to deal with two fundamental tasks: landmark extraction and recognition for sensor-based motion control or robot localization, and terrain modeling for motion planning. Geometrical representations alone lead to a bulky model and, after some iterations, to a combinatorial explosion. We present here higher-level representations: from a range image, given some assumptions on the perceived scene (even ground with few objects), we propose a segmentation algorithm to extract simple semantic representations of the ground and the objects; we then analyze the relative positions of the objects to build a topological scene description. Both models constitute the scene model needed for further incremental environment modeling.

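The segmentation-plus-topology pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, assuming the segmentation step already yields object centroids on the ground plane; the object names, coordinates, and distance threshold below are invented for the example and are not the paper's actual parameters.

```python
# Sketch of a topological scene description: threshold pairwise distances
# between segmented-object centroids into "near"/"far" relations.
import math

def topological_relations(objects, near_threshold=2.0):
    """Return a dict mapping sorted object-name pairs to 'near' or 'far'.

    objects: dict name -> (x, y) centroid on the ground plane.
    near_threshold: distance below which two objects count as neighbours
    (an assumption for illustration).
    """
    relations = {}
    names = sorted(objects)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = math.dist(objects[a], objects[b])
            relations[(a, b)] = "near" if d < near_threshold else "far"
    return relations

scene = {"rock1": (0.0, 0.0), "rock2": (1.0, 1.0), "tree": (5.0, 0.0)}
relations = topological_relations(scene)
```

A real system would of course derive richer relations (occlusion, support, traversability) from the range data, but the adjacency-graph idea is the same.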
  • ABSTRACT: In this paper, a method for automatically building 3D virtual worlds corresponding to the objects detected in a real environment is presented. The proposed method can be used in many applications, such as Virtual Reality, Augmented Reality, remote inspection, and Virtual World generation. Our method requires an operator equipped with a stereo camera and moving in an office environment. The operator takes a picture of the environment and, with the proposed method, the Regions of Interest (ROI) are extracted from each picture, their content is classified, and 3D virtual scenarios are reconstructed using icons that resemble the classified object categories. ROI extraction and pose and height estimation of the classified objects are performed using stereo vision. The ROIs are obtained using a Dempster-Shafer technique for fusing different kinds of information detected in the image, such as Speeded Up Robust Features (SURF) and depth data obtained with the stereo camera. Experimental results are presented in office environments.
    SMVC '10: Proceedings of the 2010 ACM workshop on Surreal media and virtual cloning; 10/2010
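Dempster's rule of combination, which underlies the Dempster-Shafer fusion step mentioned in the abstract above, can be sketched compactly. The object classes and mass values below are invented for illustration and do not come from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Masses of intersecting focal elements reinforce each other; mass on
    contradictory pairs (empty intersection) is discarded and the rest
    renormalized.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict                      # normalization factor
    return {s: v / k for s, v in combined.items()}

theta = frozenset({"chair", "desk"})        # frame of discernment (invented)
m_appearance = {frozenset({"chair"}): 0.6, theta: 0.4}   # e.g. a SURF cue
m_depth = {frozenset({"chair"}): 0.5, frozenset({"desk"}): 0.3, theta: 0.2}
fused = dempster_combine(m_appearance, m_depth)
```

Here the two cues agree on "chair", so after renormalization most of the fused belief mass concentrates on that singleton.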
  • ABSTRACT: Traditional simultaneous localization and mapping (SLAM) algorithms have been used to great effect in flat, indoor environments such as corridors and offices. We demonstrate that, with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments. In particular, we use data acquired with a 3D laser range finder. We use a simple segmentation algorithm to separate the data stream into distinct point clouds, each referenced to a vehicle position. The SLAM technique we then adopt inherits much from 2D delayed-state (or scan-matching) SLAM, in that the state vector is an ever-growing stack of past vehicle positions and inter-scan registrations are used to form measurements between them. The registration algorithm used is a novel combination of previous techniques, carefully balancing the need for maximally wide convergence basins, robustness, and speed. In addition, we introduce a novel post-registration classification technique to detect matches that have converged to incorrect local minima.
    Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on; 06/2006
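The inter-scan registration described in the abstract above reduces, once point correspondences are fixed, to a closed-form rigid alignment. The sketch below shows that core step (2D, SVD-based) under the assumption that correspondences are already known; real scan matchers iterate this alignment with correspondence search, and the paper's actual registration algorithm is more elaborate.

```python
import numpy as np

def rigid_register_2d(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 2) arrays of corresponding 2D points.
    Uses the SVD (Kabsch) solution on mean-centered point sets.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Stacking such inter-scan transforms as measurements between past poses is what turns pairwise registration into the delayed-state SLAM formulation the abstract refers to.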
  • ABSTRACT: A cross-country rover cannot, in general, rely on permanent and immediate communications with a control station. This precludes direct teleoperation of its motions; it must therefore be endowed with a large degree of autonomy in its navigation. We have designed and experimented, with the mobile robot ADAM, a complete system for autonomous navigation in a natural unstructured environment, which we describe in this paper. The approach is primarily based on adapting the perception and motion actions to the environment and to the status of the robot. The navigation task involves several levels of reasoning, several environment representations, and several action modes. The robot is able to select sub-goals, navigation modes, complex trajectories, and perception actions according to the situation.
    04/2006: pages 1-19;