Conference Paper

Natural scene understanding for mobile robot navigation

Lab. d'Autom. et d'Anal. des Syst., CNRS, Toulouse
DOI: 10.1109/ROBOT.1994.351400
Conference: Proceedings of the 1994 IEEE International Conference on Robotics and Automation
Source: IEEE Xplore

ABSTRACT In this paper, we focus on some of the perceptual functions required by a generic
“GOTO” task in natural environments. In previous work, only geometrical
modeling was used to deal with two fundamental tasks: landmark extraction and
recognition for sensor-based motion control or robot localization, and terrain
modeling for motion planning. Geometrical representations alone lead to a bulky
model and, after some iterations, to a combinatorial explosion. Here we present
higher-level representations: from a range image, given some assumptions on the
perceived scene (even ground with few objects), we propose a segmentation
algorithm that extracts simple semantic representations of the ground and the
objects; we then analyze the relative positions of the objects to build a
topological scene description. Together, these two models constitute the scene
model needed for further incremental environment modeling.
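
To make the approach concrete, here is a minimal sketch (not the paper's implementation) of the two steps the abstract describes: labeling a range-derived point cloud as ground versus objects under the even-ground assumption, then deriving a topological description from the objects' relative positions. The thresholds, the grid-based clustering, and the "near" relation are illustrative assumptions.

# Minimal sketch, assuming a point cloud (N, 3) in the robot frame with z up.
import numpy as np
from collections import defaultdict

def segment_scene(points, ground_height=0.10, cell=0.25):
    # Even-ground assumption: low points are ground, the rest belong to objects.
    ground = points[points[:, 2] < ground_height]
    obstacles = points[points[:, 2] >= ground_height]

    # Cluster obstacle points via connected occupied cells of a 2-D grid.
    cells = {tuple(c) for c in np.floor(obstacles[:, :2] / cell).astype(int)}
    labels, objects = {}, []
    for seed in cells:
        if seed in labels:
            continue
        stack, component = [seed], []
        labels[seed] = len(objects)
        while stack:
            cx, cy = stack.pop()
            component.append((cx, cy))
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in cells and nb not in labels:
                    labels[nb] = len(objects)
                    stack.append(nb)
        objects.append(component)

    # Semantic-level model: one centroid per object instead of raw points.
    centroids = [cell * (np.mean(np.array(c), axis=0) + 0.5) for c in objects]
    return ground, centroids

def topological_description(centroids, near=1.5):
    # Simple relation between objects: which pairs lie "near" each other.
    relations = defaultdict(list)
    for i, a in enumerate(centroids):
        for j, b in enumerate(centroids):
            if i < j and np.linalg.norm(a - b) < near:
                relations[i].append(j)
    return dict(relations)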

Related publications:

  • ABSTRACT: This paper concerns the exploration of a natural environment by a mobile robot equipped with both a color video camera and a stereo-vision system. We focus on the benefits of such a multi-sensory system for the navigation of a robot in an a priori unknown environment, including (1) the incremental construction of a landmark-based model, and the use of these landmarks for (2) the 3-D localization of the mobile robot and (3) a sensor-based navigation mode. For robot localization, a slow process and a fast one are executed simultaneously during the robot's motions. In the modeling process (currently 0.1 Hz), the global landmark-based model is built incrementally and the robot's situation can be estimated from discriminant landmarks selected among the objects detected in the range data. In the tracking process (currently 4 Hz), selected landmarks are tracked in the visual data; the tracking results are used to simplify the matching between landmarks in the modeling process. Finally, a sensor-based visual navigation mode, based on the same landmark selection and tracking, is also presented; in order to navigate over a long robot motion, different landmarks (targets) can be selected as a sequence of sub-goals that the robot must successively reach.
    Autonomous Robots, vol. 13, pp. 143-168, 2002.
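
    A minimal sketch of the two-rate perception scheme this abstract describes, with a slow modeling loop (about 0.1 Hz) and a fast tracking loop (about 4 Hz) running concurrently; the callables passed in are hypothetical placeholders, not the paper's API.

    import threading

    def run_periodically(period_s, step, stop):
        # Call step() repeatedly, once per period, until stop is set.
        while not stop.is_set():
            step()
            stop.wait(period_s)

    def start_perception(track_landmarks, update_global_model):
        stop = threading.Event()
        fast = threading.Thread(target=run_periodically,
                                args=(0.25, track_landmarks, stop))      # ~4 Hz tracking
        slow = threading.Thread(target=run_periodically,
                                args=(10.0, update_global_model, stop))  # ~0.1 Hz modeling
        fast.start(); slow.start()
        return stop   # the caller sets this event to end both loops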
  • ABSTRACT: This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The proposed method combines probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological perspective, in terms of objects and the relationships between them. The hierarchical representation that we propose permits efficient and reliable modeling of the information that the mobile agent perceives from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of real-world images that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall effort to build cognitive robot companions.
    Proc. SPIE, 2005.
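
    As an illustration only (class and field names are assumptions, not the paper's OGM formalism), an object-graph representation of the kind described above can be sketched as nodes carrying a recognition belief and edges carrying a spatial relation.

    from dataclasses import dataclass, field

    @dataclass
    class ObjectNode:
        label: str        # e.g. "door", "table"
        belief: float     # probability that the label is correct

    @dataclass
    class ObjectGraph:
        nodes: list = field(default_factory=list)
        edges: dict = field(default_factory=dict)   # (i, j) -> relation string

        def add_object(self, label, belief):
            self.nodes.append(ObjectNode(label, belief))
            return len(self.nodes) - 1

        def relate(self, i, j, relation):
            self.edges[(i, j)] = relation           # e.g. "left-of", "on-top-of"

    g = ObjectGraph()
    table = g.add_object("table", 0.9)
    chair = g.add_object("chair", 0.7)
    g.relate(chair, table, "next-to")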
  • ABSTRACT: In this paper, a method for automatically building 3D virtual worlds that correspond to the objects detected in a real environment is presented. The proposed method can be used in many applications, such as Virtual Reality, Augmented Reality, remote inspection, and virtual-world generation. Our method requires an operator equipped with a stereo camera and moving in an office environment. The operator takes a picture of the environment and, with the proposed method, Regions of Interest (ROI) are extracted from each picture, their content is classified, and 3D virtual scenarios are reconstructed using icons which resemble the classified object categories. ROI extraction and the pose and height estimation of the classified objects are performed using stereo vision. The ROIs are obtained using a Dempster-Shafer technique for fusing different information detected from the image, such as Speeded Up Robust Features (SURF) and depth data obtained with the stereo camera. Experimental results are presented in office environments.
    SMVC '10: Proceedings of the 2010 ACM Workshop on Surreal Media and Virtual Cloning, October 2010.
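
    The Dempster-Shafer fusion step mentioned above can be illustrated with Dempster's rule of combination over a two-element frame {object, background}; the mass values below are made-up examples, and the paper fuses image-level SURF and depth evidence rather than single numbers.

    def combine(m1, m2):
        # Dempster's rule of combination; masses are dicts keyed by frozensets
        # of hypotheses drawn from the frame {"obj", "bg"}.
        combined, conflict = {}, 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb               # mass assigned to the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    feature_evidence = {frozenset({"obj"}): 0.6, frozenset({"obj", "bg"}): 0.4}
    depth_evidence   = {frozenset({"obj"}): 0.5, frozenset({"bg"}): 0.2,
                        frozenset({"obj", "bg"}): 0.3}
    print(combine(feature_evidence, depth_evidence))  # normalized fused masses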