Conference Paper

Outdoor visual path following experiments.

DOI: 10.1109/IROS.2007.4399247 Conference: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 29 - November 2, 2007, Sheraton Hotel and Marina, San Diego, California, USA
Source: DBLP

ABSTRACT In this paper the performance of a topological-metric visual path following framework is investigated in different environments. The framework relies on a monocular camera as the only sensing modality. The path is represented as a series of reference images such that each neighboring pair shares a number of common landmarks. Local 3D geometries are reconstructed between neighboring reference images to enable fast feature prediction, which allows recovery from tracking failures. During navigation the robot is controlled using image-based visual servoing. The experiments show that the framework is robust against moving objects and moderate illumination changes. It is also shown that the system is capable of on-line path learning.
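The navigation scheme summarized above (a path of reference images sharing landmarks, proportional image-based servoing, switching between neighboring references) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' implementation: each "image" is reduced to a dict of landmark coordinates, the control law is a simple proportional rule on the mean horizontal feature error, and the gain and switching threshold are made up.

```python
# Minimal sketch of topological-metric visual path following.
# A path is an ordered list of reference "images", each a dict
# mapping landmark id -> (x, y) image coordinates in pixels.

def matched_error(current, reference):
    """Mean horizontal offset of landmarks common to both images."""
    common = current.keys() & reference.keys()
    if not common:
        return None  # tracking failure: no shared landmarks
    return sum(current[l][0] - reference[l][0] for l in common) / len(common)

def steer(current, reference, gain=0.01):
    """Proportional image-based steering command (rad/s)."""
    err = matched_error(current, reference)
    return None if err is None else -gain * err

def reached(current, reference, tol=5.0):
    """Switch to the next reference image when the error is small."""
    err = matched_error(current, reference)
    return err is not None and abs(err) < tol
```

During navigation the robot would servo toward each reference image in turn, advancing along the topological path whenever `reached` fires; the local 3D geometry mentioned in the abstract would be what re-predicts landmark positions when `matched_error` returns `None`.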

Available from: François Chaumette, Jun 20, 2015
  • Source
    ABSTRACT: This paper deals with improving visual path following by varying the velocity according to the coordinates of feature points. Visual path following first teaches a driving path by selecting milestone images, then follows the route by comparing each milestone image with the current image. We follow the visual path following algorithm of Chen and Birchfield [8], which uses fixed translational and rotational velocities. We propose an algorithm that adapts the translational velocity to the driving conditions: the velocity is adjusted according to the variation of the feature point coordinates in the image. Experimental results covering diverse indoor cases show the feasibility of the proposed algorithm.
    04/2011; 17(4). DOI:10.5302/J.ICROS.2011.17.4.375
  • Source
    Proceedings of the European Conference on Mobile Robotics (ECMR); 01/2011
  • Source
    ABSTRACT: Visual teach-and-repeat navigation enables long-range rover autonomy without solving the simultaneous localization and mapping problem or requiring an accurate global reconstruction. During a learning phase, the rover is piloted along a route, logging images. After post-processing, the rover is able to repeat the route in either direction any number of times. This paper describes and evaluates the localization algorithm at the core of a teach-and-repeat system that has been tested on over 32 kilometers of autonomous driving in an urban environment and at a planetary analog site in the High Arctic. We show how a stereo visual odometry pipeline can be extended to become a mapping and localization system, then evaluate the performance of the algorithm with respect to accuracy, robustness to path-tracking error, and the effects of lighting.
    Robotics and Automation (ICRA), 2010 IEEE International Conference on; 06/2010
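The teach-and-repeat structure described in the last abstract can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's stereo visual odometry pipeline: poses are 2D (x, y) tuples from some odometry source, keyframes are dropped at a fixed metric spacing during the teach pass, and the repeat pass simply localizes against the nearest keyframe.

```python
# Sketch of teach-and-repeat: log spaced keyframes while driving,
# then localize against the nearest one when repeating the route.

def teach(odometry_poses, spacing=1.0):
    """Keep a keyframe whenever the rover has moved `spacing` metres."""
    keyframes = [odometry_poses[0]]
    for pose in odometry_poses[1:]:
        dx = pose[0] - keyframes[-1][0]
        dy = pose[1] - keyframes[-1][1]
        if (dx * dx + dy * dy) ** 0.5 >= spacing:
            keyframes.append(pose)
    return keyframes

def localize(keyframes, pose_estimate):
    """Index of the keyframe nearest to the current pose estimate."""
    return min(range(len(keyframes)),
               key=lambda i: (keyframes[i][0] - pose_estimate[0]) ** 2 +
                             (keyframes[i][1] - pose_estimate[1]) ** 2)
```

Because the keyframe list is just an ordered sequence, repeating the route in either direction (as the abstract notes) amounts to walking the same list forward or backward; in the real system each keyframe would also carry the stereo features needed for visual localization rather than relying on an odometry pose estimate alone.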