Visual path following using only monocular vision for urban environments.
ABSTRACT This document provides a summary of a short video with the same title. The video shows the French intelligent transportation vehicle CyCab performing visual path following using only monocular vision. All phases of the process are shown with a spoken commentary. In the teaching phase, the user drives the robot manually while images from the camera are stored. Key images with corresponding image features are stored as a map, together with 2D and 3D local information. In the navigation phase, CyCab follows the learned path by tracking the image features projected from the map and applying a simple visual servoing control law.
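The navigation phase described above tracks image features against their positions in the next key image and feeds the error to a visual servoing control law. The sketch below is a minimal, hypothetical illustration of a classical image-based visual servoing (IBVS) step for point features, not the authors' actual controller; the function name, gain, and depth handling are assumptions:

```python
import numpy as np

def ibvs_control(current_pts, target_pts, depths, lam=0.5):
    """Toy image-based visual servoing step (hypothetical sketch).

    current_pts, target_pts: (N, 2) normalized image coordinates of the
    tracked features and of the same features in the next key image.
    depths: (N,) estimated depths of the tracked points.
    Returns a 6-DoF camera velocity (vx, vy, vz, wx, wy, wz).
    """
    e = (current_pts - target_pts).reshape(-1)  # stacked feature error
    L = []
    for (x, y), Z in zip(current_pts, depths):
        # Classical interaction-matrix rows for a point feature.
        L.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
        L.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
    L = np.array(L)
    # Velocity command that drives the feature error toward zero.
    return -lam * np.linalg.pinv(L) @ e
```

In practice a ground vehicle like CyCab would only use the components of this velocity compatible with its kinematics (e.g. forward speed and steering).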
Available from: Anthony Remazeilles, May 29, 2015
ABSTRACT: Autonomous cars will likely play an important role in the future. A vision system designed to support outdoor navigation for such vehicles has to deal with large dynamic environments, changing imaging conditions, and temporary occlusions by other moving objects. This paper presents a novel appearance-based navigation framework relying on a single perspective vision sensor, aimed at resolving the above issues. The solution is based on a hierarchical environment representation created during a teaching stage, when the robot is controlled by a human operator. At the top level, the representation contains a graph of key images with extracted 2D features, enabling robust navigation by visual servoing. The information stored at the bottom level enables efficient prediction of the locations of features which are not currently visible and, when possible, the (re-)starting of their tracking. The outstanding property of the proposed framework is that it enables robust and scalable navigation without requiring a globally consistent map, even in interconnected environments. This result has been confirmed by realistic off-line experiments and successful real-time navigation trials in public urban areas. 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007, Minneapolis, Minnesota, USA; 01/2007
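The two-level representation described in this abstract could be organized roughly as follows. This is a hypothetical data-structure sketch under assumed names (KeyImage, LocalGeometry, VisualMemory), not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class KeyImage:
    """Top level: one node of the visual memory graph."""
    image_id: int
    features_2d: list                      # 2D features extracted at teaching time
    neighbors: list = field(default_factory=list)

@dataclass
class LocalGeometry:
    """Bottom level: local 3D structure shared by two neighboring key images,
    used to predict where currently invisible features should reappear."""
    edge: tuple                            # (image_id_a, image_id_b)
    points_3d: list                        # landmarks triangulated from the pair

class VisualMemory:
    def __init__(self):
        self.key_images = {}
        self.geometry = {}

    def add_key_image(self, node):
        self.key_images[node.image_id] = node

    def link(self, a, b, geom):
        # An edge means the robot can servo from view a to view b.
        self.key_images[a].neighbors.append(b)
        self.key_images[b].neighbors.append(a)
        self.geometry[(a, b)] = geom
```

Note that only local consistency between neighboring key images is stored, which is what allows the framework to scale without a globally consistent map.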
Conference Paper: 3D navigation based on a visual memory
ABSTRACT: This paper addresses the design of a control law for vision-based robot navigation. The proposed method is based on a topological representation of the environment. Within this context, a learning stage enables a graph to be built in which nodes represent views acquired by the camera, and edges denote the possibility for the robotic system to move from one image to another. A path-finding algorithm then gives the robot a collection of views describing the environment it has to pass through in order to reach its desired position. This article focuses on the control law used for controlling the robot's motion online. The particularity of this control law is that it does not require any reconstruction of the environment and does not force the robot to converge towards each intermediate position in the path. Landmarks matched between consecutive views of the path are considered as successive features that the camera has to observe within its field of view. An original visual servoing control law, using specific features, ensures that the robot navigates within the visibility path. Simulation results demonstrate the validity of the proposed approach. Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on; 06/2006
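The path-finding step over the graph of views can be sketched with a plain breadth-first search. This is an illustrative example under assumed names (the abstract does not specify which path-finding algorithm is used):

```python
from collections import deque

def view_path(edges, start, goal):
    """Breadth-first search over a topological map of camera views.

    edges: dict mapping a view id to the view ids reachable from it
    (an edge means the robot can move between the two views).
    Returns the sequence of key views to traverse, or None if the
    goal view is unreachable. Hypothetical sketch.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:       # walk parents back to the start
                path.append(v)
                v = parent[v]
            return path[::-1]
        for nxt in edges.get(v, ()):
            if nxt not in parent:
                parent[nxt] = v
                queue.append(nxt)
    return None
```

The returned sequence of views is exactly the "collection of views describing the environment" that the control law then servos through, without converging fully on each intermediate view.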
Conference Paper: Outdoor visual path following experiments.
ABSTRACT: In this paper the performance of a topological-metric visual path following framework is investigated in different environments. The framework relies on a monocular camera as the only sensing modality. The path is represented as a series of reference images such that each neighboring pair contains a number of common landmarks. Local 3D geometries are reconstructed between neighboring reference images in order to achieve fast feature prediction, which allows recovery from tracking failures. During navigation the robot is controlled using image-based visual servoing. The experiments show that the framework is robust against moving objects and moderate illumination changes. It is also shown that the system is capable of on-line path learning. 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, October 29 - November 2, 2007, Sheraton Hotel and Marina, San Diego, California, USA; 01/2007
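Feature prediction from local 3D geometry, as used above to recover from tracking failures, amounts to projecting the reconstructed landmarks into an estimated camera pose. The following is a minimal pinhole-projection sketch with assumed names, not the paper's implementation:

```python
import numpy as np

def predict_features(points_3d, R, t, K):
    """Predict image locations of currently-lost landmarks (sketch).

    points_3d: (N, 3) landmarks reconstructed between two neighboring
    reference images. R (3x3), t (3,): assumed world-to-camera pose.
    K: 3x3 intrinsic matrix. Returns (N, 2) pixel predictions that can
    seed a tracker to re-acquire features after an occlusion.
    """
    cam = points_3d @ R.T + t          # transform into the camera frame
    proj = cam @ K.T                   # apply the intrinsics
    return proj[:, :2] / proj[:, 2:3]  # perspective division
```

Because the 3D geometry is only local to each pair of reference images, these predictions stay accurate near the taught path without requiring a global reconstruction.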