Conference Paper

Monocular SLAM for Visual Odometry

Catalonia Univ., Barcelona
DOI: 10.1109/WISP.2007.4447564 · Conference: 2007 IEEE International Symposium on Intelligent Signal Processing (WISP 2007)
Source: IEEE Xplore

ABSTRACT The process of estimating a camera's ego-motion online from a video input is often called visual odometry. Optical flow and structure from motion (SFM) techniques have typically been used for visual odometry. Monocular simultaneous localization and mapping (SLAM) techniques implicitly estimate camera ego-motion while incrementally building a map of the environment. However, in monocular SLAM the computational cost grows rapidly as the number of features in the system state increases, so maintaining frame-rate operation becomes impractical. In this paper monocular SLAM is proposed for map-based visual odometry. The number of features is bounded by dynamically removing features from the system state in order to maintain a stable processing time. On the other hand, once features are removed, previously visited sites can no longer be recognized; in an odometry context, however, this need not be a problem. A method for feature initialization and a simple method for recovering the metric scale are also proposed. Experimental results on real image sequences show that the proposed scheme is promising.
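The key operation the abstract describes, dropping features from the filter state to bound cost, reduces to marginalization in an EKF. The sketch below is a minimal illustration under assumed names; the function, indexing scheme, and 3-entry feature size are illustrative, not the authors' implementation.

```python
# Minimal sketch: marginalizing one feature out of an EKF-SLAM state.
# For a Gaussian state this is exact: delete the feature's entries from
# the mean and its rows/columns from the covariance.
import numpy as np

def remove_feature(x, P, start, size=3):
    """Drop the feature occupying x[start:start+size] from the state.

    x : (n,)   state mean (camera pose followed by map features)
    P : (n, n) state covariance
    """
    idx = np.arange(start, start + size)
    x_new = np.delete(x, idx)
    P_new = np.delete(np.delete(P, idx, axis=0), idx, axis=1)
    return x_new, P_new
```

A deletion policy (for example, dropping features that have left the field of view) would call this whenever the map exceeds a fixed budget, keeping the O(n^2) covariance update bounded; the paper's own removal criterion is not spelled out in the abstract.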

  • ABSTRACT: As space agencies currently look at near-Earth asteroids as the next step on their exploration roadmap, upcoming missions will require high-precision autonomous landing control schemes that use passive sensors (i.e., cameras). To address this need, the Guidance, Navigation and Control (GNC) system presented here is an online visual navigation scheme that relies on a single camera and a range sensor to guide and control a spacecraft from its observation point about an asteroid to a designated landing site on its surface. The scheme combines monocular visual odometry with a Rao-Blackwellized Particle Filter-based Simultaneous Localization and Mapping (SLAM) system. The SLAM module stores observed landmarks in octree occupancy grids (see the voxel-map sketch after this list), mapping the topography of the asteroid while providing inertial data to the spacecraft's position and attitude controller. The approach, descent and landing scheme used in this work ensures that the spacecraft completes at least one rotation around the asteroid over the course of the landing phase, guaranteeing loop closure in the SLAM algorithm and justifying the extra computational cost of this high-precision landing scheme.
  • ABSTRACT: SLAM, or Simultaneous Localization and Mapping, is a technique in which a robot or autonomous vehicle operates in an a priori unknown environment, using only its onboard sensors to simultaneously build a map of its surroundings and track its position within it. The choice of sensors has a large impact on the SLAM algorithm. Recent approaches focus on cameras as the main sensor because they yield rich information and are well suited to embedded systems: they are light, cheap and power-efficient. However, unlike range sensors, which provide both range and angular information, a camera is a projective sensor that measures only the bearing of image features, so depth (range) cannot be obtained in a single step. This fact has given rise to a new family of SLAM algorithms, the bearing-only SLAM methods, which rely mainly on special feature-initialization techniques to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a practical method is presented for initializing new features in bearing-only SLAM systems. The proposed method defines a single hypothesis for the initial depth of each feature by means of a stochastic triangulation technique (a two-view triangulation sketch follows this list). Several simulations, as well as two application scenarios with real data, are presented to show the performance of the proposed method.
    Ingeniería, Investigación y Tecnología, 14(2):257-274, June 2013.
  • ABSTRACT: This paper focuses on the problem of real-time pose tracking using visual and inertial sensors in systems with limited processing power. Our main contribution is a novel approach to the design of estimators for these systems, one that optimally utilizes the available resources. Specifically, we design a hybrid estimator that integrates two algorithms with complementary computational characteristics, namely a sliding-window EKF and EKF-SLAM. To decide which algorithm is best suited to process each of the available features at runtime, we learn the distribution of the number of features and of the lengths of the feature tracks. We show that, using this information, we can predict the expected computational cost of each feature-allocation policy and formulate an objective function whose minimization determines the optimal way to process the feature data (a toy cost model is sketched after this list). Our results demonstrate that the hybrid algorithm outperforms each individual method (EKF-SLAM and the sliding-window EKF) by a wide margin and allows processing the sensor data at real-time speed on a mobile-phone processor.
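For the asteroid-landing abstract above: a full octree is beyond a short sketch, so the hypothetical class below uses a flat hashed voxel map with log-odds updates to stand in for the paper's octree occupancy grid. The resolution, increments, and API are all assumptions, not the paper's implementation.

```python
# Hedged sketch of an occupancy grid for storing observed landmarks.
# A dict keyed by integer voxel coordinates gives octree-like sparsity
# without the hierarchical pointer bookkeeping of a true octree.
import math

class VoxelOccupancy:
    def __init__(self, resolution=0.5):
        self.resolution = resolution      # leaf voxel edge length (m), assumed
        self.log_odds = {}                # voxel key -> occupancy log-odds
        self.hit, self.miss = 0.85, -0.4  # log-odds increments, assumed values

    def key(self, p):
        """Integer voxel coordinates of a 3-D point."""
        return tuple(int(math.floor(c / self.resolution)) for c in p)

    def integrate(self, point, occupied=True):
        """Fuse one observation of a point into the map."""
        k = self.key(point)
        delta = self.hit if occupied else self.miss
        self.log_odds[k] = self.log_odds.get(k, 0.0) + delta

    def occupied(self, point, threshold=0.0):
        return self.log_odds.get(self.key(point), 0.0) > threshold

grid = VoxelOccupancy(resolution=0.5)
grid.integrate((1.2, -0.3, 4.0))        # landmark observation -> hit
print(grid.occupied((1.2, -0.3, 4.0)))  # True
```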
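For the bearing-only SLAM abstract: the paper's stochastic triangulation is not specified here, so the sketch below shows plain two-view midpoint triangulation, the deterministic core that any single-hypothesis depth initialization builds on; all names are illustrative.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest-point (midpoint) triangulation of two bearing rays.

    c1, c2 : (3,) camera centres; d1, d2 : (3,) unit bearing directions.
    Minimizes |(c1 + s*d1) - (c2 + t*d2)| over s, t and returns the
    midpoint of the closest-approach segment.
    """
    r = c2 - c1
    # Normal equations of the two-ray least-squares problem.
    # (Parallel rays make A singular; a real system would test for that.)
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    s, t = np.linalg.solve(A, np.array([r @ d1, r @ d2]))
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# Two views of the point (0, 0, 2):
p = triangulate_midpoint(np.zeros(3), np.array([0., 0., 1.]),
                         np.array([1., 0., 0.]),
                         np.array([-1., 0., 2.]) / np.sqrt(5))
print(p)  # ~ [0. 0. 2.]
```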
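For the hybrid-estimator abstract: a toy version of cost-based feature allocation. The quadratic EKF-SLAM term and linear sliding-window term are stand-in cost models with made-up coefficients; the actual paper learns feature-count and track-length distributions and minimizes a formal objective.

```python
C_SLAM = 1.0    # assumed coefficient: EKF-SLAM update ~ O(n^2) in map size
C_WINDOW = 0.2  # assumed per-measurement cost in the sliding-window EKF

def expected_cost(n_slam, lengths):
    """Predicted per-frame cost if the n_slam longest tracks become SLAM
    landmarks and the remainder go to the sliding-window EKF."""
    return C_SLAM * n_slam ** 2 + C_WINDOW * sum(lengths[n_slam:])

def allocate(track_lengths):
    """Pick the split that minimizes the predicted cost."""
    lengths = sorted(track_lengths, reverse=True)  # long tracks favour SLAM
    best_n = min(range(len(lengths) + 1),
                 key=lambda n: expected_cost(n, lengths))
    return lengths[:best_n], lengths[best_n:]

# Long-lived tracks are cheaper as persistent landmarks, short tracks as
# one-shot sliding-window updates:
print(allocate([40, 35, 3, 2, 2, 1]))  # -> ([40, 35], [3, 2, 2, 1])
```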