Map-Aided Localization in Sparse Global Positioning System Environments Using Vision and Particle Filtering
ABSTRACT A map-aided localization approach using vision, inertial sensors when available, and a particle filter is proposed and empirically evaluated. The approach, termed PosteriorPose, uses a Bayesian particle filter to augment global positioning system (GPS) and inertial navigation solutions with vision-based measurements of nearby lanes and stop lines referenced against a known map of environmental features. These map-relative measurements are shown to improve the quality of the navigation solution when GPS is available, and they are shown to keep the navigation solution converged in extended GPS blackouts. Measurements are incorporated with careful hypothesis testing and error modeling to account for non-Gaussian and multimodal errors committed by GPS and vision-based detection algorithms. Using a set of data collected with Cornell's autonomous car, including a measure of truth via a high-precision differential corrections service, an experimental investigation of important design elements of the PosteriorPose estimator is conducted. The algorithm is shown to statistically outperform a tightly coupled GPS/inertial navigation solution both in full GPS coverage and in extended GPS blackouts. Statistical performance is also studied as a function of road type, filter likelihood models, bias models, and filter integrity tests. © 2011 Wiley Periodicals, Inc.
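The estimator described above is, at its core, a Bayesian (bootstrap) particle filter that fuses dead-reckoned motion with map-relative measurements. A minimal one-dimensional sketch of that filtering cycle follows; this is a generic illustration, not the PosteriorPose implementation, and the state (lateral offset from lane center), noise levels, and Gaussian measurement likelihood are all illustrative assumptions:

```python
import math
import random

def particle_filter_step(particles, weights, motion, meas, meas_std, proc_std):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles : list of scalar states (e.g., lateral offset from lane center)
    motion    : deterministic motion increment from odometry/inertial data
    meas      : map-relative measurement of the same quantity (e.g., from
                a vision-based lane detection referenced to a map)
    """
    # Predict: propagate each particle through the motion model plus noise.
    particles = [p + motion + random.gauss(0.0, proc_std) for p in particles]
    # Update: weight each particle by a Gaussian measurement likelihood.
    weights = [w * math.exp(-0.5 * ((meas - p) / meas_std) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new, equally weighted particle set.
    n = len(particles)
    particles = random.choices(particles, weights=weights, k=n)
    return particles, [1.0 / n] * n

random.seed(0)
parts = [random.uniform(-2.0, 2.0) for _ in range(500)]
wts = [1.0 / 500] * 500
for _ in range(20):
    parts, wts = particle_filter_step(parts, wts, motion=0.0,
                                      meas=0.5, meas_std=0.2, proc_std=0.05)
estimate = sum(p * w for p, w in zip(parts, wts))
```

With repeated measurements near 0.5 m, the weighted-mean estimate converges to the measured offset; in the full problem the same cycle runs over the vehicle's planar pose rather than a scalar.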
- Available from: Sergio Alberto Rodriguez Florez
ABSTRACT: Vehicle localization is the primary information needed for advanced tasks like navigation. This information is usually provided by Global Positioning System (GPS) receivers. However, the low accuracy of GPS in urban environments makes it unreliable for subsequent processing. Combining GPS data with additional sensors can improve localization precision. In this article, a marking-feature-based vehicle localization method is proposed that enhances localization performance. To this end, markings are detected from an on-vehicle camera using a multi-kernel estimation method. A particle filter is implemented to estimate the vehicle position with respect to the detected markings. Then, map-based markings are constructed from an open-source map database. Finally, vision-based markings and map-based markings are fused to obtain an improved vehicle position fix. Results on road traffic scenarios using a public database show that the method leads to a clear improvement in localization accuracy.
International Conference on Control, Automation, Robotics and Vision, Marina Bay Sands, Singapore; 12/2014
-
ABSTRACT: Estimating the pose in real time is a primary function for intelligent vehicle navigation. While various solutions exist, most rely on high-end sensors. This paper proposes a solution that exploits an automotive-grade L1-GPS receiver, features extracted by low-cost perception sensors, and vehicle proprioceptive information. A key idea is to use the lane-detection function of a video camera to retrieve accurate lateral and orientation information with respect to road lane markings. To this end, lane markings are mobile-mapped by the vehicle itself during a first stage using an accurate localizer. The resulting map then allows camera-detected features to be exploited for autonomous real-time localization. These results are combined with GPS estimates and dead-reckoning sensors to provide localization information with high availability. Because L1-GPS errors can be large and are time-correlated, the paper studies several GPS error models that are experimentally tested with shaping filters. The approach demonstrates that low-cost sensors with adequate data-fusion algorithms can support computer-controlled guidance functions in complex road networks.
Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on; 01/2013
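Time-correlated GPS errors of the kind mentioned above are often modeled with a shaping filter driven by white noise; a common choice is a first-order Gauss-Markov process. The sketch below simulates such an error sequence; the time constant and steady-state standard deviation are illustrative assumptions, not values from the paper:

```python
import math
import random

def gauss_markov_error(n, tau, sigma, dt, seed=0):
    """Simulate a time-correlated GPS position error as a first-order
    Gauss-Markov process: e[k+1] = a*e[k] + w[k], with a = exp(-dt/tau).

    The driving-noise standard deviation is chosen so the process has
    steady-state standard deviation `sigma`.
    """
    rng = random.Random(seed)
    a = math.exp(-dt / tau)
    q = sigma * math.sqrt(1.0 - a * a)  # driving-noise std
    e, out = 0.0, []
    for _ in range(n):
        e = a * e + rng.gauss(0.0, q)
        out.append(e)
    return out

# Example: 10,000 s of 1 Hz error with a 60 s correlation time, 3 m std.
errs = gauss_markov_error(n=10000, tau=60.0, sigma=3.0, dt=1.0)
```

Inside a fusion filter, the same recursion is appended to the state vector as a shaping filter, so the estimator can track and partially cancel the correlated GPS bias rather than treating it as white noise.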
-
ABSTRACT: A Gaussian sum filter (GSF) with component extended Kalman filters (EKFs) is proposed as an approach to localizing an autonomous vehicle in an urban environment with limited GPS availability. The GSF uses vehicle-relative vision-based measurements of known map features, coupled with inertial navigation solutions, to accomplish localization in the absence of GPS. The vision-based measurements have multimodal measurement likelihood functions that are well represented as weighted sums of Gaussian densities. The GSF is used because it can represent the posterior distribution of the vehicle pose more efficiently (fewer terms, less computational complexity) than a corresponding bootstrap particle filter, whose required particle count is inflated by the interaction with measurement hypothesis tests. The expectation-maximization algorithm is used offline to determine the representational efficiency of the particle filter in terms of an effective number of Gaussian densities. The GSF, which applies an iterative condensation procedure after each filter iteration to maintain real-time capability, is shown through a series of in-depth empirical studies to maintain a more accurate representation of the posterior distribution than the particle filter, using 37 min of recorded data from Cornell University's autonomous vehicle driven in an urban environment, including a 32 min GPS blackout. © 2012 Wiley Periodicals, Inc.
Journal of Field Robotics 03/2012; 29(2):240-257. DOI:10.1002/rob.20430 · 1.88 Impact Factor
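The condensation step referred to above reduces a Gaussian mixture to fewer terms after each filter iteration. One common scheme, sketched here for scalar components, greedily merges the closest pair of components by moment matching; this is a generic illustration of mixture reduction, not the paper's exact procedure:

```python
def merge_pair(w1, m1, v1, w2, m2, v2):
    """Moment-matched merge of two scalar Gaussian components
    (weight, mean, variance): the result preserves the pair's
    total weight, mean, and variance."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def condense(components, max_terms):
    """Greedily merge the two components with the smallest mean
    separation until at most max_terms remain.
    components: list of (weight, mean, variance) tuples."""
    comps = list(components)
    while len(comps) > max_terms:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                d = abs(comps[i][1] - comps[j][1])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

# Two nearby modes plus one distant mode; reduce to two terms.
mix = [(0.3, 0.0, 1.0), (0.3, 0.1, 1.0), (0.4, 5.0, 1.0)]
reduced = condense(mix, max_terms=2)
```

The two overlapping components at 0.0 and 0.1 are merged into one component while the distant mode at 5.0 survives, which is the behavior needed to keep a multimodal posterior compact without discarding its modes.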