ABSTRACT: Vehicle localization is the primary information needed for advanced tasks like navigation. This information is usually provided by Global Positioning System (GPS) receivers. However, the low accuracy of GPS in urban environments makes it unreliable for further processing. Combining GPS data with additional sensors can improve localization precision. In this article, a marking-feature-based vehicle localization method is proposed that enhances localization performance. To this end, markings are detected from an on-vehicle camera using a multi-kernel estimation method. A particle filter is implemented to estimate the vehicle position with respect to the detected markings. Then, map-based markings are constructed from an open-source map database. Finally, vision-based markings and map-based markings are fused to obtain an improved vehicle position fix. Results on road traffic scenarios from a public database show that our method leads to a clear improvement in localization accuracy.
International Conference on Control, Automation, Robotics and Vision, Marina Bay Sands, Singapore; 12/2014
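The abstract describes a particle filter that estimates the vehicle position with respect to detected markings, but gives no implementation details. A minimal one-dimensional sketch of such a filter, tracking the lateral offset from a lane marking (all noise parameters and the function name `particle_filter_step` are illustrative assumptions, not the paper's actual method), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, u, z, motion_std=0.05, meas_std=0.1):
    """One predict/update cycle of a particle filter estimating the
    vehicle's lateral offset from a detected lane marking (1-D sketch)."""
    # Predict: propagate particles with odometry input u plus process noise.
    particles = particles + u + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: weight each particle by the likelihood of the camera
    # measurement z (lateral distance to the marking).
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Usage: track a true offset of 0.3 m from noisy marking detections.
particles = rng.uniform(-1.0, 1.0, 500)
weights = np.full(500, 1.0 / 500)
for _ in range(30):
    z = 0.3 + rng.normal(0.0, 0.1)
    particles, weights = particle_filter_step(particles, weights, 0.0, z)
estimate = np.sum(particles * weights)
```

The weighted mean of the particles serves as the position estimate; the paper additionally fuses map-based markings, which this sketch omits.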
"In automotive systems used for mapping and localization, the map information is moved from feature level to signal level. OpenStreetMap data has been integrated into robot localization tasks, and a multi-camera system has been developed for outdoor localization. "
ABSTRACT: Estimating the pose in real time is a primary function for intelligent vehicle navigation. Whilst different solutions exist, most rely on high-end sensors. This paper proposes a solution that exploits an automotive-grade L1-GPS receiver, features extracted by low-cost perception sensors, and vehicle proprioceptive information. A key idea is to use the lane-detection function of a video camera to retrieve accurate lateral and orientation information with respect to road lane markings. To this end, lane markings are mobile-mapped by the vehicle itself during a first stage using an accurate localizer. The resulting map then allows camera-detected features to be exploited for autonomous real-time localization. The results are combined with GPS estimates and dead-reckoning sensors in order to provide localization information with high availability. As L1-GPS errors can be large and are time-correlated, we study several GPS error models that are experimentally tested with shaping filters. The approach demonstrates that the use of low-cost sensors with adequate data-fusion algorithms should lead to computer-controlled guidance functions in complex road networks.
Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on; 11/2013
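The abstract mentions modelling time-correlated L1-GPS errors with shaping filters but does not specify the models. A common choice for such a shaping filter is a first-order Gauss-Markov process; the sketch below simulates one (the time constant `tau` and standard deviation `sigma` are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

def gauss_markov_error(n, tau, sigma, dt=1.0, seed=0):
    """Simulate a time-correlated GPS error as a first-order
    Gauss-Markov process driven through a shaping filter:
        e[k] = exp(-dt/tau) * e[k-1] + w[k-1]
    where w is white noise scaled so the stationary standard
    deviation of e equals sigma."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi ** 2)  # driving-noise standard deviation
    e = np.zeros(n)
    for k in range(1, n):
        e[k] = phi * e[k - 1] + rng.normal(0.0, q)
    return e

# Usage: a 3 m error with a 60 s correlation time, sampled at 1 Hz.
err = gauss_markov_error(5000, tau=60.0, sigma=3.0)
```

Because consecutive samples are strongly correlated, treating such errors as white noise in a fusion filter would be overconfident, which motivates augmenting the filter state with the shaping-filter state.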
"In , 2D road maps are used as the geographic reference for global vehicle localization with 3-degree-of-freedom (3DoF) particle filtering. Miller et al.  presented a similar map-aided approach for visual SLAM with particle filtering, but combined it with GPS data. In , a method to recover position and attitude using a combination of monocular visual odometry and GPS measurements was presented, and the SLAM errors were carefully analysed after filtering. "
ABSTRACT: Accurate localization of moving sensors is essential for many fields, such as robot navigation and urban mapping. In this paper, we present a framework for GPS-supported visual Simultaneous Localization and Mapping with Bundle Adjustment (BA-SLAM) using a rigorous sensor model for a panoramic camera. The rigorous model does not introduce systematic errors, thus representing an improvement over the widely used ideal sensor model. The proposed SLAM requires neither additional constraints, such as loop closing, nor additional sensors, such as expensive inertial measurement units. In this paper, the problems of the ideal sensor model for a panoramic camera are analysed, and a rigorous sensor model is established. GPS data are then introduced for global optimization and georeferencing. Using the rigorous sensor model with the geometric observation equations of BA, a GPS-supported BA-SLAM approach that combines ray observations and GPS observations is established. Finally, our method is applied to a set of vehicle-borne panoramic images captured in a campus environment, and several ground control points (GCPs) are used to check the localization accuracy. The results demonstrate that our method can reach an accuracy of several centimetres.
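The core idea of combining ray observations and GPS observations in one adjustment can be illustrated with a toy 2-D example: a single camera position is estimated by Gauss-Newton least squares from bearing (ray) observations to known landmarks, with the GPS fix entering as extra weighted residual rows. This is a deliberately simplified sketch (planar geometry, known landmarks, invented weights), not the paper's rigorous panoramic sensor model:

```python
import numpy as np

def ba_with_gps(landmarks, bearings, gps, sigma_ray=0.01, sigma_gps=2.0, iters=10):
    """Estimate a 2-D camera position from bearing observations to known
    landmarks plus a GPS observation, solved jointly by Gauss-Newton."""
    x = np.array(gps, dtype=float)  # initialize at the GPS fix
    for _ in range(iters):
        res, jac = [], []
        for (lx, ly), th in zip(landmarks, bearings):
            dx, dy = lx - x[0], ly - x[1]
            r2 = dx * dx + dy * dy
            d = np.arctan2(dy, dx) - th
            # Bearing residual, wrapped to (-pi, pi], weighted by 1/sigma_ray.
            res.append(np.arctan2(np.sin(d), np.cos(d)) / sigma_ray)
            # Partial derivatives of atan2(dy, dx) w.r.t. the camera position.
            jac.append([dy / r2 / sigma_ray, -dx / r2 / sigma_ray])
        # GPS observation: two extra weighted residual rows.
        res.extend((x - gps) / sigma_gps)
        jac.append([1.0 / sigma_gps, 0.0])
        jac.append([0.0, 1.0 / sigma_gps])
        J, r = np.array(jac), np.array(res)
        # Gauss-Newton step: x <- x - (J^T J)^{-1} J^T r via least squares.
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Usage: exact bearings from a true position, with a biased GPS fix.
true = np.array([2.0, 1.0])
landmarks = [(10.0, 0.0), (0.0, 10.0), (-5.0, -5.0)]
bearings = [np.arctan2(ly - true[1], lx - true[0]) for lx, ly in landmarks]
est = ba_with_gps(landmarks, bearings, gps=np.array([2.5, 0.5]))
```

With the tight ray weight (`sigma_ray` small) the bearing observations dominate, so the solution is pulled to the geometrically consistent position while the GPS rows keep the problem globally anchored; in the full BA-SLAM the landmarks are unknowns as well.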