Conference Paper

A probabilistic representation of LiDAR range data for efficient 3D object detection

Dept. of Electr., Comput., & Syst. Eng., Rensselaer Polytech. Inst., Troy, NY
DOI: 10.1109/CVPRW.2008.4563033 Conference: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08)
Source: IEEE Xplore

ABSTRACT We present a novel approach to 3D object detection in scenes scanned by LiDAR sensors, based on a probabilistic representation of free, occupied, and hidden space that extends the concept of occupancy grids from robot mapping algorithms. This scene representation naturally handles LiDAR sampling issues, can be used to fuse multiple LiDAR data sets, and captures the inherent uncertainty of the data due to occlusions and clutter. Using this model, we formulate a hypothesis testing methodology to determine the probability that given 3D objects are present in the scene. By propagating uncertainty in the original sample points, we are able to measure confidence in the detection results in a principled way. We demonstrate the approach in examples of detecting objects that are partially occluded by scene clutter such as camouflage netting.
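As a rough illustration of the representation described in the abstract (not the authors' implementation), the sketch below maintains a log-odds voxel grid updated from LiDAR rays: cells along each ray accumulate evidence of free space, the return cell accumulates evidence of occupancy, and cells never traversed stay at the prior, playing the role of hidden space. The grid size, inverse-sensor-model constants, and the simple mean-occupancy presence score are assumptions standing in for the paper's hypothesis test.

```python
# Minimal sketch of a probabilistic free/occupied/hidden representation (assumed
# parameters, not the paper's code): a log-odds voxel grid updated per LiDAR ray.
import numpy as np

GRID = 64                      # voxels per axis (assumed)
VOXEL = 0.1                    # voxel edge length in metres (assumed)
L_FREE, L_OCC = -0.4, 0.85     # log-odds increments of a simple inverse sensor model

log_odds = np.zeros((GRID, GRID, GRID))   # prior p = 0.5 everywhere ("hidden" space)

def to_index(p):
    """World point (metres) -> voxel index tuple, or None if outside the grid."""
    idx = np.floor(p / VOXEL).astype(int)
    return tuple(idx) if np.all((idx >= 0) & (idx < GRID)) else None

def integrate_ray(origin, endpoint, n_steps=200):
    """March along one LiDAR ray: free space up to the return, occupied at the return."""
    for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
        idx = to_index(origin + t * (endpoint - origin))
        if idx is not None:
            log_odds[idx] += L_FREE
    idx = to_index(endpoint)
    if idx is not None:
        log_odds[idx] += L_OCC

def occupancy_prob(idx):
    return 1.0 / (1.0 + np.exp(-log_odds[idx]))

def object_presence_score(object_voxels):
    """Crude stand-in for the paper's hypothesis test: mean occupancy probability
    over the voxels a hypothesised 3D object would fill."""
    return float(np.mean([occupancy_prob(v) for v in object_voxels]))

# Toy usage: one sensor position, returns from a flat "wall" of points.
sensor = np.array([0.5, 3.2, 3.2])
for y in np.linspace(2.0, 4.0, 20):
    for z in np.linspace(2.0, 4.0, 20):
        integrate_ray(sensor, np.array([5.0, y, z]))

wall_voxels = [to_index(np.array([5.0, y, 3.0])) for y in np.linspace(2.2, 3.8, 10)]
print("presence score:", object_presence_score([v for v in wall_voxels if v]))
```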

  • ABSTRACT: In this paper, we present a new algorithm named IPCC, for Iterative Photo Consistency Check, whose goal is to detect, a posteriori, moving objects in both camera and range data. The range data may be provided by different sensors, such as a Riegl, Kinect, or Velodyne, with no distinction. The key idea is that range data acquired on static objects are photo-consistent, i.e., they have the same color and texture in all the camera images, whereas range data acquired on moving objects are not. The main problem is to take into account that the range-finding sensor and the camera are not synchronous, so what the camera sees is not what the range-finding sensor acquires. The major contribution of this paper is an original way to find non-photo-consistent range data using the camera images in an erosion process of the scene. Experiments show the relevance of the proposed method in terms of both accuracy and computation time. (A rough photo-consistency sketch follows this list.)
    Intelligent Vehicles Symposium (IV), 2013 IEEE; 01/2013
  • ABSTRACT: In this paper, we present a novel probabilistic compact representation of the on-road environment, i.e., the dynamic probabilistic drivability map (DPDM), and demonstrate its utility for predictive lane change and merge (LCM) driver assistance during highway and urban driving. The DPDM is a flexible representation and readily accepts data from a variety of sensor modalities to represent the on-road environment as a spatially coded data structure, encapsulating spatial, dynamic, and legal information. Using the DPDM, we develop a general predictive system for LCMs. We formulate LCM assistance as a minimum-cost merge or lane-change problem, which is solved efficiently using dynamic programming over the DPDM. Based on the DPDM, the LCM system recommends the required acceleration and timing to safely merge or change lanes with minimum cost. System performance has been extensively validated using real-world on-road data, including urban driving, on-ramp merges, and both dense and free-flow highway conditions. (A dynamic-programming toy example follows this list.)
    IEEE Transactions on Intelligent Transportation Systems 10/2014; 15(5):2063-2073. DOI: 10.1109/TITS.2014.2309055
  • ABSTRACT: Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene hint at suspicious activities such as the movement of military vehicles, the application of camouflage nets, or the placement of IEDs. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping, incorporating the sensor positions in the processing of the 3D point clouds. This exploits the information contained in the data acquisition geometry: each range measurement shows that an object reflects laser pulses at the measured range, i.e., space is occupied at that 3D position, and that space is empty along the line of sight between the sensor and the reflecting object; everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by conflicts between empty space and occupied space. The presented concept of change detection has been successfully validated in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at different time intervals. (An occupancy-conflict sketch follows this list.)
    SPIE - Electro-Optical Remote Sensing, Photonic Technologies, and Applications VIII, Amsterdam; 09/2014
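To make the IPCC idea above concrete, here is a minimal, hedged sketch of a photo-consistency test: a 3D range return is projected into several camera images and flagged as a moving-object candidate if the colors it lands on disagree strongly. The pinhole projection, the `color_tol` threshold, and the function names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of a photo-consistency check over multiple calibrated views.
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3D point X into pixel coordinates (assumed model)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def photo_consistent(X, cameras, images, color_tol=30.0):
    """cameras: list of (K, R, t); images: HxWx3 uint8 arrays aligned with them."""
    samples = []
    for (K, R, t), img in zip(cameras, images):
        u, v = project(K, R, t, X)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
            samples.append(img[vi, ui].astype(float))
    if len(samples) < 2:
        return True             # too few views to contradict the static hypothesis
    spread = np.max(np.std(np.stack(samples), axis=0))
    return spread < color_tol   # large color spread -> likely a moving object
```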
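The DPDM abstract's minimum-cost lane change via dynamic programming can be illustrated on a toy cost grid. The grid layout, unit costs, and adjacent-lane transition rule below are invented for the example; the actual map additionally encodes dynamics and legality per cell.

```python
# Toy dynamic-programming pass over a lanes x longitudinal-cells cost grid.
import numpy as np

def min_cost_merge(cost, start_lane):
    """Return the minimum accumulated cost of reaching the last column of `cost`
    from `start_lane`, allowing stay/left/right moves at each step (np.inf = blocked)."""
    lanes, steps = cost.shape
    best = np.full(lanes, np.inf)
    best[start_lane] = cost[start_lane, 0]
    for s in range(1, steps):
        new = np.full(lanes, np.inf)
        for lane in range(lanes):
            for prev in (lane - 1, lane, lane + 1):      # adjacent-lane transitions only
                if 0 <= prev < lanes and best[prev] + cost[lane, s] < new[lane]:
                    new[lane] = best[prev] + cost[lane, s]
        best = new
    return best.min()

# Toy grid: 3 lanes, 6 cells ahead; an occupied cell blocks the ego lane midway,
# so the cheapest plan includes a lane change around it.
grid = np.ones((3, 6))
grid[1, 3] = np.inf
print(min_cost_merge(grid, start_lane=1))
```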
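Finally, the occupancy-conflict criterion used above for MLS change detection can be sketched as a comparison of two voxel grids with unknown/free/occupied states: a change is flagged wherever one epoch says free and the other says occupied. The state encoding and toy grids are illustrative assumptions, not the authors' data structures.

```python
# Sketch of change detection as free/occupied conflicts between two epochs.
import numpy as np

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def change_mask(reference, current):
    """Boolean voxel mask of free/occupied conflicts between the two epochs."""
    appeared = (reference == FREE) & (current == OCCUPIED)   # object moved in
    vanished = (reference == OCCUPIED) & (current == FREE)   # object moved out
    return appeared | vanished

ref = np.full((4, 4, 4), UNKNOWN)
cur = ref.copy()
ref[1, 1, 1], cur[1, 1, 1] = FREE, OCCUPIED    # something appeared at this voxel
print(np.argwhere(change_mask(ref, cur)))
```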
