Conference Paper

A probabilistic representation of LiDAR range data for efficient 3D object detection

Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY
DOI: 10.1109/CVPRW.2008.4563033 · Conference: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW '08), 2008
Source: IEEE Xplore

ABSTRACT: We present a novel approach to 3D object detection in scenes scanned by LiDAR sensors, based on a probabilistic representation of free, occupied, and hidden space that extends the concept of occupancy grids from robot mapping algorithms. This scene representation naturally handles LiDAR sampling issues, can be used to fuse multiple LiDAR data sets, and captures the inherent uncertainty of the data due to occlusions and clutter. Using this model, we formulate a hypothesis testing methodology to determine the probability that given 3D objects are present in the scene. By propagating uncertainty in the original sample points, we are able to measure confidence in the detection results in a principled way. We demonstrate the approach in examples of detecting objects that are partially occluded by scene clutter such as camouflage netting.
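The free/occupied/hidden representation can be illustrated with a toy log-odds occupancy grid update. This is a minimal sketch under assumed parameters (voxel size, log-odds increments) and invented helper names (to_index, trace_ray, integrate_scan), not the authors' implementation, which models the three states probabilistically and additionally propagates sample-point uncertainty.

    import numpy as np

    # Toy log-odds voxel grid: voxels a ray passes through gather free-space
    # evidence, the voxel containing the return gathers occupied evidence,
    # and voxels no ray ever touches stay at the prior (hidden/unknown).
    VOXEL = 0.1                  # voxel edge length in meters (assumed)
    L_FREE, L_OCC = -0.4, 0.85   # log-odds increments (assumed)

    def to_index(p, origin):
        return tuple(np.floor((p - origin) / VOXEL).astype(int))

    def trace_ray(sensor, hit, origin):
        """Voxel indices crossed between the sensor and the return
        (fixed-step marching; an exact DDA traversal would be better)."""
        d = hit - sensor
        dist = np.linalg.norm(d)
        d = d / dist
        hit_idx = to_index(hit, origin)
        crossed = set()
        for k in range(int(dist / (0.5 * VOXEL))):
            idx = to_index(sensor + k * 0.5 * VOXEL * d, origin)
            if idx != hit_idx:
                crossed.add(idx)
        return crossed

    def integrate_scan(log_odds, sensor, points, origin):
        """Fuse one LiDAR scan (numpy arrays, in meters) into a dict
        mapping voxel index -> log-odds of occupancy."""
        for p in points:
            for idx in trace_ray(sensor, p, origin):
                log_odds[idx] = log_odds.get(idx, 0.0) + L_FREE
            hit_idx = to_index(p, origin)
            log_odds[hit_idx] = log_odds.get(hit_idx, 0.0) + L_OCC
        return log_odds

    def occupancy_prob(log_odds, idx):
        """Posterior occupancy; exactly 0.5 means hidden/unseen space."""
        return 1.0 / (1.0 + np.exp(-log_odds.get(idx, 0.0)))

Fusing multiple LiDAR data sets then amounts to calling integrate_scan once per scan against the same grid; a detection test in the spirit of the paper would compare an object hypothesis against the resulting free/occupied/hidden probabilities.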

  • ABSTRACT: This paper presents methods for 3D object detection and multi-object (or multi-agent) behavior recognition using a sequence of 3D point clouds of a scene taken over time. This motion 3D data can be collected using different sensors and techniques such as flash LIDAR (Light Detection And Ranging), stereo cameras, time-of-flight cameras, or spatial phase imaging sensors. Our goal is to segment objects from the 3D point cloud data in order to construct tracks of multiple objects (e.g., persons and vehicles) and then classify the multi-object tracks as one of a set of known behaviors, such as “A person drives a car and gets out”. A track is a sequence of object locations changing over time and is the compact object-level information we use and obtain from the motion 3D data. Leveraging the rich structure of dynamic 3D data makes many visual learning problems better posed and more tractable. Our behavior recognition method combines Dynamic Time Warping-based behavior distances over the multiple object-level tracks, expressed in a normalized car-centric coordinate system, to recognize the interactive behavior of those objects. We apply our techniques for behavior recognition on data collected using a LIDAR sensor, with promising results.
    01/2011;
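    As a rough illustration of the Dynamic Time Warping distance underlying this method (a sketch only; the track format, arrays of (x, y) positions in a shared car-centric frame, is an assumption):

      import numpy as np

      def dtw_distance(track_a, track_b):
          """Classic O(n*m) DTW between two tracks of (x, y) positions,
          assumed to already share a (e.g. car-centric) coordinate frame."""
          A, B = np.asarray(track_a, float), np.asarray(track_b, float)
          n, m = len(A), len(B)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(A[i - 1] - B[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]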
  • ABSTRACT: Recent advances in LIDAR technologies have increased the resolution of airborne instruments to the sub-meter level, which opens up the possibility of creating detailed maps over a large area. The ability to map complex 3D structure is especially challenging in urban environments, where both natural and manmade obstructions make comprehensive mapping difficult. LIDAR remains unsurpassed in its capability to capture fine geometric details in this type of environment, making it the ideal choice for many purposes. One important application of urban remote sensing is the creation of line-of-sight maps, or viewsheds, which determine the visibility of areas from a given point within a scene. Using a voxelized approach to LIDAR processing allows us to retain detail in overlapping structures, and we show how this provides a better framework for handling line-of-sight calculations than existing approaches. Including additional information about the instrument position during the data collection allows us to identify any scene areas that are poorly sampled, and to determine any detrimental effect on line-of-sight maps. An experiment conducted during the summer of 2011 collected both visible imagery and LIDAR data, at multiple returns per square meter, over the downtown region of Rochester, NY. We demonstrate our voxelized technique on this large real-world dataset, and derive where errors in line-of-sight mapping are likely to occur.
    Proceedings of SPIE - The International Society for Optical Engineering, 05/2012
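    A minimal sketch of voxelized line-of-sight testing in this spirit (illustrative only; the voxel size, step length, and occupied-set representation are assumptions, not the paper's implementation):

      import numpy as np

      def line_of_sight(occupied, start, target, step=0.5):
          """March from start to target through a voxelized scene;
          visibility fails at the first occupied voxel. `occupied` is
          a set of integer voxel indices (1 m voxels assumed)."""
          start = np.asarray(start, float)
          target = np.asarray(target, float)
          d = target - start
          dist = np.linalg.norm(d)
          d = d / dist
          t = step
          while t < dist:
              idx = tuple(np.floor(start + t * d).astype(int))
              if idx in occupied:
                  return False
              t += step
          return True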
  • ABSTRACT: Automatic change detection in 3D environments requires the comparison of multi-temporal data. By comparing current data with past data of the same area, changes can be automatically detected and identified. Volumetric changes in the scene hint at suspicious activities such as the movement of military vehicles, the application of camouflage nets, or the placement of IEDs. In contrast to broad research activities in remote sensing with optical cameras, this paper addresses the topic using 3D data acquired by mobile laser scanning (MLS). We present a framework for immediate comparison of current MLS data to given 3D reference data. Our method extends the concept of occupancy grids known from robot mapping, incorporating the sensor positions in the processing of the 3D point clouds. This makes it possible to exploit the information contained in the data acquisition geometry. Each single range measurement shows that an object reflects laser pulses at the measured range, i.e., space is occupied at that 3D position; along the line of sight between the sensor and the reflecting object, space is evidently empty. Everywhere else, the occupancy of space remains unknown. This approach handles occlusions and changes implicitly, such that the latter are identifiable by conflicts between empty space and occupied space. The presented concept of change detection has been successfully validated in experiments with recorded MLS data streams. Results are shown for test sites at which MLS data were acquired at different time intervals.
    SPIE - Electro-Optical Remote Sensing, Photonic Technologies, and Applications VIII, Amsterdam; 09/2014
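    A toy version of the empty-vs-occupied conflict test described above (a sketch under assumed conventions: grids as dicts mapping voxel index to a FREE/OCC/UNKNOWN state, in the style of the occupancy example earlier on this page):

      FREE, OCC, UNKNOWN = 0, 1, 2

      def detect_changes(ref_grid, new_grid):
          """Flag voxels where current and reference MLS grids conflict:
          occupied now but free before suggests an appeared object;
          free now but occupied before suggests a removed one. Voxels
          absent from a grid are UNKNOWN and raise no conflict."""
          appeared, removed = [], []
          for idx, state in new_grid.items():
              ref = ref_grid.get(idx, UNKNOWN)
              if state == OCC and ref == FREE:
                  appeared.append(idx)
              elif state == FREE and ref == OCC:
                  removed.append(idx)
          return appeared, removed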
