A Probabilistic Representation of LiDAR Range Data for Efficient 3D Object Detection
Theodore C. Yapo1∗, Charles V. Stewart2, and Richard J. Radke1
1Department of Electrical, Computer, and Systems Engineering
2Department of Computer Science
Rensselaer Polytechnic Institute, Troy, New York 12180
We present a novel approach to 3D object detection in
scenes scanned by LiDAR sensors, based on a probabilistic
representation of free, occupied, and hidden space that ex-
tends the concept of occupancy grids from robot mapping
algorithms. This scene representation naturally handles Li-
DAR sampling issues, can be used to fuse multiple LiDAR
data sets, and captures the inherent uncertainty of the data
due to occlusions and clutter. Using this model, we for-
mulate a hypothesis testing methodology to determine the
probability that given 3D objects are present in the scene.
By propagating uncertainty in the original sample points,
we are able to measure confidence in the detection results
in a principled way. We demonstrate the approach in ex-
amples of detecting objects that are partially occluded by
scene clutter such as camouflage netting.
Light Detection and Ranging (LiDAR) scanners use
time-of-flight measurements of narrow beams of laser light
to measure the distances to points in a scene.
The resolution of commercially available LiDAR scanners
can be very good, achieving an accuracy of a few mm at
100 m range. However, unlike digital image sensors
that use an optical low-pass filter to prevent the aliasing of
high spatial frequencies in the scene, LiDAR sensors are
very susceptible to sampling artifacts, as illustrated in Fig-
ure 1. For example, if the samples are too far apart, a Li-
DAR scan of a picket fence might be interpreted as a solid
wall. Conversely, if a solid wall is sampled at a shallow
grazing angle by nearly parallel LiDAR rays, it can be difficult
to connect the distant sample points into a single surface.
Hence, even though each range point is measured with
high accuracy, there can still be quite a bit of uncertainty
about the scene in each LiDAR scan.
∗This work was supported in part by the US Army Intelligence and Security
Command under the award W9124Q-04-F-2159, and by the DARPA
Computer Science Study Group under the award HR0011-07-1-0016.
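The dependence of sample spacing on range can be illustrated with a small-angle approximation: for a scanner with angular step θ, adjacent samples on a fronto-parallel surface at range r are roughly r·θ apart. The following sketch (the 0.05° angular step is a hypothetical value, not taken from any particular scanner) shows how easily that spacing exceeds the width of thin structures such as fence pickets:

```python
import math

def sample_spacing(range_m: float, angular_res_deg: float) -> float:
    """Approximate spacing between adjacent LiDAR samples on a
    fronto-parallel surface at the given range (small-angle approx.)."""
    return range_m * math.radians(angular_res_deg)

# At 100 m with a hypothetical 0.05 degree angular step, adjacent
# samples land roughly 8.7 cm apart -- wider than a typical fence picket.
print(round(sample_spacing(100.0, 0.05), 4))  # → 0.0873
```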
Occlusions in the scene introduce a second source of un-
certainty into LiDAR range data. Objects may be wholly
or partially hidden from the point of view of the scanner,
resulting in uncertainty about their presence or position in
the scene. To deal with this issue effectively, a 3D object
detection algorithm must allow fusion of data taken from
different viewpoints, and model occlusion explicitly, noting
what parts of the scene are visible from each viewpoint.
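The per-viewpoint visibility reasoning above can be sketched in miniature: along each LiDAR ray, voxels before the return are observed free, the voxel containing the return is occupied, and everything beyond it is hidden; fusing a second viewpoint can resolve voxels the first left hidden. The state coding and the simple "any observation overrides hidden" fusion rule below are our own illustration, not the paper's update rule, and conflict handling between contradictory observations is deliberately glossed over:

```python
import numpy as np

FREE, OCCUPIED, HIDDEN = 0, 1, 2

def classify_ray(n_voxels: int, hit_index: int) -> np.ndarray:
    """Classify voxels along one ray given the index of the LiDAR return."""
    states = np.full(n_voxels, HIDDEN, dtype=np.int8)
    states[:hit_index] = FREE      # traversed before the return
    states[hit_index] = OCCUPIED   # voxel containing the return
    return states

def fuse(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # A real observation (free or occupied) from either view overrides hidden.
    out = a.copy()
    mask = a == HIDDEN
    out[mask] = b[mask]
    return out

v1 = fuse(classify_ray(8, 3), classify_ray(8, 6))
print(v1.tolist())  # → [0, 0, 0, 1, 0, 0, 1, 2]
```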
Much previous research on analyzing LiDAR data is
based on generating a 3D model of the scene, either reduc-
ing the data to a polygonal model, or in some cases, pro-
ducing an implicit function representation of the scene sur-
faces. Instead of irrevocably collapsing information about
the scene into a likely “crisp” estimate, we propose to preserve
the inherent uncertainty of the original data when
testing hypotheses against the scene using a probabilistic representation.
We propose a discrete scene data structure to maintain a
probabilistic model of the 3D scene, and provide a natural
and tractable means to update this model that properly han-
dles LiDAR sampling issues. The scene data structure is
fundamentally a site occupancy probability model, extend-
ing the concept of occupancy grids from robotics. We
approximate the scene by a set of random fields that de-
scribe the probabilities that any single site (3D voxel) is in
one of three states: free space, occupied, or hidden. This approach
provides a sound basis for fusing data from disparate
sensors that observe the scene from different viewpoints.
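As a minimal sketch of such a three-state site-occupancy model (our own simplification, not the paper's exact update rule), each voxel can hold a probability vector over (free, occupied, hidden), with independent per-viewpoint observations fused by elementwise product and renormalization:

```python
import numpy as np

def fuse_states(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Fuse two independent state distributions for the same voxel."""
    r = p * q
    return r / r.sum(axis=-1, keepdims=True)

prior = np.array([1/3, 1/3, 1/3])   # nothing known about the voxel
view1 = np.array([0.1, 0.1, 0.8])   # voxel mostly hidden from viewpoint 1
view2 = np.array([0.7, 0.2, 0.1])   # viewpoint 2 observes it as likely free
p = fuse_states(fuse_states(prior, view1), view2)
print(np.round(p, 3).tolist())  # → [0.412, 0.118, 0.471]
```

Note how the second viewpoint shifts mass away from "hidden" without discarding the residual uncertainty, which is exactly the behavior a crisp surface estimate would lose.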
While we believe that the precision of available LiDAR
sensors far surpasses that required for reliable object detec-
tion (since most objects of interest in outdoor scenes are
very large relative to the uncertainty of a single LiDAR re-
turn), we cannot scan the scene with fewer LiDAR points
without exacerbating the undersampling and aliasing problems.
978-1-4244-2340-8/08/$25.00 ©2008 IEEE
Although using a logarithmic representation avoids nu-
merical issues involving the small magnitudes of the detec-
tion probabilities, these magnitudes currently depend on the
number of points in the object model. Hence, while
we can straightforwardly interpret the detection results for
a single object, comparing results derived from object models
of widely different sizes is problematic. We are investigating
the normalization of the results relative to the object
model size so that different detection maps can be directly compared.
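One simple normalization of this kind (our illustration, not the paper's method) divides the summed log-probabilities by the number of model points, i.e. compares per-point geometric-mean probabilities, so that a large model no longer produces a larger-magnitude score than a small one at the same per-point agreement:

```python
import math

def normalized_log_score(point_probs: list[float]) -> float:
    """Summed log-probability divided by model size (per-point geometric mean)."""
    return sum(math.log(p) for p in point_probs) / len(point_probs)

small = [0.9] * 10     # small object model, 10 points
large = [0.9] * 1000   # much larger model, same per-point agreement
print(math.isclose(normalized_log_score(small),
                   normalized_log_score(large)))  # → True
```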
Finally, we note that if multiple objects are to be tested
against a single scene, the linearity of the cross correlation
operations can be exploited to improve efficiency. If the
collection of object models can be decomposed into a com-
mon set of primitive objects, these primitives can be tested
against the scene, and the detection results combined in the
logarithmic representation of (11) to produce detection re-
sults for the composite objects.
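The idea can be sketched in one dimension (a toy setup, not the paper's formulation in (11)): a per-primitive log-score map is computed once over the scene by cross correlation, and the map for a composite object is obtained by summing shifted copies of the primitive's map rather than re-correlating the full composite model:

```python
import numpy as np

def log_map(scene: np.ndarray, primitive: np.ndarray) -> np.ndarray:
    """Toy per-primitive log-score map via valid-mode cross correlation."""
    scores = np.correlate(scene, primitive, mode="valid")
    return np.log(scores + 1e-9)   # epsilon guards against log(0)

scene = np.array([0., 1., 1., 0., 1., 1., 1., 0.])
slab = np.array([1., 1.])          # shared primitive
m = log_map(scene, slab)           # computed once

# Composite object = the primitive placed at offsets 0 and 1;
# its detection map is a sum of shifted primitive maps.
composite = m[:-1] + m[1:]
print(int(composite.argmax()))  # → 4 (the run of three 1s in the scene)
```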
[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24(4):509–522, 2002.
[2] V. Boyer and J.-J. Bourdin. A faster algorithm for 3D discrete lines. In Proceedings of the European Association for Computer Graphics Conference, 1998.
[3] R. J. Campbell and P. J. Flynn. A survey of free-form object representation and recognition techniques. Comput. Vis. Image Underst., 81(2):166–210, 2001.
[4] C. Connolly. Cumulative generation of octree models from range data. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 25–32, 1984.
[5] A. Elfes. Using occupancy grids for mobile robot perception and navigation. IEEE Computer, 22(6):46–57, 1989.
[6] A. Frome, D. Huber, R. Kolluri, T. Bulow, and J. Malik. Recognizing objects in range data using regional point descriptors. In Proceedings of the European Conference on Computer Vision, 2004.
[7] D. Gorodnichy. On using regression in range data fusion. In Proceedings of the Canadian Conference on Electrical and Computer Engineering (CCECE’99), May 1999.
[8] D. Gorodnichy and W. Armstrong. A parametric alternative to grids for occupancy-based world modeling. In Proceedings of Quality Control by Artificial Vision (QCAV’99), May 1999.
[9] D. Huber, A. Kapuria, R. Donamukkala, and M. Hebert. Parts-based 3D object classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2004.
[10] B. Jang, T. Choi, and J. Lee. Adaptive occupancy grid mapping with clusters. Artificial Life and Robotics, 10:162–165, November 2006.
[11] A. Johnson. Spin-Images: A Representation for 3-D Surface Matching. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, August 1997.
[12] A. Johnson and M. Hebert. Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):433–449, May 1999.
[13] T. Kremen, B. Koska, and J. Pospíšil. Verification of laser scanning systems quality. In XXIII International FIG Congress.
[14] L. Lucchese, G. Doretto, and G. Cortelazzo. A frequency domain technique for range data registration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(11):1468–1484, Nov 2002.
[15] L. Matthies and A. Elfes. Integration of sonar and stereo range data using a grid-based representation. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 2, April 1988.
[16] D. Pagac, E. Nebot, and H. Durrant-Whyte. An evidential approach to probabilistic map-building. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 1, pages 745–750, 1996.
[17] M. A. Paskin and S. Thrun. Robotic mapping with polygonal random fields. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, July 2005.
[18] P. Payeur. Improving robot path planning efficiency with probabilistic virtual environment models. In Proceedings of the IEEE Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems, pages 13–.
[19] P. Payeur, P. Hebert, D. Laurendeau, and C. Gosselin. Probabilistic octree modeling of a 3D dynamic environment. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1289–1296, 1997.
[20] M. Ribo and A. Pinz. A comparison of three uncertainty calculi for building sonar-based occupancy grids. Robotics and Autonomous Systems, 35:201–209, 2001.
[21] G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, 1976.
[22] S. Thrun. Learning occupancy grids with forward models. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 3, pages 1676–1681.
[23] A. N. Vasile and R. M. Marino. Automatic target detection and recognition using 3D laser radar imagery. Lincoln Laboratory Journal, 15(1), 2005.
[24] M. Yguel, O. Aycard, and C. Laugier. Wavelet occupancy grids: a method for compact map building. In Proc. of the Int. Conf. on Field and Service Robotics, 2005.