Advanced point cloud processing
GEORGE VOSSELMAN, Enschede, the Netherlands
The high pulse frequencies of today’s airborne, mobile and terrestrial laser scanners enable the acquisition of point
clouds with densities from some 20-50 points/m² for airborne scanners to several thousand points/m² for mobile and
terrestrial scanners. For the (semi-)automated extraction of geo-information from point clouds these high point densities
are very beneficial. The large number of points on the surfaces of objects to be extracted describe the surface geometry
with a high redundancy. This allows the reliable detection of such surfaces in a point cloud. In this paper various
examples are presented on how point cloud segmentations can be used to automatically extract geo-information. The
paper focusses on the extraction of man-made objects in the urban environment. The examples include the processing of
point clouds acquired by airborne, mobile as well as terrestrial laser scanners. The usage of generic knowledge on the
objects to be mapped is shown to play a key role in the automation of the point cloud interpretation.
1. Introduction

The past two decades have shown a very rapid development and acceptance of various laser
scanning technologies for a wide variety of purposes. After a start with airborne laser profilers
combined with GPS and inertial navigation systems in 1988 (Lindenberger, 1989), the first
scanning airborne laser rangers were introduced in the early 1990s. Since then airborne laser
scanning technology has advanced from 2 to 250 kHz pulse frequencies, from single echo recordings to
multiple echo and full waveform recordings, and from decimetre-level to centimetre-level accuracy
(Mallet and Bretar, 2009). Terrestrial laser scanning entered the market in the late 1990s.
Terrestrial laser scanners based on the principles of time-of-flight measurement of pulses, phase
measurement with continuous waves, or optical triangulation offer solutions for point cloud
acquisition at various ranges and with different accuracies (Fröhlich and Mettenleiter, 2004). A few
years later mobile laser scanners were added to the spectrum to enable corridor mapping from
terrestrial platforms like cars, trains or vessels (Barber et al., 2008).
All three types of laser scanning systems, airborne, mobile and terrestrial, are used to acquire point
clouds. The same application may sometimes be served by different sensor platforms. For example,
surveying of road and rail road environments is done both by airborne and mobile laser scanning.
Likewise, both mobile and terrestrial laser scanners are used in projects to reconstruct building
façade models.
Although the point densities and accuracies vary with the type of scanner and distance to the
scanned surface, the processing of point clouds acquired by airborne, mobile and terrestrial scanners
shows many similarities. In most cases the point clouds will be used to extract geo-referenced
information. Common steps in the information extraction procedures are the detection of planar or
smooth surfaces and the classification of points or point clusters based on local point patterns, echo
intensity and echo count information (Vosselman et al., 2004; Darmawati, 2008).
Point cloud segmentation and classification will briefly be discussed in the next section. The paper
then continues with an overview of various point cloud processing projects performed at ITC,
Enschede. They show how the results of segmentations, classifications and other point cloud
processing can be used for the extraction of geo-information on man-made objects like buildings
and roads. The processing of point clouds for the purpose of accurate geo-referencing, DTM
production or forestry and engineering applications will be left outside the scope of this paper
(Pfeifer and Briese, 2007; Vosselman and Maas, 2009). The applications presented in this paper are
grouped according to the platform used for data acquisition. They demonstrate the high quality of
point clouds that can nowadays be acquired with laser scanning and show the large potential for the
automated and semi-automated extraction of geo-information from this data.
2. Point cloud segmentation and classification

The geometry of man-made objects can often be described by a set of planar surfaces. To a large
extent the terrain can be described by smooth surfaces. For extracting the geometry of man-made
objects and the terrain, algorithms are required that recognise planar and smooth surfaces in a point
cloud (Vosselman et al., 2004). The most widely used algorithms are those that first try to find a small set
of nearby points with a good fit to a plane. This set of points then constitutes the seed segment for a
surface growing procedure in which adjacent non-classified points are added to the segment if their
distance to the plane or locally defined smooth surface is below some threshold. Once there are no
points left that satisfy this condition, further seed segments are selected and expanded until all
points have been assigned to a segment. This procedure is very similar to the well-known region
growing algorithm used for image segmentation (Ballard and Brown, 1982). The analysis of whether
a local set of points contains a large percentage of co-planar points that can be used as a seed
segment is performed with a 3D Hough transform or RANSAC plane detection. These plane
estimation methods are very robust and quick if only a small set of points needs to be analysed
(Vosselman and Maas, 2009). As the point densities of today's laser scanning surveys typically
result in many points on the object surfaces, the detection of these surfaces in the point cloud is
usually very reliable.
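The seed detection and surface growing steps described above can be sketched in a few lines of Python. This is a minimal, brute-force sketch with assumed tolerances (`tol`, `radius`); a production implementation would use a k-d tree for the neighbour queries and a 3D Hough transform or least-squares refinement of the plane parameters.

```python
import random

def plane_from_points(p, q, r):
    # Plane through three points: unit normal n and offset d with n.x = d.
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:          # collinear sample, no plane defined
        return None
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p[i] for i in range(3))

def dist_to_plane(pt, plane):
    n, d = plane
    return abs(sum(n[i] * pt[i] for i in range(3)) - d)

def ransac_seed(points, tol=0.05, iterations=50, min_inliers=10,
                rng=random.Random(0)):
    # Try random point triples; keep the plane supported by the most points.
    best = None
    for _ in range(iterations):
        plane = plane_from_points(*rng.sample(points, 3))
        if plane is None:
            continue
        inliers = [i for i, p in enumerate(points)
                   if dist_to_plane(p, plane) < tol]
        if len(inliers) >= min_inliers and (best is None
                                            or len(inliers) > len(best[1])):
            best = (plane, inliers)
    return best

def grow_surface(points, seed_inliers, plane, tol=0.05, radius=1.5):
    # Expand the seed segment with unassigned points that are close to the
    # plane and adjacent (within `radius`) to a point already in the segment.
    segment = set(seed_inliers)
    changed = True
    while changed:
        changed = False
        for i, p in enumerate(points):
            if i in segment or dist_to_plane(p, plane) >= tol:
                continue
            if any(sum((p[k] - points[j][k]) ** 2 for k in range(3))
                   < radius ** 2 for j in segment):
                segment.add(i)
                changed = True
    return segment
```

On a synthetic cloud of a horizontal grid plus a few isolated high points, the seed plane is found on the grid and the grown segment contains exactly the grid points.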
Fig. 1: (a) Point cloud with colour coded heights, (b) Point cloud segmented into planar surfaces, (c) Segments
classified based on various segment properties (red (light grey): building, blue (dark grey): terrain; white: other classes).
Once a point cloud has been segmented, segment attributes can be collected to classify the
segments. Like in image processing, a segment-wise classification is more reliable than a point-wise
(or pixel-wise) classification. An example of a segment-wise classification is shown in Fig. 1. The
point cloud of Fig. 1(a) was segmented into planar surfaces using a 3D Hough transform and a
surface growing algorithm. The segmentation result (Fig. 1(b)) shows that most roof planes have a
one-to-one relationship with a segment. In the areas with vegetation, many small segments are
generated that consist of points in the tree canopy that are approximately co-planar. The
segmentation algorithm used a minimum segment size of ten points. Many points in the vegetation
could not be grouped to segments of this size and were left without a segment number. These points
are shown in white in Fig. 1(b). The terrain surface is represented by various large segments,
because it was not planar enough to be described by a single plane. Three segment attributes were
used to obtain the classification of Fig. 1(c): the number of points in a segment, the percentage of
last echo points in a segment, and the average height of a segment above a local minimum height.

Fig. 2: (a) Roof segments with classified relationships. (b) Reconstructed outlines. (c) Reconstructed building shapes.
Next to the segment size, the percentage of last echo points is very useful to discriminate between
roof segments and vegetation segments (Darmawati, 2008). The points on a roof plane usually are
the last echo of a laser pulse (except for a few points on the roof edge). In contrast, the points of a
vegetation segment usually do not correspond to the last echo. Note that the classification is
done on segments in a 3D point cloud. This implies that multiple segments may be present on top of
each other such that roof segments and terrain segments below vegetation can also be detected.
Attributes other than the ones used in the above example could be added to further improve the
segmentation and classification accuracy. Such attributes include the reflectance strength of the
laser pulse, the width of a pulse as extracted from a recorded full waveform (Rutzinger et al, 2008),
and multispectral information obtained from a simultaneous recording with an optical camera
(Rottensteiner et al., 2005).
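A rule-based classification using the three segment attributes of the example above might be sketched as follows; the threshold values and class names are illustrative assumptions, not the ones used in the experiment of Fig. 1.

```python
def classify_segment(num_points, last_echo_fraction, height_above_ground,
                     min_size=10, echo_thresh=0.8, height_thresh=2.5):
    # Decision rules over segment attributes; all thresholds are assumed.
    if num_points < min_size:
        return 'unclassified'          # segment too small to judge reliably
    if last_echo_fraction < echo_thresh:
        return 'vegetation'            # many non-last echoes: canopy penetration
    if height_above_ground > height_thresh:
        return 'building'              # planar, elevated, mostly last echoes
    return 'terrain'
```

For example, a large elevated segment of almost exclusively last echoes is labelled as building, while a segment with many intermediate echoes is labelled as vegetation regardless of its height.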
3. Airborne laser scanning

Airborne laser scanning has its main application in the production of digital terrain models. With
the high point densities of current airborne laser scanners, applications that require higher
planimetric accuracies and higher detail become feasible. In this section two projects are described
that process point clouds with 20 points/m² to extract 3D roof landscapes and road sides.
3.1. Modelling of roof landscapes
The extraction of 3D building models from
laser scanning data has been the focus of a
large number of studies in the past years
(Brenner, 1999, 2009; Rottensteiner, 2003;
Vosselman and Dijkman, 2001). Most
approaches either are strictly model driven or
data driven. Oude Elberink (2009) tries to
combine these approaches by utilising graph
matching to recognise the topology of common
roof shapes. After the segmentation into planar
faces, the topology of the roof segments is
described by a graph (Fig. 2(a)). The detected
segments are the nodes of the graph and the
edges correspond to pairs of adjacent
segments. These edges are labelled by the type
of edge (e.g. horizontal intersection line, height
jump edge, sloped convex intersection line).
Subgraph isomorphisms are sought between the graph of the building segments and a library of roof
shapes. The graph matching may result in incomplete matches. In this case, hypotheses are
generated for segments that may not have been found in the laser scanning data, but are required to
make a topologically correct description. After determining the best match on the topology, the
geometry of the roof segments is reconstructed (Fig. 2(b)). The graphs in the roof shape library only
describe simple shapes. For more complex roof shapes the graph matching will result in multiple
subgraph isomorphisms. The topologies of these matching subgraphs are then combined into one
graph and allow the reconstruction of the complex roof shapes (Fig. 2(c)). For suburban areas of a
complexity as shown in Fig. 2(c), the topology of roof parts larger than 1.5 m² is reconstructed
correctly in 75% of the buildings. The largest problems are caused by missing laser data on flat roof
parts that were covered by water. As water absorbs infrared light, laser pulses are hardly
reflected, if at all, from these roof parts.
The building models of Fig. 2(c) combine the roof outlines as extracted from the point cloud with
building outlines obtained from a large scale map. Because the map lines represent the location of
the walls, roof extensions are introduced in the models when the point cloud segments extend
beyond the walls. The map lines are also used to outline flat roof parts with water, as the lack of
points on those roof parts does not allow a data driven reconstruction of the roof geometry.
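The core of the roof topology recognition can be illustrated with a brute-force labelled subgraph isomorphism search over small graphs. The edge labels and example graphs below are hypothetical; Oude Elberink (2009) describes the actual target graph matching procedure.

```python
from itertools import permutations

def find_subgraph_matches(target_edges, segment_edges):
    # Map every node of the small target graph (a roof shape from the library)
    # onto distinct segment nodes so that each target edge, with its label,
    # also exists between the mapped segment nodes.
    t_nodes = sorted({n for e in target_edges for n in e[:2]})
    s_nodes = sorted({n for e in segment_edges for n in e[:2]})
    s_lookup = {}
    for a, b, label in segment_edges:      # adjacency is undirected
        s_lookup[(a, b)] = label
        s_lookup[(b, a)] = label
    matches = []
    for perm in permutations(s_nodes, len(t_nodes)):
        mapping = dict(zip(t_nodes, perm))
        if all(s_lookup.get((mapping[a], mapping[b])) == label
               for a, b, label in target_edges):
            matches.append(mapping)
    return matches
```

Matching a single-ridge target against a segment graph containing two ridge edges returns multiple isomorphisms, mirroring the situation where matches for simple library shapes are combined into one graph for a complex roof.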
3.2. Outlining of road sides
In urban areas curbstones are often used to separate the street from the pavement. The height
difference between the street and the raised pavement can very well be seen in point clouds
acquired by airborne laser scanning. Fig. 3(a) shows a point cloud with a colour cycle length of 0.5
m in height. Height differences of 5 cm therefore already appear in different colours. This makes
the small height differences at road sides and traffic islands visible. As the noise in the distance
measurements by the laser ranger is in the order of 1-2 cm (σ), the signal-to-noise ratio at
curbstones enables an automated extraction of curbstone locations. Because all points within a
radius of a few meters are acquired within a fraction of a second, the locations of these points all
depend on the interpolation between the same two GPS observations. Noise in these observations (σ
of 2-3 cm) does therefore not have an effect on the signal-to-noise ratio of local height jumps.
Fig. 3: (a) Perspective view of colour coded points showing small height differences of traffic islands.
(b) Automatically extracted curbstone locations.
The road sides shown in Fig. 3(b) were extracted with the following procedure (Vosselman and
Zhou, 2009). First, a coarse DTM is produced by selecting all low segments of the segmented point
cloud. Second, all pairs of nearby points that show a small height difference and are close to the
DTM are selected. Third, the mid points of the edges between the nearest selected point pairs are
taken as locations of the curbstone. These points are put in a sequence in order to define a polygon.
Short sequences can safely be ignored as road sides are expected to be relatively long. Finally,
smooth curves are fitted to the point sequences and smaller gaps between collinear curves are
closed. These gaps were usually caused by either locally lowered pavement (wheel chair crossings)
or curbstones that were occluded by parked cars.
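The second and third steps of this procedure, selecting nearby point pairs with a curb-like height jump and taking their midpoints, can be sketched as follows; the pair radius and height-jump bounds are assumed values.

```python
def curb_midpoints(points, planimetric_radius=0.3, min_jump=0.04, max_jump=0.25):
    # For every pair of planimetrically close points whose height difference
    # lies in the range expected for a curbstone, emit the planimetric
    # midpoint as a candidate curb location.
    mids = []
    for i in range(len(points)):
        xi, yi, zi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj, zj = points[j]
            if (xi - xj) ** 2 + (yi - yj) ** 2 > planimetric_radius ** 2:
                continue
            if min_jump <= abs(zi - zj) <= max_jump:
                mids.append(((xi + xj) / 2, (yi + yj) / 2))
    return mids
```

For a synthetic street row at 0 m next to a pavement row raised by 0.10 m, the candidate curb locations fall on the line halfway between the two rows; in the full procedure these candidates are then sequenced and smoothed into curves.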
Both the completeness and correctness of the extracted road sides are between 85 and 90% for
scenes like the one in Fig. 3. In case many cars occlude larger parts of a road side, the completeness
may drop to 50%. This does not affect the correctness of the extracted road sides. The missing road
sides in Fig. 3(b) were caused by some larger road side parts without curbstones that could not be
bridged automatically. The accuracy of the extracted road sides was estimated to be 0.18 m in a
comparison with GPS reference measurements. With some modifications of the smoothing step,
accuracies around 0.10 m seem feasible. This shows the potential of using height data for the
extraction of this often required type of topographic information.
4. Mobile laser scanning

Mobile laser scanners can acquire data at a much higher point density than airborne laser scanners.
Point densities may be in the order of 100-1000 points/m², depending on the distance to the
reflecting object. The absolute accuracy is better than 5 cm standard deviation and comparable to
airborne laser scanning at low altitudes. However, this accuracy can only be obtained if a sufficient
number of GPS satellites can be tracked. In urban areas with high buildings this may be a problem.
Curbstones as extracted above from airborne laser scanning data are just one of the features that can
be acquired as well by surveying the street environment with a mobile laser scanner. Other objects
that are acquired for road inventory surveys include traffic lights, traffic signs, road markings, street
lights, and buildings and vegetation near the street.
Two examples of processing mobile laser scanning point clouds are presented in this section. The
first one is extracting road markings from the reflection strength of the laser scanner. The second
example shows the potential of extracting building walls. Brenner (2009) describes a further feature
extraction process to automatically locate poles and use those for the relative positioning of cars to
their environment.
4.1. Extraction of road markings
Laser scanners usually also record the strength of the reflected laser pulse. This property can be
used to distinguish road markings from the road surface. Figs. 4(a) and (b) show a mobile laser
scanning data set to which two different thresholds on the reflectance strength were applied. With
the higher threshold (Fig. 4(a)) only points on road markings near the path of the vehicle are
selected. The lower threshold (Fig. 4(b)) leads to the selection of points on more distant road
markings, but also to the selection of many points on the road surface close to the vehicle. The
reflectance strength, of course, depends on the distance of the reflecting surface to the laser scanner.
To properly select the points based on reflectance strength, the threshold should be specified as a
function of the distance, or the reflectance should be normalised prior to the thresholding. With
large variations in distances this normalisation is much more important than for airborne laser
scanning (Höfle and Pfeifer, 2007).
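The distance dependent selection can be sketched as a normalisation of the recorded reflectance to a reference range, followed by one global threshold. The inverse power-law decay model and all constants here are illustrative assumptions; Höfle and Pfeifer (2007) discuss proper data- and model-driven correction approaches.

```python
def select_marking_points(points, threshold=0.5, ref_range=10.0, exponent=2.0):
    # `points` holds (xyz, recorded_intensity, range) triples. The intensity
    # is scaled to what it would have been at `ref_range`, assuming an
    # inverse power-law decay with range, before thresholding.
    selected = []
    for xyz, intensity, rng in points:
        normalised = intensity * (rng / ref_range) ** exponent
        if normalised > threshold:
            selected.append(xyz)
    return selected
```

With this normalisation, a faint return from a distant marking can pass the threshold while a moderately bright return from a nearby road surface does not, which is exactly the behaviour a single fixed threshold cannot achieve.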
Fig. 4: (a) Points selected with high threshold on reflectance. (b) Points selected with low threshold on reflectance.
(c) Detected road markings after distance dependent thresholding and connected component analysis.
With a distance dependent threshold all points on road markings can be selected successfully. Points
with high reflectance values on cars and other objects are easily removed by including the distance
to the ground level as a selection criterion. With a connected component analysis of the remaining
points, points can be grouped into marking segments (Fig. 4(c)). Full outlining of the markings can then
be obtained by fitting predefined shapes to those segments. Alternatively, one could also use the
location of the detected marks as an approximate value for a more accurate outlining in
photographs. The detection of markings is, however, more easily done in a point cloud. Segmentation of a
photograph may also be used to detect bright spots on a dark background, but the point cloud
processing enables an easier determination of the distance of such a bright spot to the ground and
will therefore have a lower false alarm rate.
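The grouping of the thresholded points into marking segments can be sketched as a connected component analysis with a distance criterion; the grouping radius is an assumed value, and a real implementation would use a spatial index instead of the linear scan.

```python
def connected_components(points, radius=0.3):
    # Group 2D points into components: two points belong to the same
    # component when a chain of points, each within `radius` of the next,
    # connects them (depth-first traversal over an implicit graph).
    unvisited = set(range(len(points)))
    components = []
    while unvisited:
        stack = [unvisited.pop()]
        comp = []
        while stack:
            i = stack.pop()
            comp.append(i)
            near = [j for j in unvisited
                    if sum((points[i][k] - points[j][k]) ** 2
                           for k in range(2)) <= radius ** 2]
            for j in near:
                unvisited.discard(j)
            stack.extend(near)
        components.append(sorted(comp))
    return components
```

Each resulting component corresponds to one marking candidate, to which a predefined shape can subsequently be fitted.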
4.2. Extraction of walls
Besides its use in typical corridor mapping applications, mobile laser scanning can also be used to
acquire the building façade geometry to generate realistic building models for visualisations at
street level. Rutzinger et al. (2009) presented a first analysis on the automated extraction of building
walls from mobile laser scanning data. Building walls were extracted from the point cloud by a
segmentation of the point cloud into planar faces, followed by a selection of segments that could be
walls. Segments were classified as wall segments when the inclination from the vertical was less than 3°,
the segment dimensions were larger than 2 m in height and 0.5 m in width, and the segment contained
more than 1000 points.
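These selection rules can be expressed directly in code. The interpretation of the inclination criterion as the tilt of the segment's plane normal out of the horizontal plane is an assumption of this sketch, as are the parameter names.

```python
import math

def is_wall_segment(normal, height, width, num_points,
                    max_tilt_deg=3.0, min_height=2.0,
                    min_width=0.5, min_points=1000):
    # A wall plane is (near-)vertical, so its normal is (near-)horizontal:
    # the angle of the normal above the horizontal plane must stay small.
    nx, ny, nz = normal
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    tilt = math.degrees(math.asin(abs(nz) / norm))
    return (tilt < max_tilt_deg and height > min_height
            and width > min_width and num_points > min_points)
```

A horizontal roof segment (normal pointing straight up) or a small fence-sized segment is rejected, while a large vertical planar segment passes.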
The results of this study are presented in Fig. 5. The left figure shows the building outlines of the
large scale map (yellow, grey) and the black lines are the walls that are theoretically visible from
the perspective of the survey path (shown by the dots). The right figure shows the extracted vertical
point cloud segments that were classified as wall, again overlaid on the building outlines of the
large scale map. It shows that the walls facing the street could be identified well. Side walls,
however, proved to be difficult to extract. In total the extracted walls are only 56% of those that are
theoretically visible. Further analysis of this result has to be performed, but it is assumed that the
major reason for the low detection rate of the side walls is the occlusion by fences and vegetation.
Fig. 5: (a) Theoretically visible walls and (b) automatically detected walls of a mobile laser scanning survey. The
survey path is shown by dots. The yellow (light grey) lines are taken from the building layer of a large scale map.
5. Terrestrial laser scanning

Building façades have also been modelled from data acquired by terrestrial laser scanners mounted
on a tripod. The point densities obtained with these scanners may be even higher than those of
mobile laser scanners. Within the data recorded at a single scan position, object dimensions can be
obtained more accurately than with mobile laser scanning as these measurements are not influenced
by platform positioning errors. Yet, for recording different sides of an object multiple scan positions
are required and the point clouds of the scans have to be registered with respect to each other. The
first example in this section shows the processing of a terrestrial laser scanning point cloud for the
detailed reconstruction of a building façade. The second example shows how a building model can
be aligned to optical imagery in order to improve the photorealistic rendering.
5.1. Extraction of façade models
Like for the detection of walls in mobile laser scanning data, the first step in the processing is the
segmentation into planar pieces as most building parts can be described by planar surfaces. The
segmentation result in Fig. 6(a) shows that even though doors, window frames or curtains may only
be a few centimetres behind the plane of the wall, they are detected as different segments (Pu and
Vosselman, 2009a). By formulating rules on the possible sizes, relative position, and orientation of
the building components, segments can be classified into the categories of wall, roof faces, doors,
protrusions, and ground surface (Fig. 6(b)). As window frames are often partially occluded, their
detection in the point cloud is not reliable. However, windows can be easily found by outlining the
gaps in the wall segment that are not explained by doors and protrusions. Texturing the model with
registered imagery and artificial textures for less visible parts is used to obtain the visualisation of
Fig. 6(c).
Fig. 6: (a) Segmented point cloud of a building façade. (b) Segments classified as wall (blue, dark grey), door (orange,
grey), roof (yellow, light grey), protrusion (turquoise, light grey) or ground (green, grey). (c) Textured building model.
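Outlining the unexplained gaps in the wall segment can be sketched with a simple occupancy grid in the wall plane. The cell size and the representation of already explained parts (doors, protrusions) as pre-computed cell indices are assumptions of this sketch; actual window outlining would trace the boundaries of connected gap regions.

```python
def find_gap_cells(wall_points_2d, extent, cell=0.2, explained=()):
    # Rasterise the wall points into a coarse occupancy grid in the wall
    # plane; empty cells that are not explained by detected doors or
    # protrusions are candidate window cells.
    (xmin, xmax), (ymin, ymax) = extent
    nx = round((xmax - xmin) / cell)
    ny = round((ymax - ymin) / cell)
    occupied = set()
    for x, y in wall_points_2d:
        ci = min(nx - 1, int((x - xmin) / cell))   # clamp boundary points
        cj = min(ny - 1, int((y - ymin) / cell))
        occupied.add((ci, cj))
    explained = set(explained)
    return [(i, j) for i in range(nx) for j in range(ny)
            if (i, j) not in occupied and (i, j) not in explained]
```

On a synthetic 1 m × 1 m wall patch sampled everywhere except one interior cell, exactly that cell is reported as a gap, and it disappears once it is marked as explained.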
5.2. Matching models with imagery for accurate texturing
Photo textures are often applied to obtain a realistic visualisation. Unfortunately, the human eye is
very sensitive to discrepancies between the geometric model and the texture. Such misfits may be
caused by incorrect orientation parameters of the photograph or model, image distortions or errors
in the model. Fig. 7(a) shows a part of a panoramic photograph that was used for texturing a
building model reconstructed from a terrestrial laser scanning point cloud. The texture applied in
Fig. 7(b) includes a small part of the sky. Producers of realistic visualisations often spend
considerable time to repair these kinds of errors. To automatically obtain a better alignment edges
can be extracted from the image and matched against the edges of the reconstructed (3D) model
(Fig. 7(c), Pu and Vosselman, 2009b). For the generation of Fig. 7(d) it was assumed that the misfit
was caused by errors in the model. The outlines of the front façade were therefore shifted in the
plane of the façade in order to obtain good correspondence with the projected image edges.
Fig. 7: (a) Image used for texturing, (b) Initial texture projection, (c) Extracted edges from model (blue) and image (red,
pink), (d) Texture projection after adapting the model to the image edges.
6. Conclusions

The high pulse frequencies of current laser scanners allow the generation of point clouds in which object
surfaces are captured by hundreds to thousands of points. As described in the above examples, these
high point densities enable a reliable segmentation of point clouds into planar and smooth surfaces.
Such segmentations form the basis for semi-automated or automated geometric modelling of man-
made objects.
The interpretation of segments requires the modelling of knowledge on the objects that are to be
reconstructed. Progress in knowledge modelling will be the key to further automation in mapping
from point clouds. In this sense, point cloud understanding encounters the same problem as image
understanding, even though the 3D features (segments) of point clouds may be richer than the
points and edges extracted from imagery.
Photogrammetric workstations are equipped with software tools that have been optimised over
several decades. Only in recent years commercial software packages have become available for
information extraction from point clouds. It is likely that there is still much room for improvement
and that the next years will show more advanced tools for semi-automated mapping in point clouds.
As imagery is often acquired together with laser scanning surveys, it would be desirable that tools
would be developed that enable the operator to simultaneously work with both data sources, or at
least to easily select the source that is most suitable for mapping the object at hand.
Acknowledgements

The mobile laser scanning data was kindly provided by TopScan GmbH. The terrestrial laser
scanning data was kindly provided by Oranjewoud B.V. Panoramic imagery was kindly provided
by Cyclomedia B.V.
References

Ballard, D. H., Brown, C. M. (1982): Computer Vision, Prentice Hall, Englewood Cliffs, USA.
Barber, D., Mills, J., Smith-Voysey, S. (2008): Geometric validation of a ground-based mobile laser
scanning system. ISPRS Journal of Photogrammetry and Remote Sensing 63 (1), 128-141.
Brenner, C. (1999): Interactive Modeling tools for 3D Building Reconstruction. In:
Photogrammetric Week '99, Fritsch, D., Spiller, R., (Eds.), Herbert Wichmann Verlag,
Heidelberg, pp. 23–34.
Brenner, C. (2009): Building extraction. In: Airborne and Terrestrial Laser Scanning, Vosselman,
G., Maas, H.-G. (Eds.), Whittles Publishing. To appear in autumn 2009.
Brenner, C. (2009): Extraction of Features from Mobile Laser Scanning Data for Future Driver
Assistance Systems. In: Sester M., Bernard, L., Paelke, V. (Eds.), Advances in GIScience,
Lecture Notes in Geoinformation and Cartography, Springer, pp. 25-42.
Darmawati, A. T. (2008): Utilization of multiple echo information for classification of airborne
laser scanning data. Master’s Thesis, International Institute for Geo-Information Science and
Earth Observation (ITC), Enschede, the Netherlands.
Fröhlich, C., Mettenleiter, M. (2004): Terrestrial laser scanning – new perspectives in 3D
surveying. The International Archives of Photogrammetry, Remote Sensing and Spatial
Information Sciences, vol. 36, part 8/W2, Freiburg, Germany, pp. 7-13.
Höfle, B., Pfeifer, N. (2007): Correction of laser scanning intensity data: Data and model-driven
approaches. ISPRS Journal of Photogrammetry and Remote Sensing 62 (6), 415-433.
Lindenberger, J. (1989): Test results of laser profiling for topographic terrain survey. Proceedings
42nd Photogrammetric Week, Stuttgart, pp. 25-39.
Mallet, C., Bretar, F. (2009): Full-waveform topographic lidar: State-of-the-art. ISPRS Journal of
Photogrammetry and Remote Sensing 64 (1), 1-16.
Oude Elberink, S. (2009): Target graph matching for building reconstruction. The International
Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, part
3/W8, Paris, 1-2 September, accepted.
Pfeifer, N., Briese, C. (2007): Geometrical aspects of airborne laser scanning and terrestrial laser
scanning. The International Archives of Photogrammetry, Remote Sensing and Spatial
Information Sciences, vol. 36, part 3/W52, pp 311-319.
Pu, S., Vosselman, G. (2009a): Knowledge based reconstruction of building models from terrestrial
laser scanning data. ISPRS Journal of Photogrammetry and Remote Sensing, Article in Press.
Pu, S., Vosselman, G. (2009b): Building façade reconstruction by fusing terrestrial laser points and
images. Sensors 9 (6), 4525-4542.
Rottensteiner, F. (2003): Automatic generation of high-quality building models from LIDAR Data.
IEEE Computer Graphics and Applications 23 (6), 42-50.
Rottensteiner, F., Trinder, J., Clode, S., Kubik, K. (2005): Using the Dempster-Shafer method for
the fusion of LIDAR data and multi-spectral images for building detection. Information fusion
6 (4), 283–300.
Rutzinger, M., Höfle, B., Hollaus, M., Pfeifer, N. (2008): Object-Based Point Cloud Analysis of
Full-Waveform Airborne Laser Scanning Data for Urban Vegetation Classification. Sensors 8
(8), 4505-4528.
Rutzinger, M., Oude Elberink, S., Pu, S., Vosselman, G. (2009): Automatic extraction of vertical
walls from mobile and airborne laser scanning data. The International Archives of
Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. 38, part 3/W8, Paris,
1-2 September, accepted.
Vosselman, G., Dijkman, S. (2001): 3D Building Model Reconstruction from Point Clouds and
Ground Plans. The International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences, vol. 34, part 3/W4, Annapolis, MD, USA, October 22-24, pp. 37-44.
Vosselman, G., Gorte, B.G.H., Sithole, G., Rabbani, T. (2004): Recognising structure in laser
scanner point clouds. The International Archives of Photogrammetry, Remote Sensing and
Spatial Information Sciences, vol. 36, part 8/W2, Freiburg, Germany, 4-6 October, pp. 33-38.
Vosselman, G., Maas, H.-G., Eds. (2009): Airborne and Terrestrial Laser Scanning. Whittles
Publishing. To appear in autumn 2009.
Vosselman, G., Zhou, L. (2009): Detection of curbstones in airborne laser scanning data. The
International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences,
vol. 38, part 3/W8, Paris, 1-2 September, accepted.
... No literature was identified that reported on the relationship between LiDAR intensity values and the mobile retroreflectivity readings. Identifying relationships between LiDAR intensity and retro reflectivity provides additional use cases for MMS because the same LiDAR data can be utilized for various transportation applications, such as lane marking extraction [14][15][16][17][18] and road feature detection [19][20][21]. ...
... Therefore, several researchers began to extract lane markings directly from point clouds. Vosselman et al. [16] presented range-dependent thresholding to identify lane markings from point clouds. Lane marking points were detected by an intensity threshold, which was determined based on the range. ...
Full-text available
The United States has over 8.8 million lane miles nationwide, which require regular maintenance and evaluations of sign retroreflectivity, pavement markings, and other pavement information. Pavement markings convey crucial information to drivers as well as connected and autonomous vehicles for lane delineations. Current means of evaluation are by human inspection or semi-automated dedicated vehicles, which often capture one to two pavement lines at a time. Mobile LiDAR is also frequently used by agencies to map signs and infrastructure as well as assess pavement conditions and drainage profiles. This paper presents a case study where over 70 miles of US-52 and US-41 in Indiana were assessed, utilizing both a mobile retroreflectometer and a LiDAR mobile mapping system. Comparing the intensity data from LiDAR data and the retroreflective readings, there was a linear correlation for right edge pavement markings with an R2 of 0.87 and for the center skip line a linear correlation with an R2 of 0.63. The p-values were 0.000 and 0.000, respectively. Although there are no published standards for using LiDAR to evaluate pavement marking retroreflectivity, these results suggest that mobile LiDAR is a viable tool for network level monitoring of retroreflectivity.
... The choice of these platforms, or combinations of them, depends on the scope of the scanning project. For example, for road and railroad surveys, mobile and airborne systems can be used, while for building reconstruction terrestrial and mobile systems can be used in combination (Vosselman, 2009). Figure 3 (a) shows point cloud data obtained using an ALS system and Figure 3 (b) shows point cloud data obtained using an MLS system. ...
With the rapid advancement of technologies such as laser scanning and point cloud data processing, the influence of these technologies on civil engineering projects is inevitable. These technologies are used in various industries, including civil engineering, in tasks such as 3D model preparation, construction progress monitoring, quality control, and virtual walk-throughs. Many countries have already made the most of these technologies, whereas Sri Lanka, as a developing country, seems to have lagged in implementing such tools, especially in the construction sector. Further applications of these digital technologies extend to preserving ancient monuments from disasters, creating digital copies of structures, recording changes to structures over time, and predicting the life-cycle of buildings. For instance, during renovations, the availability of BIM models or related drawings may be limited and destructive interventions must be avoided; these limitations pave the way for the adoption of non-destructive laser scanning and other related technologies. Moreover, there are significant advancements in efficient 3D point cloud data acquisition and accurate processing techniques around the globe, making them a reliable and effective solution for various civil engineering challenges. This study reviews the available technologies and their applications in the civil engineering domain and the feasibility of implementing these technologies in the Sri Lankan civil engineering industry.
... Attribute-based methods initially process a PCD by leveraging point-wise features and generating higher-level features, such as surface normals (Sampath and Shan 2010), meshes, or patches (Vosselman 2009; Zhang et al. 2015). ...
Geometric modelling from point cloud data is a fundamental step of the digital twinning process for rail infrastructure. Currently, this onerous procedure outweighs the anticipated benefits of the resulting model and expends 74% of the modellers’ effort on converting point cloud data to a model. This is particularly true for rail infrastructure, because railway digital twin generation could be an efficient means for maintaining and retrofitting the fundamental mode of the nation’s transport. Recent statistics illustrate that the UK increased its expenditure on railway maintenance by £5.4 billion per year. This expenditure could rise by up to £7.2 billion per year per incident for railway closures due to unplanned maintenance. Better documentation of current conditions enables tracking and detecting defects in existing railways, potentially avoiding irreversible harm without impeding the national economy. This explains why there is a huge market demand for less labour-intensive railway maintenance techniques that can efficiently boost railway operations and productivity. Railway geometric modelling in the literature has focused mostly on rail detection using laser scanner profile information, with state-of-the-art methods achieving a 99% F1 score for rail detection. However, the existing methods cannot offer the large-scale digital twinning required over kilometres without forfeiting precision and manual cost. They only work well in much shorter track segments (on average 300 m) or simplified cases (without complex geometries such as bridges, tunnels and crossings). This is because the search space for railway elements is much longer in multi-kilometre segments and contains thousands of element types that are relatively small and thin compared to the asset as a whole. Besides, real-world railway point clouds that stretch over kilometres on the ground often pose huge challenges, such as occlusions caused by vegetation around the track and unevenly distributed points.
Varying horizontal and vertical elevations and cross-sections define the railway geometries. These characteristics complicate the modelling, which is why none of the existing methods can manage them reliably. The aim of this PhD research is to devise, implement and benchmark a novel framework that can accurately generate individually labelled geometric objects of existing rail infrastructure comprising railway track structure and overhead catenary system elements in an established data format [i.e. Industry Foundation Classes (IFC)]. The research provided in this thesis first automatically and effectively segments labelled point clusters of railway elements in point cloud data. It then automatically reconstructs the 3D geometry of the segmented point clusters in IFC format to achieve this objective. The author formed five research questions to answer the segmentation task: (1) how to automatically remove vegetation and other noise surrounding railways without using any additional prior information such as neighbourhood structures, scanning geometries and intensities of the input data? (2) how to automatically segment masts in the form of point clusters by differentiating masts from other pole-like objects in imperfect railway Point Cloud Datasets (PCDs) where occlusions, data gaps and varying point densities exist? (3) how to automatically segment Overhead Line Equipment (OLE) elements in the form of point clusters from real-world railway PCDs with complex railway geometries while occlusions, data gaps and varying point densities are present? (4) how to automatically segment railway track structure elements in the form of point clusters from real-world railway PCDs with varying horizontal and vertical elevations and complex railway geometries? and (5) how to automatically separate rails from other linear elements adjacent to the railway corridor without relying on prior knowledge such as scanning geometry?
The research presented in this thesis exploits standard railway design guidelines to automatically reconstruct the 3D geometry of the segmented point clusters as 3D solid models. The author formed two further research questions to answer the 3D solid model generation task: (1) how to automatically reconstruct labelled point clusters into 3D IFC objects for the railway domain? and (2) how to evaluate the accuracy of a railway GDT reconstructed from a PCD? Railways are a linear asset type; their geometric relations stay roughly unchanged, often over very long distances. The proposed framework uses the knowledge of the highly regulated and standardised railway topology and railway design engineering knowledge to segment and model railway elements in point clouds. This framework directly segments railway track structure and overhead catenary system elements and then models them without generating low-level shape primitives and without using any prior information such as user inputs, intensities of input data and scanning geometries. Experiments reveal that the proposed framework can perform quickly and reliably with complex and incomplete real-world railway PCDs featuring occlusions, extreme vegetation around the track, and locally variable point densities. Experiments on 18 km of railway PCDs yield an average segmentation F1 score of 88% and an average modelling accuracy below 6 cm Root Mean Square Error (RMSE). The proposed framework can realise an estimated time saving of 94% on average compared to the current manual geometric twinning practice. The proposed framework is the first of its kind to achieve such high and reliable performance of geometric digital twin generation of existing rail infrastructure. Contributions. This PhD research provides the unprecedented ability to rapidly and intelligently model geometric railway track structure and overhead catenary system elements based on quantitative measurements.
This is a huge leap over the current practice and a significant step towards the automated generation of Railway Digital Twins.
... For detecting blunders, the random sample consensus (RANSAC) [218] algorithm was created. Another focus of advances was thematic information extraction [219]. In this respect, classifiers such as support vector machines (SVM) [220] and random forests (RF) [221] have been actively applied. ...
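The RANSAC idea mentioned above — fitting a model to minimal random samples and treating points far from the consensus model as blunders — can be sketched for plane detection in a point cloud. Iteration count and inlier tolerance are illustrative:

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, gen=np.random.default_rng(0)):
    """RANSAC: repeatedly fit a plane through 3 random points and keep
    the plane with the most inliers; points outside the tolerance of
    the best plane are blunder candidates."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = pts[gen.choice(len(pts), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((pts - p0) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# 100 points on the plane z = 0 plus 5 gross outliers well above it
gen = np.random.default_rng(1)
plane = np.c_[gen.uniform(0, 10, (100, 2)), np.zeros(100)]
outliers = gen.uniform(0, 10, (5, 3)) + [0, 0, 5]
pts = np.vstack([plane, outliers])
inliers = ransac_plane(pts)
```

All 100 plane points end up as inliers, while the 5 elevated points are flagged as blunders.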
Conventionally, land administration—incorporating cadastres and land registration—uses ground-based survey methods. This approach can be traced over millennia. The application of photogrammetry and remote sensing is understood to be far more contemporary, only commencing deeper into the 20th century. This paper seeks to counter this view, contending that these methods are far from recent additions to land administration: successful application dates back much earlier, often complementing ground-based methods. Using now more accessible historical works, made available through archive digitisation, this paper presents an enriched and more complete synthesis of the developments of photogrammetric methods and remote sensing applied to the domain of land administration. Developments from early phototopography and aerial surveys, through to analytical photogrammetric methods, the emergence of satellite remote sensing, digital cameras, and latterly lidar surveys, UAVs, and feature extraction are covered. The synthesis illustrates how debates over the benefits of the technique are hardly new. Neither are well-meaning, although oft-flawed, comparative analyses on criteria relating to time, cost, coverage, and quality. Apart from providing this more holistic view and a timely reminder of previous work, this paper brings contemporary practical value in further demonstrating to land administration practitioners that remote sensing for data capture, and subsequent map production, are an entirely legitimate, if not essential, part of the domain. Contemporary arguments that the tools and approaches do not bring adequate accuracy for land administration purposes are easily countered by the weight of evidence. Indeed, these arguments may be considered to undermine the pragmatism inherent to the surveying discipline, traditionally an essential characteristic of the profession. 
That said, it is left to land administration practitioners to determine the relevance of these methods for any specific country context.
... In the last decades, it has been intensively studied and a large number of different approaches have been reported. Overviews of approaches before 2010 are given in (Brenner, 2005; Schnabel et al., 2008; Vosselman, 2009). ...
We propose a pipeline for the detection as well as modeling of individual buildings based on multi-source images. It allows to consistently reconstruct whole buildings at Level of Detail 3 (LoD3): the roof from airborne images and the facades including elements such as windows and doors mainly from terrestrial images. We employ a parametrized top-down model – the “shell model” – with the roof as well as the facades semantically and geometrically integrated. This generative model fosters stability for building detection by enabling the use of multi-source data and offers flexibility in modeling by means of a fully CAD-compatible integration of building components. Experiments performed on imagery from different terrestrial and airborne (Unmanned Aerial Vehicle – UAV) cameras demonstrate the potential of the approach.
... The paper by Vosselman [3], titled Advanced Point Cloud Processing, presents examples of how point cloud segmentations can be used to automatically extract geo-information. The paper focusses on the extraction of man-made objects in the urban environment. ...
... In points-based methods, an algorithm directly applies the raw points and their available information to extract road markings. For instance, Vosselman (2009) extracted road marking points with range-dependent thresholds and grouped them by connected component analysis. The road markings were then extracted by fitting predefined shapes to the grouped segments. Yang and Dong (2013) used support vector machine (SVM) classification techniques to classify point clouds according to geometric features computed for each point in a specified neighborhood. ...
Mobile LiDAR systems (MLS) are rapid and accurate technologies for acquiring three-dimensional (3D) point clouds that can be used to generate 3D models of road environments. Because manual extraction of desirable features such as road traffic signs, trees, and pavement markings from these point clouds is tedious and time-consuming, automatic information extraction of these objects is desirable. This paper proposes a novel automatic method to extract pavement lane markings (LMs) using point attributes associated with the MLS point cloud based on fuzzy inference. The proposed method begins with dividing the MLS point cloud into a number of small sections (e.g. tiles) along the route. After initial filtering of non-ground points, each section is vertically aligned. Next, a number of candidate LM areas are detected using a Hough Transform (HT) algorithm and considering a buffer area around each line. The points inside each area are divided into “probable-LM” and “non-LM” clusters. After extracting geometric and radiometric descriptors for the “probable-LM” clusters and analyzing them in a fuzzy inference system, true-LM clusters are eventually detected. Finally, the extracted points are enhanced and transformed back to their original position. The efficiency of the method was tested on two different point cloud datasets along 15.6 km and 9.5 km roadway corridors. Comparing the LMs extracted using the algorithm with the manually extracted LMs, 88% of the LM lines were successfully extracted in both datasets.
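The fuzzy-inference step described above — deciding whether a candidate cluster is a true lane marking from geometric and radiometric descriptors — can be sketched with simple membership functions. The descriptors and all thresholds below are illustrative assumptions, not the paper's rule base:

```python
import numpy as np

def lm_score(width, intensity, length):
    """Toy fuzzy scoring of a candidate lane-marking cluster: triangular
    membership for a marking-like width, ramp memberships for intensity
    and length; the conjunction (min) gives the final degree of truth."""
    def tri(x, a, b, c):                      # triangular membership
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))
    def ramp(x, a, b):                        # 0 below a, 1 above b
        return float(np.clip((x - a) / (b - a), 0.0, 1.0))
    return min(tri(width, 0.05, 0.15, 0.40),
               ramp(intensity, 50.0, 120.0),
               ramp(length, 0.5, 2.0))

good = lm_score(width=0.15, intensity=150.0, length=3.0)  # marking-like
bad = lm_score(width=1.20, intensity=150.0, length=3.0)   # far too wide
```

Clusters whose score exceeds a cutoff would be accepted as true lane markings.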
... For extracting various features, LiDAR data records a number of attributes including elevation, intensity, pulse width, multiple returns and range information (Kumar et al., 2015). The methods developed for segmenting LiDAR data are mostly based on the identification of planar surfaces and the classification of point cloud data based on its attributes (Vosselman, 2009). ...
To acquire 3D geospatial information, LiDAR technology provides a rapid, continuous and cost-effective capability. In this paper, two automated approaches for extracting building features from integrated aerial LiDAR point cloud and digital imaging datasets are proposed. The assumption of the two approaches is that the LiDAR data can be used to distinguish between high- and low-rise objects, while the multispectral dataset can be used to filter out vegetation from the data. Object-based image analysis techniques are applied to the extracted building objects. The two automated building extraction approaches are tested on a fusion of aerial LiDAR point cloud and digital imaging datasets of Istanbul city. The object-based automated technique presents better results than the threshold-based technique for the extraction of building objects in terms of visual interpretation.
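The two assumptions in the abstract above — height separates high- and low-rise objects, a vegetation index from the multispectral data removes trees — can be combined in a minimal sketch. The thresholds are illustrative, not the paper's values:

```python
import numpy as np

def building_mask(height, ndvi, min_height=2.5, ndvi_veg=0.3):
    """Keep points that are well above the ground (candidate buildings
    rather than low-rise clutter) and are not vegetation according to
    an NDVI threshold derived from the multispectral imagery."""
    return (height > min_height) & (ndvi < ndvi_veg)

height = np.array([0.2, 8.0, 9.0, 3.0])   # height above ground [m]
ndvi   = np.array([0.1, 0.1, 0.6, 0.2])   # per-point vegetation index
mask = building_mask(height, ndvi)
```

Only the high, non-vegetated points survive; object-based analysis would then refine these candidates.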
Building extraction from light detection and ranging (LiDAR) data for 3-dimensional (3D) reconstruction requires accurately classified LiDAR points. In recent years, approaches developed for this classification were mostly based on gridded LiDAR data. In the gridding process of LiDAR data, there is a characteristic point loss which results in reduced height accuracy. The effect of such loss can be eliminated by using classified raw LiDAR data. In this study, an automatic point-based classification approach for raw LiDAR data with spatial features is proposed for 3D building reconstruction. Using spatial features, hierarchical rules have been determined. The spatial features of the LiDAR points, such as height, the local environment, and multiple returns, were analyzed, and every LiDAR point was automatically assigned to a class based on these features. The proposed classification approach based on raw LiDAR data had an overall accuracy of 79.7% in the test site located in Istanbul, Turkey. Finally, 3D building reconstruction was performed using the results of the proposed automatic point-based classification approach.
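A hierarchical rule set over height, local ground and return count, as described above, might look like the following sketch. The class set and thresholds are illustrative assumptions:

```python
import numpy as np

def classify_points(z, ground_z, n_returns):
    """Hierarchical rules: points well above the local ground with a
    single return -> 'building' (solid roofs reflect once); above the
    ground with multiple returns -> 'vegetation' (canopies produce
    several echoes); everything else -> 'ground'."""
    height = z - ground_z
    labels = np.full(len(z), "ground", dtype=object)
    above = height > 2.0
    labels[above & (n_returns == 1)] = "building"
    labels[above & (n_returns > 1)] = "vegetation"
    return labels

z = np.array([0.1, 8.0, 6.0])          # point elevations [m]
ground_z = np.zeros(3)                 # local ground elevation [m]
n_returns = np.array([1, 1, 3])        # returns per emitted pulse
labels = classify_points(z, ground_z, n_returns)
```

Real rule sets add more features (planarity, echo width) but keep this cascaded if-then structure.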
We present a building reconstruction approach, which is based on a target graph matching algorithm to relate laser data with building models. Establishing this relation is important for adding building knowledge to the data. Our targets are topological representations of the most common roof structures which are stored in a database. Laser data is segmented into planar patches. Topological relations between segments, in terms of intersection lines and height jumps, are represented in a building roof graph. This graph is matched with the graphs from the database. Segments and intersection lines that do not fit to an existing target roof topology will be removed from the automated reconstruction approach. For the geometric reconstruction our approach is flexible to use information from data and/or model. For specific object parts it might be better to use model constraints as the data might not appropriately represent the object. As our approach combines data and model driven techniques, we speak of an object driven reconstruction approach. We present our algorithm using airborne laser scanner data with about 15 pts/m². Existing 2D map data with scale 1:1000 has been used for selection of building segments, for outlining flat building roofs and to reconstruct walls.
Building outlines in cadastral maps are often created from different sources such as terrestrial surveying and photogrammetric analyses. In the latter case the position of the building wall cannot be estimated correctly if a roof overhang is present. This causes an inconsistent representation of the building outlines in cadastral map data. Laser scanning can be used to correct for such estimation inconsistencies and additional occurring changes in the building shape. Nowadays, airborne (ALS) and mobile laser scanning (MLS) data for overlapping areas are available. The object representation in ALS and MLS point clouds is rather different regarding point density, representation of object details (scale), and completeness, which is caused by the different platform position, i.e. distance to the object and scan direction. These differences are analysed by developing a workflow for automatic extraction of vertical building walls from 3D laser scanning point clouds. A region growing segmentation using Hough transform derives the initial segments. These are then classified based on planarity, inclination, wall height and width. The planar position accuracy of corresponding walls and completeness of the automatically extracted vertical walls are investigated. If corresponding vertical wall segments are defined by a maximum distance of 0.1 m and maximum angle of 3° then 24 matches with a planimetric accuracy of 0.05 m RMS and 0.04 m standard deviation of the X- and Y-coordinates could be found. Finally the extracted walls are compared to building outlines of a cadastral map for map updating. The completeness of building walls in both ALS and MLS depends strongly on the relative position between sensor and object. A visibility analysis for the building façades is performed to estimate the potential completeness in the MLS data. Vertical walls in ALS data are represented as less detailed façades caused by lower point densities, which is enforced by large incidence angles.
This can be compensated by the denser MLS data if the façade is covered by the survey.
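The region growing segmentation mentioned in the abstract above can be sketched as a greedy surface-growing loop: start from a seed, repeatedly add neighbouring points consistent with the seed's plane. For simplicity the sketch grows a horizontal plane (z ≈ const) with brute-force neighbour search; the Hough-based seed selection and general plane fitting of the paper are not reproduced:

```python
import numpy as np

def grow_plane_segment(pts, seed, radius=1.5, tol=0.1):
    """Greedy surface growing: accept a neighbour of the current
    segment if it lies within `tol` of the seed's horizontal plane."""
    segment = {seed}
    frontier = [seed]
    z0 = pts[seed, 2]
    while frontier:
        i = frontier.pop()
        d = np.linalg.norm(pts - pts[i], axis=1)
        for j in np.where(d < radius)[0]:
            if j not in segment and abs(pts[j, 2] - z0) < tol:
                segment.add(j)
                frontier.append(j)
    return sorted(segment)

# a flat patch at z = 0 plus a vertical wall rising next to it
flat = np.c_[np.arange(10.0), np.zeros(10), np.zeros(10)]
wall = np.c_[np.full(5, 9.0), np.zeros(5), np.arange(1.0, 6.0)]
pts = np.vstack([flat, wall])
seg = grow_plane_segment(pts, seed=0)
```

The grown segment contains exactly the flat patch; the wall points fail the planarity test and would seed a separate (wall) segment, which is then classified by inclination, height and width.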
This paper reviews the current state of laser scanning from airborne and terrestrial platforms for geometric reconstruction of object shape and size. The current performance figures of sensor systems are presented in an overview. Next, their calibration and the orientation of the acquired point clouds are discussed. For airborne deployment this is usually one step, whereas in the terrestrial case laboratory calibration and registration of point clouds are (still) two distinct, independent steps. As laser scanning is an active measurement technology, the interaction of the emitted energy with the object surface has an influence on the range measurement. This has to be considered in order to explain geometric phenomena in the data. While the problems, e.g. multiple scattering, are understood well, there is currently a lack of remedies. Then, in analogy to the processing chain, segmentation approaches for laser scanning data are reviewed. Segmentation is a task relevant for almost all applications. Likewise, DTM (digital terrain model) reconstruction is relevant for many applications of airborne laser scanning, and is therefore discussed, too. This paper reviews the main processing steps necessary for many applications of laser scanning.
The high point densities obtained by today's airborne laser scanners enable the extraction of various features that are traditionally mapped by photogrammetry or land surveying. While significant progress has been made in the extraction of buildings and trees from dense point clouds, little research has been performed on the extraction of roads. In this paper it is analysed to what extent road sides can be mapped in point clouds of high point density (20 pts/m²). In urban areas curbstones often separate the road surface from the adjacent pavement. These curbstones are mapped in a three step procedure. First, the locations with small height jumps near the terrain surface are detected. Second, midpoints of high and low points on either side of the height jump are generated, put in a sequence, and used to fit a smooth curve. Third, small gaps between nearby and collinear line segments are closed. GPS measurements were taken to analyse the performance of the road side detection. The analysis showed that the completeness varied between 50 and 86%, depending on the amount of parked cars occluding the curbstones. The RMSE in the comparison with the GPS measurements was 0.18 m.
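The first two steps of the curbstone procedure above — detect small height jumps near the terrain, then generate midpoints of the high and low points on either side — can be sketched on a single cross-road profile. The jump bounds are illustrative assumptions:

```python
import numpy as np

def curb_midpoints(profile, min_jump=0.05, max_jump=0.30):
    """Find curb-sized height jumps between consecutive profile points
    (too small = noise, too large = a wall) and return the midpoint of
    the low and high point at each jump."""
    mids = []
    for p, q in zip(profile[:-1], profile[1:]):
        dz = abs(q[1] - p[1])
        if min_jump < dz < max_jump:          # curb-sized step only
            mids.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
    return mids

# profile across a road: road at 0 m, a 0.15 m curb, pavement at 0.15 m
profile = np.array([[0.0, 0.00], [1.0, 0.00], [2.0, 0.00],
                    [2.2, 0.15], [3.0, 0.15], [4.0, 0.15]])
mids = curb_midpoints(profile)
```

The third step would chain such midpoints along the road, fit a smooth curve and close small gaps left by parked cars.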
Airborne laser altimetry has become a very popular technique for the acquisition of digital elevation models. The high point density that can be achieved with this technique enables applications of laser data for many other purposes. This paper deals with the construction of 3D models of the urban environment. A three-dimensional version of the well-known Hough transform is used for the extraction of planar faces from the irregularly distributed point clouds. To support the 3D reconstruction, use is made of available ground plans of the buildings. Two different strategies are explored to reconstruct building models from the detected planar faces and segmented ground plans. Whereas the first strategy tries to detect intersection lines and height jump edges, the second one assumes that all detected planar faces should model some part of the building. Experiments show that the second strategy is able to reconstruct more buildings and more details of these buildings, but that it sometimes leads to additional parts of the model that do not exist. When restricted to buildings with rectangular segments of the ground plan, the second strategy was able to reconstruct 83 buildings out of a dataset with 94 buildings.
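The three-dimensional Hough transform mentioned above can be sketched as voting over a coarse grid of plane parameters (two normal direction angles plus a distance). Grid resolutions below are illustrative, and real implementations use finer accumulators and peak extraction:

```python
import numpy as np

def strongest_plane(pts, n_ang=18, rho_res=0.2):
    """3D Hough transform sketch: each point votes for planes
    rho = x*sin(p)*cos(t) + y*sin(p)*sin(t) + z*cos(p) over a coarse
    grid of normal directions (t, p); the fullest cell wins."""
    best_votes, best = 0, None
    for t in np.linspace(0.0, np.pi, n_ang, endpoint=False):
        for p in np.linspace(0.0, np.pi, n_ang, endpoint=False):
            n = np.array([np.sin(p) * np.cos(t),
                          np.sin(p) * np.sin(t),
                          np.cos(p)])
            bins = np.round(pts @ n / rho_res).astype(int)
            vals, counts = np.unique(bins, return_counts=True)
            if counts.max() > best_votes:
                best_votes = int(counts.max())
                best = (float(t), float(p),
                        float(vals[counts.argmax()] * rho_res))
    return best_votes, best

# 200 points on the horizontal roof plane z = 1
gen = np.random.default_rng(0)
pts = np.c_[gen.uniform(0.0, 10.0, (200, 2)), np.full(200, 1.0)]
votes, (t, p, rho) = strongest_plane(pts)
```

All 200 points fall into the cell of the horizontal plane at distance 1, so the dominant roof face is recovered directly from the accumulator peak.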
Airborne laser scanning (ALS) is an active remote sensing technique providing range data as 3D point clouds. This paper aims at presenting a survey of the literature related to such techniques, with emphasis on the new sensors called full-waveform lidar systems. Indeed, an emitted laser pulse interacts with complex natural and man-made objects leading to a temporal distortion of the returned energy profile. The new technology of full-waveform laser scanning systems permits one to digitize the complete waveform of each backscattered pulse. Full-waveform lidar data give more control to an end user in the interpretation process of the physical measurement and provide additional information about the structure and the physical backscattering characteristics of the illuminated surfaces. In this paper, the theoretical principles of full-waveform airborne laser scanning are first described. Afterwards, a review of the main sensors as well as signal processing techniques are presented. We then discuss the interpretation of full-waveform measures with special interest on vegetated and urban areas.
Laser scanners are used more and more as surveying instruments for various applications. With the advance of high-precision systems capable of working in most real-world environments under a variety of conditions, numerous applications have opened up. In the field of surveying, laser scanners open up a new dimension in data capture. Different industrial sectors require precise data of the environment in order to have as-built documentation of their facilities. Especially the as-built documentation of plants (automotive, chemical, pharmaceutical, etc.) has become a very sensitive and important new segment, as companies need to document their facilities. This is a basic requirement for planning and evaluating emergency situations (evacuation scenarios, etc.), but also for simulation of specific manufacturing cycles (car assembly, etc.) as well as design studies. Having the environment in 3D as a CAD model ("digital factory") opens up design studies without changing anything in the real environment, and therefore causes no downtime of production lines. This paper reports on several systems and physical technologies used for measuring distances and presents several products available in this area. Furthermore, it presents technical specifications of different systems and concludes with a comparison of achievable results.
This paper outlines a study, carried out on behalf of a national mapping agency, to validate laser scanned point cloud data collected by a ground-based mobile mapping system. As the need for detailed three-dimensional data about our environment continues to grow, ground-based mobile systems are likely to find an increasingly important niche in national mapping agency applications. For example, such systems potentially provide the most efficient data capture for numerical modelling and/or visualisation in support of decision making, filling a void between static terrestrial and mobile airborne laser scanning. This study sought to assess the precision and accuracy of data collected using the StreetMapper system across two test sites: a peri-urban residential housing estate with low density housing and wide streets, and a former industrial area consisting of narrow streets and tall warehouses. An estimate of system precision in both test sites was made using repeated data collection passes, indicating a measurement precision (95%) of between 0.029 m and 0.031 m had been achieved in elevation. Elevation measurement accuracy was assessed against check points collected using conventional surveying techniques at the same time as the laser scanning survey, finding RMS errors in elevation in the order of 0.03 m. Planimetric accuracy was also assessed, with results indicating an accuracy of approximately 0.10 m, although difficulties in reliably assessing planimetric accuracy were encountered. The results of this validation were compared against a theoretical error pre-analysis which was also used to show the relative components of error within the system. Finally, recommendations for future validation methodologies are outlined and possible applications of the system are briefly discussed.
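The accuracy assessment above boils down to an RMS error between scanned elevations and surveyed check points; a minimal sketch, with synthetic numbers chosen to land near the reported ~0.03 m level:

```python
import numpy as np

def elevation_rmse(measured, reference):
    """RMS error of scanned elevations against surveyed check points."""
    diff = np.asarray(measured) - np.asarray(reference)
    return float(np.sqrt(np.mean(diff ** 2)))

measured  = np.array([10.03, 12.01, 9.97, 11.02])   # scanned z [m]
reference = np.array([10.00, 12.00, 10.00, 11.00])  # surveyed z [m]
rmse = elevation_rmse(measured, reference)
```

Precision, by contrast, is estimated from the spread between repeated passes over the same surface rather than against an external reference.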
This paper presents an automatic method for reconstruction of building façade models from terrestrial laser scanning data. Important façade elements such as walls and roofs are distinguished as features. Knowledge about the features’ sizes, positions, orientations, and topology is then introduced to recognize these features in a segmented laser point cloud. An outline polygon of each feature is generated by least squares fitting, convex hull fitting or concave polygon fitting, according to the size of the feature. Knowledge is used again to hypothesise the occluded parts from the directly extracted feature polygons. Finally, a polyhedron building model is combined from extracted feature polygons and hypothesised parts. The reconstruction method is tested with two data sets containing various building shapes.
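The convex hull fitting used above to outline medium-sized façade features can be sketched with Andrew's monotone-chain algorithm on the feature's points projected into the façade plane:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull: returns the hull vertices
    of a set of 2D points in counter-clockwise order, which serves as
    the outline polygon of a projected feature segment."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):      # z-component of (a-o) x (b-o)
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:            # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# a square feature with two interior points that must not appear
feature_pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
hull = convex_hull(feature_pts)
```

For concave features (e.g. L-shaped wall segments), the paper switches to concave polygon fitting instead, since a convex hull would bridge the notch.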