Abstract
Light Detection and Ranging (LiDAR) has become a valuable data source for urban data acquisition. This paper gives an overview of current trends in the automation of object extraction from LiDAR data. These trends are driven by the technical development of LiDAR sensors, which enable the acquisition of point clouds at higher resolution as well as the recording of the full waveform of the returned signal, and by the adoption of processing techniques from the Computer Vision and Pattern Recognition communities. Triggered by these developments, new applications are being found for LiDAR data.
... Therefore, the fusion of LiDAR data with aerial images has become essential, as it overcomes the aforementioned shortcomings of aerial images (Zhu et al. 2009; Clode et al. 2004; Hu et al. 2004). With LiDAR data, the known elevations can be used to efficiently discriminate between roads and other aboveground objects with similar spectra, such as buildings (Poullis and You 2010; Rottensteiner 2010). On the other hand, LiDAR intensity and aerial images allow roads to be distinguished from other bare land and grasslands, which have similar elevation (Gong et al. 2010). ...
Chapter 5 presents an accurate approach for the extraction of highway information from remote sensing data. This is significant for various applications such as traffic accident modeling, navigation, intelligent transportation systems, and natural hazard assessments. One of the conventional techniques for automatic highway extraction is machine learning. Although several machine learning algorithms have been tested in recent years, there is no agreement on which method performs better and is spatially transferable. Therefore, this paper contributes by evaluating several machine learning algorithms (i.e. support vector machine, logistic regression, neural network, and decision tree) for automatic highway extraction from high-resolution airborne LiDAR data. Based on the comparative study performed, the best of the studied machine learning algorithms was identified and used in an integrated GIS workflow for automatic highway extraction. The integrated GIS workflow is an efficient model that could be applied in most commercial and open-source GIS software. Among the studied machine learning algorithms, the multilayer perceptron and decision tree algorithms showed the best overall accuracy when tested on randomly selected sampling data. However, when the transferability of the models was investigated, logistic regression was found to be the optimal algorithm for highway extraction from LiDAR data. In addition, although the support vector machine produced a high overall accuracy (90.19%) on the sampling data, the model produced a low-quality classification when applied to raster data; it thus suffers from model transferability issues. The quantitative evaluation showed that the logistic regression model could extract highway features from LiDAR data with 85.43% completeness, 76.70% correctness, and 67.82% quality. The results of this study provide a clear guideline for other researchers to develop more advanced, automatic GIS models for the accurate extraction of highways from LiDAR data.
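The completeness, correctness, and quality figures quoted above are standard measures for evaluating road extraction against a reference. A minimal sketch of how they are computed (the function name is ours):

```python
def extraction_metrics(tp, fp, fn):
    """Road-extraction quality measures from true positives (extracted
    and in the reference), false positives (extracted but not in the
    reference) and false negatives (missed reference road)."""
    completeness = tp / (tp + fn)       # fraction of reference road recovered
    correctness = tp / (tp + fp)        # fraction of extracted road that is real
    quality = tp / (tp + fp + fn)       # combined measure, always the lowest
    return completeness, correctness, quality

# e.g. 60 correctly extracted road cells, 20 false alarms, 20 misses
print(extraction_metrics(60, 20, 20))   # → (0.75, 0.75, 0.6)
```

Because quality penalizes both false alarms and misses, it is never higher than either of the other two measures, which matches the ordering of the percentages reported above.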
Future prediction is one of the most fascinating topics of human endeavor and is a vital tool in transportation management. Understanding a whole transportation network is much more difficult than understanding a single road. The main purpose of this effort is to provide better routes with a high safety level and to support traffic managers in managing the road network efficiently.
Automatic extraction of highways from airborne LiDAR (light detection and ranging) data has been a long-standing active research topic in remote sensing (Idrees and Pradhan 2016, 2018; Idrees et al. 2016; Fanos et al. 2016, 2018; Fanos and Pradhan 3; Abdulwahid and Pradhan 2017; Pradhan et al. 2016; Sameen et al. 2017; Sameen and Pradhan 2017a, b). Accurate and computationally useful extraction of highway information from remote sensing data is significant for various applications such as traffic accident modeling (Bentaleb et al. 2014), navigation (Kim et al. 2006), intelligent transportation systems (Vaa et al. 2007), and natural hazard assessments (Jebur et al. 2014).
... Rutzinger et al. (2009) used a Hough transform segmentation algorithm for extracting walls from both airborne and terrestrial mobile LIDAR data and compared the accuracy of the two datasets. Currently, mobile LIDAR data are widely used for vertical wall reconstruction (Hammoudi et al., 2010; Rottensteiner, 2010). ...
Automated 3D building model generation continues to attract research interest in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data, with increasing point density and accuracy, has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out on the modelling and reconstruction of walls, which constitute important components of a full building model. The low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometric and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined from both the reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent, geometrically accurate and complete 3D building models. Experiments were conducted at two test sites, in the Netherlands and Australia, to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted at the two test sites, which have different point densities, and that building walls can be correctly reconstructed if the walls are illuminated by the LIDAR sensor.
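The principal-component-based seed selection and planar growing step described above can be sketched in a few lines; the neighbourhood size and distance tolerance below are illustrative values, not those of the paper:

```python
import numpy as np

def plane_fit(pts):
    """Principal component analysis of a local neighbourhood: the
    eigenvector of the smallest covariance eigenvalue is the plane
    normal, and that eigenvalue measures planarity (near zero = planar)."""
    c = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - c).T))
    return c, eigvecs[:, 0], eigvals[0]   # centroid, normal, residual variance

def grow_planar_segment(points, seed_idx, k=10, dist_tol=0.05):
    """One growing step: fit a plane to the k nearest neighbours of the
    seed point, then collect every point within dist_tol of that plane."""
    d = np.linalg.norm(points - points[seed_idx], axis=1)
    neighbours = np.argsort(d)[:k]
    c, normal, _ = plane_fit(points[neighbours])
    dist = np.abs((points - c) @ normal)
    return np.where(dist < dist_tol)[0]
```

A full region-growing implementation would additionally re-estimate the plane as the segment grows and restrict candidates to a spatial neighbourhood; this sketch shows only the PCA seed and distance test.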
... Adapting machine learning methods from pattern recognition and computer vision is a trend in the field of land cover classification in urban areas (Rottensteiner, 2010;Vatsavai et al., 2011). ...
... Scientists have developed many approaches which attempt to delineate and classify objects from 3D point clouds with the use of various segmentation based methodologies [1,5,6,9,10,12,16,17,18,19,22,23,24,25,26,27,28,30,31,32,33,34,35]. ...
... In a similar development, progress in Light Detection and Ranging (LiDAR) sensor technology for dense 3D data collection allows the geometrical properties and surface roughness of both natural and man-made objects to be obtained (Guo et al., 2011; Jiangui and Guang, 2011). In fact, LiDAR data and its derived products have become an accepted component of national geospatial databases, much as orthophotos entered geodatabases in the 1990s (Rottensteiner, 2010). The use of LiDAR data has therefore enhanced the generation of accurate and up-to-date data for urban landscapes, infrastructure inventory and vegetation monitoring (Edson and Wing, 2011), which has been the aim of remote sensing professionals. ...
This study presents our findings on the fusion of Imaging Spectroscopy (IS) and LiDAR data for urban feature extraction. We carried out the necessary preprocessing of the hyperspectral image. The Minimum Noise Fraction (MNF) transform was used to order the hyperspectral bands according to their noise. Thereafter, we employed the Optimum Index Factor (OIF) to statistically select the most appropriate three-band combination from the MNF result. The composite image was classified using unsupervised classification (k-means algorithm) and the accuracy of the classification was assessed. A Digital Surface Model (DSM) and LiDAR intensity were generated from the LiDAR point cloud, and the LiDAR intensity was filtered to remove noise. A Hue Saturation Intensity (HSI) fusion algorithm was used to fuse the imaging spectroscopy with the DSM, as well as the imaging spectroscopy with the filtered intensity. Quantitatively, the fusion of imaging spectroscopy and DSM was found to be better than that of imaging spectroscopy and LiDAR intensity. The three datasets (imaging spectroscopy, DSM-fused and LiDAR-intensity-fused data) were classified into four classes (building, pavement, trees and grass) using unsupervised classification, and the accuracy of the classification was assessed. The results show that the fusion of imaging spectroscopy and LiDAR data improved the visual identification of surface features. The classification accuracy also improved, from an overall accuracy of 84.6% for the imaging spectroscopy data to 90.2% for the DSM-fused data; similarly, the Kappa coefficient increased from 0.71 to 0.82. On the other hand, classification of the fused LiDAR intensity and imaging spectroscopy data performed poorly quantitatively, with an overall accuracy of 27.8% and a Kappa coefficient of 0.0988.
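The intensity-substitution idea behind HSI fusion can be sketched per pixel. Python's standard-library HSV transform is used here as a stand-in for the HSI transform named in the text, and the function name is ours:

```python
import colorsys

def intensity_substitute(rgb, pan):
    """Fuse a 3-band composite pixel with a co-registered single band
    (e.g. a DSM or filtered LiDAR intensity rescaled to [0, 1]) by
    replacing the value channel; hue and saturation of the composite
    are preserved, so spectral colour information survives the fusion."""
    h, s, _ = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, pan)

# substitute a DSM value of 0.9 into a composite pixel
fused = intensity_substitute((0.2, 0.4, 0.6), 0.9)
```

Applied band-wise over the whole raster, this yields a fused image whose brightness carries the elevation (or intensity) information while the colour still reflects the spectral composite.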
Over the past two and a half decades, urban growth has been the subject of numerous studies, mostly through the use of remote sensing technology. Although many cities, large and small, have been targeted, Beijing, as China's capital city, has probably been researched more frequently than any other metropolis in the world. This chapter aims to examine some major advances in remote sensing-based urban growth studies with Beijing as the focus. For this purpose, we surveyed peer-reviewed English literature, paying attention to journal articles reporting on the subject. Specifically, we examined progress on several issues related to research design and implementation, namely spatial extent or temporal scale, data sources, and quantified dimensions. Based on the literature review, we further identified several major challenges and discussed some future research directions. We believe our longitudinal study, focusing on major English literature examining the urbanization pattern of Beijing through remote sensing, can not only support better research design but also assist in formulating effective strategies and policies to address major challenges to ecological sustainability in large metropolises.
This paper proposes a novel algorithm for road detection from dense Light Detection and Ranging (LiDAR) data based on the local and global information of point clouds. First, ground points and non-ground points are separated by a filtering algorithm. Then, road candidates are identified among the ground points by segmentation based on a local intensity distribution histogram, which exploits the homogeneity and consistency of roads. Finally, the ultimate road points are verified by global inference based on the area of the road candidate point sets, which removes small sets. Practical data from a complex environment are used to test the algorithm. The experimental results show that the algorithm detects roads automatically and efficiently and is highly robust to complex roads and environments.
Optical imagery and Light Detection And Ranging (LiDAR) point clouds are two major data sources in the photogrammetry and remote sensing community. Optical images and LiDAR data have unique characteristics that make them preferable in certain applications. On the other hand, the disadvantage of one data source may be compensated by an advantage of the other. Hence, data fusion is a prerequisite for utilising the complementary characteristics of both data sources. Numerous methods have been proposed to perform such fusion in various applications. This article makes a systematic review of state-of-the-art fusion methodology used in applications such as registration, generation of true orthophotographs, pan-sharpening, classification, recognition of key targets, three-dimensional reconstruction, change detection and forest inventory. Moreover, future development trends are introduced. In the coming years, we expect that the fusion of optical images and LiDAR point clouds will promote the development of both photogrammetry and laser scanning, in industry as well as scientific research.
This paper presents a novel algorithm for road detection from airborne LiDAR point clouds that adapts to the variability of the intensity data of the road network. First, the point cloud topology is constructed using a grid index structure, which facilitates spatial searching and preserves the accuracy of the raw data without interpolation, and a LiDAR filtering algorithm is employed to distinguish ground points from non-ground points. Second, road candidates are identified among the derived ground points by segmentation based on a local intensity distribution histogram. Finally, the ultimate road point sets are verified by global inference based on the roughness and area of the road candidate point sets. The roughness of the candidate point sets is calculated from morphological gradients, in consideration of the characteristics of roads compared with other non-road ground areas such as grassland and bare ground. The experimental results using practical data from a complex environment demonstrate that the algorithm automatically detects roads while adapting to the variability of the intensity data of the road network. Other non-road ground areas such as grassland and bare ground are efficiently eliminated.
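The grid index over raw point positions described above can be sketched as a hash from cell coordinates to point ids; this is a minimal illustration of the idea, not the paper's data structure:

```python
import math

def build_grid_index(points_xy, cell=1.0):
    """Map each 2-D point to its grid cell so that neighbourhood queries
    touch only a few cells; the raw coordinates are kept as-is, so no
    accuracy is lost to interpolation or rasterization."""
    index = {}
    for i, (x, y) in enumerate(points_xy):
        key = (math.floor(x / cell), math.floor(y / cell))
        index.setdefault(key, []).append(i)
    return index

def query_cell(index, x, y, cell=1.0):
    """Return the ids of all points falling in the cell containing (x, y)."""
    return index.get((math.floor(x / cell), math.floor(y / cell)), [])
```

A k-nearest or radius query would scan the cell containing the query point plus its ring of neighbouring cells, which keeps look-ups local regardless of the total cloud size.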
Traffic islands play a major role in transport studies by affecting traffic behavior, safety, air pollution, and transport decision support. Point data obtained by laser scanning enable the determination of their locations. Planimetric errors, vertical errors, and limited point spacing, however, affect their spatial data quality (SDQ). In this study, we defined uncertainty as the lack of accuracy and analyzed its importance by modeling each traffic island as a random set. The covering functions of the point data and their intermediate locations were determined by point segmentation, followed by interpolation. In this way, traffic islands were delineated from the background with a transition zone. The study showed that point spacing makes the largest contribution to the positional accuracy of a traffic island. The area of the transition zone has a linear relation with the planimetric errors, whereas the influence of the vertical errors on the accuracy decreases with increasing point spacing. Experiments were conducted to investigate the influence of the parameters in an SDQ analysis. The study demonstrated how different sources of uncertainty can be integrated, and the results showed the advantages of using random sets for SDQ modelling. We conclude that modelling traffic islands as random sets provides meaningful information for integrating uncertainties.
This report presents the results and conclusions of the EuroSDR project "Radiometric aspects of digital photogrammetric images", carried out during 2008-2011. The project was a Europe-wide, multi-site research project whose participants represented stakeholders of photogrammetric data in National Mapping Agencies, software development and research. The project began with a review phase, which consisted of a literature review and a questionnaire sent to stakeholders of photogrammetric data. The review indicated excellent radiometric potential of the novel imaging systems, but also revealed many shortcomings in the radiometric processing lines. The second phase was an empirical investigation, for which radiometrically controlled flight campaigns were carried out in Finland and Spain using the Leica Geosystems ADS40 and Intergraph DMC large-format photogrammetric cameras. The investigations considered vicarious radiometric calibration and validation of sensors, spatial resolution assessment, radiometric processing of photogrammetric image blocks, and practical applications. The results proved the stability and quality of the evaluated imaging systems with respect to radiometry and the optical system. The first new-generation methods for reflectance image production and equalization of photogrammetric image blocks provided promising results and were also functional from the productivity and usability points of view. For reflectance images, an accuracy of up to 5% was obtained without the need for ground reference measurements. Application-oriented results indicated that automatic interpretation methods will benefit from the optimal use of radiometrically accurate stereoscopic photogrammetric imagery. Many improvements to the processing chains are still needed in order to take full advantage of the radiometric potential of photogrammetric sensors.

At the time of the project, quantitative radiometric processing in photogrammetric processing lines was not yet a mature technology. Operational applications used qualitative and statistical methods for assessing and processing the radiometry, and the output image products were mainly used in visual interpretation. The major emphasis of this investigation was to consider the radiometry from a quantitative point of view. This report summarizes many perspectives on radiometric processing, and all the evaluated methods can be further developed and implemented as automated tools in the modern photogrammetric processes of National Mapping Agencies in the future.
This paper gives an overview of advanced techniques for classification and object detection that are being adopted for urban object detection from LiDAR data. It covers local supervised classifiers such as AdaBoost, SVM and Random Forests; statistical models of context such as Markov Random Fields and Conditional Random Fields; and sampling techniques. The relevance of features is also discussed. Applications include DTM generation and the extraction of buildings, trees, and low vegetation.
This study presents two methods used to measure the accuracy of the height component of Airborne Laser Scanning (ALS) data. The objectives are to assess the accuracy of LiDAR data, to find the correlation between the actual and sensor-recorded heights, and to explore the effectiveness of a linear regression model for accuracy assessment. Field observation was carried out with a Total Station as reference data, and the corresponding data were obtained from a normalized digital surface model (n-DSM). First, a statistical method was used to obtain a Root Mean Square Error (RMSE) of 0.607 and a linear accuracy of 1.18948 at the 95% confidence level. Similarly, the linear regression function yielded an RMSE of 0.5073 and a linear accuracy of 1.10999. The study shows that the ALS-recorded height is reliable for 3D urban mapping. A resulting correlation coefficient of 0.9919 indicates very good agreement between the sensor-recorded height and the actual height of the object (R2 = 0.9839; p < 2.2e-16). The study indicates that the linear regression model is effective for assessing the accuracy of ALS data.
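The two accuracy figures quoted above are related by the standard 95% confidence factor: linear (vertical) accuracy ≈ 1.96 × RMSE, which reproduces the 1.189 value from the 0.607 RMSE. A short sketch (function names are ours):

```python
import math

def rmse(reference, measured):
    """Root Mean Square Error between reference heights (e.g. Total
    Station) and sensor-derived heights (e.g. from an n-DSM)."""
    n = len(reference)
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured)) / n)

def vertical_accuracy_95(rmse_z):
    """NSSDA-style linear accuracy at the 95 % confidence level,
    assuming normally distributed vertical errors."""
    return 1.96 * rmse_z

print(round(vertical_accuracy_95(0.607), 3))   # → 1.19, matching the quoted 1.18948
```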
The objective of the EuroSDR project "Mobile Mapping - Road Environment Mapping using Mobile Laser Scanning" was to evaluate the quality of mobile laser scanning systems and methods, with special focus on accuracy and feasibility. Mobile laser scanning (MLS) systems can collect high-density point cloud data with high accuracy.

A permanent test field established in the project is well suited for verifying and comparing the performance of different mobile laser scanning systems. The test field was measured with several commercial and research systems, such as the RIEGL VMX-250, the Streetmapper 360, the Optech Lynx of TerraTec AS, the FGI Roamer and the FGI Sensei. A geodetic network of terrestrial laser scans was used as the reference for the quality analysis. Each system provided the data using its system-specific pre-processing standards. The system comparison, focusing on planimetric and elevation errors using a filtered DEM, poles and building corners as reference objects, revealed the high quality of the point clouds generated by all systems under good GNSS conditions. With all professional systems properly calibrated, the elevation accuracy was better than 3.5 cm up to a distance of 35 m. The best system had 2.5 cm planimetric accuracy even at a range of 45 m. The planimetric errors increase as a function of range, but only moderately if the system was properly calibrated. The main focus of MLS development in the near future should be on improving the trajectory solution, especially under non-ideal conditions, using both improved hardware (additional sensors) and software solutions (post-processing).

The benchmarking of algorithms did not collect a high number of inputs. The results obtained by ITC and FGI can be used to assess the present state of the art in point cloud processing of MLS data. Currently, a level of 80-90% correct detections can be obtained for object recognition (for those objects most feasible for MLS data), and the rest needs to be handled in an interactive editing process. Since the goal of MLS processing is high automation supported by a minimum amount of manual work, in order to create accurate 3D models of roadsides and cities, a significant contribution from future research is still needed.

The Finnish Geodetic Institute's Mobile Mapping research group (www.fgi.fi/mobimap) will continue the test site development for MLS, the benchmarking of MLS methods, and putting MLS data to public use in the future.
We introduce and test the performance of two sampling methods that utilize distance distributions of laser point clouds in terrestrial and mobile laser scanning geometries. The methods are leveled histogram sampling and inversely weighted distance sampling. The methods aim to reduce a significant portion of the laser point cloud data while retaining most characteristics of the full point cloud. We test the methods in three case studies in which data were collected using a different terrestrial or mobile laser scanning system in each. Two reference methods, uniform sampling and linear point picking, were used for result comparison. The results demonstrate that correctly selected distance-sensitive sampling techniques allow higher point removal than the references in all the tested case studies.
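The inversely weighted distance idea can be sketched as range-proportional thinning: near-field points, which are heavily over-sampled in terrestrial and mobile scanning geometries, are discarded more aggressively than sparse far-field points. The keep rule below is our simplification for illustration, not the paper's exact estimator:

```python
import random

def range_weighted_thin(ranges, target_fraction=0.5, seed=42):
    """Keep each point with probability proportional to its scanner range,
    so the dense near field is thinned hardest while far-field points,
    which carry most of the remaining coverage, are mostly retained.
    Returns the indices of the kept points."""
    rng = random.Random(seed)
    mean_r = sum(ranges) / len(ranges)
    kept = []
    for i, r in enumerate(ranges):
        p = min(1.0, target_fraction * r / mean_r)
        if rng.random() < p:
            kept.append(i)
    return kept
```

With this rule the expected kept fraction stays close to `target_fraction` while the retained density becomes far more uniform over range, which is the property the sampling methods above aim for.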
A method for the automatic detection and vectorization of roads from lidar data is presented. To extract roads from a lidar point cloud, a hierarchical classification technique is used to classify the lidar points progressively into road and non-road points. During the classification process, both intensity and height values are initially used. Due to the homogeneous and consistent nature of roads, a local point density is then introduced to finalize the classification. The resulting binary classification is vectorized by convolving a complex-valued disk, named the Phase Coded Disk (PCD), with the image to provide three separate pieces of information about the road. The centerline and width of the road are obtained from the resulting magnitude image, while the direction is determined from the corresponding phase image, thus completing the vectorized road model. All algorithms used are described and applied to two urban test sites. Completeness values of 0.88 and 0.79 and correctness values of 0.67 and 0.80 were achieved for the classification phase of the process. The vectorization of the classified results yielded RMS values of 1.56 m and 1.66 m, completeness values of 0.84 and 0.81, and correctness values of 0.75 and 0.80 for the two data sets.
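One plausible construction of such a complex-valued disk kernel is sketched below; the doubling of the polar angle (so that diametrically opposite pixels contribute identically, making direction meaningful modulo 180°) is our assumption about the encoding, not a detail taken from the paper:

```python
import numpy as np

def phase_coded_disk(radius):
    """Complex disk kernel: each pixel inside the disk carries
    exp(2j * theta), where theta is its polar angle. Convolving a binary
    road raster with this kernel yields a magnitude image (response at
    centrelines, scaled by road width) and a phase image (road direction
    modulo 180 degrees)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(2j * np.arctan2(y, x))
    kernel[x**2 + y**2 > radius**2] = 0   # zero outside the disk
    return kernel
```

The convolution itself would then be an FFT-based complex filtering of the binary classification image, with `np.abs` giving the magnitude image and `np.angle` the phase image.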
In contrast to conventional airborne multi-echo laser scanner systems, full-waveform (FW) lidar systems are able to record the entire emitted and backscattered signal of each laser pulse. Instead of clouds of individual 3D points, FW devices provide 1D profiles of the 3D scene, which allows additional and more detailed observations of the illuminated surfaces. Indeed, lidar waveforms are signals consisting of a train of echoes, each corresponding to a scattering target on the Earth's surface or to a group of close objects producing superimposed signals. Modelling these echoes with an appropriate parametric function is necessary to retrieve physical information about these objects and characterize their properties; the extracted parameters can then be useful for subsequent object segmentation and/or classification. This paper presents a stochastic model to reconstruct lidar waveforms as a set of parametric functions. The model takes into account both a data term, which measures the coherence between the proposed configurations and the waveforms, and a regularizing term, which introduces physical knowledge about the reconstructed signal. We search for the best configuration of functions by running a Reversible Jump Markov Chain Monte Carlo sampler coupled with stochastic relaxation. Finally, the algorithm is validated on waveforms from several airborne lidar sensors, showing the suitability of the approach even when the traditional assumption of Gaussian decomposition of waveforms is invalid.
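The Gaussian-decomposition baseline that this stochastic model generalizes can be sketched greedily: pick the strongest remaining sample, estimate the echo width from the half-maximum crossings, subtract the fitted echo, and repeat. Thresholds and names below are illustrative, not those of any particular sensor pipeline:

```python
import numpy as np

def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def decompose_waveform(t, w, stop_frac=0.2, max_echoes=6):
    """Greedy Gaussian decomposition of a 1-D waveform: take the strongest
    remaining sample as an echo, estimate its width from the half-maximum
    crossings (FWHM = 2.355 * sigma), subtract the fitted echo, repeat
    until the residual drops below stop_frac of the original peak."""
    residual = w.astype(float).copy()
    dt = t[1] - t[0]
    echoes = []
    while len(echoes) < max_echoes and residual.max() > stop_frac * w.max():
        i = int(np.argmax(residual))
        amp, mu = residual[i], t[i]
        lo = hi = i
        while lo > 0 and residual[lo - 1] > amp / 2:
            lo -= 1
        while hi < len(t) - 1 and residual[hi + 1] > amp / 2:
            hi += 1
        sigma = max((t[hi] - t[lo]) / 2.355, dt)   # FWHM -> sigma
        residual -= gaussian(t, amp, mu, sigma)
        echoes.append((amp, mu, sigma))
    return echoes
```

A production fitter would refine these initial estimates by nonlinear least squares; the point of the sketch is the echo-train structure that the paper's RJMCMC sampler explores with richer, non-Gaussian parametric functions.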
In this research we address the problem of classifying and labeling regions in a single static natural image. Natural images exhibit strong spatial dependencies, and modeling these dependencies in a principled manner is crucial for good classification accuracy. In this work, we present Discriminative Random Fields (DRFs) to model spatial interactions in images in a discriminative framework, based on the concept of Conditional Random Fields proposed by Lafferty et al. (2001). DRFs classify image regions by incorporating neighborhood spatial interactions in the labels as well as in the observed data. The DRF framework offers several advantages over the conventional Markov Random Field (MRF) framework. First, DRFs make it possible to relax the strong assumption of conditional independence of the observed data, generally adopted in the MRF framework for tractability; this assumption is too restrictive for a large number of computer vision applications. Second, DRFs derive their classification power from probabilistic discriminative models instead of the generative models used for modeling observations in the MRF framework. Third, the label interaction in DRFs is based on pairwise discrimination of the observed data, making it data-adaptive rather than fixed a priori as in MRFs. Finally, all parameters of the DRF model are estimated simultaneously from the training data, unlike the MRF framework, where the likelihood parameters are usually learned separately from the field parameters. We present preliminary experiments on man-made structure detection and binary image restoration tasks, and compare the DRF results with MRF results.
Various multi-echo and full-waveform (FW) lidar features can be processed. In this paper, multiple classifiers are applied to lidar feature selection for urban scene classification. Random Forests are used since they provide accurate classification, run efficiently on large datasets, and return measures of variable importance for each class. Feature selection is performed by backward elimination of features depending on their importance, which is crucial for analyzing the relevance of each lidar feature to the classification of urban scenes. The Random Forests classification using the selected variables provides an overall accuracy of 94.35%.
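The backward-elimination loop itself is independent of the classifier; a generic sketch where `importance_fn` would wrap the Random Forest variable importance (the helper names and feature names are ours):

```python
def backward_eliminate(features, importance_fn, min_keep=2):
    """Backward elimination driven by per-feature importance scores:
    repeatedly drop the least important feature until min_keep remain.
    Returns the surviving features and the elimination order."""
    kept = list(features)
    order = []
    while len(kept) > min_keep:
        scores = importance_fn(kept)          # e.g. refit RF, read importances
        worst = min(kept, key=scores.get)
        kept.remove(worst)
        order.append(worst)
    return kept, order
```

In practice `importance_fn` would retrain the Random Forest on the remaining features at each step, since importances shift as correlated features are removed.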
Airborne laser scanning (ALS) requires GNSS (Global Navigation Satellite System, e.g. GPS) and an IMU (Inertial Measurement Unit) for determining the dynamically changing orientation of the scanning system. Because of small but real instabilities of the involved parts, especially the mounting calibration, a strip adjustment is necessary in most cases. To realize this adjustment in a rigorous way, the GNSS/IMU trajectory data are required, but in some projects these data are not (or no longer) available to the user. Derived from the rigorous model, this article presents a model for strip adjustment without GNSS/IMU trajectory data, using five parameters per strip: one 3D shift, one roll angle, and one affine yaw parameter. In an example with real data consisting of 61 strips, this model was successfully applied, leading to an obvious improvement of the relative accuracy from (59.3/23.4/4.5) (cm) to (7.1/7.2/2.2) (cm) (defined as RMS values in (X/Y/Z) of the differences of corresponding points derived by least squares matching in the overlapping strips). This example also clearly demonstrates the importance of the affine yaw parameter.
Building outlines in cadastral maps are often created from different sources, such as terrestrial surveying and photogrammetric analyses. In the latter case, the position of the building wall cannot be estimated correctly if a roof overhang is present, which causes an inconsistent representation of building outlines in cadastral map data. Laser scanning can be used to correct for such estimation inconsistencies and for additional changes in building shape. Nowadays, airborne (ALS) and mobile laser scanning (MLS) data for overlapping areas are available. The object representation in ALS and MLS point clouds differs considerably in point density, representation of object details (scale), and completeness, which is caused by the different platform positions, i.e. distance to the object and scan direction. These differences are analysed by developing a workflow for automatic extraction of vertical building walls from 3D laser scanning point clouds. A region growing segmentation using the Hough transform derives the initial segments, which are then classified based on planarity, inclination, wall height and width. The planar position accuracy of corresponding walls and the completeness of the automatically extracted vertical walls are investigated. If corresponding vertical wall segments are defined by a maximum distance of 0.1 m and a maximum angle of 3°, then 24 matches with a planimetric accuracy of 0.05 m RMS and 0.04 m standard deviation in the X- and Y-coordinates could be found. Finally, the extracted walls are compared with building outlines of a cadastral map for map updating. The completeness of building walls in both ALS and MLS depends strongly on the relative position between sensor and object. A visibility analysis of the building façades is performed to estimate the potential completeness in the MLS data. Vertical walls in ALS data are represented as less detailed façades owing to lower point densities, an effect reinforced by large incidence angles. This can be compensated by the denser MLS data if the façade is covered by the survey.
This paper highlights a novel segmentation approach for single trees from LIDAR data and compares the results acquired both from first/last pulse and full waveform data. In a first step, a conventional watershed-based segmentation procedure is set up, which robustly interpolates the canopy height model from the LIDAR data and identifies possible stem positions of the tallest trees in the segments calculated from the local maxima of the canopy height model. Secondly, this segmentation approach is combined with a special stem detection method. Stem positions in the segments of the watershed segmentation are detected by hierarchically clustering points below the crown base height and reconstructing the stems with a robust RANSAC-based estimation of the stem points. Finally, a new three-dimensional (3D) segmentation of single trees is implemented using normalized cut segmentation. This tackles the problem of segmenting small trees below the canopy height model. The key idea is to subdivide the tree area in a voxel space and to set up a bipartite graph which is formed by the voxels and similarity measures between the voxels. Normalized cut segmentation divides the graph hierarchically into segments which have a minimum similarity with each other and whose members (= voxels) have a maximum similarity. The solution is found by solving a corresponding generalized eigenvalue problem and an appropriate binarization of the solution vector. Experiments were conducted in the Bavarian Forest National Park with conventional first/last pulse data and full waveform LIDAR data. The first/last pulse data were collected in a flight with the Falcon II system from TopoSys in a leaf-on situation at a point density of 10 points/m². Full waveform data were captured with the Riegl LMS-Q560 scanner at a point density of 25 points/m² (leaf-off and leaf-on) and at a point density of 10 points/m² (leaf-on).
The study results prove that the new 3D segmentation approach is capable of detecting small trees in the lower forest layer. So far, this has been practically impossible if tree segmentation techniques based on the canopy height model were applied to LIDAR data. Compared to a standard watershed segmentation procedure, the combination of the stem detection method and normalized cut segmentation leads to the best segmentation results and is superior in the best case by 12%. Moreover, the experiments show clearly that using full waveform data is superior to using first/last pulse data.
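The normalized cut step described above boils down to a generalized eigenvalue problem on the voxel similarity graph. A minimal bipartition sketch on a toy graph (the symmetric-Laplacian solver and the zero threshold are standard Shi-Malik choices, not the paper's exact implementation):

```python
import numpy as np

def normalized_cut_bipartition(W):
    """Bipartition a similarity graph by the second generalized eigenvector
    of (D - W) y = lambda * D y, thresholded at zero (Shi & Malik style).
    Toy sketch of the normalized-cut step; W holds pairwise similarities."""
    W = np.asarray(W, dtype=float)
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d))
    # Symmetric normalized Laplacian shares its spectrum with the
    # generalized eigenvalue problem above
    L_sym = D_isqrt @ (np.diag(d) - W) @ D_isqrt
    vals, vecs = np.linalg.eigh(L_sym)
    y = D_isqrt @ vecs[:, 1]          # second-smallest eigenvector
    return y >= 0                     # boolean segment labels

# Two weakly connected groups of "voxels": a triangle {0,1,2} and a pair {3,4}
W = np.array([[0, 1, 1, 0.01, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0.01, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], float)
labels = normalized_cut_bipartition(W)
```

The hierarchical segmentation in the paper would apply such bipartitions recursively to the voxel graph until a stopping criterion is met.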
The high point densities obtained by today's airborne laser scanners enable the extraction of various features that are traditionally mapped by photogrammetry or land surveying. While significant progress has been made in the extraction of buildings and trees from dense point clouds, little research has been performed on the extraction of roads. In this paper it is analysed to what extent road sides can be mapped in point clouds of high point density (20 pts/m²). In urban areas curbstones often separate the road surface from the adjacent pavement. These curbstones are mapped in a three step procedure. First, the locations with small height jumps near the terrain surface are detected. Second, midpoints of high and low points on either side of the height jump are generated, put in a sequence, and used to fit a smooth curve. Third, small gaps between nearby and collinear line segments are closed. GPS measurements were taken to analyse the performance of the road side detection. The analysis showed that the completeness varied between 50 and 86%, depending on the amount of parked cars occluding the curbstones. The RMSE in the comparison with the GPS measurements was 0.18 m.
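The first two steps of the curb mapping procedure can be illustrated on a synthetic road cross-section. The jump thresholds below are assumed values for typical curb heights, not figures from the paper:

```python
import numpy as np

def curb_midpoints(xy, z, jump_min=0.05, jump_max=0.30):
    """Steps 1-2 of the curbstone procedure sketched above: flag small
    height jumps between consecutive points of a road cross-section and
    return the planimetric midpoint of each low/high pair.
    Thresholds are illustrative assumptions (typical curb heights)."""
    xy = np.asarray(xy, dtype=float)
    z = np.asarray(z, dtype=float)
    dz = np.abs(np.diff(z))
    idx = np.where((dz >= jump_min) & (dz <= jump_max))[0]
    # Midpoint between the low and high point on either side of the jump
    return 0.5 * (xy[idx] + xy[idx + 1])

# Synthetic cross-section: flat road, a 0.15 m curb, then flat pavement
xy = np.column_stack([np.arange(6, dtype=float), np.zeros(6)])
z = np.array([0.0, 0.0, 0.0, 0.15, 0.15, 0.15])
mids = curb_midpoints(xy, z)
```

The third step, fitting a smooth curve through the sequenced midpoints and bridging small gaps between collinear segments, would operate on the midpoints collected over many such cross-sections.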
Airborne laser scanning (ALS) is known as an operational tool for collecting high resolution elevation information (> 4 pt/m²). The characteristics of the emitted pulses, i.e. their spatial extent, allow the detection of multiple echoes, which occur especially in areas covered with high vegetation. In the case of forested areas this means that not only the first reflection on the canopy but also reflections on or near the ground surface are recorded. The detection of high vegetation in urban areas (single trees, groups, and small forests next to residential areas) is needed for several applications. Classified vegetation and derived parameters, such as height, volume and density, are used in urban planning, urban ecology and 3D city modeling. The algorithm presented here follows the principle of object-based point cloud analysis (OBPA), which consists of (i) segmentation of the original ALS point cloud, (ii) feature calculation for the delineated segments and (iii) classification to label the objects of interest. The segmentation is based on an intelligent seed point selection by surface roughness, initializing a region growing process. Point features for the segmentation and classification, respectively, are e.g. roughness, the ratio between 3D and 2D point density, or statistics on first and last echo occurrence within the segments. The advantage of the developed algorithm is that no calculation of a digital terrain model is needed and that it works solely on the original point cloud, maintaining the maximum achievable accuracy. For the evaluation of the method a flight campaign over the city of Innsbruck/Austria is used as test site.
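One of the point features named above, the ratio between 3D and 2D point density, can be sketched with a brute-force neighbour count. The radius and the implementation are illustrative assumptions:

```python
import numpy as np

def density_ratio(points, radius=1.0):
    """Per-point ratio between 3D and 2D point density: neighbours within a
    3D sphere vs. within the same radius in the XY plane. For flat surfaces
    both counts agree (ratio near 1); for volumetric vegetation many points
    share the same XY footprint at different heights, so the ratio drops
    well below 1. Brute-force sketch for small point sets."""
    pts = np.asarray(points, dtype=float)
    d3 = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d2 = np.linalg.norm(pts[:, None, :2] - pts[None, :, :2], axis=2)
    n3 = (d3 <= radius).sum(axis=1)
    n2 = (d2 <= radius).sum(axis=1)
    return n3 / n2

# Flat "road" patch vs. a vertical "tree" column of points
flat = np.array([[i, j, 0.0] for i in range(3) for j in range(3)])
col = np.array([[0.0, 0.0, 0.5 * k] for k in range(10)])
```

A production implementation would use a spatial index (k-d tree) instead of the quadratic distance matrix, but the feature itself is the same.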
The reconstruction of 3D city models has matured in recent years from a research topic and niche market to commercial products and services. When constructing models on a large scale, it is inevitable to have reconstruction tools available that offer a high level of automation and reliably produce valid models within the required accuracy. In this paper, we present a 3D building reconstruction approach, which produces LOD2 models from existing ground plans and airborne LIDAR data. As well-formed roof structures are of high priority to us, we developed an approach that constructs models by assembling building blocks from a library of parameterized standard shapes. The basis of our work is a 2D partitioning algorithm that splits a building's footprint into nonintersecting, mostly quadrangular sections. A particular challenge thereby is to generate a partitioning of the footprint that approximates the general shape of the outline with as few pieces as possible. Once at hand, each piece is given a roof shape that best fits the LIDAR points in its area and integrates well with the neighbouring pieces. An implementation of the approach has been used for quite some time in a production environment, and many commercial projects have been completed successfully. The second part of this paper reflects the experiences that we have made with this approach working on the 3D reconstruction of the entire cities of East Berlin and Cologne.
This paper describes a model for the consistent estimation of building parameters that is a part of a method for automatic building reconstruction from airborne laser scanner (ALS) data. The adjustment model considers the building topology by GESTALT observations, i.e. observations of points being situated in planes. Geometric regularities are considered by "soft constraints" linking neighbouring vertices or planes. Robust estimation can be used to eliminate false hypotheses about such geometric regularities. Sensor data provide the observations to determine the parameters of the building planes. The adjustment model can handle a variety of sensor data and is shown to be also applicable for semi-automatic building reconstruction from image and/or ALS data. A test project is presented in order to evaluate the accuracy that can be achieved using our technique for building reconstruction from ALS data, along with the improvement caused by adjustment and regularisation. The planimetric accuracy of the building walls is in the range of or better than the ALS point distance, whereas the height accuracy is in the range of a few centimetres. Regularisation was found to improve the planimetric accuracy by 5-45%.
In contrast to conventional airborne multi-echo laser scanner systems, full-waveform (FW) lidar systems are able to record the entire emitted and backscattered signal of each laser pulse. Instead of clouds of individual 3D points, FW devices provide connected 1D profiles of the 3D scene, which contain more detailed and additional information about the structure of the illuminated surfaces. This paper is focused on the analysis of FW data in urban areas. The problem of modelling FW lidar signals is first tackled. The standard method assumes the waveform to be a superposition of the signal contributions of the individual scattering objects within the laser beam, each approximated by a Gaussian distribution. This model is suitable in many cases, especially in vegetated terrain. However, since it is not tailored to urban waveforms, the Generalized Gaussian model is selected instead here. Then, a pattern recognition method for urban area classification is proposed. A supervised method using Support Vector Machines is performed on the FW point cloud based on the parameters extracted from the post-processing step. Results show that it is possible to partition urban areas in building, vegetation, natural ground and artificial ground regions with high accuracy using only lidar waveforms.
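The Generalized Gaussian pulse model mentioned above can be written down directly. The parameterisation below is one common form and is an assumption, not necessarily the paper's exact definition:

```python
import numpy as np

def generalized_gaussian(t, amp, mu, sigma, alpha):
    """Generalized Gaussian pulse: amp * exp(-(|t - mu| / sigma)**alpha).
    alpha = 2 recovers the standard Gaussian typically used for vegetation
    returns; alpha > 2 yields the flatter-topped echoes characteristic of
    hard urban surfaces such as roofs and roads. Parameterisation is an
    illustrative assumption."""
    return amp * np.exp(-np.abs((t - mu) / sigma) ** alpha)

t = np.linspace(-3, 3, 601)
gauss = generalized_gaussian(t, 1.0, 0.0, 1.0, 2.0)   # standard Gaussian
flat = generalized_gaussian(t, 1.0, 0.0, 1.0, 6.0)    # flat-topped urban echo
```

Both pulses peak at the same amplitude, but the higher-alpha pulse stays near its peak over a wider interval, which is the extra shape freedom that makes the model a better fit for urban waveforms.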
The update of databases – in particular 2D building databases – has become a topical issue, especially in the developed countries where such databases have been completed during the last decade. The main issue here concerns the long and costly change detection step, which might be automated by using recently acquired sensor data. The current deficits in automation and the lack of expertise in the domain have driven the EuroSDR to launch a test comparing different change detection approaches, representative of the current state-of-the-art. The main goal of this paper is to present the test bed of this comparison and the results that have been obtained for three different contexts (aerial imagery, satellite imagery, and LIDAR). In addition, we give the overall findings that emerged from our experiences and some promising directions to follow for building an optimal operative system in the future.
The automated extraction of topographic objects has been on the research agenda in the Photogrammetry and Computer Vision communities for more than two decades. Considerable progress has been achieved, though up to now there are hardly any commercial products that have been accepted by the market. Recent developments in the field of sensor technology, along with advanced techniques for data processing, have increased the potential of automated object extraction. This paper gives an overview on the status and further prospects of automated object extraction, focusing on buildings and roads and on the application of high-resolution optical data.
Recent advances in airborne light detection and ranging (LiDAR) technology allow rapid and inexpensive generation of digital surface models (DSMs), 3-D point clouds of buildings, vegetation, cars, and natural terrain features over large regions. However, in many applications, such as flood modeling and landslide prediction, digital terrain models (DTMs), the topography of the bare-Earth surface, are needed. This paper introduces a novel machine learning approach to automatically extract DTMs from their corresponding DSMs. We first classify each point as being either ground or nonground, using supervised learning techniques applied to a variety of features. For the points which are classified as ground, we use the LiDAR measurements as an estimate of the surface height, but, for the nonground points, we have to interpolate between nearby values, which we do using a Gaussian random field. Since our model contains both discrete and continuous latent variables, and is a discriminative (rather than generative) probabilistic model, we call it a hybrid conditional random field. We show that a Maximum a Posteriori estimate of the surface height can be computed efficiently by using a variant of the Expectation Maximization algorithm. Experiments demonstrate that the accuracy of this learning-based approach outperforms the previous best systems, based on manually tuned heuristics.
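The Gaussian-random-field interpolation of terrain heights under non-ground points can be sketched in 1D, where the MAP estimate reduces to a small linear system (harmonic interpolation between fixed ground points; the 2D model in the paper is analogous but larger):

```python
import numpy as np

def interpolate_terrain(z, is_ground):
    """Fill terrain heights under non-ground points on a 1D transect using a
    discrete Gaussian-random-field prior: unknown heights minimise squared
    differences to their neighbours while ground heights stay fixed.
    1D sketch of the interpolation idea, not the paper's full 2D model."""
    z = np.asarray(z, dtype=float).copy()
    unknown = np.where(~np.asarray(is_ground))[0]
    if unknown.size == 0:
        return z
    n = len(z)
    A = np.zeros((unknown.size, unknown.size))
    b = np.zeros(unknown.size)
    pos = {j: k for k, j in enumerate(unknown)}
    for k, j in enumerate(unknown):
        for nb in (j - 1, j + 1):
            if 0 <= nb < n:
                A[k, k] += 1.0
                if nb in pos:
                    A[k, pos[nb]] -= 1.0   # coupling between two unknowns
                else:
                    b[k] += z[nb]          # fixed ground neighbour
    z[unknown] = np.linalg.solve(A, b)
    return z

# Ground ramps from 0 m to 3 m; a 10 m building occupies indices 2-3
z = np.array([0.0, 0.0, 10.0, 10.0, 3.0, 3.0])
ground = np.array([True, True, False, False, True, True])
dtm = interpolate_terrain(z, ground)
```

The building heights are replaced by a smooth ramp between the surrounding ground points, which is exactly the behaviour wanted from the DTM under non-ground classifications.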
Airborne lidar systems have become an alternative source for the acquisition of altimeter data. In addition to multi-echo laser scanner systems, full-waveform systems are able to record the whole backscattered signal for each emitted laser pulse. These data provide more information about the structure and the physical properties of the surface. This paper is focused on the classification of full-waveform lidar and airborne image data on urban scenes. Random forests are used since they provide an accurate classification and run efficiently on large datasets. Moreover, they provide measures of variable importance for each class. This is crucial to analyze the relevance of each feature for the classification of urban scenes. Random Forests provide more accurate results than Support Vector Machines with an overall accuracy of 95.75%. The most relevant features show the contribution of lidar waveforms for classifying dense urban scenes and improve the classification accuracy for all classes.
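The variable-importance measure highlighted above can be illustrated with permutation importance. The sketch below stands in a trivial nearest-centroid classifier for the Random Forest, so it demonstrates the idea, not the paper's classifier; the synthetic features are made-up stand-ins for waveform attributes:

```python
import numpy as np

def permutation_importance(X, y, rng=None):
    """Permutation-style variable importance: the drop in accuracy when one
    feature column is shuffled. A nearest-centroid classifier stands in for
    the Random Forest here; the mechanism is the same."""
    rng = np.random.default_rng(0) if rng is None else rng
    classes = np.unique(y)

    def accuracy(Xm):
        cents = np.array([Xm[y == c].mean(axis=0) for c in classes])
        pred = np.argmin(((Xm[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
        return (classes[pred] == y).mean()

    base = accuracy(X)
    imp = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # destroy feature j
        imp.append(base - accuracy(Xp))
    return np.array(imp)

# Feature 0 separates the classes cleanly; feature 1 is pure noise
rng = np.random.default_rng(1)
X = np.column_stack([np.r_[rng.normal(0, 0.1, 50), rng.normal(5, 0.1, 50)],
                     rng.normal(0, 1, 100)])
y = np.r_[np.zeros(50, int), np.ones(50, int)]
imp = permutation_importance(X, y)
```

Shuffling the informative feature costs substantial accuracy while shuffling the noise feature costs almost none, which is how per-class feature relevance is read off in the paper's Random Forest analysis.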
This work presents an automatic algorithm for extracting vectorial land registers from altimetric data in dense urban areas. We focus on elementary shape extraction and propose a method that extracts rectangular buildings. The result is a vectorial land register that can be used, for instance, to perform precise roof shape estimation. Using a spatial point process framework, we model towns as configurations of an unknown number of rectangles. An energy is defined which takes into account both low-level information provided by the altimetry of the scene and geometric knowledge about the disposition of buildings in towns. Estimation is done by minimizing the energy using simulated annealing. We use an MCMC sampler that combines general Metropolis-Hastings-Green techniques with the Geyer and Møller algorithm for point process sampling. We define some original proposition kernels, such as birth or death in a neighborhood, and define the energy with respect to an inhomogeneous Poisson point process. We present results on real data provided by the IGN (French National Geographic Institute). The results were obtained automatically and consist of configurations of rectangles describing a dense urban area.
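The birth/death sampling with simulated annealing described above can be shown in miniature by collapsing rectangles to unit cells on a 1D transect. The energy weights, the cooling schedule, and the final greedy sweep are toy assumptions; the paper's sampler uses reversible-jump (Metropolis-Hastings-Green) kernels on full rectangle configurations:

```python
import math
import random

def anneal_cells(mask, n_iter=3000, seed=0):
    """Minimal birth/death simulated annealing in the spirit of the marked
    point process above, collapsed to 1D: choose a set of unit cells that
    covers the 'building' mask. Energy = data misfit + 0.1 prior cost per
    object. Toy sketch of the sampling-and-annealing idea."""
    rnd = random.Random(seed)
    n = len(mask)

    def energy(cfg):
        miss = sum(1 for i in range(n) if (i in cfg) != bool(mask[i]))
        return miss + 0.1 * len(cfg)

    cfg = set()
    e = energy(cfg)
    for it in range(n_iter):
        temp = max(1e-3, 1.0 - it / n_iter)   # linear cooling with a floor
        cand = set(cfg)
        i = rnd.randrange(n)
        if i in cand:
            cand.discard(i)                   # death move
        else:
            cand.add(i)                       # birth move
        e2 = energy(cand)
        if e2 < e or rnd.random() < math.exp(-(e2 - e) / temp):
            cfg, e = cand, e2
    # Final greedy sweep; this toy energy is separable per cell, so the
    # deterministic pass lands on the global optimum.
    for i in range(n):
        cand = set(cfg)
        cand.symmetric_difference_update({i})
        if energy(cand) < e:
            cfg, e = cand, energy(cand)
    return cfg

mask = [0, 1, 1, 1, 0, 0, 1, 1, 0, 0]
cfg = anneal_cells(mask)
```

In the real algorithm the moves add, remove, or perturb whole parameterised rectangles, and the data term scores how well each rectangle explains the altimetry rather than a binary mask.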
Airborne laser scanning (ALS) of urban regions is commonly used as a basis for 3D city modeling. In this process, data acquisition relies highly on the quality of GPS/INS positioning techniques. Typically, the use of differential GPS and high-precision GPS/INS postprocessing methods are essential to achieve the required accuracy that leads to a consistent database. Contrary to that approach, we aim at using an existing georeferenced city model to correct errors of the assumed sensor position, which is measured under nondifferential GPS and/or INS drift conditions. Our approach enables the guidance of helicopters or UAVs over known urban terrain even at night and during frequent loss of GPS signals. We discuss several possible sources of errors in airborne laser scanner systems and their influence on the measured data. A workflow of real-time capable methods for the segmentation of planar surfaces within ALS data is described. Matching planar objects, identified in both the on-line segmentation results and the existing city model, are used to correct absolute errors of the sensor position.
The determination of building models from unstructured three-dimensional point cloud data is often based on the piecewise intersection of planar faces. In general, the faces are determined automatically by a segmentation approach. To reduce the complexity of the problem and to increase the performance of the implementation, often a resampled (i.e. interpolated) grid representation is used instead of the original points. Such a data structure may be sufficient for low point densities, where steep surfaces (e.g. walls, steep roofs, etc.) are not well represented by the given data. However, in high resolution datasets with twenty or more points per square-meter acquired by airborne platforms, vertical faces become discernible, making three-dimensional data processing adequate. In this article we present a three-dimensional point segmentation algorithm which is initialized by clustering in parameter space. To reduce the time complexity of this clustering, it is implemented sequentially, resulting in a computation time which is dependent on the number of segments and almost independent of the number of points given. The method is tested against various datasets determined by image matching and laser scanning. The advantages of the three-dimensional approach over the restrictions introduced by 2.5D approaches are discussed.
The paper describes a methodology for tree species classification using features that are derived from small-footprint full waveform Light Detection and Ranging (LIDAR) data. First, 3-dimensional coordinates of the laser beam reflections, the intensity, and the pulse width are extracted by a waveform decomposition, which fits a series of Gaussian pulses to the waveform. Since multiple reflections are detected, and even overlapping pulse reflections are distinguished, a much higher point density is achieved compared to the conventional first/last-pulse technique. Secondly, tree crowns are delineated from the canopy height model (CHM) using the watershed algorithm. The CHM posts are equally spaced and robustly interpolated from the highest reflections in the canopy. Thirdly, tree features computed from the 3-dimensional coordinates of the reflections, the intensity and the pulse width are used to detect coniferous and deciduous trees by an unsupervised classification. The methodology is applied to datasets that have been captured with the TopEye MK II scanner and the Riegl LMS-Q560 scanner in the Bavarian Forest National Park in leaf-on and leaf-off conditions for Norway spruces, European beeches and Sycamore maples. The classification, which groups the data into two clusters (coniferous, deciduous), leads in the best case to an overall accuracy of 85% in a leaf-on situation and 96% in a leaf-off situation.
Multiple Pulses in Air Technology, or MPiA, is a new technology allowing airborne LIDAR systems to be used at higher pulse rates than previously possible. By allowing the airborne LIDAR system to fire a second laser pulse prior to receipt of the previous pulse's reflection, the pulse rate at any given altitude can be effectively doubled. Getting past the limitations imposed by the speed of light and conventional single-pulse-in-air LIDAR technology allows the airborne LIDAR system to achieve the desired point density at twice the coverage rate or, conversely, for twice the point density to be achieved at conventional coverage rates. Though announced publicly in 2006, it was not until well into 2007 that commercially-available MPiA-equipped systems were fielded. The technology can now be considered "mainstream", and is actively being used on a variety of airborne LIDAR data acquisition projects. This study will present an overview of MPiA technology in the context of a large area survey project in Alberta, Canada. In addition to the consideration of MPiA technology in this project, implications on other facets of project organization will be presented. Overall results will be given, proving the ability of MPiA-equipped systems to achieve a nominal 2:1 productivity increase over that of conventional systems.
This paper discusses the state and promising directions of automated object extraction in photogrammetric computer vision, considering also practical aspects arising for digital photogrammetric workstations (DPW). A review of the state of the art shows that there are only few practically successful systems on the market. Therefore, important issues for a practical success of automated object extraction are identified. A sound and, most importantly, powerful theoretical background is the basis. Here, we particularly point to statistical modeling. Testing makes clear which of the approaches are suited best and how useful they are for praxis. A key for commercial success of a practical system is efficient user interaction. As the means for data acquisition are changing, new promising application areas such as extremely detailed three-dimensional (3D) urban models for virtual television or mission rehearsal evolve.
In this paper, we describe the evaluation of a method for building detection by the Dempster–Shafer fusion of airborne laser scanner (ALS) data and multi-spectral images. For this purpose, ground truth was digitised for two test sites with quite different characteristics. Using these data sets, the heuristic models for the probability mass assignments are validated and improved, and rules for tuning the parameters are discussed. The sensitivity of the results to the most important control parameters of the method is assessed. Further we evaluate the contributions of the individual cues used in the classification process to determine the quality of the results. Applying our method with a standard set of parameters on two different ALS data sets with a spacing of about 1 point/m², 95% of all buildings larger than 70 m² could be detected and 95% of all detected buildings larger than 70 m² were correct in both cases. Buildings smaller than 30 m² could not be detected. The parameters used in the method have to be appropriately defined, but all except one (which must be determined in a training phase) can be determined from meaningful physical entities. Our research also shows that adding the multi-spectral images to the classification process improves the correctness of the results for small residential buildings by up to 20%.
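Dempster's rule of combination, the core of the fusion step above, is compact enough to state directly. The cue masses below are made-up illustrative values, not the paper's heuristic probability-mass models:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass assignments over frozensets
    of hypotheses, as used above to fuse ALS and multi-spectral cues.
    Masses in each input must sum to 1; the mass assigned to the empty
    intersection (conflict) is renormalised away. Generic sketch."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

B = frozenset({"building"})
V = frozenset({"vegetation"})
theta = B | V                              # full frame of discernment
height_cue = {B: 0.6, theta: 0.4}          # tall object: building or tree
ndvi_cue = {B: 0.7, V: 0.2, theta: 0.1}    # low NDVI favours building
fused = dempster_combine(height_cue, ndvi_cue)
```

After fusion, the mass on the building hypothesis dominates, and in the full method such fused masses per pixel or segment drive the final building/non-building decision.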
In this study we use a technique referred to as Gaussian decomposition for processing and calibrating data acquired with a novel small-footprint airborne laser scanner that digitises the complete waveform of the laser pulses scattered back from the Earth's surface. This paper presents the theoretical basis for modelling the waveform as a series of Gaussian pulses. In this way the range, amplitude, and width are provided for each pulse. Using external reference targets it is also possible to calibrate the data. The calibration equation takes into account the range, the amplitude, and pulse width and provides estimates of the backscatter cross-section of each target. The applicability of this technique is demonstrated based on RIEGL LMS-Q560 data acquired over the city of Vienna.
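For a single echo, Gaussian decomposition amounts to recovering the range (pulse position), amplitude, and pulse width. A noise-free sketch via a log-parabola fit around the peak; real pipelines fit sums of Gaussians by nonlinear least squares and then apply the calibration equation:

```python
import numpy as np

def gaussian_params(t, w):
    """Estimate amplitude, position and width of one echo by fitting a
    parabola to the log of the three samples around the peak. The log of a
    Gaussian A*exp(-(t-mu)^2/(2*sigma^2)) is a parabola, so the fit is
    exact for noise-free data. Single-pulse sketch of Gaussian
    decomposition."""
    i = int(np.argmax(w))
    y = np.log(w[i - 1:i + 2])
    a, b, c = np.polyfit(t[i - 1:i + 2], y, 2)   # y = a*t^2 + b*t + c
    mu = -b / (2 * a)                            # pulse position (range)
    sigma = np.sqrt(-1.0 / (2 * a))              # pulse width
    amp = np.exp(c - b * b / (4 * a))            # amplitude
    return amp, mu, sigma

# Synthetic waveform: one echo at t = 4.2 with amplitude 3 and width 0.5
t = np.linspace(0, 10, 201)
w = 3.0 * np.exp(-((t - 4.2) ** 2) / (2 * 0.5 ** 2))
amp, mu, sigma = gaussian_params(t, w)
```

In the calibration step described above, the recovered range, amplitude, and width would then be combined with external reference targets to estimate the backscatter cross-section of each target.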
In this article, two methods for data collection in urban environments are presented. The first method combines multispectral imagery and laser altimeter data in an integrated classification for the extraction of buildings, trees and grass-covered areas. The second approach uses laser data and 2D ground plan information to obtain 3D reconstructions of buildings.
F. Rottensteiner: "Consistent Estimation of Building Parameters Considering Geometric Regularities by Soft Constraints". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVI - 3.
"Detection of Curbstones in Airborne Laser Scanning Data". International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVIII - 3/W8.