THIS IS A PREPRINT VERSION OF THE ORIGINAL ARTICLE. THE ORIGINAL ARTICLE IS AVAILABLE
AT: https://link.springer.com/article/10.1007/s40010-017-0435-9
Airborne LiDAR Technology: A Review of Data Collection and
Processing Systems
Bharat Lohani1, Suddhasheel Ghosh2
1Department of Civil Engineering, Indian Institute of Technology Kanpur, Kanpur, India
2Department of Civil Engineering, MGM’s Jawaharlal Nehru Engineering College, Aurangabad, India
Abstract
Airborne LiDAR (Light Detection and Ranging) has now become an industry-standard tool for collecting accurate and
dense topographic data at very high speed. These data have found use in many applications, and new applications
are being discovered regularly. This paper presents a review of the current state of the art of this technology,
covering both data capture and data processing. The paper first discusses various types of LiDAR sensors and their
working. This is followed by information on data formats and data quality assessment procedures. The paper reviews
existing data classification techniques and also looks into newer approaches such as CNNs (Convolutional Neural
Networks) and visual analytics for data processing. Finally, the paper outlines the future scope of the technology
and the research challenges that should be addressed in the coming years.
1 Introduction
Shortly after the development of the first optical laser, instruments employing this new technology were developed to
measure distance by timing round trip travel of a laser pulse between the laser transmitter, the surface being measured
and the laser receiver [1]. The concept of using airborne laser to measure terrestrial biomass emerged during the 1960s
and 1970s. Rempel and Parker [2] proposed the idea of using an airborne laser for micro-relief studies in 1964. For the
first time, Hickman and Hogg [3] demonstrated the use of the laser for bathymetry measurements in 1968. Hoge et al.
[4] reported the results of a study using the NASA (National Aeronautics and Space Administration) airborne
oceanographic LiDAR (AOL) to derive water depths in the Atlantic Ocean and the more turbid Chesapeake Bay. A major
hindrance in the earlier uses, as noted by Krabill et al. [5] and Schreier et al. [6], was locating the position of the
airborne laser; Hoge et al. [7] mentioned the use of tracking radar, Arp et al. [8] used the Autotape to measure the
position of a floating helicopter by resection, and Schreier et al. [6] described a photogrammetric method. Accurate
positioning of airborne lasers only became possible with the advent of GPS (Global Positioning System). Further
advancement in IMU (Inertial Measuring Unit), laser, and computing technologies made the use of LiDAR cheaper
and more accurate. In the last two decades, the airborne remote sensing sector has seen this technology emerge as an
extremely rapid and highly accurate terrain-mapping tool. This development has spawned innovative solutions to
difficult mapping problems, including several new applications which were nearly impossible earlier in the absence
of data like LiDAR.
The main aim of this paper is to review the status of LiDAR technology, highlight the research issues associated with
the technology, and identify the direction in which the technology is heading. The review covers different aspects,
viz., data collection, data processing, accuracy analysis, data classification, data visualization and, finally, the
applications being realized and the future potential of the technology. LiDAR technology can be operated from
different platforms, viz., space vehicles, aircraft and helicopters, drones, ground based vehicles, and tripods. While
the basic principle of the technology is the same in each case, there are differences in data capture procedures, data
processing steps, applications of the data, etc. The current paper is specifically focused on airborne LiDAR
technology.
The subsequent sections of the paper present state-of-the-art information on the principle of the technology, types of
sensors, flight planning aspects, data and file formats, accuracy assessment of the data generated, classification
methods, visualization of point clouds, application areas and the future scope.
2 Principle of LiDAR technology
As shown in Figure 1, an airborne LiDAR system consists of (i) an airborne platform (aircraft or helicopter), which is
used to fly a LiDAR sensor over the area of interest; (ii) a LiDAR sensor, which generates short laser pulses (a few
nanoseconds wide), transmits these toward the ground, scans the ground beneath while firing pulses, receives the
return signal (i.e., the return waveform), measures the time of travel of the return pulse (most significant return,
first return, last return, or multiple returns), and associates each return pulse with the Global Navigation Satellite
System (GNSS) time and the scan angle at which the pulse was transmitted; (iii) a GNSS receiver, which works in tandem
with a ground based GNSS base station receiver and observes the position of the aircraft at each epoch of GNSS
observation (1 Hz or 2 Hz); (iv) an IMU sensor, which observes the accelerations and orientations of the aircraft at a
much higher frequency than the GNSS epoch (say 400 Hz); and (v) an onboard computer, which timestamps the different
data streams produced by the above sensors using GNSS time and archives the raw data. It is common practice to also
fly a medium format digital camera (60 MP to 100 MP) along with the LiDAR sensor, as it provides color information of
the terrain.
Figure 1: Airborne LiDAR data capture. Red dots show LiDAR data on terrain objects.
The processing steps for various sensor data collected are shown in Figure 2. Accurate position (< 2 cm circular error
probable (CEP)) of the aircraft is determined by differential processing of the onboard and reference GNSS receivers
(which collect multi-constellation, multi-frequency, carrier phase GNSS signals). The trajectory (i.e., the location
and orientation of the aircraft at the time of firing of each laser pulse) is then computed by integrating the GNSS
solution with the IMU observations. Through the geolocation process (as shown in Hofton et al. [9]), the laser ranges
and scan angles are combined with the aircraft trajectory to yield the coordinates of points on the ground in the GNSS
coordinate system, i.e., the ECEF WGS-84 (Earth Centered Earth Fixed World Geodetic System) Cartesian coordinate
system. The
system calibration parameters are also input to the processing software at this stage to minimize certain errors. The
coordinates generated can then be transformed to any desired horizontal and vertical datum.
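To make the geolocation step concrete, the following is a minimal Python sketch of direct georeferencing under
simplified assumptions: a local level coordinate frame, a nadir-referenced across-track scan, and no lever-arm or
boresight corrections (all of which a real workflow must include). All values are illustrative.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-local rotation built from IMU attitude angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def georeference(sensor_pos, roll, pitch, yaw, laser_range, scan_angle):
    """Ground point = GNSS/IMU position + attitude-rotated range vector."""
    beam = np.array([0.0,
                     laser_range * np.sin(scan_angle),    # across track
                     -laser_range * np.cos(scan_angle)])  # down (nadir)
    return sensor_pos + rotation_matrix(roll, pitch, yaw) @ beam

# Level flight at 1000 m above ground, pulse fired at a 10 degree scan angle
ground_pt = georeference(np.array([0.0, 0.0, 1000.0]),
                         0.0, 0.0, 0.0, 1015.4, np.deg2rad(10.0))
print(np.round(ground_pt, 1))   # approx. [0.0, 176.3, 0.0]
```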
Figure 2: Schematic display of processing flow of LiDAR system
3 Different forms of LiDAR sensors
The past two decades have largely been dominated by single wavelength Linear Mode LiDAR (LML), with single or
multi-pulse capability. Recently, however, several new forms of LiDAR sensors have appeared and are opening up new
applications. This section briefly discusses these different forms of LiDAR sensors.
3.1 Single pulse single wavelength Linear Mode LiDAR (LML)
In these LiDAR sensors, a single laser pulse is transmitted and received at the sensor, and any two consecutive pulses
are separated by a long enough time gap that the next pulse is transmitted only after the previous pulse has been
received. A Pulse Repetition Frequency (PRF) of the order of 400 kHz is possible with the scanners currently available
in the market. In general, infra-red wavelengths are employed, as their reflection from most topographic features is
significant enough to register a return at the receiver, thus producing a coordinate. The most common scanning
mechanisms employed in these sensors produce zig-zag or parallel line patterns; a few sensors with elliptical scanning
have also been proposed. These sensors are limited in terms of flying altitude, as a higher altitude forces a reduction
in PRF and also diminishes the energy of the signal reaching the receiver. Compared to the newer GML/SPL (Geiger Mode
LiDAR / Single Photon LiDAR) sensors, these sensors are well suited to small area surveys, say up to 2000-3000 sq km,
though they have been successfully employed for surveying areas of several tens of thousands of square kilometers. The
advantage of LML is that it also generates an intensity image, unlike GML, and can capture data for features such as
power lines, unlike GML/SPL.
3.2 Multi-pulse in air (MPiA) LiDAR
In single pulse LiDAR the pulses are transmitted strictly sequentially, as there is no way to distinguish between
overlapping pulses; this limits the PRF and the flying altitude. Multiple Pulses in Air (MPiA) technology allows an
airborne LiDAR system to fire the next laser pulse prior to receipt of the previous pulse's return waveform, doubling
the pulse rate at any given altitude and thus permitting higher pulse rates than previously possible with single pulse
LiDAR. Besides this major change, the functioning of this sensor is similar to that of LML. The role of multi-pulse
technology
may reduce in future, as higher data densities from greater flying heights are now being generated by GML/SPL.
However, the data characteristics of MPiA and GML/SPL differ: MPiA data are similar to LML data, with certain
advantages.
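The PRF-altitude trade-off can be made explicit with the two-way travel-time relation R_max = c / (2 × PRF); a short
sketch follows, where the MPiA case simply allows a second pulse in flight.

```python
C = 299_792_458.0  # speed of light in m/s

def max_unambiguous_range(prf_hz, pulses_in_air=1):
    """Longest range measurable before returns from successive
    pulses overlap; n pulses in the air relax the limit n times."""
    return pulses_in_air * C / (2.0 * prf_hz)

for prf in (100e3, 400e3):
    single = max_unambiguous_range(prf)
    mpia = max_unambiguous_range(prf, pulses_in_air=2)
    print(f"PRF {prf/1e3:.0f} kHz: single-pulse {single:.0f} m, MPiA {mpia:.0f} m")
# At 400 kHz a single-pulse system is limited to ~375 m slant range;
# with two pulses in the air the same PRF supports ~750 m.
```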
3.3 Full waveform digitization (FWD) LiDAR
In full waveform digitization, the return waveform is completely digitized at fine time intervals, e.g., 1 ns. Unlike
discrete single or multiple returns, this captures the properties of the backscattering surfaces through the entire
depth of a footprint. It is then possible to determine the significant backscattering surfaces within the footprint
using techniques like Gaussian decomposition [10]. FWD data have been found more suitable for identifying objects
under canopy and for detecting the ground [11]. The use of FWD is very much application dependent. Several LML sensors
now also support FWD.
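A minimal sketch of Gaussian decomposition on a synthetic two-echo waveform is given below; real decompositions (e.g.,
Wagner et al. [10]) also estimate the number of echoes and calibrate amplitudes, which is omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian echoes, e.g. canopy top and ground."""
    return (a1 * np.exp(-(t - m1) ** 2 / (2 * s1 ** 2)) +
            a2 * np.exp(-(t - m2) ** 2 / (2 * s2 ** 2)))

# Synthetic waveform digitized at 1 ns: echoes centred at 20 ns and 60 ns
t = np.arange(0.0, 100.0, 1.0)
rng = np.random.default_rng(0)
wave = two_gaussians(t, 0.6, 20, 3, 1.0, 60, 2) + 0.02 * rng.standard_normal(t.size)

params, _ = curve_fit(two_gaussians, t, wave, p0=[0.5, 18, 2, 0.8, 62, 2])
print(f"echoes at {params[1]:.1f} ns and {params[4]:.1f} ns")
# A 40 ns separation corresponds to ~6 m of height (c/2 ~ 0.15 m per ns)
```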
3.4 Multi-spectral LiDAR (MSL)
The intensity of the return pulse can help in the classification of LiDAR data, similar to the case of images. For
this reason, it has been widely investigated by various researchers, including work on the normalization of return
intensity for different ranges and angles. However, with only a single wavelength (of narrow bandwidth) the accuracy
of classification is limited. In a multi-spectral LiDAR (e.g., TITAN, by Teledyne Optech Inc.), three independent
pulses, at wavelengths of 532 nm, 1,064 nm and 1,550 nm, are emitted, each with a 300 kHz PRF. Multi-spectral LiDAR
has the potential to be used for 3D land cover classification, as shown by Xiaoliang et al. [12], and for
environmental applications. While MSL offers a multi-wavelength view and increased data density, the same can also be
achieved by flying a multi-spectral or hyper-spectral sensor together with LML. If the application demands spectral
analysis along with geometric information, the latter option is more suitable.
3.5 Geiger Mode LiDAR (GML) and Single Photon LiDAR (SPL)
The technologies of GML and SPL share the same principle, though they are promoted by two different companies, viz.,
Harris Corporation and SigmaSpace (a Hexagon group company), respectively. Unlike Linear Mode LiDAR, both GML and SPL
can measure range by detecting even a few photons of the laser return. In these sensors, as in LML, the sensor is
fitted at the bottom of an aircraft looking downward, and the scanner scans in a circular pattern while the aircraft
moves ahead. A laser pulse is split into multiple pulselets (100 in the case of the SigmaSpace SPL). Around 60,000
such sets of pulselets are generated every second, thereby producing 6 million possible measurements per second. The
circular scan pattern ensures full coverage around high rise buildings, thus minimizing the presence of shadows in the
data. The main advantage of these sensors is the generation of highly dense point clouds from flying heights of the
order of 2,000 m to 10,000 m, which is not possible with LML. However, unlike LML, these sensors do not produce
multiple returns or full waveforms, and they have also been found to produce less accurate data over highly reflective
surfaces [13]. These sensors are especially suitable for large area surveys where only a digital elevation model is
required, e.g., the 3DEP program in the USA. In general, they become cost-effective for areas beyond 2000 sq km.
4 Flight planning for airborne LiDAR and photography
While performing flying operations for LiDAR data acquisition, the aircraft covers the area of interest (AOI) on the
terrain in parallel strips; after flying over a strip, the aircraft turns to the next strip. In order to acquire data
with the desired characteristics (namely data density, overlap, uniform distribution, spacing and accuracy), the
operating parameters of the LiDAR sensor and the flying parameters of the aircraft need to be decided in advance, and
the project undertaken accordingly. However, a large number of combinations of sensor and flight parameters can yield
the desired data while requiring different flying hours, and minimization of flying hours is important to reduce the
cost of a project. In view of this, flight planning is carried out prior to airborne LiDAR data acquisition in the
field. Flight planning generates the sensor and flight parameters which guarantee the desired data characteristics
while also minimizing the cost of data acquisition [14]. With a fully manual or semi-automatic solution of this
problem (assisted by software like ASCOT (by Leica), ALTM-NAV (by Optech), and IGIplan (by IGI)), there is no
guarantee of reaching the optimal solution, and the solution can be highly biased by the experience of the persons
involved. Research on optimization based solutions to this problem is therefore required.
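A crude sketch of the arithmetic underlying a flight plan is shown below, assuming a rectangular AOI, a fixed field of
view, a constant speed and a fixed turning time (all illustrative values); an optimization-based planner searches over
such parameters rather than fixing them.

```python
import math

def plan_strips(area_width_m, altitude_m, fov_deg, overlap=0.2,
                speed_mps=60.0, area_length_m=10_000.0, turn_time_s=120.0):
    """Rough strip count and flying time for a rectangular AOI.
    Swath = 2 * H * tan(FOV/2); strips advance by swath * (1 - overlap)."""
    swath = 2.0 * altitude_m * math.tan(math.radians(fov_deg / 2.0))
    spacing = swath * (1.0 - overlap)
    n_strips = math.ceil(area_width_m / spacing)
    strip_time = area_length_m / speed_mps
    total_s = n_strips * strip_time + (n_strips - 1) * turn_time_s
    return swath, n_strips, total_s / 3600.0

swath, n, hours = plan_strips(area_width_m=5_000, altitude_m=1_200, fov_deg=40)
print(f"swath {swath:.0f} m, {n} strips, {hours:.2f} flying hours")
```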
One such work has addressed numerous research issues of flight planning for airborne LiDAR data acquisition in a
series of publications [14-20]. The researchers conceptualized a comprehensive flight planning system that consists of
the various components of flight planning (e.g., data requirements, mapping requirements, sensor characteristics,
aerial platforms, field limitations, flight duration, optimization algorithms, and simultaneous data acquisition with
multiple sensors). The proposed system closely mimics flying operations and estimates the actual cost of data
acquisition, as the estimated flight duration includes both turning time and strip time. Further, the system includes
all data characteristics as constraints and uses evolutionary algorithms to determine an optimal solution. The
flexible system design facilitates modification of any component and its inter-relations with other components, so a
higher level of abstraction can be adopted at any stage. Consequently, the system can simulate a highly realistic
scenario of airborne data acquisition with minimum user intervention, providing a flight plan, flight planning
parameters, and turning mechanisms as output. Further research is needed in this area, along with implementation of
the solutions during field operations.
5 LiDAR data and file format
LiDAR data generated by different types of sensors mainly consist of the coordinates of points (X, Y, Z), the
intensity of the return (I), associated color information (R, G, B), taken for each point from the simultaneously
flown camera, and the GNSS time at which the point was captured. Besides this, the data also contain the return
number, the number of returns for a pulse, the scan angle, edge-of-scan flags, the land cover class associated with
the point, etc. All this information is stored in the .las file format [21]. The .las format has evolved from its
earlier version 1.0 to the current 1.4 and has provision to store waveform data as well. An example of LiDAR data
displayed with elevations as colors and intensity data as gray scale is shown in Figure 3.
Figure 3: Example of LiDAR data. The image on the left shows the point cloud colored by height, while the image on the
right is the corresponding intensity image. Colors or gray level shades show relative heights or intensity levels,
respectively, and therefore no legends are given (courtesy Optech Inc.)
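Programmatically, these per-point attributes can be inspected with the open-source laspy library; a minimal sketch
follows, where the file name is hypothetical and the GPS time field requires a point format that stores it.

```python
import laspy

las = laspy.read("sample.las")          # hypothetical input file

print(las.header.version, las.header.point_count)
print(las.x[:5], las.y[:5], las.z[:5])  # scaled coordinates
print(las.intensity[:5])                # return intensity (I)
print(las.return_number[:5], las.number_of_returns[:5])
print(las.gps_time[:5])                 # GNSS time of capture
print(las.classification[:5])           # land cover class codes

# Keep only last returns, a common starting point for ground filtering
last_returns = las.points[las.return_number == las.number_of_returns]
```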
6 Accuracy analysis of LiDAR data
LiDAR data generated in the field need to undergo quality checks. An elaborate quality check methodology covering
various aspects of LiDAR data is given in the United States Geological Survey (USGS) Base Specification document [22].
Though the primary purpose of this document is to specify data for the National Geospatial Program in the USA, the
procedures are applicable to nearly all projects. The vertical accuracy of LiDAR is of prime importance, and therefore
multiple measures of vertical accuracy are suggested. The basis for vertical accuracy is the Root Mean Square Error
(RMSE), computed by comparing LiDAR points with ground surveyed points. Vertical accuracies are reported as
non-vegetated vertical accuracy (NVA) and vegetated vertical accuracy (VVA). Checkpoints for NVA are surveyed in
clear, open areas, devoid of vegetation and other vertical artifacts, where only single returns are expected. NVA is
computed as NVA = 1.96 × RMSEz. For determining VVA, the checkpoints should be surveyed in vegetated areas where
multiple returns are common. VVA is computed as the 95th percentile of the absolute differences between LiDAR and
reference data, as in these areas the errors do not follow a normal distribution.
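As an illustration, the following is a minimal sketch (with synthetic checkpoint differences) of how NVA and VVA can
be computed from the definitions above.

```python
import numpy as np

def vertical_accuracy(dz_open, dz_vegetated):
    """dz_* are LiDAR-minus-checkpoint elevation differences in metres."""
    rmse_z = np.sqrt(np.mean(np.square(dz_open)))
    nva = 1.96 * rmse_z                            # assumes normal errors
    vva = np.percentile(np.abs(dz_vegetated), 95)  # distribution-free
    return nva, vva

dz_open = np.array([0.03, -0.05, 0.04, -0.02, 0.06])
dz_veg = np.array([0.10, -0.22, 0.15, 0.30, -0.08, 0.18])
nva, vva = vertical_accuracy(dz_open, dz_veg)
print(f"NVA = {nva:.3f} m, VVA = {vva:.3f} m")
```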
Unlike vertical accuracy, the planimetric accuracy of LiDAR data is always difficult to determine in the field. There
are a few indirect methods to determine this accuracy using reference objects, e.g., building corners or specially
installed targets. The planimetric accuracy is stated as 1.7308 × RMSEr, where RMSEr is the Root Mean Square Error in
the horizontal, computed by comparing LiDAR data with ground reference. There have been attempts to determine
planimetric accuracy using more direct approaches, like Casella et al. [23] and Vosselman [24]; however, these methods
under-estimate the planimetric accuracy. A method which can determine the true planimetric accuracy of LiDAR is not
yet available.
In addition to accuracy, there are several other quality parameters against which captured data should be evaluated.
Nominal Pulse Spacing (NPS) is a measure of the nearness of data points and is defined as the average spacing between
data points; it is determined for first return data over the middle 90% of a flight swath. Similarly, Nominal Pulse
Density (NPD) is defined as the number of data points per square meter. Aggregate NPS and NPD are reported when,
instead of a single swath, aggregate data from multiple swaths are available for the same spatial location. NPS and
data density have an inverse relation. LiDAR data should not contain voids except in areas from which the laser does
not reflect, e.g., water bodies. The area of such voids, within a single swath of first return data, should not be
larger than (4 × NPS)². For any application, the uniform spatial distribution of LiDAR data is crucial; data density
alone does not represent spatial distribution. The spatial distribution of LiDAR data is assessed by counting the grid
cells that contain at least one first return point, where the size of the grid cell laid over the data is 2 × NPS. As
per the USGS base specification, at least 90% of grid cells should have at least one data point.
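The NPD and spatial distribution checks are straightforward to compute; a minimal sketch over synthetic first-return
coordinates follows, with the 2 × NPS cell size taken from the specification above.

```python
import numpy as np

def density_and_coverage(x, y, nps):
    """NPD (points per square metre) and the fraction of 2*NPS grid
    cells containing at least one first-return point."""
    cell = 2.0 * nps
    ix = np.floor((x - x.min()) / cell).astype(int)
    iy = np.floor((y - y.min()) / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    occupied = len(set(zip(ix.tolist(), iy.tolist())))
    area = (x.max() - x.min()) * (y.max() - y.min())
    return x.size / area, occupied / (nx * ny)

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 20_000), rng.uniform(0, 100, 20_000)
npd, cov = density_and_coverage(x, y, nps=0.7)
print(f"NPD = {npd:.1f} pts/m^2, coverage = {cov:.1%}")  # should exceed 90%
```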
7 Visualization of LiDAR data
As a first step in LiDAR data analysis, it is necessary to visualize the point cloud using interactive approaches.
Geovisualization is a wide area of research and contributes toward the visualization of LiDAR data as well. A range of
techniques for the visualization of LiDAR data is already available in the research literature. These techniques
include (i) projecting the 3D data to a 2D raster and treating the data as an image for color scale visualization,
(ii) visualizing the data in 2.5D using a DSM (Digital Surface Model) or DEM (Digital Elevation Model) generated from
the LiDAR data, (iii) generating 2D triangulations and 3D tetrahedralizations using Delaunay methods and visualizing
the data as 2.5D, (iv) generating contours from the LiDAR DEM and visualizing these, and (v) converting the 3D LiDAR
data into stereo-pairs and visualizing these using stereovision.
Kreylos et al. [25] and Richter and Döllner [26] have attempted the visualization of point cloud data using
out-of-core computation methods. Point cloud data were converted to voxels and visualized by Stoker [27, 28], whereas
Isenburg et al. [29] generated a raster Digital Elevation Model (DEM) via a Triangulated Irregular Network (TIN)
streaming technique. In the context of various visualization schemes for LiDAR data, Ghosh and Lohani [30, 31] carried
out extensive investigations and concluded that data visualization in stereoscopic mode has the potential to provide a
more immersive visualization experience. They further developed a pipeline for stereoscopic visualization of LiDAR
data, which yields lighter data and avoids the confusion of visualizing the point cloud alone. In this pipeline, Ghosh
and Lohani first attempted to cluster LiDAR data [31] using Density Based Spatial Clustering of Applications with
Noise (DBSCAN) [32] and Ordering Points To Identify the Clustering Structure (OPTICS) [33]. They found DBSCAN to
perform better than OPTICS in terms of both computational time and clustering performance. Subsequently, the clusters
generated by DBSCAN were classified into three types: (i) sparse and wide clusters, (ii) dome shaped clusters, and
(iii) clusters potentially containing planes. The clusters were then treated by a processing pipeline [25]. A
comparison of various existing processing pipelines reveals that the pipeline developed by Ghosh and Lohani [30]
performed almost equivalently to the Computer Aided Design (CAD) based pipeline [34].
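To make the clustering step concrete, the following is a minimal sketch of DBSCAN applied to a synthetic non-ground
point cloud using scikit-learn; the eps and min_samples values are illustrative and must be tuned to the point density
of real data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic non-ground points: two building-like blobs plus scattered noise
rng = np.random.default_rng(1)
roof1 = rng.uniform([0, 0, 9], [20, 15, 10], (500, 3))
roof2 = rng.uniform([40, 30, 5], [55, 50, 6], (500, 3))
noise = rng.uniform([0, 0, 0], [60, 60, 12], (100, 3))
pts = np.vstack([roof1, roof2, noise])

# eps is the neighbourhood radius; label -1 marks noise points
labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(pts)
print("clusters:", len(set(labels) - {-1}), "| noise points:", (labels == -1).sum())
```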
8 Processing LiDAR data for scene understanding
Unlike image data, LiDAR data are a geometric representation of the terrain. Though LiDAR data are accompanied by
intensity and sometimes RGB information, the prime source of object recognition in LiDAR data is geometry. Therefore,
approaches aimed at the identification, extraction and classification of objects in LiDAR data are primarily based on
exploiting geometric properties, sometimes assisted by intensity and RGB data. Further, unlike image classification,
where all classes are classified simultaneously, approaches for LiDAR data
have in general focused on identifying one object class at a time. The following paragraphs outline the state of the
art in this domain.
8.1 Ground classification (ground filtering)
The literature suggests that most approaches begin with outlier detection and removal, followed by "ground filtering".
Filtering involves labelling the dataset into terrain and non-terrain points. Various filtering algorithms have been
reviewed by Sithole and Vosselman [35], Kobler et al. [36], Pfeifer and Mandlburger [37] and Meng et al. [38]. It is
worth mentioning here that, after a comparison of filtering algorithms, Sithole and Vosselman [39] suggested, and
Błaszczak-Bąk et al. [40] have reiterated, that there is no single algorithm suitable for extracting all types of
terrain.
The literature on ground detection algorithms designed for LiDAR data can be classified into the following groups: (i)
morphological filters and their variants, (ii) active contouring, (iii) progressive densification, (iv) spline-based
methods, (v) surface based filters and their extensions and variants, and (vi) repetitive interpolation. The
literature in these areas is listed in Table 1.
Table 1: Ground filtering algorithms
1. Morphological filters and their extensions and variants
2. Active contouring
3. Progressive densification
4. Spline based methods
5. Surface based filters and their extensions and variants
6. Repetitive interpolation
Despite a large body of literature on ground classification, and the availability of commercial tools, the outputs
still require manual editing. To avoid this, more research is required to develop methods which are fully automatic
and accurate.
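To illustrate the morphological family of filters in Table 1, the following is a minimal sketch, assuming a gridded
elevation raster and synthetic terrain; production filters (e.g., progressive morphological filters) grow the window
size and adapt the threshold to slope, which this sketch does not.

```python
import numpy as np
from scipy.ndimage import grey_opening

def morphological_ground_filter(z_grid, window=25, dz_thresh=0.5):
    """Label cells as ground where the surface stays close to a
    morphological opening (erosion then dilation) of the elevation grid.
    The window must exceed the largest above-ground object."""
    opened = grey_opening(z_grid, size=(window, window))
    return (z_grid - opened) < dz_thresh   # True = ground candidate

# Gently sloping terrain with a 10 m high "building" block in the middle
z = np.fromfunction(lambda i, j: 0.01 * i, (100, 100))
z[40:60, 40:60] += 10.0
ground = morphological_ground_filter(z)
print(f"ground fraction: {ground.mean():.2f}")   # building cells rejected
```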
8.2 Building detection, extraction and reconstruction
Airborne LiDAR data mostly contain information from building roofs in the form of a point cloud. The literature
suggests that roofs are assumed to consist of planar facets. Once the points belonging to a particular building are
identified, the individual planes must be detected. Some authors have also attempted to derive building model
parameters from the respective point clouds. The various approaches are summarized below and tabulated in Table 2:
Table 2: Plane extraction algorithms
1. Hough transform
2. RANSAC
3. Octree splitting and merging
4. Clustering of normals
5. Target based graph matching
6. Pseudo-grid based
7. Invariant moments
Various techniques attempted for the identification of planar surfaces in LiDAR data are as follows:
1. Hough transform: A plane can be represented either by its general equation or by the triplet (θ, φ, ρ), which
encodes the direction of the normal to the plane and its distance from the origin. The Hough transform [65] identifies
planes through a voting process over the most popular local normal orientations, each identified by its respective
triplet. Plane identification in LiDAR data using the Hough transform has been studied by Overby et al. [54],
Tarsha-Kurdi et al. [55] and Lohani and Singh [66].
2. RANSAC: Random Sample Consensus (RANSAC), proposed by Fischler and Bolles [67], repeatedly fits candidate models to
random minimal samples and selects the model with the largest consensus set of inliers (a minimal sketch is given
after this list). It has been used for the detection of planar facets by Forlani et al. [68], Tarsha-Kurdi et al.
[55], Nardinocchi et al. [56], Forlani et al. [69] and Bretar [57].
3. Clustering of normals: Normals can be calculated by computing the Delaunay triangulation or the Voronoï
diagram. Building facet detection by using clustering of normals has been used by Hofmann et al. [59], Morgan
and Habib [60], Auer and Hinz [70] and Tse et al. [61].
4. Octree and pseudo grid: The octree is a tree data structure which recursively subdivides the 3D space enveloping
the point cloud, and is used for easier computational handling of voluminous point cloud data. Plane detection using
the octree data structure [71] has been carried out by Tseng and Wang [58]. Cho et al. [63], on the other hand,
divided the areal extent of a LiDAR dataset into a pseudo-grid and detected planes.
5. Model description using invariant moments: Invariant moments [72] are used for visual pattern recognition
specifically for shape analysis. Maas [73, 74] and Maas and Vosselman [64] have used first and second order
invariant moments to derive building parameters.
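The RANSAC sketch promised above: a minimal, self-contained plane fit on synthetic roof points with clutter. The
iteration count and inlier tolerance are illustrative, not values from the cited works.

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, seed=2):
    """Fit a plane n.x = d to 3D points, robust to outliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                  # degenerate (collinear) sample
            continue
        n /= norm
        d = n @ sample[0]
        inliers = np.abs(pts @ n - d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# Roof facet z = 0.3x + 5 plus scattered vegetation returns
rng = np.random.default_rng(3)
xy = rng.uniform(0, 10, (400, 2))
roof = np.column_stack([xy, 0.3 * xy[:, 0] + 5 + 0.01 * rng.standard_normal(400)])
clutter = rng.uniform([0, 0, 0], [10, 10, 10], (80, 3))
plane, inliers = ransac_plane(np.vstack([roof, clutter]))
print("normal:", np.round(plane[0], 3), "| inliers:", inliers.sum())
```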
After the planar facets of the building roofs are identified, the building models are reconstructed. Bretar [57]
comments that "the purpose of building reconstruction is to represent a building with as few point vertices as
possible." In this process, the topological primitives and the different connectivities are determined for the
building. The various approaches for reconstructing building models can be grouped as follows:
1. Polyhedral models: Rottensteiner and Jansa [75] used a raster-based approach in which the detected segments are
first grouped using a 3-4 Chamfer mask as a proximity measure, followed by boundary tracing and construction of walls,
with the final output being a wireframe model.
2. Modelling using ground plans: Brenner et al. [76] developed a heuristic subdivision of ground plans into
rectangles, as well as a rule-based reconstruction relying on discrete relaxation. Laycock and Day [77] used building
footprint information to create the walls of buildings and modelled the roofs using the concept of "straight
skeletons"; the fundamental modelling units are rectilinear polygons, and building models are derived by merging these
units.
3. Intersection of adjacent roof planes: Maas and Vosselman [64], Huber et al. [78], Forlani et al. [68], and Sampath
and Shan [79] have used the intersection of adjacent roof planes to create building models.
4. Modelling using building primitives: Teo et al. [80] constructed building primitives and reconstructed the
building using splitting and merging of these building primitives.
A summary of various approaches in the reconstruction of building models has been presented in Table 3.
Table 3: Reconstruction of building models
1. Hough transform [54, 55, 81]
2. RANSAC [57, 81]
3. Octree splitting and merging [58]
4. Clustering of normals [59]
5. Target based graph matching [62]
6. Pseudo-grid based [63]
7. Invariant moments [73]
8. Intersection of planar faces [74]
9. Polyhedral building roofs [79, 82]
8.3 Tree classification
An exhaustive review of forest related studies using LiDAR data is presented in Hyyppä et al. [83]. Although the
extraction of various tree and forest parameters from LiDAR data has been demonstrated by various researchers, not
much has been said about the representation of trees in a visualization engine. Fujisaki et al. [84] used a simple
forest model to represent trees. Morsdorf et al. [85] used clustering algorithms to detect trees and then convex hulls
to represent them. The same team extended their previous work, using rotational paraboloids and cylinders to represent
tree canopies and trunks, respectively [86].
Tree locations can be determined using point cloud local maxima [87]. Using canopy height model (CHM) based
segmentation methods, Hyyppä and Inkinen [87] and Friedlaender and Koch [88] have demonstrated individual-tree based
forest inventory. DSM or CHM images have also been used for individual tree crown (ITC) delineation and crown diameter
estimation [89, 90]. Detection of trees and large scale visualization for urban landscapes have been attempted by
Oehlke et al. [91]. LiDAR data have also been shown to be useful in biomass estimation and carbon stock assessment.
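A minimal sketch of the local-maxima idea on a synthetic CHM follows; the window size and minimum height are
illustrative and would normally be tuned to crown size and forest type.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tree_tops(chm, window=5, min_height=2.0):
    """Treetop candidates: CHM cells that are local maxima within the
    window and taller than min_height (filters shrubs and ground)."""
    is_peak = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
    return np.argwhere(is_peak)

# Toy CHM: two conical crowns on flat ground
yy, xx = np.mgrid[0:50, 0:50]
chm = np.maximum(0, 12 - 0.9 * np.hypot(xx - 15, yy - 15))
chm = np.maximum(chm, np.maximum(0, 9 - 0.9 * np.hypot(xx - 35, yy - 32)))
print(tree_tops(chm))   # approx. (row, col) pairs (15, 15) and (32, 35)
```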
8.4 Road detection
The literature on the identification of points belonging to roads can be classified into the following groups: (i)
integration of imagery and LiDAR data, (ii) integration of existing maps and LiDAR data, (iii) morphological
filtering, (iv) road extraction using graph theoretic concepts, (v) buffering methods, (vi) classifier selection
strategies, and (vii) use of range, intensity and elevation images in classification. The literature in these groups
is listed in Table 4.
Table 4: Summary of road detection algorithms
1. Integration of imagery and LiDAR data [92-94]
2. Integration of existing maps and LiDAR data [95, 96]
3. Morphological filtering using intensity and range data [97]
4. Graph theoretic concepts [98]
5. Buffering method [99]
6. Classifier selection strategy [100]
7. Range, intensity and elevation images with EM classification [101]
Among this literature on road detection, Clode et al. [102] and Zhao et al. [101] have attempted the vectorisation of
roads for map making.
8.5 Segmentation based methods
Segmentation based methods for LiDAR data are either raster based or point cloud based. The idea of segmentation is to
partition a point cloud into different groups based on certain properties set by the algorithm. Converting LiDAR data
into a raster results in the loss of three-dimensional information, and segmentation on a raster can introduce
inaccuracies. Segmentation on point clouds, on the other hand, is usually terrain-feature focused, i.e., ground versus
non-ground, building versus non-building, etc.
Among the point cloud based methods, buildings were extracted using scan line segmentation [103]. Shan and Sampath
[104] developed a method to separate ground and non-ground points, and Chehata [105] used hierarchical k-means to
separate ground, off-ground and low-off-ground classes; however, Chehata's approach is not suitable for steep
terrains. Separation of building and non-building points was carried out by Filin and Pfeifer [43] using a slope
adaptive neighbourhood technique.
Among the grid based methods, Samadzadegan et al. [106] used multi-class SVMs (Support Vector Machines) on grids
derived from LiDAR data to extract different classes. Brattberg and Tolt [107] first separated ground and non-ground
pixels and then used an object based classification to detect the buildings.
8.6 Visual Analytics based approach
Visual analytics has the potential to play a significant role in improving the structural and semantic classification
of 3D LiDAR point clouds. Kumari et al. [108] have proposed an approach using the Compute Unified Device Architecture
(CUDA) [109], which includes outlier detection, geometric classification, feature detection, line feature extraction,
and down-sampling [110]. To bridge the gap left by unsupervised machine learning algorithms in the semantic
classification of LiDAR point clouds, the authors proposed a visualization-driven technique for choosing the
clustering parameters of a hierarchical divisive classification [111]. They implemented a prototype tree visualizer
tool for a hierarchical Expectation-Maximization technique, where an overall accuracy of around 70% [112] allowed
quick assessment of datasets. The tree visualizer allows the user to consult colormaps (or heatmaps) of the different
parameters used for semantic classification in order to determine the parameter that gives the best binary
classification. Structural classes give the likelihood of a point belonging to a surface, line, or degenerate point
(junction) type of feature in its region. The authors thereby propose a tuple of structural and semantic classes,
calling it an augmented semantic classification [111], for labeling the points, e.g., (line, building), (surface,
building), etc. They report that augmented semantic classification gives better quality rendering of point clouds for
visualization, especially where line feature points highlight boundaries or edges. Structural classification is
derived from conventionally used covariance analysis, which gives the shape of the local neighborhood of each point
[108, 110]. It was found, however, that this local geometric descriptor, upon eigenvalue analysis, does not accurately
detect sharp line features (e.g., gabled roof lines) or unoriented points (e.g., foliage) [111], whereas a tensor
voting based local geometric descriptor improves the structural classification [112, 113]. The proposed descriptor
detected line features in gabled roofs, but its detection of unoriented points in foliage remained unsatisfactory. The
authors have further proposed the use of gradients of a two-dimensional local geometric descriptor to correct the
classification of such points as point-type features [114], and have used the Gradient Energy Tensor [115] to identify
points of interest, i.e., the foliage points.
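To make the covariance based structural classification concrete, the following is a minimal sketch of the eigenvalue
analysis described above, with illustrative thresholds; the cited works use more elaborate descriptors, such as tensor
voting, which are not shown here.

```python
import numpy as np
from scipy.spatial import cKDTree

def structural_class(pts, k=15):
    """Label each point surface / line / junction from the eigenvalues
    of the covariance of its k nearest neighbours (l1 >= l2 >= l3)."""
    _, idx = cKDTree(pts).query(pts, k=k)
    labels = []
    for nbrs in idx:
        lam = np.linalg.eigvalsh(np.cov(pts[nbrs].T))[::-1]  # descending
        lam = lam / lam.sum()
        if lam[2] < 0.02 and lam[1] > 0.1:
            labels.append("surface")    # two dominant directions
        elif lam[1] < 0.1:
            labels.append("line")       # one dominant direction
        else:
            labels.append("junction")   # no dominant direction
    return labels

rng = np.random.default_rng(4)
flat_patch = np.column_stack([rng.random((200, 2)), np.zeros(200)])
print(structural_class(flat_patch)[:5])   # mostly "surface"
```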
8.7 Convolutional neural networks based LiDAR data classification
Deep convolutional neural networks (CNNs) have gained a lot of popularity in object recognition research on images,
because of their higher classification accuracy compared to traditional methods and their independence from hand-tuned
parameters. Recently, researchers have started investigating the use of CNNs for the classification of point cloud
data in general and LiDAR data in particular. Kumar et al. [116] have proposed a CNN architecture for the automatic
classification of LiDAR data obtained in outdoor environments. Compared to images, LiDAR data have noise, clutter,
high point density and large size. The authors developed an architecture which deals with the problem of geometry loss
during rescaling of the point cloud, given the constraint of a fixed number of neurons in a CNN, and also proposed a
method to minimize the loss of data points due to voxelization. They reported classification accuracies ranging from
85% to 92.5%, with kappa values ranging from 75% to 83.5%, for different cases of input data. This work is currently
for mobile LiDAR data; however, the same approach can be extended to airborne data. As claimed by the authors, the
reported accuracies may be improved further with better computing facilities and extended training of the CNN.
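As a generic illustration of the voxelization-plus-CNN idea (this is not the architecture of Kumar et al. [116]; the
layer sizes, grid resolution and class count below are arbitrary), a minimal PyTorch sketch follows.

```python
import numpy as np
import torch
import torch.nn as nn

class VoxelCNN(nn.Module):
    """Toy 3D CNN over an occupancy grid; layer sizes are arbitrary."""
    def __init__(self, n_classes=4, grid=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * (grid // 4) ** 3, n_classes)

    def forward(self, occupancy):           # (batch, 1, grid, grid, grid)
        return self.classifier(self.features(occupancy).flatten(1))

def voxelize(pts, grid=32):
    """Binary occupancy grid from points normalized to the unit cube;
    points sharing a voxel collapse to one (the data loss noted above)."""
    idx = np.clip((pts * grid).astype(int), 0, grid - 1)
    vox = np.zeros((grid,) * 3, dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return torch.from_numpy(vox)[None, None]  # add batch and channel dims

pts = np.random.rand(1000, 3)               # synthetic unit-cube patch
logits = VoxelCNN()(voxelize(pts))
print(logits.shape)                          # torch.Size([1, 4])
```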
9 Future scope of technology, related research and conclusion
LiDAR data have the ability to capture and represent our physical environment like never before. Physical phenomena
occurring in our surroundings across a wide range of scales depend strongly on the physical landscape. LiDAR is being
used in applications like flood modelling, sound propagation modelling [117], and electromagnetic wave propagation
modelling, among others. There are still a large number of similar problem domains where LiDAR would find use, e.g.,
air pollution propagation, sunlight availability analysis, and air movement corridor analysis in urban environments.
The 3D Elevation Program (3DEP) of the USGS is an important area where LiDAR data are expected to play a major role,
and it is quite likely that similar programmes will be taken up globally. Stoker et al. [13] have evaluated different
sensors for this program. A similar programme is needed for India, where considerable research remains to be done to
choose the right technology and the specifications of such an elevation dataset.
The use of LiDAR from Unmanned Aerial Vehicles (UAVs) has already started; however, its commercial application is
still negligible compared to its photogrammetry based counterpart. With sensors becoming smaller and data processing
technologies like Simultaneous Localisation and Mapping (SLAM) becoming stronger, UAV LiDAR will see much wider use in
the near future. The generation of large volumes of LiDAR data, and in several cases time series of data, will require
the development of better processing techniques.
As has been seen in this paper, a variety of approaches have been used for data understanding through classification.
However, a large part of data processing still hinges on manual input. It is expected that artificial intelligence and
deep learning techniques will find much greater use in understanding LiDAR data. LiDAR data can be represented by a
large number of feature classes based on local and global geometric parameters. A large number of such features have
already been studied; however, there is a need to collate all such features in one place (similar to the eCognition
software approach for images) and develop classification techniques based on them.
This review paper has outlined state-of-the-art LiDAR sensors and their relative advantages. The paper dwells upon the
flight planning problem for an airborne platform and has emphasized the use of optimization based approaches for
minimizing project cost. The paper has also discussed the accuracy and data quality issues of LiDAR data and the
methods to quantify these, as well as conventional approaches for LiDAR data classification for different object
classes, e.g., ground, buildings, trees and roads. More importantly, the paper has highlighted recent approaches for
data classification based on visual analytics and CNNs. The paper has brought these diverse aspects of LiDAR
technology together in one place and highlighted the important research issues on which future work should focus.
References
1. Ritchie JC (1996) Remote sensing applications to hydrology: airborne laser altimeters. Hydrological Sciences
Journal 41(4).
2. Rempel RC & Parker AK (1964) An information note on an airborne laser terrain profile for micro-relief
studies. in Proceedings of the 3rd Symposium on Remote Sensing of Environment (University of Michigan
Institute of Science and Technology).
3. Hickman GD & Hogg JE (1969) Application of an airborne pulsed laser for near shore bathymetric
measurements. Remote Sensing of Environment 1.
4. Hoge FE (1988) Airborne Oceanographic LiDAR (AOL) flight mission participation. in Laboratory for Oceans
(National Aeronautics and Space Administration, Goddard Space Flight Center, Greenbelt, MD), pp 95-97.
5. Krabill WB, Collins JG, Link LE, Swift RN, & Butler ML (1984) Airborne laser topographic mapping results.
Photogrammetric Engineering and Remote Sensing 50:685-694.
6. Schreier H, Lougheed J, Tucker C, & Leckie D (1983) Automated measurements of terrain reflection and
height variations using an airborne infrared laser system. International Journal of Remote Sensing 6(1).
7. Hoge FE (1974) Integrated laser / radar satellite ranging and tracking system. Applied Optics 13(10):2352 -
2358.
8. Arp H, Griesba JC, & Burns JP (1982) Mapping in tropical forests: a new approach using the laser APR.
Photogrammetric Engineering and Remote Sensing 48(1):91-100.
9. Hofton MA, et al. (2000) An airborne laser altimetry survey of Long Valley California. International Journal
of Remote Sensing 21(12):2413-2437.
10. Wagner W, Ullrich A, Ducic V, Melzer T, & Studnicka N (2006) Gaussian decomposition and calibration of
a novel small-footprint full-waveform digitizing airborne laser scanner. ISPRS Journal of Photogrammetry
and Remote Sensing 60(2):100-112.
11. Doneus M, Briese C, Fera M, & Janner M (2008) Archaeological prospection of forested areas using full-
waveform airborne laser scanning. Journal of Archaeological Science 35(4):882 - 893.
12. Xiaoliang Z, Guihua Z, Jonathan L, Yuanxi Y, & Yong F (2016) 3D Land cover classification based on
multispectral LiDAR point clouds. International Archives of the Photogrammetry, Remote Sensing & Spatial
Information Sciences 41(B1):741-747.
13. Stoker JM, Abdullah QA, Nayegandhi A, & Winehouse J (2016) Evaluation of Single Photon and Geiger
Mode Lidar for the 3D Elevation Program. Remote Sensing 8(9).
14. Dashora A (2013) Optimization System for Flight Planning for Airborne LiDAR Data Acquisition. (Indian
Institute of Technology Kanpur, Kanpur, India).
15. Dashora A & Lohani B (2013) LiDAR technology and a new method of flight planning for airborne LiDAR
data acquisition. in ISRS - ISG Conference (Visakhapatnam, India).
16. Dashora A, Lohani B, & Deb K (2014) Method of flight planning for airborne LiDAR using genetic
algorithms. SPIE Journal of Applied Remote Sensing 8(1):1-19.
17. Dashora A, Lohani B, & Deb K (2014) LiDAR flight planning - A new system for flight planning with
minimal user intervention. GIM International Magazine.
18. Dashora A, Lohani B, & Deb K (2013) Turning mechanisms for airborne LiDAR and photographic data
acquisition. SPIE Journal of Applied Remote Sensing 7(1):1-19.
19. Dashora A, Lohani B, & Deb K (2013) Two-step procedure of optimization for solving the flight planning
problem for airborne LiDAR data acquisition. International Journal of Mathematical Modelling and
Numerical Optimization 4(4):323-350.
20. Dashora A, Lohani B, & Deb K (2012) Flight planning system for airborne data acquisition.
21. ASPRS (2011) ASPRS LAS format standard 1.4.
22. Heidemann HK (2014) Lidar base specification (ver. 1.2, November 2014) (USGS).
23. Casella V & Spalla A (2000) Estimation of Horizontal Accuracy of Laser Scanning Data. Proposal of a
Method Exploiting Ramps. The International Archives of Photogrammetry and Remote Sensing 33(B3).
24. Vosselman G (2008) Analysis of planimetric accuracy of airborne laser scanning surveys. International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVII(B3a):99-104.
25. Kreylos O, Bawden G, & Kellogg L (2008) Immersive visualization and analysis of LiDAR data. in Advances in
Visual Computing, pp 846-855.
26. Richter R & Döllner J (2010) Out-of-core real-time visualization of massive 3D point clouds. in Proceedings
of the 7th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in
Africa (ACM, Franschhoek, South Africa), pp 121-128.
27. Stoker JM (2004) Voxels as a representation of multiple-return lidar data. in Proceedings of ASPRS Annual
Conference, Denver, CO.
28. Stoker JM (2009) Volumetric visualization of multiple-return LIDAR data: Using voxels. Photogrammetric
Engineering and Remote Sensing 75(2):109-112.
29. Isenburg M, Liu Y, Shewchuk J, Snoeyink J, & Thirion T (2006) Generating Raster DEM from mass points
via TIN streaming. in Proceedings of the 4th International Conference on Geographic Information Science.
30. Ghosh S & Lohani B (2015) Development and comparison of aerial photograph aided visualization pipelines
for LiDAR datasets. International Journal of Digital Earth 8(8):656 - 677.
31. Ghosh S & Lohani B (2013) Mining LiDAR data with spatial clustering algorithms. International Journal of
Remote Sensing 34(14):5119 -5135.
32. Ester M, Kriegel H-P, Sander J, & Xu X (1996) A density-based algorithm for discovering clusters in large
spatial databases with noise. in Proceedings of the Second International Conference on Knowledge Discovery
and Data Mining (KDD-96) (AAAI Press), pp 226-231.
33. Ankerst M, Breunig M, Kriegel HP, & Sander J (1999) OPTICS: Ordering points to identify the clustering
structure. in Proceedings of ACM SIGMOD'99.
34. Ghosh S, Lohani B, & Misra N (2014) A study-based ranking of LiDAR data visualization schemes aided by
georectified aerial images. Cartographic and Geographic Information Science 41(2):138-150.
35. Sithole G & Vosselman G (2004) Experimental comparison of filter algorithms for bare-Earth extraction from
airborne laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing 59:85-101.
36. Kobler A, et al. (2007) Repetitive interpolation: A robust algorithm for DTM generation from Aerial laser
scanner data in forested terrain. Remote Sensing of Environment 108:9-23.
37. Pfeifer N & Mandlburger G (2008) LiDAR data filtering and DTM generation. in Topographic Laser Ranging and
Scanning: Principles and Processing, pp 308-333.
38. Meng X, Currit N, & Zhao K (2010) Ground filtering algorithms for airborne LiDAR data: A review of
critical issues. Remote Sensing 2(3):833-860.
39. Sithole G & Vosselman G (2003) Automatic structure detection in a point-cloud of an urban landscape. in
2nd GRSS/ISPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas, pp 67-71.
40. Błaszczak-Bąk W, Janowski A, Kamiński W, & Rapinski J (2011) Optimization algorithm and filtration using
the adaptive TIN model at the stage of initial processing of the ALS point cloud. Canadian Journal of Remote
Sensing 37(6):583-589.
41. Roggero M (2001) Airborne laser scanning: clustering in raw data. International Archives of Photogrammetry
and Remote Sensing, 3/W4, Annapolis, MD 34:227-232.
42. Sithole G (2001) Filtering of laser altimetry data using a slope adaptive filter. International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences XXXIV(3/4):203-210.
43. Filin S & Pfeifer N (2006) Segmentation of airborne laser scanning data using a slope adaptive
neighbourhood. ISPRS Journal of Photogrammetry and Remote Sensing 60:71-80.
44. Meng X, Wang L, Silván-Cárdenas JL, & Currit N (2009) A multi-directional ground filtering algorithm for
airborne LIDAR. ISPRS Journal of Photogrammetry and Remote Sensing 64(1):117-124.
45. Elmqvist M (2002) Ground surface estimation from Airborne Laser Scanner Data using active shape models.
in Photogrammetric Computer Vision, ISPRS Commission III Symposium (Graz, Austria).
46. Elmqvist M (2001) Ground estimation of laser radar data using active shape models. in OOPE workshop on
airborne laser scanning and interferometric SAR for detailed digital elevation models.
47. Axelsson P (2000) DEM Generation from laser scanner data using adaptive TIN models. in IAPRS, XXXIII,
B4/1, Amsterdam, The Netherlands.
48. Axelsson P (1999) Processing of laser scanner data - algorithms and applications. ISPRS Journal of
Photogrammetry and Remote Sensing 54(2-3):138-147.
49. Sohn G & Dowman I (2002) Terrain surface reconstruction by the use of tetrahedron model with the MDL
Criterion. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
XXXIV(Pt. 3A):336-344.
50. Brovelli MA, Cannata M, & Longoni UM (2004) LIDAR Data Filtering and DTM Interpolation Within
GRASS. Transactions in GIS 8(2):155-174.
51. Brovelli MA, Cannata M, & Longoni UM (2002) Managing and processing LIDAR data within GRASS. in
Proceedings of GRASS Users Conference (Trento, Italy, 11-13 September), p 29.
52. Kraus K & Pfeifer N (1997) A new method for surface reconstruction from laser scanner data. in IAPRS,
XXXII, 3/2W3, Haifa, Israel.
53. Lohmann P, Koch A, & Schäffer M (2000) Approaches to the filtering of laser scanner data. International
Archives of Photogrammetry and Remote Sensing, XXXIII(B3/1):534-541.
54. Overby J, Bodum L, Kjems E, & Iisoe PM (2004) Automatic 3D building reconstruction from airborne laser
scanning and cadastral data using Hough transform. International Archives of Photogrammetry and Remote
Sensing XXXV(B3):296-301.
55. Tarsha-Kurdi F, Landes T, & Grussenmeyer P (2007) Hough-Transform and Extended RANSAC Algorithms
for Automatic Detection of 3d Building Roof Planes From Lidar Data. in International Archives of
Photogrammetry and Remote Sensing (Espoo, Finland).
56. Nardinocchi C, Scaioni M, & Forlani G (2001) Building extraction from LIDAR data. in Remote Sensing and
Data Fusion over Urban Areas, IEEE/ISPRS Joint Workshop 2001, pp 79 -84.
57. Bretar F (2008) Feature extraction from LiDAR data in urban areas. in Topographic Laser Ranging and Scanning:
Principles and Processing, pp 403-419.
58. Tseng Y-H & Wang M (2005) Automatic plane extraction from LIDAR data based on octree splitting and
merging segmentation. in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium
(IGARSS '05), pp 3281-3284.
59. Hofmann AD, Maas H-G, & Streilein A (2003) Derivation of roof types by cluster analysis in parameter
spaces of airborne laserscanner point clouds. in ISPRS Commission III WG3, Workshop, 3-D reconstruction
from airborne laserscanner and InSAR data, IAPRS International Archives of Photogrammetry and Remote
Sensing and Spatial Information Sciences Vol.34, Part 3/W13, pp 112-117.
60. Morgan M & Habib A (2002) Interpolation of LiDAR Data and Automatic Building Extraction. in ACSM-
ASPRS 2002 Annual Conference Proceedings.
61. Tse R, Gold C, & Kidner D (2007) Using the Delaunay Triangulation/Voronoi Diagram to extract building
information from raw LIDAR data. in Proceedings of the 4th International Symposium on Voronoi Diagrams in
Science and Engineering (ISVD '07), pp 222-229.
62. Oude Elberink S & Vosselman G (2009) Building Reconstruction by Target Based Graph Matching on
Incomplete Laser Data: Analysis and Limitations. Sensors 9(8):6101-6118.
63. Cho W, Jwa Y-S, Chang H-J, & Lee S-H (2004) Pseudo-Grid Based Building Extraction Using Airborne
LIDAR Data. in International Archives of Photogrammetry and Remote Sensing, pp 378-381.
64. Maas HG & Vosselman G (1999) Two algorithms for extracting building models from raw laser altimetry
data. ISPRS Journal of Photogrammetry and Remote Sensing 54:153-163.
65. Hough PVC (1962) Method and means for recognizing complex patterns. US Patent 3,069,654.
66. Lohani B & Singh R (2008) Effect of data density, scan angle, and flying height on the accuracy of building
extraction using LiDAR data. Geocarto International 23(2):81-94.
67. Fischler MA & Bolles RC (1981) Random Sample Consensus: A paradigm for model fitting with applications
to image analysis and automated cartography. Communications of the ACM 24(6):381-395.
68. Forlani G, Nardinocchi C, Scaioni M, & Zingaretti P (2006) Complete classification of raw LIDAR data and
3D reconstruction of buildings. Pattern Analysis & Applications 8:357-374.
69. Forlani G, Nardinocchi C, Scaioni M, & Zingaretti P (2003) Building reconstruction and visualization from
LiDAR data. in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information
Sciences (Ancona, Italy).
70. Auer S & Hinz S (2007) Automatic extraction of salient geometric entities from LIDAR point clouds. in
Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2007), pp 2507-2510.
71. Meagher D (1980) Octree encoding: A new technique for the representation, manipulation and display of
arbitrary 3-D objects by computer. (Rensselaer Polytechnic Institute).
72. Hu M-K (1962) Visual pattern recognition by moment invariants. IEEE Transactions on Information
Theory 8(2):179-187.
73. Maas H-G (1999) Closed solutions for the determination of parametric building models from invariant
moments of airborne laserscanner data. in ISPRS Conference 'Automatic Extraction of GIS Objects from
Digital Imagery', Munich, September 6-10, pp 193-199.
74. Maas HG (1999) Fast determination of parametric house models from dense airborne laserscanner data.
International Archives of Photogrammetry and Remote Sensing XXXII(2W1).
75. Rottensteiner F & Jansa J (2002) Automatic extraction of buildings from lidar data and aerial images. in
ISPRS Commission IV, Symposium 2002 (Ottawa).
76. Brenner C, Haala N, & Fritsch D (2001) Towards fully automated 3D city model generation. in Ascona01, pp
47-57.
77. Laycock RG & Day AM (2003) Rapid generation of urban models. Computers & Graphics 27(3):423-433.
78. Huber M, Schickler W, Hinz S, & Baumgartner A (2003) Fusion of LiDAR data and aerial imagery for
automatic reconstruction of building surfaces. in 2nd GRSS/ISPRS Workshop on "Data Fusion and Remote
Sensing over Urban Areas", pp 82-86.
79. Sampath A & Shan J (2010) Segmentation and Reconstruction of Polyhedral Building Roofs From Aerial Lidar Point Clouds. IEEE Transactions on Geoscience and Remote Sensing 48(3):1554-1567.
80. Teo T-A, Rau J-Y, Chen L-C, Liu J-K, & Hsu W-C (2006) Reconstruction of Complex Buildings using LIDAR and 2D Maps. in Innovations in 3D Geo Information Systems, pp 345-354.
81. Zaharia T & Prêteux F (2002) Shape based retrieval of 3D mesh models. in Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '02), pp 437-440.
82. Chen L-C, Teo T-A, Kuo C-Y, & Rau J-Y (2008) Shaping Polyhedral Buildings by the Fusion of Vector
Maps and Lidar Point Clouds. Photogrammetric Engineering & Remote Sensing 74(9):1147-1157.
83. Hyyppä J, et al. (2008) Forest inventory using small-footprint airborne LiDAR. in Topographic Laser Ranging and Scanning: Principles and Processing, pp 335-370.
84. Fujisaki I, et al. (2008) Stand Assessment Through LiDAR-Based Forest Visualization Using a Stereoscopic
Display. Forest Science 54(1):1-7.
85. Morsdorf F, Meier E, Allgöwer B, & Nüesch D (2003) Clustering in airborne laser scanning raw data for
segmentation of single trees. in International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences.
86. Morsdorf F, et al. (2004) LIDAR-based geometric reconstruction of boreal type forest stands at single tree
level for forest and wildland fire management. Remote Sensing of Environment 92:353-362.
87. Hyyppä J & Inkinen M (1999) Detecting and estimating attributes for single trees using laser scanner. The
Photogrammetric Journal of Finland 16:27-42.
88. Friedlaender H & Koch B (2000) First experience in the application of laser scanner data for the assessment of vertical and horizontal forest structures. in International Archives of Photogrammetry and Remote Sensing, XXXIII, B7, ISPRS Congress, Amsterdam.
89. Brandtberg T, Warner T, Landenberger R, & McGraw J (2003) Detection and analysis of individual tree
crowns in small footprint, high sampling density LiDAR data from the eastern deciduous forest in North
America. Remote Sensing of Environment 85:290-303.
90. Tiede D & Hoffman C (2006) Process oriented object-based algorithms for single tree detection using laser
scanning. in International Workshop 3D Remote Sensing in Forestry Proceedings, Vienna, February 14-15,
2006.
91. Oehlke C, Richter R, & Döllner J (2015) Automatic Detection and Large-Scale Visualization of Trees for
Digital Landscapes and City Models based on 3D Point Clouds. in 16th Conference on Digital Landscape
Architecture (DLA 2015), pp 151-160.
92. Shamayleh H & Khattak A (2003) Utilization of LiDAR technology for highway inventory. in Proceedings
of the 2003 Mid-continent Transportation Research Symposium (Ames, Iowa).
93. Harvey WA & McKeown Jr. DM (2008) Automatic compilation of 3D road features using LIDAR and multi-
spectral source data. in ASPRS 2008.
94. Tiwari P, Pande H, & Pandey A (2009) Automatic urban road extraction using airborne laser
scanning/altimetry and high resolution satellite data. Journal of the Indian Society of Remote Sensing 37:223-
231.
95. Vosselman G (2003) 3D reconstruction of roads and trees for city modelling. in 3-D reconstruction from
airborne laserscanner and InSAR data (Dresden, Germany).
96. Oude Elberink S & Vosselman G (2006) 3D modelling of topographic objects by fusing 2D maps and lidar
data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences
XXXVI(4):27-30.
97. Clode S, Rottensteiner F, & Kootsookos PJ (2005) Improving City Model Determination By Using Road
Detection From LiDAR Data. in Joint Workshop of ISPRS and the German Association for Pattern
Recognition (DAGM), 'Object Extraction for 3D City Models, Road Databases and Traffic Monitoring -
Concepts, Algorithms, and Evaluation' (CMRT05) (Vienna, Austria).
98. Zhu P, Lu Z, Chen X, Honda K, & Eiumnoh A (2004) Extraction of city roads through shadow path
reconstruction using laser data. Photogrammetric Engineering and Remote Sensing 70(12):1433-1440.
99. Choi Y-W, Jang Y-W, Lee H-J, & Cho G-S (2008) Three-Dimensional LiDAR Data Classifying to Extract Road Point in Urban Area. IEEE Geoscience and Remote Sensing Letters 5(4):725-729.
100. Samadzadegan F, Hahn M, & Bigdeli B (2009) Automatic road extraction from LIDAR data based on classifier fusion. in 2009 Joint Urban Remote Sensing Event, pp 1-6.
101. Zhao J, You S, & Huang J (2011) Rapid extraction and updating of road network from airborne LiDAR data. in 2011 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pp 1-7.
102. Clode S & Rottensteiner F (2005) Classification of trees and powerlines from medium resolution Airborne
Laserscanner data in urban environments. in WDIC, pp 191-196.
103. Sithole G & Vosselman G (2005) Filtering of airborne laser scanner data based on segmented point clouds. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences XXXVI(3/W19):66-71.
104. Shan J & Sampath A (2005) Urban DEM generation from raw lidar data: A labelling algorithm and its
performance. Photogrammetric Engineering and Remote Sensing 71(2):217-226.
105. Chehata N, David N, & Bretar F (2008) LIDAR Data Classification using Hierarchical k-means clustering. in International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences (Beijing, China), pp 325-330.
106. Samadzadegan F, Bigdeli B, & Ramzi P (2010) A Multiple Classifier System for Classification of LIDAR Remote Sensing Data Using Multi-class SVM. in Multiple Classifier Systems, pp 254-263.
107. Brattberg O & Tolt G (2008) Terrain classification using airborne lidar data and aerial imagery. in The
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (Beijing
2008), pp 261-266.
108. Kumari B, Ashe A, & Sreevalsan-Nair J (2014) Remote Interactive Visualization of Parallel Implementation of Structural Feature Extraction of Three-dimensional Lidar Point Cloud. in Big Data Analytics (Springer International Publishing, Third International Conference, BDA 2014, New Delhi, India, December 20-23, 2014), pp 129-132.
109. NVIDIA (2008) NVIDIA CUDA Programming Guide 2.0 (NVIDIA Corporation).
110. Keller P, et al. (2011) Extracting and Visualizing Structural Features in Environmental Point Cloud LiDAR Data Sets. Topological Methods in Data Analysis and Visualization: Theory, Algorithms, and Applications (Springer), pp 179-192.
111. Kumari B & Sreevalsan-Nair J (2015) An Interactive Visual Analytic Tool for Semantic Classification of 3D Urban LiDAR Point Cloud. in Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM, Seattle, Washington), pp 73:1-73:4.
112. Kumari B (2016) Visualization Techniques in Classification of 3D Urban LiDAR Point Cloud. Master of
Science by Research (IIIT Bangalore).
113. Sreevalsan-Nair J & Kumari B (2017, preprint) Local Geometric Descriptors for Multi-Scale Probabilistic Point Classification of Airborne LiDAR Point Clouds. Mathematics and Visualization:1-26.
114. Sreevalsan-Nair J & Jindal A (2017) Using Gradients and Tensor Voting in 3D Local Geometric Descriptors for Feature Detection in Airborne LiDAR Point Clouds in Urban Regions. in Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (to appear).
115. Felsberg M & Köthe U (2005) GET: The Connection Between Monogenic Scale-Space and Gaussian
Derivatives. in Scale Space and PDE Methods in Computer Vision (Springer Berlin Heidelberg, Berlin,
Heidelberg), pp 192-203.
116. Kumar B, Lohani B, & Pandey G (2017) Development of deep learning architecture for automatic classification of mobile LiDAR data. Int. J. of Photogrammetry and Remote Sensing (communicated).
117. Biswas S & Lohani B (2008) Development of high resolution 3D sound propagation model using LiDAR
data and aerial photo. in The International Archives of the Photogrammetry, Remote Sensing and Spatial
Information Sciences (Beijing 2008).