BARE EARTH EXTRACTION FROM AIRBORNE LIDAR DATA USING DIFFERENT
FILTERING METHODS
A. Baligh*, M. J. Valadan Zoej, A. Mohammadzadeh
Geodesy and Geomatics Engineering Faculty, K.N. Toosi University of Technology, No. 1346, Vali_Asr St., Tehran, Iran, Postal Code: 1996715433
ali_baligh@yahoo.com, valadanzouj@kntu.ac.ir, ali_mohammadzadeh@alborz.kntu.ac.ir
Commission III, WG III/3
KEY WORDS: LIDAR, Point Cloud, Classification, DTM, Landscape, Segmentation
ABSTRACT:
Over the past years, several filters have been developed to extract bare-Earth points from airborne laser scanner data. We conducted a test to determine the performance of three different filters, to apply wavelets for the degradation of blunders in LIDAR data, and to identify directions for future research. Five selected samples have been processed by three participants. In this paper, the test results are presented. The paper describes the characteristics of the filter approaches used, and the filter performance is analysed quantitatively. The performance of all filters depends on the complexity of the landscapes in which they are used. In general, wavelet de-noising is found to perform well, and it is highly recommended to use wavelets for de-noising LIDAR data. Future research should be directed towards the use of wavelets in filtering algorithms.
1. INTRODUCTION
The use of airborne Light Detection And Ranging (LIDAR)
technology offers rapid high resolution capture of surface
elevation data suitable for a large range of applications. The
commercial use of LIDAR in the last few years has gained more
importance as more reliable and accurate systems are produced.
While LIDAR has come a long way, the selection of appropriate
data processing techniques for the generation of models is still
being researched. In order to produce digital elevation models,
filtering and quality control pose the greatest challenges,
consuming an estimated 60–80% of processing time (Flood,
2001), and so underlining the necessity for research in this field.
Comparisons of known filtering algorithms and their problems have been presented in Huising and Gomes Pereira (1998), Haugerud and Harding (2001), Tao and Hu (2001) and Sithole and Vosselman (2003). Although such experimental comparisons are available, it would still be useful to improve the performance of the different approaches by adding hints on how to apply the algorithms. In order to assess the weaknesses of the different approaches, we used the available reference data.
Therefore, we established a study to compare the performance
of three automatic filters developed to date, with the aim of:
– determining the comparative performance of these filters,
– applying a wavelet de-noising method for the detection of
blunders in LIDAR data, and
– identifying directions for future research on
filtering point clouds.
To achieve these aims, we used the ISPRS web site, on which datasets were provided for testing. Processing all of the datasets was not possible, so a subset was used. Results were received from three participants. The algorithms are: hierarchical terrain recovery using an image pyramid (Hu, 2003), filtering of airborne laser scanner data based on segmented point clouds (Sithole, 2005), and slope-based filtering (Vosselman, 2000). We used these algorithms because of their novel methods for extracting the bare Earth from airborne laser scanner point clouds.
This paper is structured as follows: The laser scanning data used
in this paper is described in Section 2. Section 3 describes three
selected filter algorithms. Section 4 deals with filter concepts.
Section 5 describes the application of wavelets to LIDAR de-noising, and the results of the implementations are described
and discussed in Section 6. In Section 7, the conclusions are
drawn with respect to the objectives set out.
2. TEST DATA
Within the ISPRS web site (http://www.itc.nl/isprswgIII-3/filtertest/StartPage.htm) we found some free datasets which belong to the OEEPE project on laser scanning (OEEPE, 2000). FOTONOR acquired the data with an Optech ALTM scanner over the Vaihingen/Enz test field and the Stuttgart city centre. By the kind permission of Commission III, Working Group 3, subsets of this dataset are available for the comparison of filtering algorithms. Reference data, produced by interactively filtering the datasets, is also accessible.
2.1. Data provided to the participants
A total of five samples (of urban areas) were chosen because they contained a variety of features that were expected to be difficult for the filtering. The datasets included terrain with steep slopes, dense vegetation, densely packed buildings with vegetation in between, large buildings, multi-level buildings with courtyards, ramps, underpasses, etc. The samples of urban areas were recorded with a point spacing of 1–1.5 m (0.67 points per square metre). For each of the samples, reference data is available. This reference data has been compiled by semi-automatic filtering and manual editing. Each record in the reference data files contains the X, Y, and Z coordinates followed by either a 0 (ground point) or 1 (non-ground point). Figure 1 shows a part of an urban test site.
Figure 1. A part of an urban test site
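As a simple illustration of this record format, the following Python sketch reads such a reference file and separates ground from non-ground points; the file name is hypothetical and the whitespace-delimited layout is assumed from the description above.

    import numpy as np

    # Minimal sketch: each record is "X Y Z label", with label 0 for ground
    # and 1 for non-ground points.  The file name below is hypothetical.
    records = np.loadtxt("samp11_ref.txt")            # shape (N, 4)
    xyz, labels = records[:, :3], records[:, 3].astype(int)

    ground = xyz[labels == 0]       # bare-Earth points
    non_ground = xyz[labels == 1]   # object points (buildings, vegetation, ...)
    print(len(ground), "ground points,", len(non_ground), "non-ground points")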
3. OVERVIEW OF FILTERING ALGORITHMS
An overview of the algorithms and a characterisation of filters
are given in Table 1. The filtering algorithms are described in
more detail below.
3.1. George Vosselman
The basic idea is based on the observation that a large height difference between two nearby points is unlikely to be caused by a steep slope in the terrain. More likely, the higher point is not a ground point. Clearly, for a given height difference, the probability that the higher point could be a ground point decreases as the distance between the two points decreases. The filter explicitly defines the acceptable height difference between two points as a function of the distance between the points: Δh_max(d). In general, this will be a non-decreasing function. Further details can be found in Vosselman (2000).
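As an illustration, a minimal Python sketch of this slope-based criterion is given below. It assumes a linear threshold function Δh_max(d) = s·d with a user-chosen slope s and uses a brute-force neighbour search; Vosselman (2000) instead derives the threshold function from training data and uses efficient filter kernels, so this is only a conceptual sketch.

    import numpy as np

    def slope_based_filter(points, slope=0.3, radius=5.0):
        """Label points as ground (True) or non-ground (False).

        A point is rejected if some nearby point is lower by more than
        dh_max(d) = slope * d, i.e. the height jump cannot be explained
        by an admissible terrain slope.  Brute-force O(N^2) sketch.
        """
        xy, z = points[:, :2], points[:, 2]
        is_ground = np.ones(len(points), dtype=bool)
        for i in range(len(points)):
            d = np.linalg.norm(xy - xy[i], axis=1)
            nearby = (d > 0) & (d < radius)
            # reject point i if it rises above a nearby point faster
            # than the acceptable slope allows
            if np.any(z[i] - z[nearby] > slope * d[nearby]):
                is_ground[i] = False
        return is_ground

In practice, a kd-tree neighbour search would replace the brute-force loop for large point clouds.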
3.2. Yong Hu
The algorithm identifies terrain points by finding local minima and other topographic points, and recovers the terrain surface in a coarse-to-fine manner. First, after screening the blunders, the scattered 3-D points are transformed into a grid-based range image by selecting the point of lowest elevation in each grid cell. The raw range image is then processed to fill void areas and correct distortions. Next, an image pyramid is generated.
The top-level image is hypothesized to be a coarse DTM if its grid size is larger than the largest non-terrain object. Finally, the coarse DTM is refined hierarchically from the top level to the bottom level. At each level, denser terrain points are identified, and the non-terrain points are replaced by interpolated elevations using surrounding terrain points. The bottom-level image represents the expected bare-Earth surface.
Post-filtering is performed on the resultant DTM to improve its quality for mapping purposes. This smoothing attempts to correct the influence of speckles and undesired non-terrain undulations of the DTM. Lastly, the void areas in the raw range image may be duplicated to preserve water regions.
The quality of the derived DTM is subject to the thresholding parameters used in the algorithm. The optimal values of these parameters may vary with the scene complexity. One can use a priori knowledge, if available, including the terrain relief range, the maximum and minimum building sizes, heights and areas, the maximal tree height, etc., to determine the parameters empirically first, and then adjust them adaptively if multi-return or intensity data is available. Further details can be found in Hu (2003).
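A highly simplified sketch of the coarse-to-fine idea is given below. It assumes the point cloud has already been gridded into a void-free range image; a pyramid is built by taking block minima, and at every level cells that rise too far above the surface interpolated from the coarser level are replaced by that surface. The threshold dz and the 2x2 block size are illustrative assumptions; Hu's actual algorithm adds blunder screening, distortion correction and adaptive, scene-dependent thresholds.

    import numpy as np
    from scipy.ndimage import zoom

    def minimum_pyramid(range_img, levels=4):
        """Build an image pyramid by taking the minimum over 2x2 blocks."""
        pyr = [range_img]
        for _ in range(levels):
            img = pyr[-1]
            h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
            blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
            pyr.append(blocks.min(axis=(1, 3)))
        return pyr

    def coarse_to_fine_dtm(range_img, levels=4, dz=1.0):
        """Refine a coarse DTM down the pyramid: cells violating the
        smoothness condition (more than dz above the coarser surface)
        are replaced by the interpolated coarse elevation."""
        pyr = minimum_pyramid(range_img, levels)
        dtm = pyr[-1]            # top level taken as the coarse bare Earth
        for img in reversed(pyr[:-1]):
            ref = zoom(dtm, (img.shape[0] / dtm.shape[0],
                             img.shape[1] / dtm.shape[1]), order=1)
            dtm = np.where(img - ref > dz, ref, img)   # replace non-terrain cells
        return dtm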
3.3. George Sithole
In this algorithm, emphasis is placed on establishing the topological and geometric relations between bare-Earth and object surfaces. Filtering is defined as the identification of surface segments whose perimeter is raised above their neighbourhood. Meaningful surfaces can be reconstructed for large objects but not for small objects (too few points). Therefore, in the algorithm large and small objects are detected separately. Large objects are treated by segmentation of the point cloud, while small objects are handled by smoothing of the point cloud in a later stage.
The segmentation of a point cloud into smooth surfaces is the
first important step of the algorithm. Two points are considered
to be part of the same surface if there is a smooth path of
adjacent points between them. This definition allows for
discontinuities within a surface as long as there is a path around
a discontinuity that connects points on both sides.
In this filter, a scan line segmentation algorithm is used in a different manner, by defining and segmenting scan lines with multiple orientations. A point cloud is sliced into contiguous profiles, and this slicing is done in several directions. Once profiles have been obtained, they are segmented to get line segments that represent continuous planar curves on surfaces in the landscape. Segmentation of the point cloud is achieved by overlaying all segmented profiles, which leads to the generation of a graph, and a surface segment is therefore a connected subgraph. The use of profiles in several directions hence provides an elegant way to combine the profile segmentation results into a surface segmentation. Two adjacent parallel profile segments are connected only if there exists a profile segment with another orientation that contains points of both these parallel segments.
The algorithm can be described as a procedural stripping away of objects from the bare-Earth in the following sequence: large objects, bridges, and small objects. In each step of the sequence smaller and smaller objects are removed. The explicit detection of bridges is necessary to ensure the reliability of the detected bare-Earth. Further details can be found in Sithole (2005).
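The sketch below illustrates only the profile-segmentation step, under the simplifying assumption that a profile is smooth wherever the height difference between consecutive points stays below a threshold; the overlay of profiles from several directions into a connected graph, and the later classification of raised segments, are not shown.

    import numpy as np

    def segment_profile(profile_z, max_jump=0.3):
        """Split one profile (heights ordered along the profile direction)
        into smooth segments: a new segment starts wherever the height
        difference between consecutive points exceeds max_jump.
        Returns a list of index arrays, one per segment."""
        breaks = np.where(np.abs(np.diff(profile_z)) > max_jump)[0] + 1
        return np.split(np.arange(len(profile_z)), breaks)

    # Example: a terrain run, a raised roof, then terrain again
    z = np.array([10.0, 10.1, 10.1, 14.0, 14.1, 14.0, 10.2, 10.1])
    print(segment_profile(z))   # three segments: terrain, roof, terrain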
Participant                                          Filter description
George Vosselman, Delft University of Technology     Slope-based filter
Yong Hu, University of Calgary                       Hierarchical terrain recovery
George Sithole, Delft University of Technology       Segmented point clouds

Table 1. Test participants
4. FILTER CONCEPT
Every filter has an assumption about the structure of bare-Earth points in a landscape. In the slope-based algorithm (Vosselman), the slope or height difference between two points is important: if the slope exceeds a certain predefined threshold, then the higher point is assumed to belong to an object. In the hierarchical terrain recovery algorithm (Hu), the top level of an image pyramid is assumed to be a coarse bare-Earth. In such algorithms a smoothness condition is established to validate that the elevation at the current level and the estimated elevation from the reference level are reasonably close at every point of a local patch. If any point in the current window violates the smoothness condition, then that point is classified as a non-terrain point. In the segmentation-based algorithm (Sithole), the assumption is that all points in a segment belong to an object if their segment is raised above its neighbourhood. It is important to note that for such a concept to work, the segments must delineate objects and not facets of objects.
5. WAVELET DE-NOISING
Blunder detection in LIDAR data using wavelets is, in this paper, based on the decomposition of the raw LIDAR image in order to detect salient changes in height. In the context of airborne laser scanning data, a filter response represents a discontinuity caused by a blunder, whereas the underlying landscape does not respond. By employing blunder detection using wavelets, low frequencies such as the ground or large objects are separated from the high-frequency components which represent blunders.
The LIDAR point cloud is first regularly gridded. The resulting matrix is then decomposed using the Discrete Wavelet Transform (Goswami and Chan, 1999) into low frequencies and high frequencies. The energy is evenly distributed among the sub-images and, therefore, the amplitudes of the sub-images become lower (Bartels, Wei, and Mason, 2005). In this study, a level-4 decomposition is applied to the LIDAR data. Discontinuities give responses in the details depending on their relative position to the wavelet kernel.
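A minimal sketch of this decomposition step, assuming the PyWavelets library and a simple lowest-elevation gridding of the point cloud (the cell size, the synthetic stand-in data and the void-filling strategy are illustrative assumptions), is:

    import numpy as np
    import pywt

    def grid_to_range_image(points, cell=1.0):
        """Grid an N x 3 point cloud into a lowest-elevation range image."""
        xy, z = points[:, :2], points[:, 2]
        cols, rows = ((xy - xy.min(axis=0)) / cell).astype(int).T
        img = np.full((rows.max() + 1, cols.max() + 1), np.nan)
        for r, c, h in zip(rows, cols, z):
            if np.isnan(img[r, c]) or h < img[r, c]:
                img[r, c] = h
        return img

    # Synthetic stand-in point cloud (x, y in a 100 m x 100 m area, z in metres)
    rng = np.random.default_rng(0)
    points = np.column_stack([rng.uniform(0, 100, 5000),
                              rng.uniform(0, 100, 5000),
                              rng.uniform(0, 5, 5000)])

    # Level-4 2-D discrete wavelet decomposition of the range image:
    # coeffs[0] is the low-frequency approximation (ground, large objects),
    # coeffs[1:] hold the high-frequency details in which blunders respond.
    range_img = grid_to_range_image(points)
    range_img = np.nan_to_num(range_img, nan=np.nanmin(range_img))  # crude void filling
    coeffs = pywt.wavedec2(range_img, wavelet='sym6', level=4)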
Figure 2. (a) LIDAR range image, (b) degradation of blunders by wavelets
Figure 2(a, b) depicts wavelet de-noising of a LIDAR range image using the Symlets(6) wavelet filter (Daubechies, 1992). The scene in Figure 2(a) represents building blocks of different heights on the terrain, with blunders. After applying wavelet de-noising, the blunders in the scene are degraded, as shown in Figure 2(b). From Figure 2(b), two observations can be made. First, the blunders could be successfully degraded. Second, it can clearly be seen that the filter response depends on the degree of the salient height, i.e. the larger the difference between adjacent height values, the higher the magnitude of the response. However, as expected, it can also clearly be seen that flat roofs are not degraded, as there is no distinctive change in height.
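Continuing the previous sketch, one possible de-noising step is to shrink the detail coefficients before reconstruction; the soft thresholding rule and the universal threshold used below are common heuristics and are assumptions, since the paper does not spell out its exact de-noising rule.

    import numpy as np
    import pywt

    def wavelet_denoise(range_img, wavelet='sym6', level=4):
        """De-noise a gridded LIDAR range image by soft-thresholding the
        detail coefficients of a level-4 DWT decomposition."""
        coeffs = pywt.wavedec2(range_img, wavelet=wavelet, level=level)
        # estimate the noise level from the finest diagonal detail band
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(range_img.size))
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(d, thr, mode='soft') for d in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet=wavelet)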
6. RESULTS
The filter results of these algorithms have been analysed in various ways. The data of the five reference samples has been used to assess the performance of the algorithms in several difficult terrain types.
The quantitative assessment was done by evaluating Type I errors (rejection of bare-Earth points) and Type II errors (acceptance of object points as bare Earth). It must be stressed that the results presented here do not cover all the difficulties in filtering observed in the data; nevertheless, in general all the filters worked quite well.
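Given the labelling convention of the reference data (0 = ground, 1 = non-ground), these error rates can be computed as in the following sketch; the function and variable names are illustrative.

    import numpy as np

    def type_errors(reference, filtered):
        """Type I and Type II error percentages.

        reference : true labels (0 = ground, 1 = non-ground)
        filtered  : labels assigned by the filter, same convention
        Type I  = ground points wrongly rejected as objects
        Type II = object points wrongly accepted as ground
        """
        reference, filtered = np.asarray(reference), np.asarray(filtered)
        ground, objects = reference == 0, reference == 1
        type1 = 100.0 * np.sum(ground & (filtered == 1)) / max(ground.sum(), 1)
        type2 = 100.0 * np.sum(objects & (filtered == 0)) / max(objects.sum(), 1)
        return type1, type2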
The output of Hu's filter is gridded and is therefore shifted in position relative to the original points. Therefore, DEMs were generated for the filtered data and the heights of the points in the reference data were compared against these DEMs. The Type I and Type II errors have to be understood in the context of a height comparison of the reference data against the filtered DEMs.
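One simple way to realise this height comparison is sketched below: a DEM is interpolated from the filtered ground points, and reference points lying within a height tolerance of that DEM are counted as ground. The tolerance value and the linear interpolation are assumptions, not the exact procedure used in the test.

    import numpy as np
    from scipy.interpolate import griddata

    def labels_from_dem(filtered_ground_xyz, reference_xyz, tol=0.2):
        """Assign 0/1 labels to reference points by comparing their heights
        with a DEM interpolated from the filtered ground points (tol is an
        assumed height tolerance in metres)."""
        dem_z = griddata(filtered_ground_xyz[:, :2], filtered_ground_xyz[:, 2],
                         reference_xyz[:, :2], method='linear')
        dz = reference_xyz[:, 2] - dem_z
        # points close to the DEM are treated as ground (0), others as objects (1)
        return np.where(np.abs(dz) <= tol, 0, 1)

The resulting labels can then be passed to type_errors together with the reference labels.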
Test sample   Number of points   Vosselman Type I / II (%)   Hu Type I / II (%)   Sithole Type I / II (%)
Samp11        38010              43.2 / 5.7                  13.6 / 7.8           28.3 / 13.7
Samp12        52119              23.8 / 3.5                  4.8 / 2.6            3.7 / 2.6
Samp21        12960              10.4 / 1.6                  1.5 / 12.7           3.3 / 1.7
Samp31        28862              5.6 / 1.9                   6.1 / 1.9            2.3 / 3.2
Samp41        11231              51.8 / 2.6                  21.6 / 1.7           6.8 / 4.9

Table 2. Percentages of Type I and Type II errors
All the filtering algorithms fail sometimes. This failure is caused by the fact that filters are blind to the context of structures in relation to their neighbourhoods. Because of this, a trade-off is involved between making Type I and Type II errors. The computed errors (over all the datasets) ranged from 1.5% to 51.8% for Type I errors and from 1.6% to 13.7% for Type II errors. Vosselman's filter focuses on minimizing Type II errors. This tendency to minimize Type II errors partly suggests that the cost of Type II errors is considered to be much higher than that of Type I errors. Table 2 shows the percentages of Type I and Type II errors for the three tested algorithms.
7. CONCLUSIONS
In this paper a comparison of three different filtering methods was presented. The objectives of the study were to: (1) determine the performance of the filter algorithms, (2) apply a wavelet de-noising method for the detection of blunders in LIDAR data, and (3) establish directions for future research. In general, all the filters worked quite well in landscapes of low complexity. As seen in Table 2, the performance of a filter can differ depending on the feature content of a landscape. The problems that pose the greatest challenges appear to be complex cityscapes and discontinuities in the bare-Earth, so a combination of different strategies may be a solution. Wavelet de-noising methods showed their ability to detect and degrade blunders in LIDAR data. Therefore, it is highly recommended to use wavelets in further research, and even within filtering methods for the purpose of object detection.
REFERENCES
Bartels, M., H. Wei, and D. C. Mason (2005). Wavelet packets and co-occurrence matrices for texture-based image segmentation. IEEE International Conference on Advanced Video and Signal-Based Surveillance, 1, pp. 428–433.
Daubechies, I. (1992). Ten lectures on wavelets. CBMS-NSF Regional Conference Series in Applied Mathematics.
Flood, M. (2001). Lidar activities and research priorities in the
commercial sector. IAPRS vol. 34 (3W/4, October 22-24,
Annapolis (MD), USA), pp. 678–684.
Goswami, J. C. and A. K. Chan (1999). Fundamentals of wavelets: theory, algorithms, and applications. New York, Chichester: Wiley.
Huising, E. and L. M. Gomes Pereira (1998). Errors and accuracy estimates of laser altimetry data acquired by various laser scanning systems for topographic applications. ISPRS Journal of Photogrammetry and Remote Sensing, vol. 53 (5), pp. 245–261.
Hu, Y. (2003). Automated Extraction of Digital Terrain Models, Roads and Buildings Using Airborne Lidar Data. Ph.D. thesis, University of Calgary. http://www.geomatics.ucalgary.ca/links/GradTheses.html (accessed 30 April 2008).
OEEPE, 2000. Working Group on laser data acquisition. ISPRS
Congress 2000. http://www.terra.geomatics.kth.se/~fotogram/
OEEPE/ISPRS_Amsterdam_OEEPE_presentation.pdf
(accessed January 22, 2004).
Sithole, G. and G. Vosselman (2003). Automatic structure
detection in a pointcloud of an urban landscape. Proceedings of
2nd Joint Workshop on Remote Sensing and Data Fusion over
Urban Areas (Urban 2003), May 22-23, Berlin, Germany, pp.
67–71.
Sithole, G. (2005). Filtering of airborne laser scanner data based on segmented point clouds. ISPRS WG III/3, III/4, V/3 Workshop "Laser Scanning 2005", Enschede, the Netherlands, September 12-14, 2005.
Tao, C. and Y. Hu (2001). A review of post processing
algorithms for airborne lidar data. Proceedings ASPRS
conference April 23-27, St. Louis Missouri. CDROM, 14 pages.
Vosselman, G. (2000). Slope based filtering of laser altimetry
data. IAPRS, WG III/3, Amsterdam, The Netherlands vol. 33
(B3), pp. 935–942.