An improved simple morphological filter for the terrain classification of airborne LIDAR data
Thomas J. Pingel, Keith C. Clarke, William A. McBride
Northern Illinois University, Department of Geography, DeKalb, IL 60115, USA
University of California, Santa Barbara, Department of Geography, Santa Barbara, CA 93101, USA
article info
Article history:
Received 22 September 2011
Received in revised form 20 December 2012
Accepted 21 December 2012
Available online 27 January 2013
Keywords: Virtual reality

abstract

Terrain classification of LIDAR point clouds is a fundamental problem in the production of Digital Elevation Models (DEMs). The Simple Morphological Filter (SMRF) addresses this problem by applying image processing techniques to the data. This implementation uses a linearly increasing window and simple slope thresholding, along with a novel application of image inpainting techniques. When tested against the ISPRS LIDAR reference dataset, SMRF achieved a mean 85.4% Kappa score when using a single parameter set and 90.02% when optimized. SMRF is intended to serve as a stable base from which more advanced progressive filters can be designed. This approach is particularly effective at minimizing Type I error rates, while maintaining acceptable Type II error rates. As a result, the final surface preserves subtle surface variation in the form of tracks and trails that make this approach ideally suited for the production of DEMs used as ground surfaces in immersive virtual environments.
© 2013 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

ISPRS Journal of Photogrammetry and Remote Sensing 77 (2013) 21–30
1. Introduction
One important application of LIght Detection and Ranging (LIDAR) technology is the creation of high-resolution Digital Elevation Models (DEMs) that capture the shape of the terrain in great detail. DEMs are used for a variety of purposes, but one of the most recent is the employment of their derived ground surfaces in immersive geographic virtual environments. While large-extent DEMs, such as those produced from the Shuttle Radar Topography Mission (SRTM), are useful as ground surfaces in global and other large-extent views of the environment (e.g., NASA's World Wind or Google Earth), immersion at a locality requires a higher-fidelity model to convey the correct sense of the terrain within highly varying environments. This need can be met through ground surfaces derived from LIDAR datasets. The most basic form of LIDAR data commonly utilized by researchers is the ‘‘point cloud’’ – a cluster of three-dimensional points and, often, associated attributes like the intensity of the return or color information when laser scanners are integrated with digital cameras.
The challenge with such point clouds is that they are rarely useful in themselves, but must instead be processed and transformed into ground models and representations of objects such as buildings and trees. Dozens of algorithms have been published on the extraction of terrain from the point cloud alone (Meng et al., 2010; Shan and Toth, 2008; Vosselman and Maas, 2010). While most algorithms perform tolerably well on unbuilt and undifferentiated terrain, the task has proven difficult for complex urban and highly rugged environments. There have been many attempts to categorize ground filtering algorithms based on their methodology (e.g., Liu, 2008; Sithole and Vosselman, 2004). Meng et al. (2010) helpfully and systematically identify key dimensions (or attributes) on which algorithms typically differ, including whether they use the first, last, or full LIDAR return set, and whether the raw LIDAR data are used to interpolate a surface or whether they are initially fitted to a gridded data structure.
Algorithms are typically tested against computer-simulated datasets for which the ‘‘true’’ ground is known, or else tested ad hoc against available LIDAR scans. In the latter case, qualitative interpretation of the resulting hillshaded surface is common, as is manually verifying accuracy against a selected representative sample of the LIDAR returns (Zhang et al., 2003). In all of these cases, test datasets are idiosyncratic, and thus meaningful comparison of performance between algorithms is difficult. To mitigate this problem, the International Society of Photogrammetry and Remote Sensing (ISPRS) commissioned a study in which eight algorithms were tested against seven high-resolution LIDAR datasets, four of which were urban landscapes and three of which were forested landscapes (Sithole and Vosselman, 2003, 2004). Fifteen samples from these larger datasets were selected, and the returns
within each sample were manually coded as either ground/bare-earth (BE) or non-ground/object (OBJ) observations. The publication of the findings, and more importantly the datasets used
in the study, has enabled more direct comparisons of accuracy against the original eight algorithms and against all subsequent algorithms that utilize the data. Table 1 lists the fifteen ISPRS samples along with several relevant characteristics (Sithole and Vosselman, 2003, 2004).
There are multiple metrics of performance for ground filtering algorithms. One metric of accuracy is the Type I error rate, which is equal to the number of BE points mistakenly classified as OBJ divided by the true number of BE points. Another metric is the Type II error rate, which is equal to the number of OBJ points mistakenly classified as BE divided by the true total number of OBJ points. The total error rate is equal to the sum of all mistaken classifications divided by the total number of points in the dataset. The results of the original test were given in these values, and some recent algorithms tested against the ISPRS dataset utilize these metrics (Chen et al., 2007). Other recently published algorithms tested against the ISPRS samples (e.g., Jahromi et al., 2011; Meng et al., 2009; Shao and Chen, 2008; Silván-Cárdenas and Wang, 2006) utilize Cohen's Kappa statistic (Cohen, 1960) as a measure of accuracy. Kappa measures the overall agreement between two judges while taking into account the possibility of chance agreement in the observed frequencies, and is commonly used in its native domain of psychology as well as in remote sensing (Congalton, 1991; Jensen, 2005). The range of Kappa values extends from positive to negative one, with positive one indicating strong agreement, negative one indicating strong disagreement, and zero indicating chance-level agreement.
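As an illustration, all four metrics can be computed from the four cells of a BE/OBJ confusion matrix. The sketch below is ours (function and variable names are illustrative, not from any published implementation); it simply encodes the definitions above:

```python
def filter_metrics(n_be_correct, n_be_as_obj, n_obj_as_be, n_obj_correct):
    """Type I, Type II, total error, and Cohen's Kappa for a ground
    filter, given the four counts of a BE/OBJ confusion matrix."""
    n_be = n_be_correct + n_be_as_obj      # true bare-earth points
    n_obj = n_obj_correct + n_obj_as_be    # true object points
    n = n_be + n_obj
    type1 = n_be_as_obj / n_be             # BE mistakenly classified as OBJ
    type2 = n_obj_as_be / n_obj            # OBJ mistakenly classified as BE
    total = (n_be_as_obj + n_obj_as_be) / n
    # Cohen's Kappa: observed agreement corrected for chance agreement,
    # where chance agreement comes from the marginal frequencies.
    p_obs = (n_be_correct + n_obj_correct) / n
    p_chance = (n_be * (n_be_correct + n_obj_as_be) +
                n_obj * (n_be_as_obj + n_obj_correct)) / n ** 2
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return type1, type2, total, kappa
```

For example, a filter that mislabels 10 of 100 BE points and 5 of 100 OBJ points has Type I = 10%, Type II = 5%, total error = 7.5%, and Kappa = 85%.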
One of the most robust algorithms for ground filtering is Axelsson's adaptive triangulated irregular network (TIN) model (1999, 2000). While many algorithms require the data to first be fitted to a gridded data structure, Axelsson's algorithm begins with a collection of seed points taken from the original data set, and iteratively adds points from the set that meet criteria related to elevation and directional changes. Of the eight algorithms originally tested, Axelsson's performed the best on twelve of the fifteen samples, had the lowest mean total error rate of 4.82%, and the highest mean Kappa score of 84.19% (Table 2). The second best performing filter originally tested, the Hierarchical Robust Interpolation algorithm (Kraus and Pfeifer, 1998; Pfeifer et al., 1999), had a mean total error rate of 8.03% and an overall mean Kappa score of 75.27%. It performed the best on two of the fifteen samples (Samples 2–1 and 3–1), both of which were drawn from urban landscapes. The Active Contours algorithm of Elmqvist et al. (2001) performed best on the remaining sample (Sample 4–1), though poor performance on a number of other samples led to a comparatively high mean total error rate (20.73%) and low Kappa score (57.78%).
The excellent performance of Axelsson's filter is a strong argument in favor of retaining and processing all of the original data points before interpolating a Digital Terrain Model (or DTM – used here synonymously with Digital Elevation Model, or DEM). Recent attempts to surpass the performance of Axelsson's adaptive TIN densification method have had mixed results. Jahromi et al. (2011) report preliminary results using an algorithm based on artificial neural networks. They tested their algorithm against four samples, finding improvement over Axelsson's in three cases. Meng et al. (2009) produced a directional ground filtering algorithm that had an overall mean Kappa score of 79.93% (compared to Axelsson's mean Kappa score of 84.19%) but performed better on eleven of fifteen individual samples. Shao's Climbing and Sliding algorithm (CAS) performed quite well against the ISPRS dataset, though it employed a ‘‘pseudo grid’’ rather than a true grid (Shao, 2007; Shao and Chen, 2008). It improved on Axelsson's performance on nine samples, and reduced mean total error to 4.42% (compared to Axelsson's mean total error of 4.82%). These results indicate that grid-based algorithms can have commensurate performance, and typically do so at far less computational expense. Grid-based approaches also have the advantage of tying into widely available and robust image processing algorithms, and thus cut down on the overhead involved in algorithm development.
Table 1
Study site features, after Sithole and Vosselman (2003, 2004).

Site type  Density  Site  Sample  #   Features
Urban      0.67     1     1–1     1   Mixed vegetation and buildings on
                          1–2     2   Mixed vegetation and buildings
                    2     2–1     3   Road with bridge
                          2–2     4   Bridge and irregular ground surface
                          2–3     5   Large, irregularly shaped buildings
                          2–4     6   Steep slopes with vegetation
                    3     3–1     7   Complex building
                    4     4–1     8   Large gaps in data, irregularly shaped buildings
                          4–2     9   Trains in railway yard
Rural      0.18     5     5–1     10  Data gaps, vegetation on moderate
                          5–2     11  Steep, terraced slopes
                          5–3     12  Steep, terraced slopes
                          5–4     13  Dense ground cover
                    6     6–1     14  Large gap in data
                    7     7–1     15  Underpass
Table 2
Best reported performance of top algorithms: total error (%) on the first line and Cohen's Kappa (%) on the second line of each entry, against the ISPRS samples (Axelsson, 1999; Chen et al., 2007; Elmqvist et al., 2001; Jahromi et al., 2011; Meng et al., 2009; Pfeifer et al., 1999; Shao, 2007; Sithole and Vosselman, 2003). Values from Meng et al. (2009) are optimized, while values from Shao (2007) are the best performance out of the six parameter sets tested.
Site Axelsson Chen Elmqvist Jahromi Meng Pfeifer Shao
1 (1–1) 10.76 13.92 22.40 15.90 17.35 11.88
78.48 56.68 68.69 70.96 66.09
2 (1–2) 3.25 3.61 8.18 4.31 4.50 4.02
93.51 83.66 91.37 93.12 91.00
3 (2–1) 4.25 2.28 8.53 0.40 2.57 4.67
86.34 77.40 98.83 95.40 92.51
4 (2–2) 3.63 3.61 8.93 ––6.71 5.51
91.33 80.30 88.75 84.68
5 (2–3) 4.00 9.05 12.28 ––8.22 4.80
91.97 75.59 87.56 83.59
6 (2–4) 4.42 3.61 13.83 ––8.64 4.97
88.50 68.89 83.39 78.43 -
7 (3–1) 4.78 1.27 5.34 1.32 1.80 1.21
90.43 89.31 97.34 97.45 96.37
8 (4–1) 13.91 34.03 8.76 ––10.75 4.91
72.21 82.46 88.58 78.51
9 (4–2) 1.62 2.20 3.68 ––2.64 2.14
96.15 90.86 - 97.25 93.67
10 (5–1) 2.72 2.24 21.31 - 3.71 3.60
91.68 52.74 87.20 89.61
11 (5–2) 3.07 11.52 57.95 ––19.64 2.80
83.63 9.36 65.57 41.02
12 (5–3) 8.91 13.09 48.45 ––12.60 5.27
39.13 7.05 31.25 30.83
13 (5–4) 3.23 2.91 21.26 ––5.47 2.74
93.52 55.88 92.71 88.93
14 (6–1) 2.08 2.01 35.87 ––6.91 1.38
74.52 10.31 52.43 47.09
15 (7–1) 1.63 3.04 34.22 ––8.85 3.12
91.44 26.26 67.36 66.75
Mean 4.82 7.23 20.73 ––8.03 4.20
84.19 57.78 79.93 75.27
Median 3.63 3.61 13.83 ––6.91 4.02
90.43 68.89 87.56 83.59
Kilian et al. (1996) proposed a progressive morphological filter based on a series of opening operations applied to a gridded surface model. This model was later developed into a working algorithm by Zhang et al. (2003). At each grid node, the nearest, lowest value was selected, thus creating a ‘‘minimum surface’’ (ZImin). To this minimum surface, an opening operation was applied. Opening is an image processing technique whereby the algorithm searches for relative highs within a neighborhood defined by a structuring element (usually shaped like a disk or square) and pulls these high values down to the included low (or background) values. In Zhang's algorithm, if the difference in elevation between the original image and the opened image is above a threshold, the cell is flagged as a non-ground or object (OBJ) point. This process continues, with progressively larger structuring elements, until the window size is larger than the largest feature to be removed (e.g., a very large building). As the window size increases, the permissible elevation difference threshold also increases, at a rate governed by a supplied slope parameter and the difference in size between the current and last window. The final DEM is interpolated using kriging, a geostatistical method of estimation (Wackernagel, 1998). Although Zhang's filter performs well in both urban and rugged forested environments, it performs poorly in high-relief urban areas. In these cases, the filter can mistake the tops of hills for buildings and remove them, thus yielding high Type I error rates and distorted bare earth models. Unfortunately, the publication of Zhang et al. (2003) was coincident with Sithole and Vosselman (2003), and so a direct measure of performance of their algorithm against the ISPRS dataset was never made.
Chen et al. (2007), in a technique reminiscent of Vosselman (2000), improved upon the basic technique by adding a condition whereby the edge pixels of large features selected for potential removal were evaluated to see if elevation changes were gradual or sudden. If the changes on the periphery were gradual, the feature was retained, whereas if the elevation differences were large, the feature was identified as non-ground and removed in much the same way as in Zhang et al. (2003). In both of these filters, the ground points that remained from the original minimum surface were used to interpolate a new ground surface, and any original LIDAR point lying within a specified distance (often 0.5 m) was flagged as ground, while all other points were flagged as non-ground. The description of the algorithm also contained a modified minimum surface generation scheme in which large discontinuities in the data set are filled according to the lowest value along their boundary. This feature allows for a better infill of water bodies, which tend not to reflect LIDAR pulses. Were this addition not included in minimum surface generation, trees and other high features near the edges of water bodies could severely distort the local area of the minimum surface and make ground identification quite challenging.
Chen et al. (2007) published the results of their algorithm run against the fifteen ISPRS samples (Sithole and Vosselman, 2003), improving on Axelsson's algorithm for seven of fifteen samples. Although mean total error was not notably low (7.23%), this was largely the result of markedly poor performance on Sample 4–1, where total error was 34.03%. Median total error, which tends to discount the effects of such large outliers, was 3.61%, commensurate with Axelsson's progressive TIN densification method (median 3.63%).
2. Algorithm
2.1. Parameters
In order to gauge the baseline performance of a progressive morphological ground filtering algorithm, we developed a simplified model to determine how each part of the algorithm contributes to its overall performance. To this end, we designed an algorithm that requires four parameters in addition to the x, y, and z coordinates of the points in the original LIDAR data cloud: the cell size of the minimum surface grid, a percent slope value that governs the grid cell BE/OBJ classification at each step, a vector of window radii that controls the opening operation at each iteration, and a single elevation difference value that governs the ultimate classification of the LIDAR point as bare earth (BE) or object (OBJ) based on interpolated vertical distance to the minimum surface grid. A fifth, optional elevation scaling parameter assists in the identification of ground points from the provisional DTM, and is treated explicitly in the next section.
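The parameter set described above can be collected in a small structure. The sketch below is illustrative (the field names are ours, not the published interface); its defaults are the general-purpose values reported in the Results section, and its window progression follows the one-pixel-per-iteration default described next:

```python
import math
from dataclasses import dataclass

@dataclass
class SMRFParams:
    """Illustrative SMRF-style parameter set. Defaults follow the
    general parameter set reported in the Results section."""
    cell_size: float = 1.0       # minimum-surface grid resolution (m)
    slope: float = 0.15          # slope tolerance for BE/OBJ flagging
    max_window: float = 18.0     # maximum opening-window radius (m)
    elev_threshold: float = 0.5  # vertical BE/OBJ cutoff (m)
    elev_scaling: float = 1.25   # optional slope-dependent cutoff scaling

    def window_radii(self):
        """Default linear progression: radii grow one pixel per
        iteration, up to ceil(max_window / cell_size) pixels; a
        user-supplied vector may replace this."""
        return list(range(1, math.ceil(self.max_window / self.cell_size) + 1))
```

With a 2 m cell size and a 21 m maximum window, `window_radii()` ends at 11 pixels, matching the pixel-conversion example given later in the methodology.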
Both Zhang et al. (2003) and Chen et al. (2007) used supplied parameters to generate exponentially increasing window sizes based on the cell size and the largest expected feature to be removed. We essentially adopt this approach, with the exception that the increase in window radius defaults to an increase of one pixel radius per iteration up to the maximum value that the user specifies (i.e., it increases slowly, and linearly), though the implemented algorithm is capable of taking user-supplied vectors as well. In the following results section, we demonstrate the importance of increasing the window radius gradually. Zhang et al. (2003) noted a preference for this approach, though the extra computational expense associated with this method was apparently thought prohibitive at the time.
2.2. Methodology
The algorithm consists of four conceptually distinct stages (Fig. 1). The first is the creation of the minimum surface (ZImin). The second is the processing of the minimum surface, in which grid cells from the raster are identified as either containing bare earth (BE) or objects (OBJ). This second stage represents the heart of the algorithm. The third step is the creation of a DEM from these gridded points. The fourth step is the identification of the original LIDAR points as either BE or OBJ based on their relationship to the interpolated DEM.
As with many other ground filtering algorithms, the first step is generation of ZImin from the cell size parameter and the extent of the data. The two vectors corresponding to [min:cellSize:max] for each coordinate – xi and yi – may be supplied by the user or may be easily and automatically calculated from the data. Without supplied ranges, the SMRF algorithm creates a raster from the ceiling of the minimum to the floor of the maximum values for each of the (x, y) dimensions. If the supplied cell size parameter is not an integer, the same general rule applies to values evenly divisible by the cell size. For example, if the cell size is equal to 0.5 m, and the x values range from 52345.6 to 52545.4, the range would be [52346 52545].
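The ceiling/floor rule can be sketched as follows (a Python illustration with names of our own choosing). Applied to the example above, it reproduces the [52346 52545] range:

```python
import numpy as np

def grid_vectors(x, y, cell_size):
    """Grid coordinate vectors spanning the data: each axis runs from
    the smallest multiple of cell_size >= min(coord) to the largest
    multiple of cell_size <= max(coord)."""
    def axis(v):
        v = np.asarray(v, dtype=float)
        lo = np.ceil(v.min() / cell_size) * cell_size   # ceiling of min
        hi = np.floor(v.max() / cell_size) * cell_size  # floor of max
        # half-cell fudge keeps the endpoint in the arange
        return np.arange(lo, hi + cell_size / 2, cell_size)
    return axis(x), axis(y)
```

For a 0.5 m cell size and x values spanning 52345.6 to 52545.4, the x vector runs from 52346.0 to 52545.0, as in the text.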
The SMRF technique is intended to apply to the first and last returns of the point cloud, though the minimum surface described in the following paragraph could be generated nearly as well with only the last returns. However, though the last return of any given pulse is the most likely to be ground, it need not be: one can imagine that the last return of one pulse could happen to hit an object at a given location, while the first return of another pulse might strike nearer to the ground at the same location. In this case, the premature removal of the first return from the second pulse would introduce a small error into the DEM that any filter would have difficulty removing. For this reason, it is suggested that both first and last returns be used, since the extraneous observations are soon removed during the generation of the initial grid.
The minimum surface grid ZImin, defined by vectors (xi, yi), is filled with the nearest, lowest elevation from the original LIDAR point cloud (x, y, z) values, provided that the distance to the nearest point does not exceed the supplied cell size parameter. This provision means that some grid points of ZImin will go unfilled. To fill these values, we rely on computationally inexpensive image inpainting techniques. Image inpainting involves the replacement of the empty cells in an image (or matrix) with values calculated from other nearby values. It is a type of interpolation technique derived from artistic replacement of damaged portions of photographs and paintings, where preservation of texture is an important concern (Bertalmio et al., 2000). When empty values are spread through the image, and the ratio of filled to empty pixels is quite high, most methods of inpainting will produce satisfactory results. In an evaluation of inpainting methods on ground identification from the final terrain model, we found that Laplacian techniques produced error rates nearly three times higher than either an average of the eight nearest neighbors or D'Errico's spring-metaphor inpainting technique (D'Errico, 2004). The spring-metaphor technique imagines springs connecting each cell with its eight adjacent neighbors, where the inpainted value corresponds to the lowest energy state of the set, and where the entire (sparse) set of linear equations is solved using partial differential equations. Both of these latter techniques were nearly the same with regard to total error, with the spring technique performing slightly better than the k-nearest neighbor (KNN) approach.
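A minimal sketch of the gridding and fill steps appears below. It is an approximation under stated assumptions: points are binned to their nearest cell (rather than searched by true nearest-neighbor distance), and an iterative eight-neighbor mean fill stands in for D'Errico's spring-metaphor solver; all names are ours:

```python
import numpy as np

def min_surface(x, y, z, xi, yi, cell_size):
    """Assign each grid node the lowest z among points binned to its
    cell; nodes receiving no point remain NaN for later inpainting."""
    zi = np.full((len(yi), len(xi)), np.nan)
    col = np.clip(np.round((np.asarray(x) - xi[0]) / cell_size).astype(int),
                  0, len(xi) - 1)
    row = np.clip(np.round((np.asarray(y) - yi[0]) / cell_size).astype(int),
                  0, len(yi) - 1)
    for r, c, v in zip(row, col, z):
        if np.isnan(zi[r, c]) or v < zi[r, c]:
            zi[r, c] = v
    return zi

def inpaint_nn_mean(zi, max_iters=100):
    """Fill NaN cells with the mean of their available eight neighbors,
    iterating until all cells are filled -- a crude stand-in for the
    spring-metaphor inpainting used by the published SMRF."""
    zi = zi.copy()
    for _ in range(max_iters):
        nan = np.isnan(zi)
        if not nan.any():
            break
        padded = np.pad(zi, 1, constant_values=np.nan)
        h, w = zi.shape
        stack = np.dstack([padded[i:i + h, j:j + w]
                           for i in range(3) for j in range(3) if (i, j) != (1, 1)])
        count = np.sum(~np.isnan(stack), axis=2)      # available neighbors
        total = np.nansum(stack, axis=2)
        fill = np.where(count > 0, total / np.maximum(count, 1), np.nan)
        zi[nan] = fill[nan]                            # only NaNs change
    return zi
```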
It is worthwhile to note that there are other possible methods of creating the initial surface (e.g., Hollaus et al., 2010). Notably, the maximum value, instead of the minimum, could be calculated for any given grid point. This maximum surface would appear much smoother to the eye when transformed into a hillshaded image, but would make the task of the progressive filter more difficult, since permeable objects (like trees) would tend to remain in the image longer. The difference between the maximum and minimum surfaces tends to highlight edges (as well as vegetation), and initial work focused on using this surface to guide later stages of the algorithm. Unfortunately, the ISPRS LIDAR dataset samples were of lower resolution than many of those currently produced, and for this reason the difference surface proved unhelpful in ground classification of the ISPRS datasets at 1 m resolution.
The second stage of the ground identification algorithm involves the application of a progressive morphological filter to the minimum surface grid (ZImin). At the first iteration, the filter applies an image opening operation to the minimum surface. An opening operation consists of an application of an erosion filter followed by a dilation filter. The erosion acts to snap relative high values to relative lows, where a supplied window radius and shape (or structuring element) defines the search neighborhood. The dilation uses the same window radius and structuring element, acting to outwardly expand relative highs. Fig. 2 illustrates an opening operation on a cross section of a transect from Sample 1–1 in the ISPRS LIDAR reference dataset (Sithole and Vosselman, 2003), following Zhang et al. (2003).
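The erosion-then-dilation sequence can be illustrated with SciPy's greyscale morphology routines; the disk helper below is our own, and the whole block is a sketch rather than the published implementation:

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Boolean disk-shaped structuring element of the given pixel radius."""
    r = int(radius)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    return xx ** 2 + yy ** 2 <= r ** 2

def opening(surface, radius):
    """Morphological opening: erosion (pull relative highs down to
    local lows) followed by dilation (re-expand the lows outward)
    using the same structuring element."""
    se = disk(radius)
    eroded = ndimage.grey_erosion(surface, footprint=se)
    return ndimage.grey_dilation(eroded, footprint=se)
```

A one-cell spike on flat ground is erased by a radius-1 opening, while the interior of a broad plateau (a building-sized feature) survives the same window, which is why progressively larger windows are needed.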
In this case, we selected a disk-shaped structuring element, and the radius of the element at each step was increased by one pixel, from a starting value of one pixel to the pixel equivalent of the maximum window radius. The maximum window radius is supplied as a distance metric (e.g., 21 m), but is internally converted to a pixel equivalent by dividing it by the cell size and rounding the result toward positive infinity (i.e., taking the ceiling value). For example, for a supplied maximum window radius of 21 m and a cell size of 2 m per pixel, the result would be a maximum window radius of 11 pixels. While this represents a relatively slow progression in the expansion of the window radius, we believe that the high efficiency associated with the opening operation mitigates the potential for computational waste. The improvements in classification accuracy using slow, linear progressions are documented in the next section.
On the first iteration, the minimum surface (ZImin) is opened using a disk-shaped structuring element with a radius of one pixel. An elevation threshold is then calculated, where the value is equal to the supplied slope tolerance parameter multiplied by the product of the window radius and the cell size. For example, given a slope tolerance parameter of 15% and a cell size of 2 m per pixel, the elevation threshold would be 0.3 m at a window radius of one pixel (0.15 × 1 × 2). This elevation threshold is applied to the difference of the minimum and the opened surfaces. Any grid cell with a difference value exceeding the calculated elevation threshold for the iteration is then flagged as an OBJ cell. The algorithm then proceeds to the next window radius (up to the maximum), and proceeds as above with the last opened surface acting as the ‘‘minimum surface’’ for the next difference calculation.
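The iteration just described might be sketched as follows, assuming SciPy's `grey_opening` for the opening step; the OBJ mask accumulates across radii, and each pass opens the previous opened surface. This is our reading of the procedure, not the published code:

```python
import numpy as np
from scipy import ndimage

def progressive_filter(zi_min, cell_size, slope, max_radius_px):
    """Flag OBJ cells in a minimum surface: at each linearly growing
    window radius, open the last opened surface and flag cells whose
    elevation drop exceeds slope * radius * cell_size."""
    obj = np.zeros(zi_min.shape, dtype=bool)
    last = zi_min
    for radius in range(1, max_radius_px + 1):
        yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        se = xx ** 2 + yy ** 2 <= radius ** 2     # disk structuring element
        opened = ndimage.grey_opening(last, footprint=se)
        threshold = slope * radius * cell_size    # permissible elevation diff
        obj |= (last - opened) > threshold
        last = opened                             # open the opened surface next
    return obj
```

On a flat surface containing a 3 × 3-cell, 10 m-high block, the block's cells are flagged within the first two iterations while the surrounding ground is left unflagged.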
The end result of the iteration process described above is a binary grid where each cell is classified as being either bare earth (BE) or object (OBJ). The algorithm then applies this mask to the starting minimum surface to eliminate nonground cells. These cells are then inpainted according to the same process described previously, producing a provisional DEM (ZIpro).

Fig. 1. Workflow diagram of the SMRF algorithm.
The final step of the algorithm is the identification of ground/object LIDAR points. This is accomplished by measuring the vertical distance between each LIDAR point and the provisional DEM, and applying a threshold calculation. While many authors use a single value for the elevation threshold, we suggest that a second parameter be used to increase the threshold on steep slopes, transforming the threshold into a slope-dependent value. The total permissible distance is then equal to a fixed elevation threshold plus the scaling value multiplied by the slope of the DEM at each LIDAR point. The rationale behind this approach is that small horizontal and vertical displacements yield larger errors on steep slopes, and as a result the BE/OBJ threshold distance should be more permissive at these points.
The calculation requires that both elevation and slope be interpolated from the provisional DEM. There are any number of interpolation techniques that might be used, and even nearest neighbor approaches work quite well, so long as the cell size of the DEM nearly corresponds to the resolution of the LIDAR data. A comparison of how well these different methods of interpolation perform is given in the next section. Based on these results, we find that a splined cubic interpolation provides the best results.
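A sketch of this final classification step is shown below. For simplicity it uses a nearest-neighbor DEM lookup and a finite-difference slope in place of the splined cubic interpolation reported above; all names are illustrative:

```python
import numpy as np

def classify_points(x, y, z, dem, xi, yi, cell_size,
                    elev_threshold=0.5, elev_scaling=1.25):
    """Label each LIDAR point BE (True) or OBJ (False) by its vertical
    distance to the provisional DEM, with a slope-dependent cutoff:
    limit = elev_threshold + elev_scaling * slope."""
    # slope magnitude (rise/run) of the DEM at each cell
    gy, gx = np.gradient(dem, cell_size)
    slope = np.hypot(gx, gy)
    # nearest-neighbor lookup of DEM elevation and slope at each point
    col = np.clip(np.round((np.asarray(x) - xi[0]) / cell_size).astype(int),
                  0, len(xi) - 1)
    row = np.clip(np.round((np.asarray(y) - yi[0]) / cell_size).astype(int),
                  0, len(yi) - 1)
    limit = elev_threshold + elev_scaling * slope[row, col]
    return np.abs(np.asarray(z) - dem[row, col]) <= limit
```

On flat terrain the cutoff reduces to the fixed 0.5 m threshold; on a 45° slope (slope = 1.0) it relaxes to 0.5 + 1.25 = 1.75 m.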
It is common in LIDAR point clouds to have a small number of outliers, which may be either above or below the terrain surface. While above-ground outliers (e.g., a random return from a bird in flight) are filtered during the normal algorithm routine, the below-ground outliers (e.g., those caused by a reflection) require a separate approach. Early in the routine, and along a separate processing fork, the minimum surface is checked for low outliers by inverting the point cloud in the z-axis and applying the filter with parameters (slope = 500%, maxWindowSize = 1). The resulting mask is used to flag low outlier cells as OBJ before the inpainting of the provisional DEM. This outlier identification methodology is functionally the same as that of Zhang et al. (2003).
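Assuming the same opening machinery as above, the low-outlier check might look like the following sketch, with slope = 500% and a one-pixel window as stated in the text (function name is ours):

```python
import numpy as np
from scipy import ndimage

def flag_low_outliers(zi_min, cell_size, slope=5.0, radius=1):
    """Detect below-ground outliers: invert the minimum surface in z,
    so that low spikes become high spikes, then apply a single opening
    pass with a very steep (500%) slope threshold."""
    inverted = -zi_min
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    se = xx ** 2 + yy ** 2 <= radius ** 2         # one-pixel disk element
    opened = ndimage.grey_opening(inverted, footprint=se)
    return (inverted - opened) > slope * radius * cell_size
```

A single cell sitting 20 m below otherwise flat ground is flagged, while the surrounding cells are untouched; the resulting mask marks those cells OBJ before the provisional DEM is inpainted.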
The provisional DEM (ZIpro), created by removing OBJ cells from the original minimum surface (ZImin) and then inpainting, tends to be less smooth than one might wish, especially when the surfaces are to be used to create visual products like immersive geographic virtual environments. As a result, it is often worthwhile to reinterpolate a final DEM from the identified ground points of the original LIDAR data. Surfaces created from these data tend to be smoother and more visually satisfying than those derived from the provisional DEM.
Very large (>40 m in length) buildings can sometimes prove troublesome to remove on highly differentiated terrain. To accommodate the removal of such objects, we implemented an optional feature in the published SMRF algorithm. We accomplish this by introducing into the initial minimum surface a ‘‘net’’ of minimum values at a spacing equal to the maximum window diameter, where these minimum values are found by applying a morphological open operation with a disk-shaped structuring element of twice the maximum window radius. Since only one example in this dataset had features this large (Sample 4–2, a trainyard), we did not include this portion of the algorithm in the formal testing procedure, though we provide a brief analysis of the effect of using this net filter in the next section.
The methodology described above shares much in common
with previous implementations of progressive morphological filters (PMFs) (Chen et al., 2007; Zhang et al., 2003), but differs in several important details that make a large difference in how well SMRF handles complex terrain. First, it differs from both previous PMFs in the method of interpolating empty cells in ZImin: while Zhang uses a nearest neighbor approach, and Chen uses a similar approach with an added function to detect and fill large waterbodies, we use image inpainting techniques to produce a smoother grid from the outset. Since the initial grid is the fundamental input for the progressive filtering stage, differences in its constitution are quite important. Zhang's algorithm uses a somewhat more complex system of five parameters to control the growth of the window size and elevation threshold. In contrast, SMRF uses a linearly increasing window up to a maximum specified size, where the elevation threshold is controlled by a single parameter. This makes exploratory analysis much simpler, since the construction of the provisional DEM requires only two parameters (in addition to the required cell size of the DEM) – a maximum window radius and a slope parameter. Our testing indicates that the SMRF method of controlling window sizes and elevation thresholds is not only simpler, but also more effective than Zhang's method. Chen's method of differentially controlling the window size and elevation threshold based on whether the filter is targeted at removing vegetation or buildings is somewhat different from either of these, though it is worthwhile to note that the increased number of parameters required for Chen's building filter renders it more difficult to use as an exploratory technique. While both Chen and Zhang utilize kriging (Wackernagel, 1998) to create the provisional DEM, SMRF uses an inpainting solution (D'Errico, 2004) derived from image processing to inpaint only missing values, and retains points from ZImin not excluded by the filter. Finally, SMRF differs from both the Zhang and Chen filters in its inclusion of a slope-dependent elevation threshold for identifying ground points in the LIDAR data set from the provisional DEM. Thus, while SMRF is not a truly novel solution to the terrain classification problem, it represents the sum of a series of critical improvements in the design of progressive morphological filters.

Fig. 2. An open operation applied to a transect [512700 5403805; 512800 5403805] from Sample 1–1 of the ISPRS reference dataset. Opening consists of an erosion operation (effectively pulling high values within a given space down to local minima) followed by a dilation operation (pulling low values to local maxima). This open operation used a ten meter linear structuring element, and removed all of the vegetation while preserving the structure of the included building as well as the sloping ground surface on the right. It represents an early step in the progressive morphological filter's operation.

T.J. Pingel et al. / ISPRS Journal of Photogrammetry and Remote Sensing 77 (2013) 21–30 25
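As a concrete illustration of the opening operation described in Fig. 2, the sketch below applies a grayscale opening to a synthetic 1-D transect. The data, sizes, and use of scipy are our own illustrative assumptions, not the authors' implementation: the key behavior is that a structuring element wider than an object removes it, while objects wider than the element survive.

```python
import numpy as np
from scipy.ndimage import grey_opening

# Synthetic 1-D transect (metres, 1 m cells): flat ground at 10 m with a
# narrow vegetation spike and a wider building block.  Values are invented.
z = np.full(100, 10.0)
z[20:23] += 8.0   # 3 m wide "tree"
z[50:70] += 6.0   # 20 m wide "building"

# Opening = erosion (pull highs down to local minima) followed by dilation
# (pull lows back up to local maxima).
opened_small = grey_opening(z, size=11)  # 11 m element: removes the tree only
opened_large = grey_opening(z, size=41)  # 41 m element: removes the building too
```

With the 11 m element the tree vanishes (the transect returns to 10 m there) while the 20 m building is untouched; the 41 m element flattens both, mirroring how the window size bounds the largest feature that can be removed.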
3. Results
We tested the SMRF algorithm against all datasets using a combination of parameter values to find which single parameter set (applied to all fifteen samples) resulted in the highest mean Kappa score. These parameters define a baseline set of general parameters that could be expected to give reasonable performance for any given sample, and as such could be used as a starting point to tune the performance of the SMRF algorithm. These parameters were: slope tolerance = 15%, maximum window radius = 18 m, elevation threshold = 0.5 m, and elevation scaling factor = 1.25. With these general parameters, the mean Kappa score was 85.40% (median, 90.52%) and the mean total error was 4.40% (median, 3.40%). Individual scores were generally quite good, with two exceptions: Sample 4–1 had a high total error rate (10.79%), while Sample 5–3 had a low Kappa score (47.24%).
In order to measure the best-case performance of the simple progressive morphological filter, we systematically varied four inputs (slope, maximum window radius, elevation threshold, and scaling factor for ground identification) and measured the effect on total error, Type I error, Type II error, and Cohen's Kappa. The cell size was fixed at one meter. Slope was varied from 5% to 50% at 5% increments, and at one percent increments during a second, fine-tuning optimization. Maximum window radius was varied from 1 to 40 m. Elevation thresholds were varied from 0 to 6.0 m at 5 cm increments. Elevation scaling factors were varied from 0 to 5 at increments of 0.05. Table 3 shows the optimized values for each of the fifteen samples and the associated Type I, Type II, total error, and Kappa score (in percent), where best performance was determined by highest Kappa score. On this criterion, the mean Kappa value across all samples was 90.02% (median 91.81%) and the mean total error was 2.97% (median 2.43%). The results of the SMRF algorithm applied to Sample 1–1 are shown in Fig. 3.
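The error measures used throughout this section follow from a two-class confusion matrix. A minimal sketch (the function name and toy data are hypothetical) of Type I error, Type II error, total error, and Cohen's Kappa for a BE/OBJ labelling:

```python
import numpy as np

def classification_errors(truth_is_ground, pred_is_ground):
    """Type I, Type II, total error, and Cohen's Kappa for BE/OBJ labels."""
    t = np.asarray(truth_is_ground, dtype=bool)
    p = np.asarray(pred_is_ground, dtype=bool)
    n = t.size
    a = np.sum(t & p)    # bare earth correctly kept
    b = np.sum(t & ~p)   # Type I: ground rejected as object
    c = np.sum(~t & p)   # Type II: object accepted as ground
    d = np.sum(~t & ~p)  # object correctly removed
    type1 = b / (a + b)
    type2 = c / (c + d)
    total = (b + c) / n
    # Cohen's Kappa: observed agreement corrected for chance agreement.
    po = (a + d) / n
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return type1, type2, total, (po - pe) / (1 - pe)

# Hypothetical toy labelling: 8 true ground points, 2 true object points.
truth = np.array([True] * 8 + [False] * 2)
pred  = np.array([True] * 6 + [False] * 2 + [True, False])
t1, t2, tot, kappa = classification_errors(truth, pred)
```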
Optimized parameters varied according to the type of scene, and the mean of each of the optimized parameters generally matched well with its corresponding value from the single parameter set. Correlation analysis using Pearson's coefficient indicates that the slope tolerance parameter generally increased as the maximum window radius decreased for these samples (r(13) = −0.53, p = .042). Similarly, samples which required large elevation threshold parameters tended to use small scaling factors (r(13) = −0.82, p < .001). These relationships are illustrated in Fig. 4. Notable outliers were Sample 4–2, which required a maximum window radius of 49 m, and Sample 5–3, which required a slope tolerance parameter of 45%. Sample 4–2 was a scene of trains in a railyard, featuring very long, linear features that are difficult to remove with iterative applications of disk-shaped elements. The "cut net" feature described above was designed to help in just such cases. Its application to Sample 4–2 with a net cut at 20 m intervals and run with a 20 m radius structuring element (unoptimized values) produced a nearly identical error profile (Kappa = 95.88%). In this case, since the terrain around the railyard is quite flat, a large structuring element did not yield the high error rates one might normally expect in more differentiated terrain. Sample 5–3 used a very high slope threshold value (45%) and ultimately produced the lowest quality ground classification (Kappa = 68.12%) of any sample tested. Sample 5–3 is a scene taken from a quarry which features steep and highly terraced slopes. Most algorithms tested against this sample perform poorly, including Axelsson's (Kappa = 39.14%). In this case, there are relatively few objects, of typically small size, to be removed, resulting in an optimized maximum window radius of only 3 m. Most of the error for this sample is concentrated on the terrace walls in the form of single pixels along the edge misclassified as either OBJ or BE. The large discontinuities in the surface due to terracing are most likely responsible for these misclassifications, since the inpainting routine oversmooths these features.
In order to further characterize parameter sensitivity for SMRF, we used the single parameter set as the basis to observe how small changes in the parameters would change the mean Kappa value across all fifteen samples (Fig. 5). Thus, while we held constant the parameters related to classifying ground points from the provisional DEM, we systematically varied the slope threshold and the maximum window radius. In a second test, we held constant the
Table 3
Optimized and single parameter results of the SMRF algorithm when tested against the ISPRS reference dataset, expressed as Type I error rate (T.I), Type II error rate (T.II), total error rate (T.E), and Kappa (K), all in percent. Single parameter results were obtained using the values (0.15, 18, 0.5, 1.25) for slope tolerance, window radius, elevation threshold, and scaling factor, respectively. All results used a one meter cell size to generate the Digital Surface and Elevation Models.

Sample     Slope tol.  Window      Elevation      Scaling   T.I    T.II    T.E    K       Single T.E  Single K
           (dz/dx)     radius (m)  threshold (m)  factor    (%)    (%)     (%)    (%)     (%)         (%)
1  (1–1)   0.20        16          0.45           1.20      7.88   8.81    8.28   83.12   8.64        82.40
2  (1–2)   0.18        12          0.30           0.95      2.57   3.30    2.92   94.15   3.10        93.80
3  (2–1)   0.12        20          0.60           0.00      0.26   4.07    1.10   96.77   1.88        94.43
4  (2–2)   0.16        18          0.35           1.30      2.57   5.07    3.35   92.21   3.40        92.07
5  (2–3)   0.27        13          0.50           0.90      3.21   6.17    4.61   90.73   6.48        87.02
6  (2–4)   0.16        8           0.20           2.05      2.25   6.90    3.52   91.13   4.19        89.49
7  (3–1)   0.08        15          0.25           1.50      0.39   1.52    0.91   98.17   2.48        95.00
8  (4–1)   0.22        16          1.10           0.00      3.64   8.17    5.91   88.18   10.79       78.41
9  (4–2)   0.06        49          1.05           0.00      0.27   1.98    1.48   96.48   2.93        93.07
10 (5–1)   0.05        17          0.35           0.90      0.59   4.44    1.43   95.76   3.00        90.74
11 (5–2)   0.13        13          0.25           2.20      3.09   10.08   3.82   81.04   4.17        78.80
12 (5–3)   0.45        3           0.10           3.80      1.18   31.97   2.43   68.12   7.41        47.24
13 (5–4)   0.05        11          0.15           2.30      2.51   2.05    2.27   95.44   3.67        92.65
14 (6–1)   0.28        5           0.50           1.45      0.51   10.70   0.86   87.22   2.02        75.38
15 (7–1)   0.13        15          0.75           0.00      0.99   6.84    1.65   91.81   1.85        90.52
Mean                                                        2.13   7.47    2.97   90.02   4.40        85.40
Median                                                      2.25   6.17    2.43   91.81   3.40        90.52
Min                                                         0.26   1.52    0.86   68.12   1.85        47.24
Max                                                         7.88   31.97   8.28   98.17   10.79       95.00
Std                                                         1.99   7.37    2.07   7.85    2.70        12.34
parameters related to identifying cells in the minimum surface as BE/OBJ while systematically varying the elevation threshold and the slope scale factor. The pattern of error reveals that the maximum window radius should typically be greater than 10 m and that slope should typically be above 10% to expect a mean accuracy above 80%. Accuracy rates proved much more sensitive to the elevation threshold (in meters) than to the scale factor, indicating that the slope-dependent threshold adds only marginal value.
These two methods of evaluation (i.e., a single parameter set and an optimized parameter set) provide a good means to compare the performance of the SMRF algorithm with other published algorithms. Meng et al. (2009) cite a mean Kappa score of 76.7% when using a two parameter-set solution, and 79.9% when using an optimized solution. Axelsson's algorithm (1999) effectively uses seven parameter sets to achieve a mean total error of 4.82% (Kappa, 84.19%). The algorithm of Chen et al. (2007) also used seven parameter sets to achieve a mean total error of 7.23%. Shao and Chen (Shao, 2007; Shao and Chen, 2008) did not optimize their algorithm, but found that the best performing single parameter set achieved a mean total error rate of 4.42%; if the best mean parameter set is used for each sample, the mean overall total error was 4.20%, though this represents only a weak optimization metric. The single parameter set evaluation produced error rates below those of Axelsson (1999) for nine of fifteen samples, and for thirteen samples when the fully optimized parameter sets were used. Whether evaluated by single parameter set or full
Fig. 3. Performance of the SMRF filter on Sample 1–1: (a) ZImin, (b) produced DEM, (c) spatial distribution of Type I and Type II errors, (d) difference between the DEM estimated from reference data and the produced DEM. Tick marks and grid lines on all subfigures are at 50 m intervals.
optimization, SMRF compares favorably to these results, and establishes that a well-implemented progressive morphological filter can be a useful tool for terrain classification of airborne LIDAR data.
3.1. Analysis of alternative subroutines
The creation of the initial grid is the first step in many ground filtering algorithms, and the previous section described several methods of generating such surfaces. In our development of SMRF, we found that creating a minimum surface improved ground filtering over the creation of a surface where the nearest, highest value was used (i.e., a maximum surface, which had a mean Kappa score of 89.29%). The same pattern held true for mean (89.15%) and median (89.55%) surfaces. The infilling technique described by Chen et al. (2007), in which the boundaries of large unfilled areas are filled according to the lowest value along the periphery, also had a lower Kappa score than simple inpainting (87.61%, when infilling was applied to gaps larger than 250 m²), though the authors note that the approach is only intended to apply to gaps caused by water bodies and not to other gaps in the data.
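To make the minimum-surface-plus-inpainting idea concrete, here is a minimal sketch. The gridding step keeps the lowest return per cell; the fill step is a crude iterative Laplacian smoother standing in for the spring-based inpainting of D'Errico (2004). The function names, toy data, and fill method are our own assumptions, not the published implementation.

```python
import numpy as np

def minimum_surface(x, y, z, cell=1.0):
    """Grid the lowest return per cell; cells with no returns become NaN."""
    col = ((x - x.min()) / cell).astype(int)
    row = ((y - y.min()) / cell).astype(int)
    grid = np.full((row.max() + 1, col.max() + 1), np.inf)
    np.minimum.at(grid, (row, col), z)   # keep the minimum z per cell
    grid[np.isinf(grid)] = np.nan
    return grid

def inpaint_nans(grid, iters=200):
    """Iteratively replace NaN cells with the mean of their 4 neighbours."""
    filled = np.array(grid, dtype=float)
    hole = np.isnan(filled)
    filled[hole] = np.nanmean(grid)      # rough initial guess
    for _ in range(iters):
        p = np.pad(filled, 1, mode='edge')
        nb = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        filled[hole] = nb[hole]          # update only the missing cells
    return filled

# Toy 3 x 3 m area at 1 m cells with no returns in the centre cell and a
# duplicate (higher) return in one corner cell.
x = np.array([0, 1, 2, 0, 2, 0, 1, 2, 0], dtype=float)
y = np.array([0, 0, 0, 1, 1, 2, 2, 2, 0], dtype=float)
z = np.array([4, 4, 4, 4, 4, 4, 4, 4, 6], dtype=float)
zmin = minimum_surface(x, y, z)   # centre is NaN; corner keeps the 4 m return
dem0 = inpaint_nans(zmin)         # centre filled from its neighbours
```

Note that only the empty cells are modified, matching the text's point that SMRF inpaints missing values while retaining observed minima.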
The progressive nature of the algorithm, in which a series of opening operations are applied with successively larger structuring elements (or windows), traces back to Kilian et al. (1996) and Zhang et al. (2003). The value of using progressively larger windows, as opposed to a single value, is shown in Fig. 6. As the window radius gets larger, Type II errors are reduced at a slight cost of an increased number of Type I errors. This approach works well not only to reduce quantitative error, but also to greatly improve the visual product of the final DEM. This is because Type II errors tend to be quite visually disruptive: any point on a building mistaken for ground will severely distort its local area in the final appearance of the DEM.
We advocate that progressive morphological filters use a linearly increasing window, in which the opening operation starts with a disc-shaped structuring element of one pixel radius and the radius increases by one pixel per iteration until the maximum window radius is reached. One alternative strategy is an exponentially increasing window radius (Zhang et al., 2003). There are two ways in which such a strategy might reasonably be implemented. First, the series might be multiplied at each step by a given factor starting from one, with the maximum window radius appended at the end of the series if not already present (e.g., [1 2 4 8 16 20] for a factor of two and a maximum window radius of 20). Alternatively, the series might be generated in reverse by halving the size starting from the maximum and rounding toward positive infinity (e.g., [1 2 3 5 10 20]).
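The three window-growth strategies can be sketched directly. The function names below are hypothetical, but the forward and reverse exponential generators reproduce the bracketed example series above for a base of two and a maximum radius of 20:

```python
import math

def linear_radii(max_radius):
    """Slow linear opening: window radii 1, 2, ..., max_radius (pixels)."""
    return list(range(1, max_radius + 1))

def forward_exponential_radii(base, max_radius):
    """Multiply by `base` starting from 1; append the maximum if missing."""
    radii, r = [], 1
    while r < max_radius:
        radii.append(r)
        r *= base
    if not radii or radii[-1] != max_radius:
        radii.append(max_radius)
    return radii

def reverse_exponential_radii(base, max_radius):
    """Halve from the maximum, rounding toward +infinity, then reverse."""
    radii, r = [], max_radius
    while r > 1:
        radii.append(r)
        r = math.ceil(r / base)
    radii.append(1)
    return radii[::-1]
```

For a base of two and a maximum of 20, the forward series is [1, 2, 4, 8, 16, 20] and the reverse series is [1, 2, 3, 5, 10, 20], matching the examples in the text.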
We tested each of these two strategies by using both the optimized parameter set and the single parameter set for each sample. Window radii were generated as above, with a base of two in each case. The reverse exponential generation strategy outperformed the forward generation strategy with respect to total error, Kappa, and the number of samples with performance exceeding that of the slow linear opening strategy. We therefore confine our continued discussion of the exponential opening strategy to the reverse generation variety. The results for this test are given in Table 4.
For both the optimized and single-parameter tests, the exponential opening (EO) strategy was worse than the slow linear opening (SLO) strategy with respect to mean total error and Kappa. The exponential opening on the optimized set featured a mean Kappa score of 88.64% (compared to 90.02% for SLO), a mean total error of 3.32% (compared to 2.97% for SLO), and outperformed SLO on only 3 of 15 samples. Comparisons with respect to Kappa and total error were similar for the single parameter test, although the EO set outperformed SLO on 7 of 15 samples. The improvement on these samples was typically small, while the differences on samples for which it underperformed were typically larger, resulting in the same overall pattern for total error and Kappa observed for the optimized set. These results, taken with the illustration of the difference between slow linear opening and single opening in Fig. 6, demonstrate that in most cases slow linear opening will result in improved performance.

Fig. 4. Relationships between parameters for the fully optimized samples. Slope tolerance was inversely related to maximum window radius (top) and elevation threshold was inversely related to scaling factor (bottom).

Fig. 5. Parameter sensitivity surfaces, where Kappa score is equal to the mean performance on all fifteen samples. Overall performance drops precipitously when the slope threshold is below 0.1 and when the maximum window radius is less than 10 m. The highest Kappa values are located where the elevation threshold is set to 0.5 m, and perform marginally better with the inclusion of a modest scale factor (i.e., transforming the threshold to a slope-dependent value).
The algorithm proceeds from the initial minimum surface (ZImin) to create an opened surface at the first iteration. From this point onward, until the maximum window radius is reached, the SMRF algorithm uses the opened surface from the previous iteration to create a new opened surface at the present iteration. One alternative to this method is to use the original minimum surface as the basis for the opening operation at each iteration. While this did not greatly impact performance, it did result in a lower mean Kappa score (89.79%, median 92.00%) and a greater mean total error (3.06%, median 2.44%), where the change was due almost entirely to increased Type I error.
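A minimal sketch of this progressive loop, assuming scipy's grayscale opening, a disk-shaped structuring element, and an illustrative slope-times-radius elevation threshold (parameter names and the toy surface are ours, not the published code): each iteration re-opens the previous opened surface and flags cells whose elevation drop exceeds the threshold for that window size.

```python
import numpy as np
from scipy.ndimage import grey_opening

def disk(radius):
    """Boolean disk-shaped structuring element with the given pixel radius."""
    yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return xx**2 + yy**2 <= radius**2

def progressive_flags(zmin, cell=1.0, slope=0.15, max_radius=18):
    """Flag minimum-surface cells as non-ground via progressive opening."""
    last = zmin.copy()
    is_object = np.zeros(zmin.shape, dtype=bool)
    for r in range(1, max_radius + 1):
        opened = grey_opening(last, footprint=disk(r))
        threshold = slope * r * cell            # grows with the window size
        is_object |= (last - opened) > threshold
        last = opened                           # re-open the previous result
    return is_object

# Toy surface: flat ground with a 3 x 3 cell, 5 m high "building".
zmin = np.zeros((40, 40))
zmin[10:13, 10:13] = 5.0
flags = progressive_flags(zmin, max_radius=5)
```

On this toy surface, exactly the nine building cells are flagged as non-ground; reusing `last` rather than `zmin` at each step mirrors the iteration scheme described above.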
The final step in the algorithm is the identification of the original LIDAR points as either BE or OBJ depending on the vertical distance to the provisional DEM. Because these LIDAR points do not lie on the grid, the provisional DEM must be interpolated at each LIDAR point to calculate the elevation and slope of the DEM at that location. A splined cubic interpolation provided the best quantitative performance, while Kappa scores for nearest neighbor (89.49%), linear (89.36%), and non-spline cubic (89.32%) interpolations were slightly lower.
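A sketch of this final classification step, assuming scipy's RectBivariateSpline for the splined cubic interpolation and a simple gradient-based slope estimate. The helper name and the threshold form (elevation threshold plus scaling factor times slope) are illustrative assumptions based on the parameters described above.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def classify_points(dem, px, py, pz, cell=1.0, elev_thresh=0.5, scale=1.25):
    """True where a LIDAR point lies within a slope-dependent vertical
    distance of the provisional DEM (i.e., is classified as bare earth)."""
    rows = np.arange(dem.shape[0]) * cell
    cols = np.arange(dem.shape[1]) * cell
    # Splined cubic interpolation of the DEM at the off-grid point locations.
    spline = RectBivariateSpline(rows, cols, dem, kx=3, ky=3)
    dem_z = spline.ev(py, px)
    # Per-cell slope magnitude from the DEM gradient, sampled at each point.
    gy, gx = np.gradient(dem, cell)
    slope = np.hypot(gx, gy)
    ri = np.clip(np.round(py / cell).astype(int), 0, dem.shape[0] - 1)
    ci = np.clip(np.round(px / cell).astype(int), 0, dem.shape[1] - 1)
    # Threshold grows with local slope: elev_thresh + scale * slope.
    return np.abs(pz - dem_z) <= elev_thresh + scale * slope[ri, ci]

# Toy planar DEM rising 0.1 m per metre in x, with one on-surface point
# and one elevated (object) point at the same x, y location.
dem = np.tile(0.1 * np.arange(10.0), (10, 1))
keep = classify_points(dem, np.array([2.3]), np.array([4.1]), np.array([0.23]))
drop = classify_points(dem, np.array([2.3]), np.array([4.1]), np.array([5.0]))
```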
The strength of the SMRF algorithm lies in its ability to retain a large number of BE points by minimizing Type I error. Fig. 3c illustrates the distribution of Type I and Type II errors on Sample 1–1. For this sample, Type I error was 7.88%, while Type II error was 8.81%. Over all fifteen samples, SMRF had a mean Type I error rate of 2.13% (median, 2.25%) and a mean Type II error rate of 7.47% (median, 6.17%). In effect, the SMRF algorithm retains fine details in the terrain, such as footpaths, without exacting a high cost in the form of Type II errors. This characteristic of the SMRF algorithm makes it highly useful as the basis of terrain for immersive virtual environments. The retention of detail means that surfaces developed from SMRF DEMs are more likely to be useful with less additional post-processing required. In contrast, Type II errors will tend to be hidden in the virtual environment, at least for urban landscapes, since other models will likely be placed on the terrain at these locations.
As a matter of comparison, Axelsson's algorithm achieved a slightly better overall mean Type II error rate (7.46%) but had a significantly higher Type I error rate (5.55%). Chen et al. (2007) reported Type I and II error rates for Sample 1–1 only, showing an improvement over SMRF on Type II error (6.85%) but with a much higher Type I error rate (19.18%). Shao (2007), in contrast, had a balanced distribution between Type I error (4.77%) and Type II error (6.35%).
4. Discussion
The Simple Morphological Filter (SMRF) algorithm was developed to solve two problems. First, it was designed to be competitive with other ground filtering algorithms for LIDAR data, particularly with regard to urban environments on highly differentiated terrain. It was successful in that it improved on previous work with regard to quantitative performance, achieving the highest mean Kappa and lowest mean total error scores of any published algorithm run against all fifteen ISPRS samples of which we are aware. The SMRF algorithm is successful not only when optimized, but even when using a single set of parameters against all samples, suggesting that novice users can achieve good results with it.
The second contribution of SMRF, perhaps more important than the first, is that it establishes a baseline performance for a progressive morphological filter implemented in its simplest form. The essence of the SMRF algorithm requires the input of a minimum surface and two parameters – a maximum window radius that corresponds to the largest feature to be removed, and a single slope parameter that governs the cell-based ground/non-ground flagging at each iteration. With these two parameters and a supplied minimum surface, the central subroutine of SMRF produces a provisional ground surface (DTM) that is then used to classify the original LIDAR points as bare earth (BE) or object (OBJ). The real contribution of SMRF is that it provides a conceptually and computationally simple basis to achieve good results, while establishing a baseline performance for progressive morphological filters against which future proposed enhancements can be measured.

Fig. 6. Type I (BE as OBJ, blue) and Type II (OBJ as BE, red) errors shown for progressive versus single opening on Sample 1–1 of the ISPRS data set. Progressive opening greatly reduces Type II error at a slight cost of increased Type I error as window sizes get larger. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Table 4
Optimized and single parameter results of the SMRF algorithm when using exponentially increasing window radii in place of the slow linear opening method. All other parameters were equal to the values described in Table 3. Exponential opening was superior to slow linear opening in three samples (3, 4, 8) for the optimized set, and in seven samples (2, 3, 4, 6, 7, 8, 10) for the single parameter set, though mean and median performance on total error and Kappa across all samples were worse.

Sample     Optimized                           Single parameter
           T.I (%)  T.II (%)  T.E (%)  K (%)   T.I (%)  T.II (%)  T.E (%)  K (%)
1  (1–1)   8.59     9.11      8.81     82.04   9.61     8.19      9.00     81.72
2  (1–2)   2.67     3.23      2.94     94.11   1.89     4.10      2.97     94.05
3  (2–1)   0.26     3.97      1.08     96.84   0.23     5.77      1.46     95.70
4  (2–2)   2.30     5.11      3.17     92.61   2.09     5.70      3.22     92.48
5  (2–3)   4.51     6.17      5.30     89.36   9.51     5.22      7.48     85.03
6  (2–4)   2.52     7.14      3.79     90.47   3.35     6.32      4.16     89.63
7  (3–1)   0.42     1.56      0.95     98.10   0.12     4.63      2.20     95.56
8  (4–1)   3.77     4.67      4.22     91.56   15.42    3.07      9.23     81.53
9  (4–2)   0.29     2.02      1.51     96.39   1.74     3.59      3.05     92.79
10 (5–1)   0.97     4.57      1.76     94.82   0.08     12.91     2.88     91.16
11 (5–2)   5.25     8.51      5.59     74.38   5.59     10.46     6.10     72.14
12 (5–3)   1.18     31.97     2.43     68.12   11.07    6.98      10.91    36.81
13 (5–4)   2.91     2.21      2.53     94.91   0.93     6.27      3.80     92.39
14 (6–1)   0.53     10.78     0.88     86.99   3.48     4.23      3.51     63.62
15 (7–1)   4.87     4.80      4.86     78.84   4.51     7.12      4.81     78.67
Mean       2.74     7.05      3.32     88.64   4.64     6.30      4.99     82.89
Median     2.52     4.80      2.94     91.56   3.35     5.77      3.80     89.63
Min        0.26     1.56      0.88     68.12   0.08     3.07      1.46     36.81
Max        8.59     31.97     8.81     98.10   15.42    12.91     10.91    95.70
Std        2.35     7.39      2.18     8.94    4.68     2.63      2.89     15.76
One important limitation of the results presented here involves the comparison to other terrain classification algorithms. The algorithms originally tested against the ISPRS dataset are at a distinct disadvantage in that subsequent authors have been able to develop algorithms against a reference dataset where BE/OBJ classifications have been made manually. This means that such filters, including SMRF, can be optimized to produce better quantitative results. Our approach has been to establish not only maximum expected performance (as the result of optimization) but also the results of a single parameter test that establish a meaningful "maximin" performance. Any solution involving non-optimization of parameter values merely indicates sub-maximum performance, without providing a particularly useful estimate of what the maximum performance might be. Such optimization does not mean that ground-truthing or training is a critical component of the application of the algorithm – in fact, the relatively small difference between SMRF's performance on a single parameter set (mean Kappa = 85.4%) versus the fully optimized set (mean Kappa = 90.02%) indicates that the algorithm robustly handles most scenes. This argument does imply, however, that comparisons between optimized and non-optimized results should be made with the knowledge that the results derived from non-optimized methodologies could be slightly, though perhaps not greatly, improved.
While SMRF performs well overall, future work will address Type II errors. Type II errors can be particularly damaging for products that are meant to be used visually, since objects mistaken for ground tend to create bulges and other artifacts that can be distracting. Since our purpose is to use such surfaces for immersive geographic virtual environments, this issue is of particular concern to us. However, the SMRF algorithm performs exceptionally well with regard to Type I error, and thus retains many ground points that give shape and character to the terrain. We have found, for instance, that visually significant but subtle ground features like footpaths tend to be retained very well. In this sense the SMRF algorithm greatly contributes to the potential for automatic generation of virtual environments, since important features for which there may not be any existing data are included in the models at no additional generation cost. Similarly, the retention of such features may significantly add to their value when applied to other problems, such as the modeling of human movement in rugged terrain (Pingel, 2010).
Acknowledgements

This study was supported by the IC Postdoctoral Fellowship Program (Grant #HMN1582-09-1-0013). The authors also wish to thank three anonymous reviewers, each of whom provided comments that directed important improvements in the manuscript.
References

Axelsson, P., 1999. Processing of laser scanner data – algorithms and applications. ISPRS Journal of Photogrammetry and Remote Sensing 54 (2–3), 138–147.
Axelsson, P., 2000. DEM generation from laser scanner data using adaptive TIN models. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 33 (Part B4), 110–117.
Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C., 2000. Image inpainting. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2000), New Orleans, LA, 23–28 July, pp. 417–424.
Chen, Q., Gong, P., Baldocchi, D., Xie, G., 2007. Filtering airborne laser scanning data with morphological methods. Photogrammetric Engineering and Remote Sensing 73 (2), 175–185.
Cohen, J., 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement 20 (1), 37–46.
Congalton, R., 1991. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sensing of Environment 37 (1), 35–46.
D'Errico, J., 2004. Inpaint_nans.m. MATLAB Central File Exchange. <http://www.mathworks.com/matlabcentral/fileexchange/4551> (accessed 7.06.11).
Elmqvist, M., Jungert, E., Lantz, F., Persson, A., Söderman, U., 2001. Terrain modelling and analysis using laser scanner data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (Part 3/W4), 219–226.
Hollaus, M., Mandlburger, G., Pfeifer, N., Mücke, W., 2010. Land cover dependent derivation of digital surface models from airborne laser scanning data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 38 (Part 3A), 221–226.
Jahromi, A.B., Zoej, M.J.V., Mohammadzadeh, A., Sadeghian, S., 2011. A novel filtering algorithm for bare-earth extraction from airborne laser scanning data using an artificial neural network. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 4 (4), 836–843.
Jensen, J.R., 2005. Introductory Digital Image Processing: A Remote Sensing Perspective. Prentice Hall, New York.
Kilian, J., Haala, N., Englich, M., 1996. Capture and evaluation of airborne laser scanner data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 31 (Part B3), 383–388.
Kraus, K., Pfeifer, N., 1998. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry and Remote Sensing 53 (4), 193–203.
Liu, X., 2008. Airborne LiDAR for DEM generation: some critical issues. Progress in Physical Geography 32 (1), 31–49.
Meng, X., Currit, N., Zhao, K., 2010. Ground filtering algorithms for airborne LiDAR data: a review of critical issues. Remote Sensing 2 (3), 833–860.
Meng, X., Wang, L., Silván-Cárdenas, J.L., Currit, N., 2009. A multi-directional ground filtering algorithm for airborne LIDAR. ISPRS Journal of Photogrammetry and Remote Sensing 64 (1), 117–124.
Pfeifer, N., Reiter, T., Briese, C., Rieger, W., 1999. Interpolation of high quality ground models from laser scanner data in forested areas. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 32 (Part 3/W14), 31–36.
Pingel, T., 2010. Modeling slope as a contributor to route selection in mountainous areas. Cartography and Geographic Information Science 37 (2), 137–148.
Shan, J., Toth, C.K. (Eds.), 2008. Topographic Laser Ranging and Scanning: Principles and Processing. CRC Press, Boca Raton.
Shao, Y.C., 2007. Ground Point Selection and Building Detection from Airborne LiDAR Data. PhD thesis, National Central University.
Shao, Y.C., Chen, L.C., 2008. Automated searching of ground points from airborne lidar data using a climbing and sliding method. Photogrammetric Engineering and Remote Sensing 74 (5), 625–635.
Silván-Cárdenas, J., Wang, L., 2006. A multi-resolution approach for filtering LiDAR altimetry data. ISPRS Journal of Photogrammetry and Remote Sensing 61 (1).
Sithole, G., Vosselman, G., 2003. Comparison of filtering algorithms. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (Part 3/W13), 71–78.
Sithole, G., Vosselman, G., 2004. Experimental comparison of filter algorithms for bare-earth extraction from airborne laser scanning point clouds. ISPRS Journal of Photogrammetry and Remote Sensing 59 (1–2), 85–101.
Vosselman, G., 2000. Slope based filtering of laser altimetry data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 33 (Part B3), 935–942.
Vosselman, G., Maas, H.G. (Eds.), 2010. Airborne and Terrestrial Laser Scanning. CRC Press, Boca Raton.
Wackernagel, H., 1998. Multivariate Geostatistics. Springer-Verlag, Berlin.
Zhang, K., Chen, S., Whitman, D., Shyu, M., Yan, J., Zhang, C., 2003. A progressive morphological filter for removing nonground measurements from airborne LIDAR data. IEEE Transactions on Geoscience and Remote Sensing 41 (4), 872–882.
30 T.J. Pingel et al. / ISPRS Journal of Photogrammetry and Remote Sensing 77 (2013) 21–30
... The progressive morphological filter (PMF) [27] uses sliding windows of increasing size to filter the points with elevation difference. Similarly, the simple morphological filter (SMRF) [28] filters the points with linearly increasing windows and slope thresholding. Such methods still rely on thresholds selection for the size of windows and the elevation functions. ...
... To evaluate the data-driven approach, we compare and show the test results of our best-validated iteration and those of the well-established ground-filtering methods designed for DTM extraction. Specifically, four baselines are used with default parameters, including PMF [27] (1m cell, exponential window of maximum 33m, distance of 15cm-250cm, and slope of 45°), SMRF [28] (1m cell, max window of 18m, elevation scalar of 1.25m, threshold of 0.5m, and 15% slope tolerance), SBM [29], and CSF [27] (cloth resolution of 0.5m, rigidness of 3, time step of 0.65, 500 iterations, and slope smoothing postprocessing). The implementations of PMF, SMRF and SBM are from the Point Data Abstraction Library (PDAL) 5 while CSF is provided by the original authors 6 . ...
Full-text available
Despite the popularity of deep neural networks in various domains, the extraction of digital terrain models (DTMs) from airborne laser scanning (ALS) point clouds is still challenging. This might be due to the lack of dedicated large-scale annotated dataset and the data-structure discrepancy between point clouds and DTMs. To promote data-driven DTM extraction, this paper collects from open sources a large-scale dataset of ALS point clouds and corresponding DTMs with various urban, forested, and mountainous scenes. A baseline method is proposed as the first attempt to train a Deep neural network to extract digital Terrain models directly from ALS point clouds via Rasterization techniques, coined DeepTerRa. Extensive studies with well-established methods are performed to benchmark the dataset and analyze the challenges in learning to extract DTM from point clouds. The experimental results show the interest of the agnostic data-driven approach, with sub-metric error level compared to methods designed for DTM extraction. The data and source code is provided at for reproducibility and further similar research.
... The first category is mathematical morphological filtering methods on a lidar point cloud (Chen et al. 2007;Li et al. 2013;Pingel et al. 2013;Hui et al. 2016). The basic operations in these methods include opening, dilation, and erosion, or a combination of them. ...
Full-text available
Ground-point filtering from point-cloud data is an important process in remote sensing and the photogrammetric map-production line, especially in generating digital elevation models from airborne lidar and aerial photogrammetric point-cloud data. In this article, a new and simple boundary-based method is proposed for ground-point filtering from the photogrammetric point-cloud data. The proposed method uses the local height difference to extract the boundaries of objects. Then the extracted boundary points are traced to generate polygons around the borders of any objects on the ground. Finally, the points located inside these polygons, which are classified as non-ground points, are filtered. The experimental results on the photogrammetric point cloud show that the proposed method can adapt to complex environments. The total error of the proposed method is about 8.96%, which is promising in these challenging data sets. Moreover, the proposed method is compared with cloth simulation filtering, multi-scale curvature classification, and gLiDAR methods and gives better results.
... The evaluation of the quality of MVRS results was performed by comparing results to those obtained by selected conventional filters: Progressive Morphological Filter (PMF, [23]), Simple Morphological Filter (SMRF, [24]), Cloth Simulation Filter (CSF, [33]) and adaptive TIN models filter ( [32], ATIN). PDAL software ( ...
With the ever-increasing popularity of unmanned aerial vehicles and other platforms providing dense point clouds, filters for identification of ground points in such dense clouds are needed. Many filters have been proposed and are widely used, usually based on the determination of an original surface approximation and subsequent identification of points within a predefined distance from such a surface. We present a new filter, the Multi-view and shift rasterization (MVSR) algorithm, based on a different principle: the identification of just the lowest points in individual grid cells, shifting of the grid along both planar axes, and subsequent tilting of the entire grid. The principle is presented in detail and compared both visually and numerically to other commonly used ground filters (PMF, SMRF, CSF, ATIN) on three sites with different ruggedness and vegetation density. Visually, the MVSR filter showed the smoothest and thinnest ground profiles, with ATIN the only filter performing comparably. The same was confirmed when comparing ground filtered by other filters with the MVSR-based surface. The goodness of fit with the original cloud is demonstrated by the root mean square deviations (RMSD) of the points from the original cloud found below the MVSR-generated surface (ranging, depending on site, between 0.6–2.5 cm). The MVSR filter performed outstandingly at all sites, identifying the ground points with great accuracy while filtering out most vegetation/above-ground points. The filter dilutes the cloud somewhat; in such dense point clouds, however, this can be perceived as a benefit rather than a disadvantage.
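The core of the principle described here, keeping only the lowest point per grid cell, can be sketched as below; the grid size and points are illustrative, and the shifting and tilting stages of MVSR are omitted:

```python
import numpy as np

def lowest_per_cell(points, cell=1.0):
    """Return the index of the lowest point in each occupied grid cell."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    lowest = {}
    for idx, key in enumerate(map(tuple, ij)):
        if key not in lowest or points[idx, 2] < points[lowest[key], 2]:
            lowest[key] = idx
    return np.array(sorted(lowest.values()))

# Two points per cell: a ground return and a canopy return high above it.
pts = np.array([[0.2, 0.3, 0.0], [0.6, 0.7, 10.0],
                [1.4, 0.5, 0.1], [1.2, 0.2, 8.0]])
ground_idx = lowest_per_cell(pts, cell=1.0)
print(ground_idx)  # indices of the two low returns
```

Repeating this selection with the grid shifted along each axis, as the paper proposes, reduces the sensitivity of the result to where cell borders happen to fall.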
... We compare our results against three state-of-the-art techniques to better understand the quality of our findings (Tables 5 and 6). It is worth noting that we used nDSMs of the test regions to meet the input criteria of those previously developed approaches, and in this study, nDSMs were successfully generated by using the approach in Pingel, Clarke, and McBride (2013). Except for the technique employed by Dalponte et al. (2015b), where we resampled the GSD of the nDSM to 0.5 m to achieve representative outputs, the GSD of the input nDSMs was precisely the same as the DSM used for our approach. ...
Full-text available
Stone Pine (Pinus pinea L.) is currently the pine species with the highest commercial value with edible seeds. In this respect, this study introduces a new methodology for extracting Stone Pine trees from Digital Surface Models (DSMs) generated through an Unmanned Aerial Vehicle (UAV) mission. We developed a novel enhanced probability map of local maxima that facilitates the computation of the orientation symmetry by means of new probabilistic local minima information. Four test sites are used to evaluate our automated framework within one of the most important Stone Pine forest areas in Antalya, Turkey. A Hand-held Mobile Laser Scanner (HMLS) was utilized to collect the reference point cloud dataset. Our findings confirm that the proposed methodology, which uses a single DSM as an input, achieves overall pixel-based and object-based F1-scores of 88.3% and 97.7%, respectively. The overall median Euclidean distance between the automatically extracted stem locations and the manually extracted ones is 36 cm (less than 4 pixels), demonstrating the effectiveness and robustness of the proposed methodology. Finally, the comparison with the state-of-the-art reveals that the outcomes of the proposed methodology outperform the results of six previous studies in this context.
... As the cropland is widely distributed, it is labor- and time-consuming to acquire cropland dynamics through manual field investigation [6]. With the wide application of satellite images, remote sensing technology has served as an effective and practical approach to many tasks, such as terrain classification [7], building footprint extraction [8], and land cover CD [9]. Traditional CD methods are mainly based on multispectral images, which extract rich spectral, textural and structural features for rapid pixel- or object-wise change detection results. ...
Nonagriculturalization incidents are serious threats to local agricultural ecosystems and global food security. Remote sensing change detection (CD) can provide an effective approach for in-time detection and prevention of such incidents. However, existing CD methods struggle with the large intraclass differences of cropland changes in high-resolution images. In addition, traditional CNN-based models are plagued by the loss of long-range context information and the high computational complexity brought by deep layers. Therefore, in this article, we propose a CNN-transformer network with multiscale context aggregation (MSCANet), which combines the merits of CNN and transformer to fulfill efficient and effective cropland CD. In the MSCANet, a CNN-based feature extractor is first utilized to capture hierarchical features; then a transformer-based MSCA is designed to encode and aggregate context information. Finally, a multibranch prediction head with three CNN classifiers is applied to obtain change maps and to enhance the supervision for deep layers. Besides, given the lack of CD datasets with fine-grained cropland changes of interest, we also provide a new cropland change detection dataset, which contains 600 pairs of 512 × 512 bi-temporal images with a spatial resolution of 0.5–2 m. Comparative experiments with several CD models prove the effectiveness of the MSCANet, with the highest F1 of 64.67% on the high-resolution semantic CD dataset, and of 71.29% on CLCD.
... Off-terrain points (i.e. objects on the dune surface such as persons passing the scene) and strong noise are removed by applying the simple morphological filter (SMRF) (Pingel et al., 2013) with a cell size of 0.5 m and a slope parameter of 2 (PDAL Contributors, 2020). Areas where this filtering did not fully remove all off-terrain points exhibit a high local surface roughness. ...
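The quoted parameters map directly onto PDAL's `filters.smrf` stage; a sketch of such a pipeline (the file names are placeholders) could be assembled like this:

```python
import json

# Minimal PDAL pipeline applying SMRF with the cell size and slope
# parameters quoted above; input/output file names are placeholders.
pipeline = {
    "pipeline": [
        "input.laz",
        {"type": "filters.smrf", "cell": 0.5, "slope": 2.0},
        # keep only points SMRF classified as ground (ASPRS class 2)
        {"type": "filters.range", "limits": "Classification[2:2]"},
        "ground_only.laz",
    ]
}
print(json.dumps(pipeline, indent=2))
# The JSON printed above would be run with: pdal pipeline pipeline.json
```

Note that SMRF's `slope` option is a rise-over-run ratio; a value of 2, as cited here for steep dune faces, is far more permissive than PDAL's default.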
Full-text available
Morphologies of highly complex star dunes are the result of aeolian dynamics in past and present times. These dynamics reflect climatic conditions and associated forces like sediment availability and vegetation cover, as well as feedbacks with adjacent environments. However, an understanding of aeolian dynamics on star dune morphometries still lacks sufficient detail, and their influence on formation and evolution remains unclear. We therefore investigate dynamics of a complex star dune (Erg Chebbi, Morocco) by analyzing wind measurements compared to morphometric changes derived from multitemporal high‐accuracy 3D observations during two surveys (October 2018 and February 2020). Using Real‐Time Kinematic Global Navigation Satellite System (RTK‐GNSS) measurements and Terrestrial Laser Scanning (TLS), the reaction of a star dune surface to an observed constant unimodal sand‐moving wind is presented. TLS point clouds are used for morphometric analysis as well as direct surface change analysis, which relates to sand transport. RTK‐GNSS measurements enable the assessment of horizontal crest movement. Observed surface changes lead to the identification of an overall shielding effect, resulting in sand accumulation mainly on windward slopes. Our results point to a self‐sustained dune growth, which has not yet been described in such spatial detail. Steep slopes, often found on star dunes around the globe, seem to partly hinder up‐slope sand transport. Although the observation period was comparatively short, we hypothesize that, besides wind intensity alone, slope angles are more decisive for sand transport than previously assumed. Our methodological approach of combining meteorological data and high‐resolution multitemporal 3D elevation models can be used for monitoring all dune forms and contributes to a general understanding of dune dynamics and evolution.
... After the alignment, vegetation points and outliers were removed from the dataset by applying a statistical outlier filter (k=8, multiplier=10.0; Rusu et al., 2008) and an SMRF filter (cell size=0.5 m, slope=2; Pingel et al., 2013), as well as a filter on the waveform deviation (≤50), all implemented in PDAL (PDAL Contributors, 2018). ...
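The statistical outlier filter cited here (Rusu et al., 2008) flags points whose mean distance to their k nearest neighbours is unusually large; a minimal sketch with the quoted k=8 and multiplier=10.0, on synthetic data, might read:

```python
import numpy as np
from scipy.spatial import cKDTree

def statistical_outliers(points, k=8, multiplier=10.0):
    """Flag points whose mean k-NN distance exceeds mean + multiplier * std."""
    tree = cKDTree(points)
    # Query k+1 neighbours because each point's nearest neighbour is itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_knn = dists[:, 1:].mean(axis=1)
    threshold = mean_knn.mean() + multiplier * mean_knn.std()
    return mean_knn > threshold

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 3))          # dense synthetic cluster
pts = np.vstack([pts, [[500.0, 500.0, 500.0]]])  # one far-away outlier
mask = statistical_outliers(pts)
print(int(mask.sum()), bool(mask[-1]))
```

Only the isolated point is flagged; the dense cluster survives, which is the behaviour the cited workflow relies on before handing the cloud to SMRF.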
Full-text available
Virtual laser scanning (VLS) allows the generation of realistic point cloud data at a fraction of the costs required for real acquisitions. It also allows carrying out experiments that would not be feasible or even impossible in the real world, e.g., due to time constraints or when hardware does not exist. A critical part of a simulation is an adequate substitute for reality. In the case of VLS, this concerns the scanner, the laser-object interaction, and the scene. In this contribution, we present a method to recreate a realistic dynamic scene, where the surface changes over time. We first apply change detection and quantification on a real dataset of an erosion-affected high-mountain slope in Tyrol, Austria, acquired with permanent terrestrial laser scanning (TLS). Then, we model and extract the time series of a single change form, and transfer it to a virtual model scene. The benefit of such a transfer is that no physical modelling of the change processes is required. In our example, we use a Kalman filter with subsequent clustering to extract a set of erosion rills from a time series of high-resolution TLS data. The change magnitudes quantified at the locations of these rills are then transferred to a triangular mesh, representing the virtual scene. Subsequently, we apply VLS to investigate the detectability of such erosion rills from airborne laser scanning at multiple subsequent points in time. This enables us to test whether, e.g., a certain flying altitude is appropriate in a disaster response setting for the detection of areas exposed to immediate danger. To ensure a successful transfer, the spatial resolution and the accuracy of the input dataset are much higher than the accuracy and resolution that are being simulated. Furthermore, the investigated change form is detected as significant in the input data. We therefore conclude that the model of the dynamic scene derived from real TLS data is an appropriate substitute for reality.
Wind-blown snow particles often contaminate Terrestrial Laser Scanning (TLS) data of snow-covered terrain. However, common filtering techniques fail to filter wind-blown snow and incorrectly filter data from the true surface due to the spatial distribution of wind-blown snow and the TLS scanning geometry. We present FlakeOut, a filter designed specifically to filter wind-blown snowflakes from TLS data. A key aspect of FlakeOut is a low false positive rate of 2.8×10⁻⁴ (an order of magnitude lower than standard filtering techniques), which greatly reduces the number of true ground points that are incorrectly removed. This low false positive rate makes FlakeOut appropriate for applications requiring quantitative measurements of the snow surface in light to moderate blowing snow conditions. Additionally, we provide mathematical and software tools to efficiently estimate the false positive rate of filters applied for the purpose of removing erroneous data points that occur very infrequently in a dataset.
Airborne platforms have been improved in the past decade to provide geographic information systems (GISs) with large-scale 3D geographical information. Objectification of such information organized in meshes is a significant challenge for 3D GISs. The ground filtering of 3D meshes is a key step in meeting this challenge; however, its accuracy is highly affected by negative blunders and unbalanced vertex density. This paper proposes a novel method for differentiating ground geometric primitives from realistic 3D meshes based on a cloth simulation filter. Within the method, the fall of a piece of cloth is simulated on a flipped 3D mesh, and the stationary shape of the cloth is considered to be the fitted ground. Utilizing the spatial continuity of meshes, a collision detection based on bounding volume hierarchy is introduced, making the results independent of vertex density. Further, a collision correction based on the scan line and ray casting is proposed to make it applicable to data with negative blunders. The method is assessed quantitatively and visually over several datasets with different vertex densities, scenes, and noise distributions. Results demonstrate that it is a robust method suitable for different landscapes and is not impacted by vertex density and noise.
Conference Paper
Full-text available
Slope exerts a powerful influence on the route selection processes of humans. Attempts to model human movement in hilly and mountainous terrain that have largely focused on least-time route transformations can be improved by incorporating research that suggests humans systematically overestimate slopes. Such research suggests that cost functions derived from slope should be more expensive than time derivations alone would indicate. This paper presents a method that empirically estimates cost functions for slopes. The method is then used to predict routes and paths that are more likely to be selected by humans based on their perceptions of slope. We also evaluate that method and find it successfully predicts road, track and trail locations over a variety of conditions and distances.
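As an illustration of a slope-dependent cost function of the kind this paper estimates empirically, the widely used Tobler hiking function (not the paper's own fitted costs) can be turned into a traversal cost:

```python
import math

def tobler_speed(slope):
    """Walking speed in km/h as a function of slope (rise/run),
    after Tobler's hiking function."""
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def traversal_cost(distance_m, slope):
    """Time in hours to cover distance_m at the slope-dependent speed."""
    return (distance_m / 1000.0) / tobler_speed(slope)

# Flat ground is much cheaper than a 30% climb over the same distance.
flat = traversal_cost(100.0, 0.0)
climb = traversal_cost(100.0, 0.3)
print(round(climb / flat, 2))  # ~2.86
```

A least-cost path computed with such a function already avoids steep slopes; the paper's point is that perceived costs are steeper still, so empirically fitted functions penalize slope even more than a time-only derivation would.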
Full-text available
The extraction of bare-earth points from airborne laser scanning (ALS) data and the generation of high-quality digital terrain models (DTMs) are important research challenges. In this study, a novel filtering algorithm based on artificial neural networks (ANNs) is proposed to extract bare-earth points from ALS data efficiently. An efficient set of conditions were defined to choose the training data semi-automatically when an expert user is not available. Four standard study sites were used to evaluate the performance of the method. The obtained results were compared with four popular filtering algorithms based on type I error, type II error, the kappa coefficient and the total error. First echoes were used in the proposed method to increase the reliable detection of vegetated areas. The proposed algorithm has an easy implementation procedure and low computational costs. The results obtained for both semiautomatic and supervised training data selection reveal acceptable accuracies, especially for type II errors. Use of this algorithm would lead to high-quality DTM generation using accurately identified bare-earth points in urban areas.
Full-text available
Airborne laser scanning is a suitable method for deriving digital terrain models (DTMs) in wooded terrain. A considerable proportion of the laser points are reflections from the tree tops (vegetation points). Thus, special filtering algorithms are required to obtain the ground surface. Earlier, we proposed to use iterative linear prediction. We review existing methods and compare them to our approach. A list of advantages and disadvantages of our method is presented, but this list is also valid for laser scanner data processing in general. The quality of the DTMs derived from laser scanner data and accuracy investigations are presented for two examples.
Full-text available
Laser altimetry is becoming the prime method for large-scale acquisition of height data. Although laser altimetry is fully integrated into processes for the production of digital elevation models in different countries, the derivation of DEMs from the raw laser altimetry measurements still causes problems. In particular, the laser pulses reflected from the ground surface need to be distinguished from those reflected from buildings and vegetation. In this paper a new method is proposed for filtering laser data. This method is closely related to the erosion operator used in mathematical grey-scale morphology. Based on height differences in a representative training dataset, filter functions are derived that either preserve important terrain characteristics or minimise the number of classification errors. Experiments show that the latter filter causes smaller errors in the resulting digital elevation models. In general, the performance of the filters deteriorates with decreasing point density.
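A filter of the kind described, where a point survives as ground only if no neighbour lies too far below it given an admissible slope, can be sketched as follows; the slope threshold, search radius, and data are illustrative assumptions, not the paper's trained filter functions:

```python
import numpy as np
from scipy.spatial import cKDTree

def slope_filter(points, max_slope=0.3, radius=5.0):
    """Keep point p as ground if no neighbour q within radius satisfies
    z_p - z_q > max_slope * horizontal_distance(p, q)."""
    tree = cKDTree(points[:, :2])
    ground = np.ones(len(points), dtype=bool)
    for i, nbrs in enumerate(tree.query_ball_point(points[:, :2], radius)):
        for j in nbrs:
            if j == i:
                continue
            d = np.hypot(*(points[i, :2] - points[j, :2]))
            if points[i, 2] - points[j, 2] > max_slope * d:
                ground[i] = False  # too high above a nearby point
                break
    return ground

# Three ground points on a gentle slope and one 5 m canopy return.
pts = np.array([[0, 0, 0.0], [2, 0, 0.2], [4, 0, 0.4], [2, 1, 5.0]])
ground = slope_filter(pts)
print(ground)  # canopy return rejected, sloping ground kept
```

The paper's contribution is to learn the height-difference threshold as a function of distance from training data rather than fixing a single slope, but the classification rule has this shape.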
Full-text available
To determine the performance of filtering algorithms a study was conducted in which eight groups filtered data supplied to them. The study aimed to determine the general performance of filters, the influence of point resolution on filtering and future research directions. To meet the objectives the filtered data was compared against reference data that was generated manually. In general the filters performed well in landscapes of low complexity. However, complex landscapes as can be found in city areas and discontinuities in the bare earth still pose challenges. Comparison of filtering at lower resolutions confirms that amongst other factors the method of filtering also has an impact on the success of filtering and hence on the choice of scanning resolution. It is suggested that future research be directed at heuristic classification of point-clouds (based on external data), quality reporting, and improving the efficiency of filter strategies.
The extraction of a digital elevation model (DEM) from airborne lidar point clouds is an important task in the field of geoinformatics. In this paper, we describe a new automated scheme that utilizes the so-called “climbing-and-sliding” method to search for ground points from lidar point clouds for DEM generation. The new method has the capability of performing a local search while preserving the merits of a global treatment. This is done by emulating the natural movements of climbing and sliding in order to search for ground points on a terrain surface model. To improve efficiency and accuracy, the scheme is implemented with pseudo-grid data and includes a back-selection step for densification. The test data include a dataset released by the ISPRS Working Group III/3 and one for a mountainous area located in southern Taiwan. The experimental results indicate that the proposed method is capable of producing a high-fidelity terrain model.
Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, the effective processing of the raw LiDAR data and the generation of an efficient and high-quality DEM remain big challenges. This paper reviews the recent advances of airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground is the most critical and difficult step for DEM generation from LiDAR data. Commonly used and most recently developed LiDAR filtering methods are presented. Interpolation methods and choices of suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. In order to reduce the data redundancy and increase the efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as break lines contribute significantly to DEM quality. Therefore, data reduction should be conducted in such a way that
Very detailed, high-resolution 3D digital terrain models can be obtained using airborne laser scanner data. However, laser scanning usually entails huge data sets even for moderate areas, making data management and analysis both complex and time consuming. For this reason, automatic terrain modelling and efficient storage structures supporting data access are needed. In this paper a number of methods supporting automatic construction of 3D digital terrain models, especially ground surface modelling and detection and measurement of individual trees, will be discussed. Furthermore, automatic and/or interactive terrain feature analysis will be discussed. A special data representation structure for the terrain model allowing efficient data storage and data access will be presented. Besides this, it is possible to create a symbolic information structure from the terrain model that can be used in queries for determination of different terrain features, such as ditches or ridges, but also for detection of changes in the terrain.