Sparsity-Driven Digital Terrain Model Extraction


Fatih Nar(1), Erdal Yilmaz(2), Gustau Camps-Valls(3)
(1) Konya Food and Agriculture University, Konya, Turkey; (2) Zibumi Studios, Ankara, Turkey; (3) Image Processing Lab (IPL), Universitat de València, València, Spain
Abstract—We here introduce an automatic Digital Terrain Model (DTM) extraction method. The proposed sparsity-driven DTM extractor (SD-DTM) takes a high-resolution Digital Surface Model (DSM) as an input and constructs a high-resolution DTM using a variational framework. To obtain an accurate DTM, an iterative approach is proposed for the minimization of the target variational cost function. Accuracy of the SD-DTM is shown on a real-world DSM data set. We show the efficiency and effectiveness of the approach both visually and quantitatively via residual plots in illustrative terrain types.
Index Terms— digital surface model, digital terrain model, sparsity, variational inference
1. Introduction

A Digital Terrain Model (DTM) is an elevation map of bare ground in which man-made objects (buildings, vehicles, etc.) as well as vegetation (trees, bushes, etc.) are removed from the Digital Surface Model (DSM) [1]. In Fig. 1, g represents surface elevations (the DSM), f represents terrain elevations (the DTM), and t represents the terrain vs. non-terrain classification (t = 1 for terrain regions, t = 0 for non-terrain regions).
Fig. 1. DSM versus DTM.
DTMs are useful for extracting man-made and vegetation objects, extracting terrain parameters, precision farming and forestry, planning of new roads and railroads, visualization and simulation of the 3D world, modeling physical phenomena such as water flow or mass movement, rectification of aerial photography or satellite imagery, and many other Geographic Information Systems (GIS) tasks [1-5]. However,
This work was supported by the Scientific and Technical Research Council of Turkey (TUBITAK), Grant Number: TUBITAK-BIDEB-2219. GCV was funded by the European Research Council (ERC) under the ERC-CoG-2014 SEDAL project (grant agreement 647423). The authors would like to thank A. Ozgur and M. Ergul for their help during the preliminary investigation.
manual preparation of a DTM using ground measurements is expensive and time-consuming [2]. Moreover, the definition of a DTM is often elusive and controversial. Thus, automatic extraction of a DTM from an automatically obtainable DSM is a reasonable and often preferred alternative, even though it poses important challenges [6, 7].
Several approaches to deriving a DTM exist in the literature. In [2], a modified linear prediction technique followed by adaptive processing is proposed for DTM extraction. In [3], a progressive morphological filter is developed to preserve ground while removing non-ground objects. An alternative approach was presented in [4], where a variational formulation is proposed for the semiautomatic generation of the DTM. More recently, in [8], the most contrasted connected components are extracted to generate a DTM from LiDAR data, while in [5], the DSM is segmented into uniform regions and interpolation is applied between selected regions. Lately, in [9], 2D empirical mode decomposition is proposed for DTM generation.
In this work, we propose a methodology based on the variational approach introduced in [4]. Our method follows an iterative procedure that 'peels the onion' according to a target cost function under sparsity-preserving constraints. The accuracy of the derived DTM is shown on a real-world DSM data set and analyzed both qualitatively and quantitatively on illustrative terrain types.

The remainder of the paper is organized as follows. Section 2 briefly reviews the proposed method. Section 3 first describes the collected dataset and then gives empirical evidence of performance, both visually and quantitatively. We conclude in Section 4 with some remarks and an outline of future work.
2. Proposed Method

A DTM can be constructed from a DSM by interpolating the elevation values in the non-terrain cells using the elevations of nearby terrain cells [5]. However, manual delineation of the cells (as terrain versus non-terrain) is a tedious [2] and error-prone task, and automatic classification is challenging [10, 11]. On top of all this, determining the elevation values for the non-terrain cells is an ill-posed scattered-data interpolation problem, which is also sensitive to errors in the terrain/non-terrain boundaries [1].
Inspired by [4], to handle the above-mentioned issues, we propose the minimization of a similar variational cost function, yet using a novel iterative approach and numerical solver for the construction of the DTM. The pseudocode is given in Algorithm 1: first the DTM (f) is initialized with the elevations of the DSM (g); then a terrain indicator map (t) is updated, followed by an update of the terrain elevation values, in an iterative manner. The algorithm is iterated with the previous solution until it converges or a maximum number of iterations n_max is reached. In this study, we use a regular grid format for representing the DSM and the DTM, where each grid cell stores a floating-point number for its elevation.
Algorithm 1 DTM Extraction Pseudo-code
1: Input: g, n_max
2: Initialize: f^(1) ← g
3: for n = 1 to n_max do
4:   Update terrain indicator map t^(n) using f^(n) and g
5:   Update terrain elevations f^(n+1) using t^(n), f^(n), g
6:   Check for convergence using f^(n) and f^(n+1)
7: end for
8: return f
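The control flow of Algorithm 1 can be sketched in a few lines of Python. The `smooth` function below is a hypothetical stand-in for the variational elevation update of Section 2 (a simple moving average on a 1-D profile), used only to make the loop structure concrete:

```python
import numpy as np

def smooth(f):
    # hypothetical stand-in for the variational update of Section 2:
    # a 3-cell moving average on a 1-D elevation profile
    pad = np.pad(f, 1, mode='edge')
    return (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0

def extract_dtm(g, n_max=100, tol=1e-3, T_ng=0.5):
    f = g.astype(float).copy()                   # step 2: f(1) <- g
    for n in range(n_max):                       # step 3
        # step 4: fuzzy terrain indicator (1 = terrain, 0 = object);
        # the real solver feeds t into the data-fidelity weights
        t = 1.0 - np.minimum(1.0, (g - f) / T_ng)
        # step 5: smooth, then clamp so terrain is never lifted (f <= g)
        f_new = np.minimum(smooth(f), g)
        # step 6: convergence check
        if np.linalg.norm(f_new - f) < tol:
            return f_new
        f = f_new
    return f
```

On a flat profile with a single "building" bump, repeated smoothing under the f ≤ g clamp erodes the bump from its edges inward while leaving the flat terrain untouched.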
If the DSM is smoothed, the elevations of non-terrain objects become lower. However, this simple approach also raises the elevations of terrain regions (see Fig. 2, top). To prevent this, smoothing can be applied to the DSM using the prior knowledge f ≤ g. This prior can be included in the minimization functional as an inequality constraint, so that the smoothing operation is combined with the minimization of a cost function that prevents height increases in terrain regions (see Fig. 2, middle). In Fig. 2, the solid blue line is the surface (g) and the dotted red line is the smoothed surface (f).
Fig. 2. DSM versus DTM.
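The effect described above can be reproduced with a few lines of NumPy: unconstrained smoothing lifts terrain cells next to tall objects, while clamping with the prior f ≤ g removes the lift. This is a sketch; the box blur stands in for the actual variational smoother:

```python
import numpy as np

g = np.array([0.0, 0.0, 0.0, 6.0, 6.0, 0.0, 0.0, 0.0])  # flat terrain + building

def box_blur(x):
    # simple 3-cell moving average (stand-in for the variational smoother)
    pad = np.pad(x, 1, mode='edge')
    return (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0

f_plain = box_blur(g)               # terrain cells next to the building rise
f_clamped = np.minimum(f_plain, g)  # prior f <= g keeps terrain at ground level
```

Here `f_plain[2]` rises above the true ground level 0 (the artifact shown in Fig. 2, top), whereas `f_clamped[2]` stays at 0 while the building cells are still pulled down.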
If we define f as the smoothed version of the surface g, then the terrain indicator map for each cell can be defined as below (see Fig. 2, bottom):

t_p = 1 − min(1, (g_p − f_p) / T_ng),    (1)

where p is the cell index, t is the terrain indicator map, g is the given DSM, f is the smoothed DSM (rough DTM), and T_ng is a terrain threshold (set to 0.5 for simplicity).
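As a quick numerical check of the indicator (a sketch assuming the map t_p = 1 − min(1, (g_p − f_p)/T_ng), with illustrative elevation values):

```python
import numpy as np

def terrain_indicator(g, f, T_ng=0.5):
    # t -> 1 where the smoothed surface f stays close to the DSM g (terrain),
    # t -> 0 where g rises well above f (buildings, vegetation, ...)
    return 1.0 - np.minimum(1.0, (g - f) / T_ng)

g = np.array([10.0, 10.1, 13.0])   # DSM: flat ground, rough ground, a rooftop
f = np.array([10.0, 10.0, 10.2])   # rough DTM after constrained smoothing
t = terrain_indicator(g, f)        # ~[1.0, 0.8, 0.0]
```

The middle cell illustrates the fuzzy membership: a 10 cm discrepancy against T_ng = 0.5 m gives t = 0.8 rather than a hard 0/1 label.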
In this study, the proposed variational cost function, minimized to obtain the terrain elevations (f) by smoothing the surface elevations (g) under the prior (f ≤ g) and the terrain indicator map (t), is:

J(f) = Σ_p [ t_p ((|f_p − g_p| + 1)^2 − 1) + λ |(∇f)_p| ]   s.t. f_p ≤ g_p,    (2)
where p is the cell index, t is the terrain indicator map, g is the given DSM, f is the DTM to be obtained, λ is a positive value determining the smoothing level, and ∇ is the gradient operator. The first term is the data fidelity term, which keeps f similar to g using an ℓ1-norm penalty when the difference between f and g is small and an ℓ2-norm penalty as the difference gets larger. The second term is the total variation (TV) regularization term, which penalizes changes in image gradients using an ℓ1-norm, thus preserving details while enforcing smoothness [12]. Stronger smoothing is obtained by increasing the λ value. The constraint f_p ≤ g_p prevents terrain elevations from exceeding surface elevations, as common sense dictates. Here, t indicates a fuzzy membership (0 ≤ t_p ≤ 1) such that t_p = 0 for non-terrain cells and t_p = 1 for terrain cells. As t_p gets closer to 0, the data fidelity term vanishes and only the TV regularization (TV diffusion) term remains, so the cost function acts as a scattered-data interpolator. As t_p gets closer to 1, the data fidelity term becomes active and the surface is preserved more.
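The identity (|x| + 1)^2 − 1 = x^2 + 2|x| makes the blended ℓ1/ℓ2 behaviour of the data term easy to verify numerically. Below is a sketch of the unconstrained part of the cost on a 1-D profile (the f ≤ g constraint is handled separately by the solver):

```python
import numpy as np

def cost(f, g, t, lam):
    d = np.abs(f - g)
    # data fidelity: (|d| + 1)^2 - 1 = d^2 + 2|d|,
    # i.e. ~2|d| for small d (l1-like), ~d^2 for large d (l2-like)
    fidelity = np.sum(t * ((d + 1.0) ** 2 - 1.0))
    # total variation of f via forward differences (1-D analogue of |grad f|)
    tv = np.sum(np.abs(np.diff(f)))
    return fidelity + lam * tv
```

When f = g the fidelity term vanishes and only the TV term remains, which is exactly the interpolation regime the text describes for t_p → 0 cells.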
2.1. Minimization of the cost function
After algebraic manipulation and incorporating the constraint f_p ≤ g_p into the cost function via the penalty method, with λ_p as the penalty multiplier, equation (2) becomes:

J(f) = Σ_p [ t_p ((f_p − g_p)^2 + 2 |f_p − g_p|) + λ_p max(f_p − g_p, 0) + λ |(∇f)_p| ].    (3)

In equation (3), the maximum function (max) returns zero penalty if f_p ≤ g_p and a penalty proportional to λ_p otherwise. λ_p should be increased as the smoothing (λ) increases, so we set λ_p = 0.5λ.
Although equation (3) is convex, the absolute value and max functions are non-differentiable, which makes the minimization difficult. Inspired by [13, 14], we use f̂_p as a proxy for f_p to approximate the non-differentiable terms in equations (4), (5), and (6). First, the absolute value in the data fidelity term is approximated as:

|f_p − g_p| ≈ d_p (f_p − g_p)^2,   d_p = (|f̂_p − g_p| + ε)^(−1),    (4)

where ε is a small positive constant; ε = 0.1 is used in all the experiments. Second, the absolute value of the gradient operator is approximated using the weights:

w_x,p = (|(∇_x f̂)_p| + ε)^(−1),   w_y,p = (|(∇_y f̂)_p| + ε)^(−1).    (5)

Finally, the max function is approximated as:

max(f_p − g_p, 0) ≈ h_p (f_p − g_p)^2,   h_p = sgn(max(f̂_p − g_p, 0)) (|f̂_p − g_p| + ε)^(−1),    (6)

where sgn is the sign function. The approximated cost function in equation (7) is accurate around f̂_p, so it must be solved in an iterative manner [14], where n is the iteration number. This cost function has a different data fidelity term and numerical minimization approach, and it is also iterative, in contrast to the two-phase solution proposed in [4].
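The weights in equations (4) and (6) can be sketched directly in NumPy; each quadratic surrogate is tight near the proxy f̂, with an error bounded by ε (a sketch under the reconstructions above, with illustrative sample values):

```python
import numpy as np

EPS = 0.1  # the paper's small positive constant

def d_weight(f_hat, g):
    # |f - g| ~= d * (f - g)^2 with d frozen at the proxy f_hat  (eq. 4)
    return 1.0 / (np.abs(f_hat - g) + EPS)

def h_weight(f_hat, g):
    # max(f - g, 0) ~= h * (f - g)^2; h vanishes wherever f_hat <= g  (eq. 6)
    return np.sign(np.maximum(f_hat - g, 0.0)) / (np.abs(f_hat - g) + EPS)

# at the proxy point the surrogate d * x^2 errs by |x| * EPS / (|x| + EPS) < EPS
x = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])        # samples of f_hat - g
err = np.abs(d_weight(x, 0.0) * x ** 2 - np.abs(x))
```

Freezing d_p, h_p, and w_p at the current proxy is what turns each iteration into the quadratic problem of equation (7).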
J^(n)(f) = Σ_p [ t_p ((f_p − g_p)^2 + 2 d_p (f_p − g_p)^2) + λ_p h_p (f_p − g_p)^2 + λ (w_x,p (∇_x f)_p^2 + w_y,p (∇_y f)_p^2) ].    (7)
Equation (7) can be cast in matrix-vector form as below:

J^(n)(v_f) = (v_f − v_g)^T T (v_f − v_g) + 2 (v_f − v_g)^T D T (v_f − v_g) + λ_p (v_f − v_g)^T H (v_f − v_g) + λ (v_f^T C_x^T W_x C_x v_f + v_f^T C_y^T W_y C_y v_f),    (8)

where v_g, v_f, and v_f̂ are vector forms of g_p, f_p, and f̂_p; D is a diagonal matrix formed of d_p; T is a diagonal matrix with entries t_p; H is a diagonal matrix formed of h_p; W_x, W_y are diagonal matrices with entries w_x,p, w_y,p; and C_x, C_y are the Toeplitz matrices acting as forward-difference gradient operators with zero derivatives at the right and bottom boundaries.
Equation (8) is quadratic; taking its derivative with respect to v_f and equating it to zero therefore yields the global minimum. This leads to the sparse linear system:

A v_f = b,   A = R + λ_p H + λ (C_x^T W_x C_x + C_y^T W_y C_y),   b = (R + λ_p H) v_g,

where R = T(2D + I), with I the identity matrix. Here, the iteration number is n for the A, R, T, D, H, W_x, W_y matrices and the b vector unless explicitly stated otherwise.
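A 1-D sketch of one inner solve of the sparse linear system A v_f = b using SciPy's sparse machinery (`solve_inner` is a hypothetical helper; the paper's 2-D case adds the C_y, W_y terms):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_inner(v_g, v_f_hat, t, lam=5.0, eps=0.1):
    n = len(v_g)
    lam_p = 0.5 * lam
    # half-quadratic weights frozen at the proxy v_f_hat (eqs. 4-6)
    d = 1.0 / (np.abs(v_f_hat - v_g) + eps)
    h = np.sign(np.maximum(v_f_hat - v_g, 0.0)) / (np.abs(v_f_hat - v_g) + eps)
    # forward-difference operator with zero derivative at the last cell
    C = sp.lil_matrix((n, n))
    C.setdiag(-1.0)
    C.setdiag(1.0, 1)
    C[n - 1, n - 1] = 0.0
    C = C.tocsr()
    w = 1.0 / (np.abs(C @ v_f_hat) + eps)
    T, D, H, W = (sp.diags(a) for a in (t, d, h, w))
    R = T @ (2 * D + sp.eye(n))          # R = T(2D + I)
    A = R + lam_p * H + lam * (C.T @ W @ C)
    b = (R + lam_p * H) @ v_g
    return spla.spsolve(A.tocsc(), b)
```

With t = 1 everywhere and f̂ = g, the data term anchors the solution to g while the TV term slightly pulls isolated spikes down, as expected from equation (7).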
2.2. DTM Extraction Algorithm
An overview of the DTM extraction method is given in Algorithm 1; the details of the terrain indicator map update and the terrain elevation update are given in Algorithm 2.
In Algorithm 2, a preconditioned conjugate gradient (PCG) solver with an incomplete Cholesky preconditioner (ICP) is used as the iterative solver for the linear system at line 9, where the maximum number of PCG iterations is set to 10^3 and the convergence tolerance is set to 10^-3.
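SciPy does not ship an incomplete Cholesky factorization, but its incomplete LU (`spilu`) plays the same role as a preconditioner for a symmetric positive-definite system. A sketch (`pcg_solve` is a hypothetical helper; the iteration cap follows the paper's 10^3 setting):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def pcg_solve(A, b, maxiter=1000):
    # incomplete LU as a stand-in for the incomplete Cholesky preconditioner
    ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
    M = spla.LinearOperator(A.shape, ilu.solve)
    x, info = spla.cg(A, b, M=M, maxiter=maxiter)
    if info != 0:
        raise RuntimeError("PCG did not converge")
    return x
```

For the well-conditioned systems produced by equation (9) (diagonally dominant A), the preconditioned iteration typically converges in far fewer than the allowed 10^3 steps.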
Algorithm 2 DTM Extraction Algorithm
1: Input: g, λ = 5, n_max = 10^4, C_tolerance = 10^-3
2: v_g ← g, v_f ← g, λ_p ← 0.5λ, T_ng ← 0.5, ε ← 0.1
3: for n = 1 to n_max do
4:   Update terrain indicator map:
5:     v_t = 1 − min(1, (v_g − v_f)/T_ng)
6:   Update terrain elevations:
7:     v_f̂ ← v_f
8:     Construct W_x, W_y, T, D, R, H, A, and b
9:     solve A v_f = b
10:    v_f ← min(v_f, v_g)   (force the f ≤ g constraint)
11:  Check for convergence:
12:    if ‖v_f − v_f̂‖ < C_tolerance then break
13: end for
14: return f, where f ← v_f
In [4], a large smoothing factor was used to determine the terrain indicator map, and the algorithm was then executed again with a smaller smoothing factor. In our approach, by contrast, a small smoothing factor is used and the terrain indicator map is iteratively updated, which leads to better preservation of details in terrain regions. Therefore, in Algorithm 2, the terrain elevations (f) are initialized with the surface elevations (g), and then both the terrain elevations (f) and the terrain indicator map (t) are iteratively refined.
Fig. 3. Evolution of the terrain elevations (f) and the terrain indicator map (t) for Algorithm 2 on 1-dimensional data.
3. Experimental Results

3.1. Data Collection and Characteristics

We applied the proposed method to the Cerkes village dataset to illustrate performance on a large terrain with a wide variety of features (flat regions, hills, rivers, buildings, utility poles, cars, trees, etc.). The dataset covers an 11 km² area obtained using photogrammetry techniques, where the raster image has 5 cm pixel resolution and the DSM has 5 cm pixel spacing. The coverage of the Cerkes village (Cankiri province, Turkey) dataset as a bounding box is:

N40°49'47.07" E32°52'22.17" to N40°47'54.98" E32°54'49.61"
3.2. Visual results
Fig. 4 shows the rasters (top), DSMs (middle), and extracted DTMs (bottom) of three subregions in the Cerkes village dataset. As seen in Fig. 4, man-made objects and vegetation are successfully removed from the terrain, and these regions are interpolated smoothly. The proposed method is thus able to extract the bare earth successfully.
Fig. 4. Cerkes data: (a) raster, (b) DSM, (c) extracted DTM.
3.3. Numerical evaluation
A numerical evaluation was conducted using the residual histogram for the Cerkes village dataset (11 km²), where the mean residual is 0.24 cm, the median residual is 0.1 cm, and the mean squared error is 1.19 cm². The residual histogram in Fig. 5 shows that the proposed method performs well on real-world data. Note that the residual frequencies are shown in log10 scale to prevent the zero residuals from dominating the plot.
Fig. 5. Residual histogram of the proposed method.
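The reported statistics and the log10-scaled histogram counts can be computed as follows (a sketch; `dtm_ref` is an assumed reference DTM against which the extracted DTM is compared):

```python
import numpy as np

def residual_report(dtm_ref, dtm_est, n_bins=50):
    r = (dtm_est - dtm_ref).ravel()
    counts, edges = np.histogram(r, bins=n_bins)
    return {
        'mean_abs': float(np.mean(np.abs(r))),
        'median_abs': float(np.median(np.abs(r))),
        'mse': float(np.mean(r ** 2)),
        # log10 scale keeps the near-zero bin from dominating, as in Fig. 5
        'log10_counts': np.log10(counts + 1.0),
        'bin_edges': edges,
    }
```

The `+ 1.0` inside the logarithm simply avoids log10(0) for empty histogram bins; the shape of the plotted curve is otherwise unchanged.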
4. Conclusions

In this study, we presented an automatic DTM extraction method that iteratively estimates the terrain indicator map and the terrain elevations. Experiments show that the proposed method can produce an accurate DTM from a given high-resolution DSM in which a wide variety of non-terrain objects exist on terrain with various slopes. Future work will consider adding asymmetry constraints and conducting further experiments in regions with additional characteristics.
5. References

[1] Z. Li, C. Zhu, and C. Gold, Digital Terrain Modeling: Principles and Methodology, CRC Press, 2004.
[2] H. S. Lee and N. H. Younan, "DTM extraction of LiDAR returns via adaptive processing," IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 9, pp. 2063–2069, 2003.
[3] K. Zhang, S.-C. Chen, D. Whitman, M.-L. Shyu, J. Yan, and C. Zhang, "A progressive morphological filter for removing nonground measurements from airborne LiDAR data," IEEE Transactions on Geoscience and Remote Sensing, vol. 41, no. 4, pp. 872–882, 2003.
[4] M. Unger, T. Pock, M. Grabner, A. Klaus, and H. Bischof, "A variational approach to semiautomatic generation of digital terrain models," in Advances in Visual Computing (International Symposium on Visual Computing), pp. 1119–1130, 2009.
[5] C. Beumier and M. Idrissa, "Digital terrain models derived from digital surface model uniform regions in urban areas," International Journal of Remote Sensing, vol. 37, no. 15, pp. 3477–3493, 2016.
[6] G. Sithole and G. Vosselman, "Experimental comparison of filter algorithms for bare-earth extraction from airborne laser scanning point clouds," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 59, no. 1, pp. 85–101, 2004.
[7] J. Höhle and M. Höhle, "Accuracy assessment of digital elevation models by means of robust statistical methods," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 64, no. 4, pp. 398–406, 2009.
[8] D. Mongus and B. Zalik, "Computationally efficient method for the generation of a digital terrain model from airborne LiDAR data using connected operators," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 1, pp. 340–351, 2014.
[9] A. H. Ozcan and C. Unsalan, "LiDAR data filtering and DTM generation using empirical mode decomposition," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 1, pp. 360–371, 2017.
[10] J. Munoz-Mari, L. Bruzzone, and G. Camps-Valls, "A support vector domain description approach to supervised classification of remote sensing images," IEEE Transactions on Geoscience and Remote Sensing, vol. 45, no. 8, pp. 2683–2692, 2007.
[11] G. Camps-Valls, D. Tuia, L. Gómez-Chova, S. Jiménez, and J. Malo, "Remote sensing image processing," Synthesis Lectures on Image, Video, and Multimedia Processing, vol. 12, pp. 1–194, 2012.
[12] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60, no. 1, pp. 259–268, 1992.
[13] C. Ozcan, B. Sen, and F. Nar, "Sparsity-driven despeckling for SAR images," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 1, pp. 115–119, 2016.
[14] F. Nar, A. Ozgur, and A. N. Saran, "Sparsity-driven change detection in multitemporal SAR images," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 7, pp. 1032–1036, 2016.