Article

Shuttle Radar Topography Mission Elevation Data Error and Its Relationship to Land Cover


Abstract

The Shuttle Radar Topography Mission (SRTM) has resulted in the construction of the first publicly available near-global high resolution digital elevation model (DEM). The utility of this DEM, as for any geospatial data set, is a function of its quality. This paper is concerned with the assessment of SRTM accuracy and its relationship to land cover. Two methods—one raster-based and one point-based—are compared to match "finished" three-arc-second SRTM data to high precision, high accuracy surveyed elevations, as well as a corresponding DEM from the USGS National Elevation Dataset (NED). Differences between the two methodologies were not found to be significant. Error for the study site is substantially less than the mission objective, but substantially more than that for the NED. Significant overestimation of actual elevations pervades the SRTM DEM, and the overestimation is significantly higher in forested areas. This systematic error has implications both for applications employing SRTM data and for research on elevation data error modeling.
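As a rough illustration of how raster-based and point-based comparisons can differ, the sketch below (in Python, with illustrative variable names; it is not the paper's code) samples a DEM at surveyed points either by taking the containing cell or by bilinear interpolation at the exact coordinates, then reports bias and RMSE.

```python
# Minimal sketch: comparing a DEM to surveyed points either by sampling the
# containing cell (raster-style) or by bilinear interpolation (point-style).
import numpy as np

def nearest_cell_value(dem, x0, y0, cell, x, y):
    """Value of the DEM cell containing point (x, y).
    dem[row, col]; x0, y0 = coordinates of the upper-left cell centre."""
    col = int(round((x - x0) / cell))
    row = int(round((y0 - y) / cell))
    return dem[row, col]

def bilinear_value(dem, x0, y0, cell, x, y):
    """Bilinear interpolation of the DEM at (x, y)."""
    fx = (x - x0) / cell
    fy = (y0 - y) / cell
    c0, r0 = int(np.floor(fx)), int(np.floor(fy))
    dx, dy = fx - c0, fy - r0
    return (dem[r0, c0] * (1 - dx) * (1 - dy) +
            dem[r0, c0 + 1] * dx * (1 - dy) +
            dem[r0 + 1, c0] * (1 - dx) * dy +
            dem[r0 + 1, c0 + 1] * dx * dy)

def error_stats(dem, x0, y0, cell, xs, ys, z_true, sampler):
    """Mean error (bias) and RMSE against surveyed elevations z_true."""
    diffs = np.array([sampler(dem, x0, y0, cell, x, y)
                      for x, y in zip(xs, ys)]) - z_true
    return diffs.mean(), np.sqrt((diffs ** 2).mean())
```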


... This DEM was produced from data collected during a US Space Shuttle mission in February 2000 that covered 80% of the global land surface [13]. It is the first global DEM with 30 m resolution, but these DEM data have large vertical errors in densely vegetated and urban areas [14][15][16]. To apply a geospatial inundation analysis, which is widely used to model SLR (e.g., [17][18][19][20]), while accounting for these errors, error propagation modelling must be used. ...
... The collected data are of substantial significance, especially in regions with little or no free terrain data at medium to high resolution. SRTM DEMs are not without their issues, of which accuracy is particularly important here [14]. SRTM 1 arc second data were downloaded from the USGS Earth Explorer, then clipped and mosaicked in ArcGIS to acquire a DEM for the SARR. ...
... In regional and global studies, vegetation cover fraction (VCF) has been used as a variable in global land process models for earth surface change and climate change assessment. This study also required the use of VCF as a variable since SRTM error is associated with vegetation cover [14,24–27]. VCF Landsat 5 TM Collection 1 products are available for download from the USGS Earth Explorer; four images, which were acquired on 19 February 2002, were used in this study (Path 165, 166, and Row 39). ...
Article
Full-text available
Global elevation datasets such as the Shuttle Radar Topography Mission (SRTM) digital elevation model (DEM) are the best available terrain data in many parts of the world. Consequently, SRTM is widely used for understanding the risk of coastal inundation due to climate change-induced sea level rise. However, SRTM elevations are prone to error, giving rise to uncertainty in the quality of the inundation projections. This study investigated the error propagation model for the Shatt al-Arab River region (SARR) to understand the impact of DEM error on an inundation model in this sensitive, low-lying coastal region. The analysis involved three stages. First, a multiple regression model, parameterized from the Mississippi River delta region, was used to generate an expected DEM error surface for the SARR. This surface was subtracted from the SRTM DEM for the SARR to adjust it. Second, residuals from this model were simulated for the SARR. Modelled residuals were subtracted from the adjusted SRTM to produce 50 DEM realizations capturing potential elevation variation. Third, the DEM realizations were each used in a geospatial “bathtub” inundation model to estimate flooding area in the region given 1 m of sea level rise. Across all realizations, the area predicted to flood covered about 50% of the entire region, while predicted flooding using the raw SRTM covered only about 28%, indicating substantial underprediction of the affected area when error was not accounted for. The approach demonstrated in this study can be applied in similar environments worldwide.
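The "bathtub" step described above can be illustrated with a minimal sketch; it assumes an ensemble of adjusted DEM arrays is already in hand and ignores hydrologic connectivity, which actual implementations may enforce. All names and parameters are illustrative rather than the authors' code.

```python
# Illustrative sketch of a bathtub inundation step over an ensemble of DEMs.
import numpy as np

def bathtub_flooded_fraction(dem, sea_level_rise=1.0, datum=0.0):
    """Fraction of valid cells at or below the raised sea level."""
    valid = ~np.isnan(dem)
    flooded = valid & (dem <= datum + sea_level_rise)
    return flooded.sum() / valid.sum()

def ensemble_flooding(dem_realizations, sea_level_rise=1.0):
    """Mean and spread of the flooded fraction across DEM realizations."""
    fractions = np.array([bathtub_flooded_fraction(d, sea_level_rise)
                          for d in dem_realizations])
    return fractions.mean(), fractions.std()
```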
... The importance of these data stems from the vast areas of the world they cover, especially regions with little or no free terrain data at medium to high resolution. The advantages of these data for scientific and research applications for such regions are substantial (Shortridge 2006). ...
... SRTM DEM error has been investigated in depth, and many studies have focused on error propagation modeling to enhance SRTM elevations in different land cover and topographic environments (Gamba, Dell Acqua, and Houshmand 2002;Hofton et al. 2006;Rodriguez, Morris, and Belz 2006;Shortridge 2006;Bhang, Schwartz, and Braun 2007;LaLonde, Shortridge, and Messina 2010;Kulp and Strauss 2016). Our study contributes to this literature and addresses the challenge of estimating future SLR when using error-prone elevation data: We develop an error model to reduce SRTM 1" error using associations with SRTM derivatives and vegetation cover fraction (VCF). ...
... Vegetation Data. Because SRTM error is positively associated with vegetation cover (Shortridge 2006), we used VCF as a secondary variable to correct SRTM error. VCF has been used in many global land process models for earth surface change and climate change assessment in regional and global studies (Barlage and Zeng 2003; Jiapaer, Chen, and Bao 2011; Baret et al. 2013; Zhang et al. 2013). ...
Article
Full-text available
Digital elevation data are essential to estimate coastal vulnerability to flooding due to sea-level rise. Shuttle Radar Topography Mission (SRTM) 1 arc-second global is considered the best free global digital elevation data set available. Inundation estimates from SRTM, however, are subject to uncertainty due to inaccuracies in the elevation data. Small systematic errors in low, flat areas can generate large errors in inundation models, and SRTM is subject to positive bias in the presence of vegetation canopy, such as along channels and within marshes. In this study, we conducted an error assessment and developed a statistical error model for SRTM to improve the quality of elevation data in the Mississippi River Delta (MRD) region. Vegetation cover, SRTM elevation, and slope were found to be closely associated with SRTM error for a random sample of 10,000 small sites across the MRD region, with an ordinary least squares regression model using these variables explaining over 80 percent of the variation in error. Residuals from this model were spatially autocorrelated, and a variogram model was readily fit to them. We conclude by speculating on the utility of application of this model, developed for the MRD region, to similar near-coastal riverine regions around the world.
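A hedged sketch of the kind of ordinary least squares error model the abstract describes: SRTM error regressed on vegetation cover fraction, SRTM elevation and slope at sampled sites. Variable names and the exact predictor set are illustrative, not the authors' code.

```python
# Sketch of an OLS error model: SRTM minus reference elevation regressed on
# vegetation cover fraction (vcf), SRTM elevation, and slope.
import numpy as np

def fit_error_model(error, vcf, elevation, slope):
    """Return OLS coefficients [intercept, b_vcf, b_elev, b_slope] and R^2."""
    X = np.column_stack([np.ones_like(vcf), vcf, elevation, slope])
    beta, _, _, _ = np.linalg.lstsq(X, error, rcond=None)
    pred = X @ beta
    ss_res = np.sum((error - pred) ** 2)
    ss_tot = np.sum((error - error.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot

def predict_error(beta, vcf, elevation, slope):
    """Expected SRTM error surface from the fitted coefficients."""
    return beta[0] + beta[1] * vcf + beta[2] * elevation + beta[3] * slope
```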
... Many studies have focused on error propagation modeling to enhance SRTM in different types of land cover (Tara et al., 2010; Shortridge, 2006; Hofton et al., 2006; Bhang et al., 2007; Rodriguez et al., 2006; Gamba et al., 2002). Our study creates correlated surfaces in near-coastal riverine regions to achieve its main aims, the first of which is to use an error propagation model to reduce ... key questions for developing globally applicable error models for inundation assessment: ...
... The advantages of these data for scientific and research applications in such regions are substantial. Accuracy is one important issue with SRTM DEMs (Shortridge, 2006). SRTM 1 arc second data were downloaded from Earth Explorer; three tiles were required to cover the study site. I subset and mosaicked these images to obtain the MRDR SRTM, which was then projected to UTM zone 16 N referenced to the WGS 1984 datum;
... Since SRTM error is positively associated with vegetation cover (Shortridge, 2006), I used vegetation cover fraction (VCF) as a secondary variable to correct the SRTM error. VCF has been used in many global land process models for earth surface change and climate change assessment in regional and global studies (Barlage and Zeng, 2003; Zhang et al., 2013; Jiapaer et al., 2011; Baret et al., 2013). ...
Thesis
Full-text available
There is a growing debate among scientists on how sea level rise (SLR) will impact coastal environments, particularly in countries where economic activities are sustained along these coasts. An important factor in this debate is how best to characterize coastal environmental impacts over time. This study investigates the measurement and modeling of SLR and its effects on near-coastal riverine regions. The study uses a variety of data sources, including satellite imagery from 1975 to 2017, digital elevation data and previous studies. This research focuses on two of these important regions: southern Iraq along the Shatt Al-Arab River (SAR) and the southern United States in Louisiana along the Mississippi River Delta (MRD). These sites are important for both their extensive low-lying land and for their significant coastal economic activities. The dissertation consists of six chapters. Chapter one introduces the topic. Chapter two compares and contrasts both regions and evaluates escalating SLR risk. Chapter three develops a coupled human and natural system (CHANS) perspective for SARR to reveal multiple sources of environmental degradation in this region. Half a century ago SARR was an important and productive region in Iraq that produced fruits like dates, as well as crops, vegetables, and fish. By 1975 the environment of this region began to deteriorate, and since then, it is well documented that SARR has suffered from human and natural problems. In this chapter, I use the CHANS perspective to identify the problems, and which systems (human or natural) are especially responsible for environmental degradation in SARR. I use several measures of ecological, economic, and social systems to outline the problems identified through the CHANS framework. SARR has experienced extreme weather changes from 1975 to 2017, resulting in lower precipitation (-17 mm) and humidity (-5.6%), higher temperatures (1.6 °C), and sea level rise, which are affecting the salinity of groundwater and Shatt Al Arab river water. At the same time, human systems in SARR experienced many problems, including eight years of war between Iraq and Iran, the first Gulf War, UN Security Council sanctions against Iraq, and the second Gulf War. I modeled and analyzed the region's land cover between 1975 and 2017 to understand how the environment has been affected, and found that, in light of these other factors, climate change is responsible for what has happened in this region. Chapter four constructs and applies an error propagation model to elevation data in the Mississippi River Delta region (MRDR). This modeling both reduces and accounts for the effects of digital elevation model (DEM) error on a bathtub inundation model used to predict the SLR risk in the region. Digital elevation data are essential to estimate coastal vulnerability to flooding due to sea level rise. The Shuttle Radar Topography Mission (SRTM) 1 Arc-Second Global DEM is considered the best free global digital elevation data available. However, inundation estimates from SRTM are subject to uncertainty due to inaccuracies in the elevation data. Small systematic errors in low, flat areas can generate large errors in inundation models, and SRTM is subject to positive bias in the presence of vegetation canopy, such as along channels and within marshes. In this study, I conduct an error assessment and develop statistical error modeling for SRTM to improve the quality of elevation data in these at-risk regions.
Chapter five applies the MRDR-based model from chapter four to enhance the SRTM 1 Arc-Second Global DEM data in SARR. As such, it is the first study to account for data uncertainty in the evaluation of SLR risk in this sensitive region. This study transfers an error propagation model from MRDR to the Shatt al-Arab river region to understand the impact of DEM error on an inundation model in this sensitive region. The error propagation model involves three stages. First, a multiple regression model, parameterized from MRDR, is used to generate an expected DEM error surface for SARR. This surface is subtracted from the SRTM DEM for SARR to adjust it. Second, residuals from this model are simulated for SARR: these are mean-zero and spatially autocorrelated, with a Gaussian covariance model matching that observed in MRDR, generated by convolution filtering of random noise. More than 50 realizations of error were simulated to make sure a stable result was realized. These realizations were subtracted from the adjusted SRTM to produce DEM realizations capturing potential variation. Third, the DEM realizations are each used in bathtub modeling to estimate flooding area in the region with 1 m of sea level rise. The distribution of flooding estimates shows the impact of DEM error on uncertainty in inundation likelihood, and on the magnitude of total flooding. Using the adjusted DEM realizations, 47 ± 2 percent of the region is predicted to flood, while using the raw SRTM DEM only 28% of the region is predicted to flood.
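Stage two above (mean-zero, spatially autocorrelated residuals generated by convolution filtering of random noise) can be sketched as follows; the kernel width and sill are placeholder parameters, not values from the dissertation.

```python
# Sketch: correlated, mean-zero residual fields from Gaussian-filtered noise.
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_residual_field(shape, range_cells=10.0, sill=1.0, rng=None):
    """One realization of a spatially autocorrelated, mean-zero error field."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, sigma=range_cells)  # induces autocorrelation
    field -= field.mean()                              # enforce mean zero
    field *= np.sqrt(sill) / field.std()               # rescale to target variance
    return field

# e.g. an ensemble of DEM realizations from an adjusted SRTM array:
# realizations = [adjusted_dem - simulate_residual_field(adjusted_dem.shape)
#                 for _ in range(50)]
```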
... Hofton et al. (2006) do not report directly on the spatial structure of SRTM error, but their maps indicate substantial positive spatial autocorrelation. Shortridge (2006) calculated a semivariogram of SRTM errors and found autocorrelation persisting to 800 m. In their global assessment, Rodriguez et al. (2006) identified both very long range and short range correlation in SRTM height errors. ...
... Other studies have confirmed this positive relationship between elevation error (e.g. biased upwards) and height of the canopy (Hofton et al. 2006, Shortridge 2006). Since dense vegetation cover effectively screens the surface, fine-scale topographic patterns may be obscured or missing in the SRTM DEM (Valeriano et al. 2006). ...
... Evergreen and deciduous forested categories were maintained as distinct classes because their canopy structure characteristics could generate different error patterns in the SRTM DEM. Since IFSAR does not penetrate the tree canopy completely, because its short wavelength is prone to scattering by branches and leaves (Walker et al. 2007), the presence or absence of leaves in forested environments could impact SRTM error characteristics (Carabajal and Harding 2006, Shortridge 2006, Zandbergen 2008. The mixed forest class (NLCD code 43), defined as one with deciduous and coniferous tree species, was combined with the shrub class (NLCD code 52) as a heterogeneous vegetation class. ...
Article
The Shuttle Radar Topography Mission (SRTM), the first relatively high spatial resolution near-global digital elevation dataset, possesses great utility for a wide array of environmental applications worldwide. This article concerns the accuracy of SRTM in low-relief areas with heterogeneous vegetation cover. Three questions were addressed about low-relief SRTM topographic representation: to what extent are errors spatially autocorrelated, and how should this influence sample design? Is spatial resolution or production method more important for explaining elevation differences? How dominant is the association of vegetation cover with SRTM elevation error? Two low-relief sites in Louisiana, USA, were analyzed to determine the nature and impact of SRTM error in such areas. Light detection and ranging (LiDAR) data were employed as reference, and SRTM elevations were contrasted with the US National Elevation Dataset (NED). Spatial autocorrelation of errors persisted for hundreds of meters in low-relief topography; production method was more critical than spatial resolution, and elevation error due to vegetation canopy effects could actually dominate the SRTM representation of the landscape. Indeed, low-lying, forested, riparian areas may be represented as substantially higher than surrounding agricultural areas, leading to an inverted terrain model.
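The spatial autocorrelation findings above rest on empirical semivariograms of elevation error; a minimal sketch of that computation, assuming errors sampled at known point coordinates, is given below (illustrative only, not the article's code).

```python
# Sketch: empirical semivariogram of DEM errors at sampled point locations.
import numpy as np

def empirical_semivariogram(coords, errors, lag_width=100.0, n_lags=10):
    """coords: (n, 2) array of x, y; errors: (n,) DEM minus reference."""
    d = np.sqrt(((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1))
    sq = 0.5 * (errors[:, None] - errors[None, :]) ** 2   # pairwise semivariance
    lags, gamma = [], []
    for k in range(n_lags):
        lo, hi = k * lag_width, (k + 1) * lag_width
        mask = (d > lo) & (d <= hi)
        if mask.any():
            lags.append(0.5 * (lo + hi))
            gamma.append(sq[mask].mean())
    return np.array(lags), np.array(gamma)
```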
... These comparisons can help to determine how the selection of one of these DEMs could affect application use. It is expected that different DEM data will have different levels of elevation accuracy for different land use and land cover classes, so that the type of landscape the DEM is to be used in can be pertinent (Shortridge, 2006;TaraLouise, 2008;Mukherjee et al., 2013). Previous studies have shown that the SRTM DEM overestimates elevation in locations of dense tree canopy (Carabajal and Harding, 2006;Shortridge, 2006). ...
... Therefore, we also looked at the relationship between the DEM data and land cover classes to ascertain the accuracy of the DEMs in different landscapes. ...
... SRTM and GDEM are in fact digital surface models (DSMs), which are influenced by natural and man-made features. Recent studies have demonstrated that relief (Carabajal and Harding 2006;Jacobsen 2010) as well as vegetation (Shortridge 2006;Miliaresis and Paraschou 2011;Chirico et al. 2012) affect the spatial variability of the vertical accuracy of these global DEMs. SRTM and GDEM may contain large areas of missing data (data voids) or other anomalies, which can hinder immediate use of these data in their applications. ...
... In other words, SRTM 4.1 overestimated elevations. For SRTM, overestimation of elevation is predictable since radar signal returns are affected by vegetation cover (Guth 2006; Shortridge 2006). Mean differences between the two GDEM versions and the reference DEM are of different signs. ...
Article
This paper evaluates the quality characteristics of existing versions of 3-arc-second SRTM and 1-arc-second GDEM over Anji County, Zhejiang, China using reference elevations from a high-quality 1:10 k topographic map. Results show that SRTM has higher accuracy (RMSE = 12.44 m) than GDEM (RMSE = 14.20 m for Version 1, 12.76 m for Version 2); however, unsatisfactory void filling and an overall 1/2 pixel shift exist in SRTM Version 4.1. Although spurious elevations over omitted water bodies still persist, GDEM Version 2 demonstrated significant improvement over Version 1. Accuracies of both SRTM and GDEM decrease on steeper slopes. Aspect also influences both the magnitude and the sign of errors. DEM accuracy in non-forested areas is considerably higher than that in forested areas. SRTM Version 4.1 and GDEM Version 2 possessed actual spatial resolutions of 90 m, although both failed to match their nominal accuracies. SRTM Version 4.1 could be the first choice, while ASTER GDEM Version 2 would be a good alternative for areas where extensive voids exist in SRTM.
... Coarse DEMs have been used for global or continentalscale analyses of sea-level rise (see Weiss and Overpeck 2003;Rowley and others 2007;CEGIS 2009;Li and others 2009), including the Global Land One-Kilometer Base Elevation (GLOBE) elevation data set (Hastings and Dunbar 1998), which provides 30 arc-second horizontal or spatial resolution, and the Shuttle Radar Topography Mission (SRTM) with 3 arc-second spatial resolution for much of the earth. Studies related to these specific data sets have demonstrated how these DEMs may over-or underestimate actual elevation values; SRTM, for example, has been found to overestimate elevation consistently with the error varying by land cover type (Shortridge 2006). For maps and geovisualizations at regional or local scales, higher spatial resolution data sets are more appropriate such as 10m or 30-m DEMs provided by the US Geological Survey (USGS). ...
... SRTM reliably predicted the smallest inundation areas for each level of inundation. These results are consistent with the findings of Shortridge (2006): generally higher elevation values in the SRTM data set are likely a result of first return signals influenced by forest vegetation and not true bare-ground elevation measurements. The coarse resolution GLOBE data illustrate the abrupt changes in inundation that result from larger raster cells, especially when compared to the smooth changes in inundation produced by the finer resolution NED data sets. ...
Article
Full-text available
Increased attention to global climate change in recent years has resulted in a wide array of maps and geovisualizations that forecast various scenarios. Since many consequences of climate change are inherently geographic in nature, effective cartographic representations that depict these risks are valuable for planning and mitigation purposes. In particular, sea-level rise resulting from climate change calls attention to the numerous representation issues that warrant consideration for hazard and risk mapping in general, including categorizing and representing risk, selecting an appropriate level of realism, and displaying potential impacts of a hazard on human populations as well as on the natural and built environments. Using examples of potential inundation from sea-level rise at global, regional, and local scales, the authors propose a conceptual framework of key cartographic considerations for maps, Web-based mashups, and geovisualizations that depict risk. The cartographic framework presented here may be extended to other risks of an ambiguous or fuzzy nature and may be used to organize key future research areas for hazard or risk mapping in general.
... Last but not least, a series of recent studies has shown a positive relationship between elevation error and height of the canopy (e.g. Carabajal and Harding, 2006; Hofton et al., 2006; Shortridge, 2006; Berry et al., 2007; Bhang et al., 2007) and one in particular, indicating how low-lying, riparian areas may be represented as substantially higher than the surrounding agricultural areas (leading to an inverted terrain model; LaLonde et al., 2010), highlights the need to assess fitness for use before deploying one or more of these datasets for a specific application or study area. The presence and propagation of error are taken up again in more detail in Section 5. Some, but not all, of the aforementioned problems may be addressed by the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER G-DEM) that was released in 2009. ...
... Hutchinson and Gallant (2000) have suggested a larger and more diverse list of simple metrics for measuring quality for DEMs constructed from surface-specific point elevation and contour-and stream-line data that incorporate some of the same ideas and there is a rapidly growing literature documenting the quality of the DEMs constructed from remote sensed sources (e.g. Carabajal and Harding, 2006;Hofton et al., 2006;Rodriguez et al., 2006;Shortridge, 2006;Berry et al., 2007;Bhang et al., 2007). ...
Article
This article examines how the methods and data sources used to generate DEMs and calculate land surface parameters have changed over the past 25 years. The primary goal is to describe the state-of-the-art for a typical digital terrain modeling workflow that starts with data capture, continues with data preprocessing and DEM generation, and concludes with the calculation of one or more primary and secondary land surface parameters. The article first describes some of the ways in which LiDAR and RADAR remote sensing technologies have transformed the sources and methods for capturing elevation data. It next discusses the need for and various methods that are currently used to preprocess DEMs along with some of the challenges that confront those who tackle these tasks. The bulk of the article describes some of the subtleties involved in calculating the primary land surface parameters that are derived directly from DEMs without additional inputs and the two sets of secondary land surface parameters that are commonly used to model solar radiation and the accompanying interactions between the land surface and the atmosphere on the one hand and water flow and related surface processes on the other. It concludes with a discussion of the various kinds of errors that are embedded in DEMs, how these may be propagated and carried forward in calculating various land surface parameters, and the consequences of this state-of-affairs for the modern terrain analyst.
... Bias appears to be smaller in grasslands and greater in forest. Since the mission was flown during the northern hemisphere winter, deciduous forests in leaf-off condition displayed less elevation bias than evergreen forest (Castel & Oettli, 2008;Shortridge, 2006;Weydahl et al. 2007). Topographic factors, namely slope and aspect, also play an important role in SRTM accuracy. ...
... benchmark data to find a national RMSE of 2.4 m, which is well below any SRTM estimate. High-resolution, high-accuracy DEMs have been compared with both NED and SRTM in several site-specific studies; NED appeared to be 2-3 times more accurate (e.g., Lalonde et al., 2010; Shortridge, 2006). We therefore have confidence that the observed differences, with their various structural and landscape associations, are ...
... Such accuracy estimates were verified by multiple other independent studies over smaller regions (Bhang et al., 2007;Carabajal & Harding, 2005). However, the performance varies spatially (Shortridge & Messina, 2011) and is dependent on the land cover type (Shortridge, 2006). NASADEM is the latest official version of the SRTM DEM; its previous version is the SRTM V3. ...
Article
Full-text available
This study evaluates global radar-derived digital elevation models (DEMs), namely the Shuttle Radar Topography Mission (SRTM), NASADEM and GLO-30 DEMs. We evaluate their accuracy over bare-earth terrain and characterize elevation biases induced by forests using global Lidar measurements from the Ice, Cloud, and Land Elevation Satellite (ICESat)'s Geoscience Laser Altimeter System (GLAS), the Global Ecosystem Dynamics Investigation (GEDI) and the ICESat-2 Advanced Topographic Laser Altimeter System (ATLAS) instruments collected on locally flat terrain. Our analysis is based on error statistics calculated for each 1° × 1° DEM tile, which are then summarized as global error percentiles, providing a regional characterization of DEM quality. We find NASADEM to be a significant improvement upon the SRTM V3. Over bare ground areas, the mean elevation bias and root mean square error (RMSE) improved from 0.68 and 2.50 m, respectively, to 0.00 and 1.5 m as compared to ICESat/GLAS. GLO-30 is more accurate, with bare ground elevation bias and RMSE below 0.05 and 0.55 m, respectively. Similar improvements were observed when compared to GEDI and ICESat-2 measurements. The DEM biases associated with the presence of vegetation vary linearly with canopy height, and most closely follow the 50th percentile of Lidar Relative Height (RH50). Other factors such as canopy density, radar frequency and Lidar technology also contribute to observed elevation biases. This global analysis highlights the potential of various technologies for mapping of Earth's topography, and the need for more advanced remote sensing observations that can resolve vegetation structure and sub-canopy ground elevation.
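The tile-wise error statistics described above can be sketched as follows, assuming arrays of Lidar-referenced errors with their longitudes and latitudes; the grouping and percentile summary are illustrative, not the authors' processing chain.

```python
# Sketch: per 1-degree-tile bias and RMSE, summarized as global percentiles.
import numpy as np
from collections import defaultdict

def tile_error_stats(lon, lat, error):
    """Mean bias and RMSE per 1 x 1 degree tile."""
    tiles = defaultdict(list)
    for x, y, e in zip(lon, lat, error):
        tiles[(int(np.floor(x)), int(np.floor(y)))].append(e)
    return {key: (np.mean(v), np.sqrt(np.mean(np.square(v))))
            for key, v in tiles.items()}

def global_percentiles(stats, q=(5, 50, 95)):
    """Percentiles of per-tile bias and RMSE across all tiles."""
    bias = np.array([b for b, _ in stats.values()])
    rmse = np.array([r for _, r in stats.values()])
    return np.percentile(bias, q), np.percentile(rmse, q)
```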
... Regardless of the method by which they were produced, the most important factors affecting the elevation accuracy of all raster surface models are the topographic structure of the site where the data were acquired and the land cover type (Shortridge, 2006; Wechsler and Kroll, 2006; Hebeler and Purves, 2009; Altunel, 2018; Gonzalez and Rizzoli, 2018). In this study, the elevation accuracies of topographic maps produced from stereo aerial photographs at different times (1993 and 2010) and with different techniques (analogue and digital) were compared across three land cover types (forest, partial forest and agriculture) and four raster surface models derived from the maps by different methods, and the resulting statistics are given in Table 2. ...
Article
Full-text available
As technology has advanced, topographical maps at many different scales have been produced and put into service in Türkiye. In this study, the elevation values of 1:25,000 scale topographic maps produced by analogue means in the 1992-1993 period were compared with the later, digitally produced 2009-2010 maps using precisely measured CORS-GPS ground control points over three different land cover types (agriculture, partial forest, and forest). A total of 615, 3,688 and 1,739 systematic ground control points were used inside the agriculture-, partial forest- and forest-designated study sites, respectively. Comparisons were made over four different surface models produced from the same topographic maps: purpose-cut topographic sheets (KS), entire topographic sheets (TP), and entire topographic sheets resampled to 10 m and 30 m (R10 and R30). The results showed that digitizing the map production means and techniques clearly improved the elevation accuracy of the 1:25,000 scale topographic maps. Differences in elevation accuracy between the analogue and digitally produced maps were distinct in the agriculture-designated site; however, they were not as easily identifiable in the partial-forest and forest-designated sites. Moreover, it was evident that applying a resampling algorithm to the raster surface models produced from these maps would improve their elevation accuracy.
... As is apparent from many studies of surface models (DEMs) produced by both active and passive sensors, the vertical accuracy of the end product is highly correlated with land cover type and topographic uncertainty within the target during image acquisition (Shortridge 2006; Wechsler and Kroll 2006; Hebeler and Purves 2009; Altunel 2018; Gonzalez and Rizzoli 2018). Although smaller in scale in terms of investment, coverage and know-how, aerial stereo image capture today is not entirely different from satellite-based capture. ...
Conference Paper
Full-text available
The vertical (elevation) accuracy of any topographic representation of the Earth, e.g. stereo surface models, topographic maps, DEMs, etc., is important if such data will be the basis of further projects or development plans. The main form of such data in Türkiye is the 1:25,000 scale quad map. The third generation of these maps was produced via digital stereo air-photo capture and photogrammetry, as opposed to the previous two analogue-based releases. At this long-adopted scale, land cover types, hydrological formations and surface features, down to house rooftops, can be depicted. Elevation is provided through contour lines drawn at 10 m intervals. These maps are the most frequently used topographic data type in forestry education as well as in professional practice. With the establishment of a country-wide active GNSS network, very high precision elevation verification has become available for a multitude of purposes. In this study, four dam reservoirs intensively surveyed using CORS-GPS were used to assess the vertical accuracy of the corresponding quad-map-based DEMs produced at different resolutions. RMSEs ranged from 5.49 m to 14.22 m when the entire quad sheets were used, and from 2.58 m to 8.95 m when the quads were purposely cut. Canopy closure clearly worsened the results.
... Most applications also require digital elevation data, which are usually provided by SRTM and ASTER GDEM. However, many studies have reported shortcomings regarding completeness, artifacts, elevation errors and co-registration problems in these data sets (Shortridge, 2006;Tighe and Chamberlain, 2009;Guth, 2010;Shortridge and Messina, 2011;Rexer and Hirt, 2014). To overcome this problem Robinson et al. (2014) proposed a quasi-global, voidfree, multiscale smooth DEM fused from ASTER and SRTM data at 90 m GSD. ...
Article
Full-text available
Very high-resolution (VHR) optical Earth observation (EO) satellites as well as low-altitude and easy-to-use unmanned aerial systems (UAS/drones) provide ever-improving data sources for the generation of detailed 3-dimensional (3D) data using digital photogrammetric methods with dense image matching. Today both data sources represent cost-effective alternatives to dedicated airborne sensors, especially for remote regions. The latest generation of EO satellites can collect VHR imagery up to 0.30 m ground sample distance (GSD) of even the most remote location from different viewing angles many times per year. Consequently, well-chosen scenes from growing image archives enable the generation of high-resolution digital elevation models (DEMs). Furthermore, low-cost and easy to use drones can be quickly deployed in remote regions to capture blocks of images of local areas. Dense point clouds derived from these methods provide an invaluable data source to fill the gap between globally available low-resolution DEMs and highly accurate terrestrial surveys. Here we investigate the use of archived VHR satellite imagery with approx. 0.5 m GSD as well as low-altitude drone-based imagery with average GSD of better than 0.03 m to generate high-quality DEMs using photogrammetric tools over Tristan da Cunha, a remote island in the South Atlantic Ocean which lies beyond the reach of current commercial manned airborne mapping platforms. This study explores the potential and limitations of combining these heterogeneous data sources to generate DEMs with improved accuracy and resolution. A cross-validation between low-altitude airborne and spaceborne data sets describes the fit between both optical data sets. No co-registration error, scale difference or distortions were detected, and a quantitative cloud-to-cloud comparison showed an average distance of 0.26 m between both point clouds. Both point clouds were merged applying a conventional georeferenced approach. The merged DEM preserves the rich detail from the drone-based survey and provides an accurate 3D representation of the entire study area. It provides the most detailed model of the island to date, suitable to support practical and scientific applications. This study demonstrates that combining archived VHR satellite and low-altitude drone-based imagery provides an inexpensive alternative for generating high-quality DEMs.
... While a global average error can be small, local errors can be large and also spatially autocorrelated (Holmes et al., 2000). The spatial structure of error is seldom reported for global DEMs (Fisher and Tate, 2006;Wechsler, 2007), with only several studies for SRTM (Hawker et al., 2018b;LaLonde et al., 2010;Rodriguez et al., 2006;Shortridge, 2006;Shortridge and Messina, 2011) and just one for MERIT (Hawker et al., 2018b). ...
Article
Full-text available
Freely available Global Digital Elevation Models (GDEMs) are essential for many scientific and humanitarian applications. Recently, TanDEM-X 90 has been released with a global coverage at 3 arc sec resolution. Its release is sure to generate keen interest as it provides an alternative to the widely used Shuttle Radar Topography Mission (SRTM) DEM, especially for flood risk management, since height errors can become particularly significant on low-slope floodplains. Here, we provide a first accuracy assessment of TanDEM-X 90 for selected floodplain sites and compare it to other popular global DEMs – the Shuttle Radar Topography Mission (SRTM) and the error-reduced version of SRTM called Multi-Error-Removed-Improved-Terrain (MERIT) DEM. We characterize vertical height errors by comparing against high resolution LiDAR DEMs for 32 floodplain locations in 6 continents. Results indicate that the average vertical accuracy of TanDEM-X 90 and MERIT are similar and are both a significant improvement on SRTM. We further our analysis by assessing vertical accuracy by landcover, with our results suggesting that TanDEM-X 90 is the most accurate global DEM in all landcover categories tested except short vegetation and tree-covered areas where MERIT is demonstrably more accurate. Lastly, we present the first characterization of the spatial error structure of any TanDEM-X DEM product, and find the spatial error structure is similar to MERIT, with MERIT generally having lower sill values and larger ranges than TanDEM-X 90 and SRTM. Our findings suggest that TanDEM-X 90 has the potential to become the benchmark global DEM in floodplains with careful removal of errors from vegetation, and at this stage should be used alongside MERIT in any flood risk application.
... On an important note, it has been shown through comparison with independently derived elevation data that the C-band interferometric synthetic aperture radar (InSAR) methodology of SRTM produces larger elevation errors in areas of greater topographic roughness as well as an increasing bias in elevation error with denser vegetation or increasing presence of built structures (Carabajal and Harding 2005, 2006; Shortridge 2006; Hofton et al. 2006). But, firstly, our research is interested in relatively large scale landforms (valley floors and valleys) and, secondly, the method implemented in this case study relies on relative elevation differences of raster cells and is less dependent upon absolute elevation accuracy. ...
Article
“What is a valley?” and “where is the valley?”. These questions may appear a little clumsy, since they are not often asked to us explicitly; everybody just ‘knows’ and ‘sees’ the answers. However, the matter is not so straightforward in geography and its sub-disciplines. Geomorphology – the science of study and characterisation of landforms – and geographic information science deal with formalisations of such terms. Formalisation enables GIS to handle such landform terms in automated, objective workflows while bringing – depending upon the landform term at hand – a degree of human perception into such systems. In the long run incorporation of such naïve geographic knowledge into, and the ability to handle vernacular terms with, GIS could facilitate interaction with users. In the short run characterisations of landforms are of practical interest in, for instance, descriptions of places or the contents of georeferenced images or documents. Compared to traditional, quantitative terrain parameters delineations or characterisations of landforms are less sensitive to errors or uncertainties in the underlying digital elevation model, more easily and readily understandable by human beings and they are essentially qualitative, which makes them more apt to capture the fuzziness of landform phenomena. Before developing landform characterisation methods this thesis posits an emphasis on in-depth investigation of the semantics of landform terms (something which is not done often) as a requirement. Through a thorough analysis of six geographic standards and additional geomorphology-related reference works and subsequent reconciliation of terms and conceptual hierarchies a tentative taxonomy of landforms is devised. This can be seen as an inventory of landform-related terminology and categories which future approaches at landform characterisation can be built upon. Regarding delineation and characterisation methods, a bias is found in the literature in that it almost exclusively focuses on topographic eminences such as mountains and hills. Thus in the applied parts, the thesis deals with topographic depressions such as valleys and related features. The derived landform taxonomy allows the development of semantically informed algorithms for the delineation of valley floors and the characterisation of valleyness in this thesis. The usefulness of the algorithms for delineating valley floors and for characterisation of valleyness is assessed independently. First, a case study compares the delineated valley floors to naïve geographic knowledge gained from a crowd-sourced online reference work, topographic maps and authorities in the region. The extent of the valley floors in the study area appears to share common features with the independent data. Further, the classes (peaks, ridges, passes, channels, plains and pits) of what is termed morphometric feature classification interact sensibly with the valley floor delineation. At the same time the morphometric feature classification in itself seems incapable of producing an equivalent delineation. Subsequently, the valley floor delineation algorithm is employed in a geomorphologic case study to derive low-gradient sediment storage areas in valleys in the European Alps. Comparison with independent empirical data suggests a very good agreement of the automatically derived extent of sediment storage areas (R2 = 0.98, n = 13). Making use of a relationship gained from literature, the volumes of the sediment bodies are assessed. 
Remarkably, the size-frequency relationships of both sediment storage areas and volumes follow power-law distributions over several orders of magnitude with large valleys storing a disproportionately high volume of alpine sediment. A third case study aims at characterising valleys. To this end three fuzzy valleyness measures are developed which are based to a varying degree on the above valley floor delineation. Since the valleyness measures are developed to mimic the human perception and appreciation of the landform in question, their validity is, consequentially, assessed in a human-subject experiment involving a questionnaire survey. In the survey participants are confronted with georeferenced images and assess the valleyness of the photographer’s location. Analyses show that the human assessment of valleyness is related to the algorithmic measures and the correlations yield statistically significant results (R2 = 0.35–0.37, n = 100). Accounting for a suspected confounding factor in some of the images and weighing the stimuli according to the associated uncertainty in the human judgment process further increase the goodness of fit of the relations (R2 = 0.50–0.55, n = 83). The contributions of this thesis are diverse. Practically, the thesis offers a tentative landform taxonomy which can inform future research efforts and algorithm development. Further, the thesis suggests methods to delineate valley floors and low-gradient sediment storage areas as well as methods to fuzzily characterise valleys, and investigates their suitability in comparing them to independent data. On a theoretical level, the three case studies demonstrate ways how to better incorporate semantic knowledge into geomorphometric algorithms. Additionally, a research methodology for ‘human-centred’, semantically rich characterisations of landforms is suggested, which importantly incorporates the assessment of an algorithm’s results by contrasting them to the subjective judgment of a large group of human subjects – which, to the author’s best knowledge, was done in this thesis for the first time.
... An example of random error is speckle noise (multiplicative noise in a granular pattern) (Rodriguez et al., 2006;Farr et al., 2007). Sources of systematic errors and blunders relevant to flood modeling derive from interpolation techniques (Desmet, 1997;Wise, 2007;Bater and Coops, 2009;Guo et al., 2010), erroneous sink filling (Burrough and McDonnell, 1998), hydrological correction (Callow et al., 2007;Woodrow et al., 2016), deficient spatial sampling causing urban features not to be resolved (Gamba et al., 2002;Farr et al., 2007), slope and aspect (foreslope vs backslope) (Toutin, 2002;Falorni et al., 2005;Shortridge and Messina, 2011;Szabó et al., 2015), striping caused by instrument setup (Walker et al., 2007;Tarakegn and Sayama, 2013) and vegetation (Carabajal and Harding, 2006;Hofton et al., 2006;Shortridge, 2006;Weydahl et al., 2007;LaLonde et al., 2010). ...
Article
Full-text available
Open-access global Digital Elevation Models (DEM) have been crucial in enabling flood studies in data-sparse areas. Poor resolution (>30 m), significant vertical errors and the fact that these DEMs are over a decade old continue to hamper our ability to accurately estimate flood hazard. The limited availability of high-accuracy DEMs dictate that dated open-access global DEMs are still used extensively in flood models, particularly in data-sparse areas. Nevertheless, high-accuracy DEMs have been found to give better flood estimations, and thus can be considered a ‘must-have’ for any flood model. A high-accuracy open-access global DEM is not imminent, meaning that editing or stochastic simulation of existing DEM data will remain the primary means of improving flood simulation. This article provides an overview of errors in some of the most widely used DEM data sets, along with the current advances in reducing them via the creation of new DEMs, editing DEMs and stochastic simulation of DEMs. We focus on a geostatistical approach to stochastically simulate floodplain DEMs from several open-access global DEMs based on the spatial error structure. This DEM simulation approach enables an ensemble of plausible DEMs to be created, thus avoiding the spurious precision of using a single DEM and enabling the generation of probabilistic flood maps. Despite this encouraging step, an imprecise and outdated global DEM is still being used to simulate elevation. To fundamentally improve flood estimations, particularly in rapidly changing developing regions, a high-accuracy open-access global DEM is urgently needed, which in turn can be used in DEM simulation.
... While high-quality lidar-derived digital elevation models (DEMs) are freely available in a small number of countries, such as the United States, most non-US and global flood exposure analyses depend on lower-accuracy DEMs, such as NASA's Shuttle Radar Topography Mission (SRTM) (Hallegatte et al., 2013;Hinkel et al., 2014;McGranahan et al., 2007;Neumann et al., 2015). However, SRTM is known to contain large errors with a positive bias (Becek, 2014;Shortridge, 2006;Tighe and Chamberlain, 2009), in part due to vegetation (LaLonde et al., 2010;Shortridge and Messina, 2011) and urban development (Gamba et al., 2002). These errors cause SRTM data to systematically underpredict population exposure to coastal flooding by as much as 60%, depending on the water height (Kulp and Strauss, 2016). ...
Article
Positive vertical bias in elevation data derived from NASA's Shuttle Radar Topography Mission (SRTM) is known to cause substantial underestimation of coastal flood risks and exposure. Previous attempts to correct SRTM elevations have used regression to predict vertical error from a small number of auxiliary data products, but these efforts have been focused on reducing error introduced solely by vegetative land cover. Here, we employ a multilayer perceptron artificial neural network to perform a 23-dimensional vertical error regression analysis, where in addition to vegetation cover indices, we use variables including neighborhood elevation values, population density, land slope, and local SRTM deviations from ICESat altitude observations. Using lidar data as ground truth, we train the neural network on samples of US data from 1–20 m of elevation according to SRTM, and assess outputs with extensive testing sets in the US and Australia. Our adjustment system reduces mean vertical bias in the coastal US from 3.67 m to less than 0.01 m, and in Australia from 2.49 m to 0.11 m. RMSE is cut by roughly one-half at both locations, from 5.36 m to 2.39 m in the US, and from 4.15 m to 2.46 m in Australia. Using ICESat data as a reference, we estimate that global bias falls from 1.88 m to −0.29 m, and RMSE from 4.28 m to 3.08 m. The methods presented here are flexible and effective, and can be applied effectively to land cover of all types, including dense urban development. The resulting enhanced global coastal DEM (CoastalDEM) promises to greatly improve the accuracy of sea level rise and coastal flood analyses worldwide.
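A hedged sketch of the general idea (not CoastalDEM itself): a multilayer perceptron regressor trained to predict SRTM vertical error from auxiliary predictors, whose prediction is then subtracted from the raw elevation. The feature set and network size here are placeholders.

```python
# Sketch: MLP regression of SRTM vertical error, then error removal.
from sklearn.neural_network import MLPRegressor

def train_error_corrector(features, srtm_error):
    """features: (n_samples, n_predictors); srtm_error: SRTM minus lidar."""
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0)
    model.fit(features, srtm_error)
    return model

def corrected_elevation(model, srtm_z, features):
    """Subtract the predicted error from the raw SRTM elevation."""
    return srtm_z - model.predict(features)
```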
... SRTM data overestimate the elevations. For SRTM, overestimation of elevation is expected since radar signal returns are affected by vegetation cover (Guth, 2006; Shortridge, 2006). The spatial resolution of the data may also contribute. ...
Article
Full-text available
Detection and delineation of Water Body Area (WBA), particularly over inaccessible hilly regions, is not always possible in view of time, resource and cost issues. An automated procedure for detection and delineation of water bodies in a hilly region was performed using satellite-derived DEMs. CartoDEM, SRTM and ASTER GDEM data with 30, 90 and 30 m resolutions, respectively, were used to generate Elevation Point Features (EPF) in a GIS platform. A total of 7,194,906 EPFs were generated using these three DEMs. Contour and slope maps were also prepared to eliminate outlier EPFs (non-water bodies) using flattened-surface logic. Flattened areas on the DEMs, connected contours at the edges of water bodies and areas with slopes of 0° to 0.5° were considered WBA in the study region (2311 km2) of the Western Ghats (India). Converting the DEMs from nearest neighbor to cubic convolution resampling was found useful for detecting water body boundaries more precisely. These results were validated against Landsat-8 satellite images and topographic maps (Survey of India). About 3.09% of the area from CartoDEM, 2.22% from ASTER GDEM and 4.38% from SRTM DEM was estimated as WBA. CartoDEM data can be recommended for precise detection of smaller water bodies in hilly regions. The methodology formulated in this study could be used as a rapid assessment tool for detecting water bodies, especially in inaccessible regions, for better water resources management.
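The flat-surface logic described above can be illustrated with a minimal slope-thresholding sketch; the connectivity and contour checks used in the paper are omitted, and the cell size and threshold are illustrative.

```python
# Sketch: flag near-flat DEM cells (slope 0-0.5 degrees) as candidate water.
import numpy as np

def candidate_water_mask(dem, cell_size, max_slope_deg=0.5):
    """Boolean mask of near-flat cells derived from a DEM array."""
    dzdy, dzdx = np.gradient(dem, cell_size)             # elevation gradients
    slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    return slope_deg <= max_slope_deg

def water_area_fraction(dem, cell_size, max_slope_deg=0.5):
    """Fraction of cells flagged as candidate water body area."""
    mask = candidate_water_mask(dem, cell_size, max_slope_deg)
    return mask.sum() / mask.size
```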
... However, these studies have not taken into account the vertical or height accuracy of the SRTM data, and consequently, the uncertainties associated with the computed geomorphic indices were unknown. Moreover, the vertical accuracy of the SRTM data is also known to be adversely affected in densely vegetated regions (Shortridge, 2006), higher elevation areas (Karwel and Ewiak, 2008; Mukul et al., 2015, 2016), and in regions of data voids in which interpolation algorithms (Jarvis et al., 2008) are used to fill in data. All these are likely to lead to large uncertainties in SRTM heights and, consequently, in geomorphic indices computed using these heights. ...
Article
Quantitative tectonic geomorphology has emerged as a powerful discipline for studying the evolution of topography, landscapes, and neotectonics using geomorphic indices computed from digital elevation data such as the Shuttle Radar Topography Mission (SRTM) data. We computed SRTM-based geomorphic indices to study neotectonics in the Relli River basin in the Darjiling Himalaya. We also used Real Time Kinematic Global Positioning System (RTK-GPS) independent checkpoints to assess the quality of the SRTM data used to compute the geomorphic indices along with their uncertainties. Our analysis revealed that although the SRTM C-Band 90-m resolution (C90) digital elevation data have been used extensively for geomorphic studies, the 30-m resolution (C30) data were significantly more accurate. Moreover, geomorphic indices computed using SRTM C30 and C90 elevations in the Relli basin indicate that normalized, nondimensional indices such as the relief ratio (Rh), hypsometric integral (HI), basin elongation (Re), and valley floor width-to-height ratio (Vf) are statistically indistinguishable, with uncertainty (1σ) at least an order of magnitude below the index value. The geomorphic indices in the Relli basin reveal neotectonic activity related to the Munsiari thrust (MT) and intraformational faults in its footwall in the Lesser Himalayan rocks, and also indicate that the basin is at an early mature stage close to equilibrium between tectonic and erosional processes. However, analysis of the uncertainties associated with the indices suggests that the normalized or nondimensional geomorphic indices have the lowest uncertainties and that neotectonics in the Relli basin may only be confined to reactivation of the MT. The reactivation of the MT by out-of-sequence neotectonics implies the possibility of large earthquake events in the Darjiling Himalaya and significant seismic and landslide hazard for populations in large towns located on the MT. Our new approach of looking at geomorphic indices and their uncertainties delivers a novel perspective for improved understanding of out-of-sequence neotectonics in river basins that may be applied more broadly across the Himalaya and elsewhere.
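The indices named in this abstract have standard textbook forms; the functions below give those common definitions as a sketch (the authors' exact measurement conventions may differ).

```python
# Common textbook forms of the geomorphic indices named above; the authors'
# exact definitions and input measurements may differ.
import math

def relief_ratio(h_max, h_min, basin_length):
    """Rh = basin relief / basin length (same length units)."""
    return (h_max - h_min) / basin_length

def hypsometric_integral(h_mean, h_min, h_max):
    """HI = (mean elevation - min) / (max - min)."""
    return (h_mean - h_min) / (h_max - h_min)

def basin_elongation(basin_area, basin_length):
    """Re = diameter of a circle with the basin's area / basin length."""
    return 2.0 * math.sqrt(basin_area / math.pi) / basin_length

def valley_floor_ratio(vf_width, e_left_divide, e_right_divide, e_floor):
    """Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc))."""
    return 2.0 * vf_width / ((e_left_divide - e_floor) + (e_right_divide - e_floor))

# Example with made-up numbers (metres and square metres).
print(relief_ratio(2200.0, 300.0, 25000.0))
print(hypsometric_integral(900.0, 300.0, 2200.0))
print(basin_elongation(1.8e8, 25000.0))
print(valley_floor_ratio(120.0, 450.0, 430.0, 320.0))
```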
... Recent studies have investigated the absolute elevation error in SRTM (Shortridge, 2006; Tighe and Chamberlain, 2009; Becek, 2014), including the impacts of vegetation (LaLonde et al., 2010; Shortridge and Messina, 2011) and urban development (Gamba et al., 2002) on error. As most coastal exposure analysis is performed within the first few vertical meters above high tide lines, estimates are highly sensitive to small errors and differences in land elevation. ...
Article
Full-text available
Elevation data based on NASA's Shuttle Radar Topography Mission (SRTM) have been widely used to evaluate threats from global sea level rise, storm surge, and coastal floods. However, SRTM data are known to include large vertical errors in densely urban or densely vegetated areas. The errors may propagate to derived land and population exposure assessments. We compare assessments based on SRTM data against references employing high-accuracy bare-earth elevation data generated from lidar data available for coastal areas of the United States. We find that both 1-arcsecond and 3-arcsecond horizontal resolution SRTM data systematically underestimate exposure across all assessed spatial scales and up to at least 10 m above the high tide line. At 3 m, 1-arcsecond SRTM underestimates U.S. population exposure by more than 60%, and under-predicts population exposure in 90% of coastal states, 87% of counties, and 83% of municipalities. These fractions increase with elevation, but error medians and variability fall to lower levels, with national exposure underestimated by just 24% at 10 m. Results using 3-arcsecond SRTM are extremely similar. Coastal analyses based on SRTM data thus appear to greatly underestimate sea level and flood threats, especially at lower elevations. However, SRTM-based estimates may usefully be regarded as providing lower bounds to actual threats. We additionally assess the performance of NOAA's Global Land 1-km Base Elevation Project (GLOBE), another publicly-available global DEM, but do not reach any definitive conclusion because of the spatial heterogeneity in its quality.
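The core comparison described here, population below an elevation threshold according to SRTM versus a lidar reference, can be sketched as follows; the arrays, bias, and thresholds are entirely synthetic.

```python
# Sketch of the comparison described above: population below a threshold
# elevation according to SRTM versus a lidar reference (all data synthetic).
import numpy as np

rng = np.random.default_rng(2)
lidar = rng.uniform(0.0, 12.0, size=(500, 500))         # "true" elevations (m)
srtm = lidar + rng.normal(2.0, 1.5, size=lidar.shape)   # positively biased SRTM
population = rng.poisson(5.0, size=lidar.shape)          # people per cell

def exposed_population(dem, pop, threshold_m):
    """Total population on cells at or below the threshold elevation."""
    return pop[dem <= threshold_m].sum()

for threshold in (3.0, 10.0):
    ref = exposed_population(lidar, population, threshold)
    est = exposed_population(srtm, population, threshold)
    under = 100.0 * (ref - est) / ref
    print(f"{threshold:.0f} m threshold: SRTM underestimates exposure by {under:.1f}%")
```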
... There are many studies that have identified land-cover issues with SRTM data (Shortridge, 2006; Castel and Oettli, 2008; Miliaresis, 2008; LaLonde et al., 2010), but most of them take a local area as a case study. Current systematic studies of SRTM data quality over large areas have primarily focused on the Amazon basin, Australia, and the United States, and the SRTM data error exhibits different characteristics due to variations in the topography and land-cover in these areas. ...
Article
In this paper, the distribution of 3-arc-second elevation error in data from the Shuttle Radar Topography Mission (SRTM) over the whole of China and its associations with topographic and land-cover factors were systematically evaluated. Landscape features extracted from different datasets at more than 500,000 sites were used to determine the pattern of variation in the errors by single-factor analysis. The results showed that the topographic attributes derived from SRTM data could adequately represent the terrain of China. However, there were extended and observable areas with abnormalities in a small proportion of the data. Slope was the dominant factor affecting elevation error compared with other landscape features (aspect, vegetation, etc.). The mean errors in glaciers, deserts, and wetlands were −1.05 m, −2.03 m, and −2.43 m, respectively, and 1.05 m in built-up areas. In general, the elevation errors in the SRTM data formed a complex pattern of variation across China.
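The single-factor analysis referred to above amounts to summarizing elevation error within the classes of one landscape factor at a time. A minimal sketch with hypothetical column names and synthetic data:

```python
# Single-factor summary of elevation error by land-cover class; column names
# are hypothetical and the sample data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
classes = rng.choice(["glacier", "desert", "wetland", "built-up", "forest"], size=10000)
true_bias = {"glacier": -1.0, "desert": -2.0, "wetland": -2.4, "built-up": 1.0, "forest": 3.0}
error = np.array([true_bias[c] for c in classes]) + rng.normal(scale=2.0, size=classes.size)

sites = pd.DataFrame({"landcover": classes, "error_m": error})

# Mean, standard deviation, and count of elevation error per land-cover class.
summary = sites.groupby("landcover")["error_m"].agg(["mean", "std", "count"])
print(summary.round(2))
```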
... Carrara et al. (1997) suggested simple criteria to evaluate DEM quality when the DEM is constructed from contours: (1) the DEM should have the same values as the contours close to the contour lines; (2) the DEM values must be in the range given by the bounding contour lines; (3) the DEM values should vary almost linearly between the values of the bounding contour lines; (4) the DEM patterns must reflect realistic shapes in flat areas; and (5) artifacts must be limited to a small proportion of the data set. Hutchinson and Gallant (2000) have suggested a larger and more diverse list of simple metrics for measuring quality for DEMs constructed from surface-specific point elevation and contour- and stream-line data that incorporate some of the same ideas, and a rapidly growing literature is documenting the quality of DEMs constructed from remotely sensed sources (e.g., Carabajal and Harding, 2006; Hofton et al., 2006; Rodriguez et al., 2006; Shortridge, 2006; Berry et al., 2007; Bhang et al., 2007). ...
Chapter
The study of surface processes and landforms requires quantitative characterization of the topography. New theoretical/conceptual and practical advances in understanding and mapping various aspects of geomorphological systems have emerged from new geospatial data and analysis of the topography. This chapter describes geomorphometry, or the science of quantitative land-surface characterization, and how it can be used to represent and sample the land surface, generate digital elevation models (DEMs), correct errors and artifacts from surface models, compute land-surface parameters and objects, and use various forms of quantitative information in different application domains to address or solve problems.
... They were gathered during an 11-day mission in February 2000 carried out by the Space Shuttle Endeavour (STS-99), which was launched into an orbit at an altitude of 233 km. The SRTM used the radar interferometry technique IFSAR (Interferometric Synthetic Aperture Radar) (Bourgine and Baghdadi, 2005; Jordan et al., 2005; Mantelli et al., 2009; NASA Science Missions, 2012; Rabus et al., 2003; Shortridge, 2006). These altitudinal data were provided with a horizontal precision of 20 m (90% circular error confidence) on a 90 m × 90 m grid, with an RMSE of about ±5 to 20 m (Miliaresis and Paraschou, 2005). ...
Article
Full-text available
Owing to its geographical position in the western Mediterranean domain, Tunisia has faced, since the mid-Cretaceous (Aptian/Albian time period), the inversion of the Tethys caused by the northward motion of the African plate toward Eurasia. The coastal Jeffara is part of the southern zone of deformation that records the eastward migration of Tunisia toward the Mediterranean Sea. Following Perthuisot (Cartes géologiques au 1/50.000 et notices explicatives des feuilles de Houmet Essouk, Midoun, Jorf, Sidi Chamakh, 1985) and others, we focus herein on the neotectonics of the coastal Jeffara (southern Tunisia) and its engineering implications. Based on the results of previous studies and new evidence developed herein, we propose a new structural and geodynamic model of the coastal Jeffara, influenced by the continuous post-Lower Cretaceous northward migration of the northern African plate toward the Eurasian plate. We studied the digital elevation model (derived from SRTM), which was checked against field surveys and numerous 2D seismic profiles at depth, both onshore and offshore. All data were then integrated within a GIS geodatabase, which showed the coastal Jeffara to be part of a simple N-S pull-apart system within a NW-SE right-lateral transtensive major fault zone (Medenine Fault zone). Our structural, geological, and geomorphological analyses prove the presence of NNW-SSE right-lateral en-echelon tension gashes, offshore NW-SE aligned salt diapirs, numerous fold offsets, en-echelon folds, and so on, associated with this major right-lateral NW-SE transtensive coastal Jeffara fault zone, which affects the Holocene and Villafranchian deposits. This evidence confirms that the active NW-SE Jeffara faults correspond to the tectonic accident, located south of the Tunisian extrusion, that has been active since the mid-Cretaceous as the southern branch of the eastward extrusion of the Sahel block toward the free Mediterranean Sea boundary. This geodynamic movement therefore explains the presence, offshore, of small elongated NW-SE, N-S, and NE-SW transtensive basins and grabens, which are of interest for petroleum exploration.
... Rodriguez et al. (2006) thoroughly documented the extensive accuracy testing performed for SRTM, which showed that for all continents the absolute vertical accuracy specification was met and exceeded by a significant margin (the values ranged from 5.6 to 9 m LE90). Other investigators have tested SRTM and reported accuracy results showing that the data generally meet vertical accuracy specifications (Gorokhovich and Voustianiouk, 2006; Shortridge, 2006; Shortridge and Messina, 2011), and it was consistently noted that errors are strongly correlated with relief, slope, some aspect conditions, and presence of forest cover. An accuracy assessment of SRTM against more than 13,000 high-accuracy geodetic control points in the conterminous United States and comparison with the USGS National Elevation Dataset (Gesch, 2007) exhibited similar findings that error increases as overall terrain slope increases (see Figure 4.1). ...
Chapter
Full-text available
Elevation, as a measurement of topography (one of Earth’s most fundamental geophysical properties), is often a key variable in earth science studies. Because of its importance in the earth sciences, topography has been widely represented by regularly spaced measurements across the land surface in the form of digital elevation models (DEMs). Elevation data are collected with a number of methods, including surveys conducted at ground level, as well as with remote sensing techniques from both airborne and spaceborne platforms. Remote sensing techniques used to collect topographic information include stereo-optical imagery, interferometric synthetic aperture radar (InSAR), radar altimetry, and laser altimetry (or lidar). Elevation models derived from all of these approaches are often used together in developing global DEMs for earth science studies, and the important differences (and similarities) among elevation measurements from the differing remote sensing methods must be well understood. Satellite remote sensing is an ideal method for collecting data suitable for development of global elevation models at a range of resolutions over broad areas. Spaceborne systems have the advantage of acquiring consistent quality data over the globe. Future prospects are excellent for better understanding of existing data and for collection of new data suitable for improving global DEMs.
... The true resolution of 1″ SRTM DEMs was found to be no better than that of 2″ SRTM DEMs. The quality of SRTM data was also investigated by Shortridge (2006), with sub-meter accuracy elevation postings serving as ground truth and land cover data as covariates explaining spatial variation in accuracy. Error for the study site was substantially less than the mission objective, but substantially more than that for the NED DEM used in the test. ...
Article
Issues of accuracy, uncertainty, and spatial data quality have been on the top of most GIScience research agendas around the world from the late 1980s. Ever since then, growing research efforts have been directed toward uncertainty characterization in spatial information, analysis, and applications, aiming for better understanding of spatial uncertainty and thus improved methods and techniques for assessing and managing data quality. Impressive progress has been made in various issues concerning data quality. In addition, growing research on extensions to the conventional norms of data quality, such as the quality aspects of geospatial information services, has been observed. Chinese researchers have contributed to this great cause by keeping abreast of the developments abroad and striving for their own innovative work. This paper reviews the past research on data quality-related issues and provides a perspective on future developments. These will be seen not only in continued research on theoretical and technical issues concerning data quality, but also in developments of tools for quality assessment and decision-making under uncertainty through geospatial information processing and applications.
... SRTM and ASTER DEMs differ in their production techniques in that SRTM is an InSAR DEM, while ASTER is generated from the 3N (nadir-viewing) and 3B (backward-viewing) bands by digital photogrammetry. Overestimation by radar DEMs is predictable, since radar signal returns are affected by vegetation cover (Guth, 2006; Shortridge, 2006). Rodriguez et al. (2005) and Nelson et al. (2009) suggested that SRTM data of mountainous areas are susceptible to problems due to foreshortening and shadowing. ...
Article
Full-text available
The paper evaluates the sensitivity of various spaceborne digital elevation models (DEMs), viz., Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Shuttle Radar Topography Mission (SRTM), and Global Multi-resolution Terrain Elevation Data 2010 (GMTED), in comparison with the DEM (TOPO) derived from contour data at a 20 m interval from Survey of India topographic sheets at 1:50,000 scale. Several topographic attributes, such as elevation (above mean sea level), relative relief, slope, aspect, curvature, LS factor, terrain ruggedness index (TRI), topographic wetness index (TWI), hypsometric integral (Ihyp), and drainage network attributes (stream number and stream length) of two tropical mountain river basins, viz., the Muthirapuzha River Basin (MRB) and the Pambar River Basin (PRB), are compared to evaluate the variations. Though the basins are comparable in extent, they differ in terrain characteristics and climate. The results suggest that ASTER and SRTM provide equally reliable representation of the topography portrayed by TOPO, and the topographic attributes extracted from the spaceborne DEMs are in agreement with those derived from TOPO. Despite the coarser resolution, SRTM shows relatively higher vertical accuracy (RMSE = 23 and 20 m respectively in MRB and PRB) compared to ASTER (RMSE = 33 and 24 m) and GMTED (RMSE = 59 and 48 m). Vertical accuracy of all the spaceborne DEMs is influenced by the relief of the terrain as well as vegetation. Further, GMTED shows significant deviation for most of the attributes, indicating its unsuitability for mountain-river-basin-scale studies.
... The SRTM elevation data have also been validated using results from the earlier Shuttle Laser Altimeter-02 (SLA-02) collected in 1997 (Sun et al. 2003). Other sources of elevation data have also been used to validate the SRTM data, including airborne laser altimetry, ground control points, and traditional cartographic DEMs (Bourgine and Baghdadi 2005; Brown et al. 2005; Falorni et al. 2005; Gorokhovich and Voustianiouk 2006; Shortridge 2006). Results generally confirm the findings from JPL: SRTM data have a vertical accuracy that exceeds the original data specifications (Fig. 5: absolute vertical error of SRTM data by continent). ...
Article
The Shuttle Radar Topography Mission (SRTM) was flown in February 2000 and collected the first ever high-resolution near-global digital elevation data. The final SRTM data have become widely available at 1 arc-second resolution for the United States and 3 arc-second resolution for other areas. This article reviews the background of the SRTM mission, the data quality characteristics of the SRTM elevation data, and the many applications of SRTM elevation data that have emerged in recent years, including forest ecology, volcanology, glaciology, geomorphology, and hydrology. SRTM data have been particularly useful for areas where only limited topographic data were previously available, but results from SRTM data also compare reasonably well with those obtained from other high-resolution digital elevation models.
Article
Full-text available
Digital terrain models (DTMs) are digital elevation models (DEMs) that represent the bare ground surface. They are created from multiple sources, including satellite remote sensing, aerial photography, and ground-based surveys, and are often combined with other data sources to create highly detailed models. As the demand for accurate and detailed information about the Earth's surface continues to grow, DTMs have become an increasingly important tool for researchers in different fields. This study aims to create a DTM with a spatial resolution of 0.50 m for São Caetano do Sul, São Paulo, Brazil, integrated with a topobathymetric map of three water courses running along the borders of the study area. For the conventional DTM generation, a WV-2 stereo pair was used. A total of 55 ground control points (GCPs) were collected using the GNSS-RTK method, with 60% used for model building and 40% for validation. The topobathymetric survey was accomplished using a GNSS-RTK device placed along the analyzed open streams. For validation purposes, we used bias and MAE metrics. Overall, the methodology presented in this article provides a useful approach for generating high-resolution DTMs that can be used in a range of applications, especially in urban hydrodynamic studies.
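The two validation metrics named in this abstract, bias and MAE, take their usual forms: mean error and mean absolute error between modelled heights and check heights. A short sketch with hypothetical values:

```python
# The two validation metrics named above, in their usual form: bias (mean error)
# and MAE (mean absolute error) between modelled DTM heights and check heights.
import numpy as np

def bias(modelled, observed):
    return float(np.mean(np.asarray(modelled) - np.asarray(observed)))

def mae(modelled, observed):
    return float(np.mean(np.abs(np.asarray(modelled) - np.asarray(observed))))

dtm_heights = [761.42, 758.10, 760.05, 763.77]   # hypothetical DTM samples (m)
gcp_heights = [761.20, 758.35, 759.90, 763.50]   # hypothetical check points (m)
print("bias:", round(bias(dtm_heights, gcp_heights), 3), "m")
print("MAE: ", round(mae(dtm_heights, gcp_heights), 3), "m")
```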
Article
The Shuttle Radar Topography Mission has long been used as a source of topographic information for flood hazard models, especially in data-sparse areas. Error-corrected versions have been produced, culminating in the latest global error-reduced digital elevation model (DEM), the Multi-Error-Removed Improved-Terrain (MERIT) DEM. This study investigates the spatial error structure of MERIT and the Shuttle Radar Topography Mission, before simulating plausible versions of the DEMs using fitted semivariograms. By simulating multiple DEMs, we allow modelers to explore the impact of topographic uncertainty on hazard assessment even in data-sparse locations where typically only one DEM is currently used. We demonstrate this for a flood model in the Mekong Delta and a catchment in Fiji using deterministic DEMs and DEM ensembles simulated using our approach. By running an ensemble of simulated DEMs we avoid the spurious precision of using a single DEM in a deterministic simulation. We conclude that using an ensemble of the MERIT DEM simulated using semivariograms by land cover class gives inundation estimates closer to a light detection and ranging-based benchmark. This study is the first to analyze the spatial error structure of the MERIT DEM and the first to simulate DEMs and apply these to flood models at this scale. The research workflow is available via an R package called DEMsimulation.
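A much-simplified stand-in for the workflow described here (not the DEMsimulation package, and using Gaussian-filtered noise rather than fitted semivariograms): simulate spatially correlated DEM error, add it to a synthetic DEM, and apply a simple bathtub threshold to each realization to obtain a distribution of flooded area. Hydrologic connectivity to the sea is ignored in this sketch.

```python
# Simplified stand-in for the ensemble workflow above (not the DEMsimulation
# package): correlated error fields added to a DEM, each passed through a
# "bathtub" threshold to give a distribution of flooded fraction.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
dem = rng.uniform(0.0, 8.0, size=(300, 300))  # synthetic low-lying terrain (m)

def correlated_error(shape, sigma_m, correlation_cells, rng):
    """Spatially correlated error field rescaled to a target standard deviation."""
    noise = gaussian_filter(rng.normal(size=shape), correlation_cells)
    return noise * (sigma_m / noise.std())

water_level = 1.0   # metres of sea level rise above the datum
fractions = []
for _ in range(50):
    realization = dem + correlated_error(dem.shape, sigma_m=2.0,
                                         correlation_cells=5, rng=rng)
    fractions.append(np.mean(realization <= water_level))  # flooded fraction

print("deterministic flooded fraction:", round(float(np.mean(dem <= water_level)), 3))
print("ensemble flooded fraction: mean={:.3f}, 5-95%=({:.3f}, {:.3f})".format(
    np.mean(fractions), np.percentile(fractions, 5), np.percentile(fractions, 95)))
```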
Article
Helped by the studies and results of previous researchers, we herein study the neotectonics of the coastal Jeffara using numerous 2D onshore reflection seismic profiles, combined with digital elevation model analyses (derived from SRTM) and field work. Acquired and available data were then integrated within a GIS geodatabase, in which Jerba, Zarzis, and Jorf appear to be part of a N-S pull-apart basin within a NW-SE transtensive right-lateral major fault zone. Our structural, geological, and geomorphological analyses confirm the presence of NNW-SSE right-lateral en-echelon tension gashes, NW-SE aligned salt diapirs, numerous active fold offsets, and en-echelon folds associated with this major right-lateral NW-SE transtensive coastal Jeffara fault zone, which affects the Holocene and Villafranchian deposits. We therefore confirm herein a new structural and geodynamic model of the Jeffara: driven by the post-Lower Cretaceous northward migration of the northern African plate toward the Eurasian plate, this NW-SE transtensive fault zone is interpreted as part of the southern branch of the eastward extrusion of the Sahel block toward the free Mediterranean Sea boundary. This geodynamic movement may therefore explain the presence, offshore, of small elongated NW-SE, N-S, and NE-SW transtensive basins and grabens of petroleum interest. To conclude, at the regional scale, the structural geomorphological approach, combined with field work and analysis of 2D reflection seismic profiles, appears to be an excellent tool to demonstrate and confirm the NW-SE right-lateral transtensive extrusion fault zone of the coastal Jeffara.
Article
The "M" in digital elevation models (DEM) stands for model, which literally means "a schematic description of a system, theory, or phenomenon that accounts for its known or inferred properties and may be used for further study of its characteristics." A DEM fulfills the requirement of "a schematic description" of terrain. However, how to make it account for the "known or inferred properties" warrants further scrutiny. This article outlines three properties of terrain and examines their four implications to DEM generation. The three properties are as follows: (1) each terrain point has a single, fixed elevation; (2) terrain points have an order and sequence that is determined by their elevations; and (3) terrain has skeletons. The four implications to DEM generation methods are as follows: (1) a method must be a bijection; (2) a method must be an isomorphism in order to preserve elevation sequence; (3) a method must guarantee that the vertical error at any point, not just checkpoints, is acceptable in order to assure the vertical accuracy of a DEM; and (4) a method must involve generalization if terrain skeletons are to be preserved. These implications are discussed in the context of light detection and ranging-derived DEMs. Generalization is highlighted as the top priority for future research.
Article
Absolute elevation error in digital elevation models (DEMs) can be within acceptable National Map Accuracy standards, but still have dramatic impacts on field-level estimates of surface water flow direction, particularly in level regions. We introduce and evaluate a new method for quantifying uncertainty in flow direction rasters derived from DEMs. The method utilizes flow direction values derived from finer resolution digital elevation data to estimate uncertainty, on a cell-by-cell basis, in flow directions derived from coarser digital elevation data. The result is a quantification and spatial distribution of flow direction uncertainty at both local and regional scales. We present an implementation of the method using a 10-m DEM and a reference 1-m lidar DEM. The method contributes to scientific understanding of DEM uncertainty propagation and modeling and can inform hydrological analyses in engineering, agriculture, and other disciplines that rely on simulations of surface water flow.
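For orientation, the sketch below implements a plain D8 steepest-descent flow direction and a cell-by-cell agreement check between two DEM versions. It illustrates the kind of comparison described, not the authors' uncertainty measure, and all grids are synthetic.

```python
# Minimal D8 steepest-descent flow direction plus a simple cell-by-cell
# agreement check between two DEM versions (synthetic data; not the paper's
# uncertainty metric).
import numpy as np

# Neighbor offsets and ESRI-style D8 codes: E, SE, S, SW, W, NW, N, NE.
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
CODES = [1, 2, 4, 8, 16, 32, 64, 128]

def d8_flow_direction(dem, cellsize=1.0):
    rows, cols = dem.shape
    out = np.zeros(dem.shape, dtype=np.int32)
    diag = cellsize * np.sqrt(2.0)
    for r in range(rows):
        for c in range(cols):
            best_drop, best_code = 0.0, 0
            for (dr, dc), code in zip(OFFSETS, CODES):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = diag if dr and dc else cellsize
                    drop = (dem[r, c] - dem[rr, cc]) / dist
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            out[r, c] = best_code  # 0 marks a pit or flat cell
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    fine = np.add.outer(np.linspace(10, 0, 40), np.zeros(40)) + rng.normal(0, 0.05, (40, 40))
    coarse = fine + rng.normal(0, 0.3, fine.shape)  # stand-in for a coarser, noisier DEM
    agree = d8_flow_direction(fine) == d8_flow_direction(coarse)
    print("cells where flow directions agree: {:.1f}%".format(100 * agree.mean()))
```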
Article
The TIGER (Topologically Integrated Geographic Encoding and Referencing) system has served the U.S. Census Bureau and other agencies' geographic needs successfully for two decades. Poor positional accuracy has however made it extremely difficult to integrate TIGER with advanced technologies and data sources such as GPS, high resolution imagery, and state/local GIS data. In this paper, a potential solution for conflation of TIGER road centerline data with other geospatial data is presented. The first two steps of the approach (feature matching and map alignment) remain the same as in traditional conflation. Following these steps, a third is added in which active contour models (snakes) are used to automatically move the vertices of TIGER roads to high-accuracy roads, rather than transferring attributes between the two datasets. This approach has benefits over traditional conflation methodology. It overcomes the problem of splitting vector road line segments, and it can be extended for vector imagery conflation as well. Thus, a variety of data sources (GIS, GPS, and Remote Sensing) could be used to improve TIGER data. Preliminary test results indicate that the three-step approach proposed in this paper performs very well. The positional accuracy of TIGER road centerline can be improved from an original 100 plus meters' RMS error to only 3 meters. Such an improvement can make TIGER data more useful for much broader application.
Article
Full-text available
Increased attention to global climate change in recent years has resulted in a wide array of maps and geovisualizations that forecast the anticipated consequences of climate change, including sea level rise in coastal areas. Since inundation from sea level rise is inherently geographic in nature, there is a need for effective cartographic representations at global, regional, and local scales that depict where the inundation is expected to occur as well as the magnitude of the projected inundation. The recent increase in production of maps and geovisualizations displaying the anticipated impacts of sea level rise calls attention to numerous cartographic issues that warrant consideration for such displays, including representation of various categories of uncertainty. Using examples from analyses of sea level rise at global, regional, and local scales, we provide a general framework of cartographic issues related to sea level rise as well as a collection of specific examples that demonstrate the challenges and limitations of sea level rise maps and geovisualizations. We draw our observations on the effectiveness and limitations of these displays through informal, qualitative feedback from scientists, educators, and other map users.
Article
Digital elevation models at a variety of resolutions are increasingly being used in geomorphology, for example in comparing the hypsometric properties of multiple catchments. A considerable body of research has investigated the sensitivity of topographic indices to resolution and algorithms, but little work has been done to address the impact of DEM uncertainty and elevation value error on derived products. By using higher resolution data from the Shuttle Radar Topography Mission, of supposedly higher accuracy, for comparison with the widely used GLOBE 1 km data set, error surfaces for three mountainous regions were calculated. Correlation analysis showed that the error surfaces were related to a range of topographic variables for all three regions, namely roughness, minimum and mean extremity, and aspect. This correlation of error with local topography was used to develop a model of uncertainty including a stochastic component, permitting Monte Carlo simulations. These suggest that global statistics for a range of topographic indices are robust to the introduction of uncertainty. However, the derivation of watersheds and related statistics per watershed (e.g., hypsometry) is shown to vary significantly as a result of the introduced uncertainty.
Article
Full-text available
As part of a countywide large-scale mapping effort for Richland County, South Carolina, an accuracy assessment of a recently acquired lidar-derived data set was conducted. Airborne lidar (2-m nominal posting) was collected at a flying height of 1207 meters above ground level (AGL) using an Optech ALTM (Airborne Laser Terrain Mapper) 1210 system. Unique to this study are the reference point elevations. Rather than using an interpolation approach for gathering observed elevations at reference points, the x-y coordinates of lidar points were located in the field and these elevations were surveyed. Using both total-station-based and rapid-static GPS techniques, observed vertical heights were measured at each reference lidar posting. The variability of vertical accuracy was evaluated for six land-cover categories. Root-mean-squared error (RMSE) values ranged from a low of 17 to 19 cm (pavement, low grass, and evergreen forests) to a high of 26 cm (deciduous forests). The unique error assessment of lidar postings also allowed for the creation of an error budget model. The observed lidar elevation error was decomposed into errors from lidar system measurements, horizontal displacement, interpolation error, and surveyor error. A cross-validation approach was used to assess the observed interpolated lidar elevation error for each field-verified reference point. In order of decreasing importance, the lidar system measurements were the dominant source of error, followed by interpolation error, horizontal displacement error, and surveyor error. Observed elevation error on steeper slopes (e.g., 25°) was estimated to be twice as large as that on low slopes (e.g., 1.5°).
Article
Full-text available
The NED is a seamless raster dataset from the USGS that fulfills many of the concepts of framework geospatial data as envisioned for the NSDI, allowing users to focus on analysis rather than data preparation. It is regularly maintained and updated, and it provides basic elevation data for many GIS applications. The NED is one of several seamless datasets that the USGS is making available through the Web. The techniques and approaches developed for producing, maintaining, and distributing the NED are the type that will be used for implementing the USGS National Map (http://nationalmap.usgs.gov/).
Article
Full-text available
Spatial data uncertainty models (SDUM) are necessary tools that quantify the reliability of results from geographical information system (GIS) applications. One technique used by SDUM is Monte Carlo simulation, a technique that quantifies spatial data and application uncertainty by determining the possible range of application results. A complete Monte Carlo SDUM for generalized continuous surfaces typically has three components: an error magnitude model, a spatial statistical model defining error shapes, and a heuristic that creates multiple realizations of error fields added to the generalized elevation map. This paper introduces a spatial statistical model that represents multiple statistics simultaneously and weighted against each other. This paper's case study builds a SDUM for a digital elevation model (DEM). The case study accounts for relevant shape patterns in elevation errors by reintroducing specific topological shapes, such as ridges and valleys, in appropriate localized positions. The spatial statistical model also minimizes topological artefacts, such as cells without outward drainage and inappropriate gradient distributions, which are frequent problems with random field-based SDUM. Multiple weighted spatial statistics enable two conflicting SDUM philosophies to co-exist. The two philosophies are 'errors are only measured from higher quality data' and 'SDUM need to model reality'. This article uses an automatic parameter fitting random field model to initialize Monte Carlo input realizations followed by an inter-map cell-swapping heuristic to adjust the realizations to fit multiple spatial statistics. The inter-map cell-swapping heuristic allows spatial data uncertainty modelers to choose the appropriate probability model and weighted multiple spatial statistics which best represent errors caused by map generalization. This article also presents a lag-based measure to better represent gradient within a SDUM. This article covers the inter-map cell-swapping heuristic as well as both probability and spatial statistical models in detail.
Article
Full-text available
The availability of Shuttle Radar Topography Mission (SRTM) data opens the possibility to characterize urban areas by means of this sensor all over the world. In particular, the global digital elevation model (DEM) can provide a three-dimensional atlas of urban agglomerates. However, the coarse ground resolution and the problems due to radar imaging of complex urban environments do not allow a precise characterization of urban features. This work is aimed at a first assessment of building detection in urban areas using SRTM DEMs. We considered data over Los Angeles and Salt Lake City, and showed that, despite the above-mentioned problems, it is still possible to detect tall structures and identify the major buildings in the area. Due to the coarseness of the data, classical algorithms for building detection and recognition are not immediately useful, but a comparison with a ground survey for the Wilshire Corridor in Los Angeles provides a first evaluation of the usefulness of SRTM data for urban applications.
Article
Full-text available
Two digital elevation models are compared for the Echo Mountain SE quadrangle in the Cascade Mountains of Oregon. Comparisons were made between 7.5-minute (1:24,000-scale) and 1-degree (1:250,000-scale) images using the variables of elevation, slope aspect, and slope gradient. Both visual and statistical differences are presented.
Article
In February 2000 the first mission using space-borne single-pass interferometry was launched: the Shuttle Radar Topography Mission (SRTM). The goal of the mission was to survey the Earth's surface and to generate a homogeneous elevation data set of the world with a grid spacing of 1 arcsec. Antennas with two different wavelengths were used: besides the American SIR-C, the German/Italian X-SAR system was on board. This paper deals with the assessment of the Interferometric Terrain Elevation Data derived from the X-SAR system. These so-called ITED-2 data were compared to reference data of higher quality for a well-known test site south of Hannover (Trigonometric Points and Digital Model). The approach used is based on a spatial similarity transformation without using any kind of control point information. The algorithm matches the SRTM data onto the reference data in order to derive seven unknown parameters which describe horizontal and vertical shifts, rotations, and a scale difference with respect to the reference data. These values describe potentially existing systematic errors. The standard deviation of the SRTM ITED-2 was found to be ±3.3 m in open landscape, after applying the spatial similarity transformation. Maximum systematic shifts of 4-6 m were detected, representing only 20-25% of the ITED-2 grid size. In summary, it can be stated that the results are much better than predicted before the start of the mission. Thus, the quality of the SRTM ITED-2 is indeed remarkable.
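The spatial similarity transformation described above can be sketched, under a small-angle assumption, as a least-squares fit of three shifts, three rotations, and a scale difference between matched point sets. This is a generic illustration with synthetic points, not the IPI algorithm.

```python
# Sketch of a linearized (small-angle) seven-parameter similarity transformation
# between matched SRTM and reference points; not the IPI implementation.
import numpy as np

def fit_similarity(srtm_xyz, ref_xyz):
    """Least-squares fit of [tx, ty, tz, scale_diff, rx, ry, rz] (small-angle model)."""
    X, Y, Z = srtm_xyz.T
    zeros, ones = np.zeros_like(X), np.ones_like(X)
    A = np.empty((3 * len(X), 7))
    A[0::3] = np.column_stack([ones, zeros, zeros, X, zeros, Z, -Y])   # dX rows
    A[1::3] = np.column_stack([zeros, ones, zeros, Y, -Z, zeros, X])   # dY rows
    A[2::3] = np.column_stack([zeros, zeros, ones, Z, Y, -X, zeros])   # dZ rows
    d = (ref_xyz - srtm_xyz).ravel()            # observed coordinate differences
    params, *_ = np.linalg.lstsq(A, d, rcond=None)
    residuals = (d - A @ params).reshape(-1, 3)
    return params, residuals

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    ref = rng.uniform(0.0, 1000.0, size=(200, 3))
    # Shifted, noisy stand-in for the SRTM points to be matched onto the reference.
    srtm = ref - np.array([4.0, -2.0, 5.5]) + rng.normal(0.0, 3.0, ref.shape)
    params, res = fit_similarity(srtm, ref)
    print("recovered shifts (m):", params[:3].round(2))
    print("height std. dev. after matching (m):", round(float(res[:, 2].std()), 2))
```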
Article
An assessment of four different remote sensing based methods for deriving digital elevation models (DEMs) was conducted in a flood-prone watershed in North Carolina. New airborne LIDAR (light detection and ranging) and IFSAR (interferometric synthetic aperture radar) data were collected and corresponding DEMs created. These new sources were compared to two methods, Gestalt Photomapper (GPM) and contour-to-grid, used by the U.S. Geological Survey (USGS) for creating DEMs. Survey-grade points (1,470) for five different land cover classes were used as reference points. One unique aspect of this study was that the LIDAR and IFSAR data were collected during leaf-on conditions. Analyses of absolute elevation accuracy and terrain slope were conducted. The LIDAR- and contour-to-grid-derived DEMs exhibited the highest overall absolute elevation accuracies. Elevation accuracy was found to vary with land cover categories. Elevation accuracy also decreased with increasing slopes, but only for the scrub/shrub land cover category. Appreciable terrain slope errors for the reference points were found with all methods.
Article
The calculation of slope (downhill gradient) for a point in a digital elevation model (DEM) is a common procedure in the hydrological, environmental and remote sensing sciences. The most commonly used slope calculation algorithms employed on DEM topography data make use of a three-by-three search window or kernel centered on the grid point (grid cell) in question in order to calculate the slope at that point. A comparison of eight such slope calculation algorithms has been carried out using an artificial DEM, consisting of a smooth synthetic test surface with various amounts of added Gaussian noise. Morrison's Surface III, a trigonometrically defined surface, was used as the synthetic test surface. Residual slope grids were calculated by subtracting the slope grids produced by the algorithms on test from true/reference slope grids derived by analytic partial differentiation of the synthetic surface. The resulting residual slope grids were used to calculate root-mean-square (RMS) residual error estimates that were used to rank the slope algorithms from "best" (lowest value of RMS residual error) to "worst" (largest value of RMS residual error). Fleming and Hoffer's method gave the "best" results for slope estimation when values of added Gaussian noise were very small compared to the mean smooth elevation difference (MSED) measured within three-by-three elevation point windows on the synthetic surface. Horn's method (used in ArcInfo GRID) performed better than Fleming and Hoffer's as a slope estimator when the noise amplitude was very much larger than the MSED. For the large noise amplitude situation the "best" overall performing method was that of Sharpnack and Akin. The popular Maximum Downward Gradient Method (MDG) performed poorly, coming close to last in the rankings for both situations, as did the Simple Method. A nomogram was produced in terms of standard deviation of the Gaussian noise and MSED values that gave the locus of the trade-off point between Fleming and Hoffer's and Horn's methods.
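Horn's estimator, which the abstract notes is used in ArcInfo GRID, computes slope from a three-by-three window of elevations. A minimal numpy sketch for interior cells, with a synthetic test plane:

```python
# Horn's 3x3 finite-difference slope estimator, written as a plain numpy sketch
# for interior cells of a regular grid.
import numpy as np

def horn_slope_degrees(dem, cellsize=1.0):
    z = dem.astype(float)
    # 3x3 window cells: a b c / d e f / g h i, taken as shifted interior views.
    a, b, c = z[:-2, :-2], z[:-2, 1:-1], z[:-2, 2:]
    d, _, f = z[1:-1, :-2], z[1:-1, 1:-1], z[1:-1, 2:]
    g, h, i = z[2:, :-2], z[2:, 1:-1], z[2:, 2:]
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8.0 * cellsize)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8.0 * cellsize)
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    out = np.full(z.shape, np.nan)      # edge cells left undefined
    out[1:-1, 1:-1] = slope
    return out

if __name__ == "__main__":
    # A plane dipping 1 m per cell in x should give a constant 45-degree slope.
    plane = np.add.outer(np.zeros(5), np.arange(5, dtype=float))
    print(horn_slope_degrees(plane, cellsize=1.0)[1:-1, 1:-1])
```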
Conference Paper
In February 2000 the Shuttle Radar Topography Mission (SRTM) was flown on board the Space Shuttle Endeavour. The aim of the mission was to survey about sixty percent of the Earth's complete landmasses. During the mission a US C-band antenna and a German/Italian X-band antenna were installed on board the Shuttle. The main result of the mission will be a three-dimensional digital surface model (DSM) obtained from single-pass interferometry. During the validation process the SRTM elevation data will be analysed by comparing them to reference data for a well-known test site. This paper describes and investigates an algorithm for this task which was developed at the Institute for Photogrammetry and Engineering Surveys (IPI) of the University of Hannover. It is based on a spatial similarity transformation which matches the SRTM data onto reference data of higher accuracy. The algorithm is comparable to the absolute orientation of a photogrammetric block by means of a DTM. Any detected transformation parameters which differ from the identity transformation point to potentially existing systematic errors of the SRTM data; the standard deviation of the remaining height differences represents the accuracy of the SRTM data. The algorithm was successfully tested using simulated and real data, and the results obtained are reported in this paper.
Representation of terrain
Hutchinson, M. F., and J. C. Gallant. 1999. Representation of terrain. In: Longley, P. A., M. F. Goodchild, D. J. Maguire, and D. W. Rhind (eds), Geographical Information Systems, 2
Spatial analysis of DEM error
Guth, P. L. 1992. Spatial analysis of DEM error. ASPRS/ACSM/RT Technical Papers, Washington. 2: 187-96.