Article

LiDAR data reduction using vertex decimation and processing with GPGPU and multicore CPU technology


Abstract

Airborne light detection and ranging (LiDAR) topographic data provide highly accurate representations of the earth's surface. However, large data volumes pose computing issues when disseminating and processing the data. The main goals of this paper are to evaluate a vertex decimation algorithm for reducing the size of LiDAR data and to test parallel computation frameworks, particularly multicore CPU and GPU, for processing the data. We use a vertex decimation technique to reduce the number of vertices in a triangulated irregular network (TIN) representation of LiDAR data. To validate and verify the algorithm, we used last returns only (LRO) and all returns (AR) points from four tiles of LiDAR data taken from flat and undulating terrains. For flat terrain data, the results showed decimation rates of roughly 95% for last returns only and 55% for all returns, without significant loss of accuracy in terrain representation. Accordingly, file sizes were reduced by about 96.5% and 60.5%, respectively. Processing speed benefited greatly from parallel programming with the multicore CPU framework. GPU usage, by contrast, was hampered by noncomputational overhead; nonetheless, the GPU environment delivered tremendous acceleration in the computational part alone.
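The authors' implementation is not included in this listing. As a rough illustration of how a TIN-based vertex decimation pass might look, the Python sketch below (the planar-fit redundancy test, the z_tol threshold and all names are assumptions, not the paper's actual method) drops vertices whose elevation is reproduced, within tolerance, by a plane fitted to their TIN neighbours.

```python
import numpy as np
from scipy.spatial import Delaunay

def decimate_tin_vertices(points, z_tol=0.05):
    """One-pass vertex decimation sketch: drop a vertex when a plane fitted
    to its TIN neighbours reproduces its elevation within z_tol (metres)."""
    pts = np.asarray(points, dtype=float)           # columns: x, y, z
    tri = Delaunay(pts[:, :2])                      # TIN over the x-y plane
    indptr, indices = tri.vertex_neighbor_vertices  # CSR-style vertex adjacency
    keep = np.ones(len(pts), dtype=bool)

    for i in range(len(pts)):
        nbrs = indices[indptr[i]:indptr[i + 1]]
        if len(nbrs) < 3:                           # hull/degenerate vertices stay
            continue
        # least-squares plane z = a*x + b*y + c through the neighbours
        A = np.c_[pts[nbrs, 0], pts[nbrs, 1], np.ones(len(nbrs))]
        coef, *_ = np.linalg.lstsq(A, pts[nbrs, 2], rcond=None)
        z_fit = coef @ np.array([pts[i, 0], pts[i, 1], 1.0])
        if abs(z_fit - pts[i, 2]) <= z_tol:         # locally planar, hence redundant
            keep[i] = False
    return pts[keep]
```

On nearly planar tiles most last-return vertices would pass such a test, which is consistent with the much higher decimation rate the abstract reports for flat terrain.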


... Dihedral Angle between faces uses the angle between planar normals of two adjacent faces (triangle) to prioritize lines for collapse or preservation [8]. The criterion for line selection for this algorithm is the minimum dihedral angle between faces to be dissolved. ...
... Stupariu also observed "Gaussian curvature could be linked to peak/pit/pass-type forms, while the mean curvature seemed to be related to curvilinear shapes such as ridges and channels." ([8], p. 9). This observation is reiterated in the discussion in Section 4.2.5. ...
... Yet this review could find no published literature which applies Gaussian curvature to terrain point cloud decimation. For identifying linear features, the work of [8], by using dihedral angles of mesh lines, can be used, though they made no mention of this advantage. ...
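As a concrete reading of the dihedral-angle criterion discussed in the excerpts above, the following sketch (the function name and the vertex ordering convention are assumptions) computes the angle between the planar normals of the two triangles sharing an edge; edges with the smallest angles are the flattest and therefore the first candidates for collapse.

```python
import numpy as np

def dihedral_angle(p0, p1, q_left, q_right):
    """Angle in degrees between the normals of triangles (p0, p1, q_left) and
    (p0, q_right, p1) sharing edge p0-p1; ~0 degrees means nearly coplanar faces."""
    n1 = np.cross(p1 - p0, q_left - p0)
    n2 = np.cross(q_right - p0, p1 - p0)
    cos_a = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
```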
Article
Full-text available
Increased availability of QL1/QL2 Lidar terrain data has resulted in large datasets, often including large quantities of redundant points. Because of these large memory requirements, practitioners often use decimation to reduce the number of points used to create models. This paper introduces a novel approach to improve decimation, thereby reducing the total count of ground points in a Lidar dataset while retaining more accuracy than Random Decimation. This reduction improves efficiency of downstream processes while maintaining output quality nearer to the undecimated dataset. Points are selected for retention based on their discrete curvature values computed from the mesh geometry of the TIN model of the points. Points with higher curvature values are preferred for retention in the resulting point cloud. We call this technique Curvature Weighted Decimation (CWD). We implement CWD in a new free, open-source software tool, CogoDN, which is also introduced in this paper. We evaluate the effectiveness of CWD against Random Decimation by comparing the resulting introduced error values for the two kinds of decimation over multiple decimation percentages, multiple statistical types, and multiple terrain types. The results show that CWD reduces introduced error values over Random Decimation when 15 to 50% of the points are retained.
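The CogoDN implementation itself is not reproduced here. A minimal sketch of the underlying idea, assuming the angle-deficit form of discrete Gaussian curvature as the retention weight and leaving boundary vertices unhandled, could look like this:

```python
import numpy as np
from scipy.spatial import Delaunay

def angle_deficit_curvature(points):
    """Discrete Gaussian curvature of each TIN vertex as its angle deficit:
    2*pi minus the sum of the incident 3D triangle angles at that vertex."""
    pts = np.asarray(points, dtype=float)           # columns: x, y, z
    tri = Delaunay(pts[:, :2])
    deficit = np.full(len(pts), 2.0 * np.pi)
    for simplex in tri.simplices:
        for k in range(3):
            i, j, l = simplex[k], simplex[(k + 1) % 3], simplex[(k + 2) % 3]
            u, v = pts[j] - pts[i], pts[l] - pts[i]
            cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            deficit[i] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
    return deficit                                  # ~0 at planar interior vertices

def curvature_weighted_decimation(points, keep_fraction=0.3):
    """Retain the keep_fraction of points with the largest |angle deficit|."""
    pts = np.asarray(points, dtype=float)
    weight = np.abs(angle_deficit_curvature(pts))
    n_keep = max(3, int(keep_fraction * len(pts)))
    return pts[np.argsort(weight)[-n_keep:]]
```

A practical version would also need to discount the inflated deficits of convex-hull vertices and add some spatial stratification so that flat areas are not emptied entirely.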
... LiDAR technologies are accurate, cost effective, and timely for elevation data collection when compared with other methods such as photogrammetric techniques (Hodgson et al., 2003;Oryspayev et al., 2012;Sugumaran et al., 2011). This massive quantity of elevation data is valuable to a large number of potential users (academia, local, state, and federal government agencies, tribal governments, environmental and engineering consulting companies), at local to global scales, and can be used to generate high-resolution digital elevation model (DEMs), which in turn can be used to accurately map surface features such as buildings and trees (Meng et al., 2010). ...
... Though the pace of 3D spatial data collection continues to accelerate, the provision of affordable technology for dealing with issues such as processing, archiving, managing, disseminating, and analyzing large data volumes has lagged (Chen, 2009;Evangelinos and Hill, 2008;Han et al., 2009;Liu and Zhang, 2008;Meng et al., 2010;Oryspayev et al., 2012;Schön et al., 2009;Sugumaran et al., 2011). Of these challenges, dataset size and the computational resources required to process massive amounts of information are major issues for users such as local governments and small businesses. ...
... At the preprocessing stage, users can invoke a data reduction algorithm (e.g., vertex decimation approach), clip and merge raw data, and initiate user-specified data download tools. The vertex decimation function enables a user to reduce data volume (Oryspayev et al., 2012). During postprocessing, users can choose to create a DEM from a triangulated irregular network (TIN) or point cloud, merge and clip a DEM, and derive secondary products from a DEM or TIN (e.g., generate contours, as well as calculate slope and aspect). ...
... A variety of methods and ideas have been developed to reduce data set size while retaining information (Anderson, Thompson, and Austin 2005;Krishnan, Baru, and Crosby 2010;Oryspayev et al. 2012). In terms of data set size reduction, two techniques are generally used. ...
... In terms of data set size reduction, two techniques are generally used. The first is decimation, or the selective removal of particular points thought to convey little information, from the LiDAR point cloud (e.g., Oryspayev et al. 2012). The second is gridding, in which the entire point cloud is replaced by a rasterized image created using an interpolation method to generate an approximate z-value for each grid point (e.g., Krishnan, Baru, and Crosby 2010). ...
... Finally, Question 3, regarding the order of decimation and triangulation, stems from the experiences of Oryspayev et al. (2012). In their work, which comprised triangulation followed by reduction, they found that DT of the full data set seemed to be a major bottleneck in the processing of a point cloud. ...
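The excerpts above contrast the two usual reduction families: decimation (selective point removal) and gridding (rasterisation). For comparison with the decimation sketches elsewhere on this page, a bare-bones gridding pass might look like the following; the cell size and the mean-z rule are illustrative assumptions rather than the cited authors' method.

```python
import numpy as np

def grid_point_cloud(points, cell=1.0):
    """Gridding sketch: replace the cloud with a raster whose cell value is the
    mean z of the returns falling in that cell (empty cells become NaN)."""
    pts = np.asarray(points, dtype=float)           # columns: x, y, z
    x0, y0 = pts[:, 0].min(), pts[:, 1].min()
    cols = ((pts[:, 0] - x0) // cell).astype(int)
    rows = ((pts[:, 1] - y0) // cell).astype(int)
    z_sum = np.zeros((rows.max() + 1, cols.max() + 1))
    count = np.zeros_like(z_sum)
    np.add.at(z_sum, (rows, cols), pts[:, 2])       # accumulate z per cell
    np.add.at(count, (rows, cols), 1)               # count returns per cell
    return z_sum / np.where(count == 0, np.nan, count)
```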
Article
This article explores the use of an advanced, high-memory cloud-computing environment to process large-scale topographic light detection and ranging (LiDAR) data. The new processing techniques presented herein for LiDAR point clouds intelligently filter and triangulate a data set to produce an accurate digital elevation model. Ample amounts of random-access memory (RAM) allow the employment of efficient hashing techniques for spatial data management; such techniques were utilized to reduce data distribution overhead and local search time during data reduction. Triangulation of the reduced, distributed data set was performed using a local streaming approach to optimize processor utilization. Computational experiments used Amazon Web Services Elastic Compute Cloud resources. Analysis was performed to determine (1) the accuracy of the binning/array-based reduction, as measured by root mean square error and (2) the scalability of the approach on varying-size clusters of high-memory instances (nodes having 244 GB of RAM). For experimental data sets, topographic LiDAR data generated by the Iowa LiDAR Mapping Project was used. This article concludes that the data-reduction strategy is computationally efficient and outperforms a comparable randomized filter control when moderate reduction is undertaken – e.g., when the data set is being reduced by between 30% and 70%. Performance speed-up ratios of up to 3.4, comparing between a single machine and a 9-node cluster, are exhibited. A task-specific stratification of the results of this work demonstrates Amdahl’s law and suggests the evaluation of distributed databases for geospatial data.
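The reported speed-up of 3.4 on a 9-node cluster can be read back through Amdahl's law, which the article invokes. The small calculation below is purely illustrative; it assumes the serial/parallel split is the only limiting factor and simply backs out the serial fraction implied by those two figures.

```python
def amdahl_speedup(serial_fraction, n):
    """Amdahl's law: speedup attainable on n workers given a serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n)

def implied_serial_fraction(speedup, n):
    """Invert Amdahl's law to recover the serial fraction from an observed speedup."""
    return (n / speedup - 1.0) / (n - 1.0)

s = implied_serial_fraction(3.4, 9)                 # figures quoted in the abstract
print(f"implied serial fraction: {s:.2f}")          # ~0.21
print(f"speedup ceiling as nodes grow: {1.0 / s:.1f}x")
```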
... A survey of the typical methods for selecting key points was carried out by Heckbert and Garland [16]. Amongst those methods, the point-additive method [5, 17–20] and the point-subtractive method [18, 21, 22] are considered suitable and effective methods for scattered point cloud data representing a terrain surface. In the iterative addition method, some initial key points (e.g., local highest or lowest data points [5]) are selected and used to generate a triangulated irregular network (TIN) surface. ...
... It iteratively removes one or several data points until a predefined number of data points or a threshold error is reached. Although these two types of methods are effective in producing a thinned point cloud with a data density that varies with the terrain surface complexity, such methods often have a high computational cost [5, 16, 22] because they need to sweep through each and every candidate data point. In addition to the aforementioned methods, sub-sampling with local adaptation to surface roughness can also be achieved using a non-stationary geostatistical approach [23–25]. ...
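To make the point-additive idea (and its cost) concrete, here is a deliberately naive Python sketch along the lines the excerpt describes: seed a few extreme points, then repeatedly insert the candidate with the largest vertical error against the surface built from the points selected so far. The names, the seed choice and the stopping rule are assumptions, and the repeated re-triangulation of the growing set is exactly the expense the excerpt warns about.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def greedy_insertion(points, z_tol=0.25):
    """Point-additive thinning sketch: grow a key-point set by inserting, at each
    step, the point worst represented by the TIN of the points selected so far."""
    pts = np.asarray(points, dtype=float)           # columns: x, y, z
    selected = {int(pts[:, 0].argmin()), int(pts[:, 0].argmax()),
                int(pts[:, 1].argmin()), int(pts[:, 1].argmax())}  # assumes non-collinear seeds
    while True:
        sel = np.fromiter(selected, dtype=int)
        surface = LinearNDInterpolator(pts[sel, :2], pts[sel, 2])
        err = np.abs(surface(pts[:, 0], pts[:, 1]) - pts[:, 2])
        err[np.isnan(err)] = np.inf                 # outside current hull: must be added
        err[sel] = 0.0
        worst = int(np.argmax(err))
        if err[worst] <= z_tol:
            break                                   # every point represented within z_tol
        selected.add(worst)
    return pts[sorted(selected)]
```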
Article
Full-text available
Point clouds obtained from laser scanning techniques are now a standard type of spatial data for characterising terrain surfaces. Some have been shared as open data for free access. A problem with the use of these free point cloud data is that the data density may be more than necessary for a given application, leading to higher computational cost in subsequent data processing and visualisation. In such cases, to make the dense point clouds more manageable, their data density can be reduced. This research proposes a new coarse-to-fine sub-sampling method for reducing point cloud data density, which honours the local surface complexity of a terrain surface. The method proposed is tested using four point clouds representing terrain surfaces with distinct spatial characteristics. The effectiveness of the iterative coarse-to-fine method is evaluated and compared against several benchmarks in the form of typical sub-sampling methods available in open source software for point cloud processing.
... Graphics processing units (GPUs) have been widely used (in GPGPU applications) to address remote-sensing problems (Christophe et al., 2011; Song et al., 2011; Yang et al., 2011). Oryspayev et al. (2012) developed an approach to LiDAR processing that used data-mining algorithms coupled with parallel computing technology. A specific comparison was made between the use of multiple central processing units (CPUs) (Intel Xeon Nehalem chipsets) and GPUs (Intel i7 Core CPUs using the NVIDIA Tesla s1070 GPU cards). ...
... The final data are served to end users using a standard OGC WMS and Web Coverage Processing Service tools. Oryspayev et al. (2012) studied LiDAR data reduction algorithms that were implemented using the GPGPU and multicore CPU architectures available on the AWS EC2. This paper tests the veracity of a vertex-decimation algorithm for reducing LiDAR data size/density and analyzes the performance of this approach on multicore CPU and GPU technologies, to better understand processing time and efficiency. ...
Chapter
Full-text available
During the past four decades, scientific communities around the world have regularly accumulated massive collections of remotely sensed data from ground, aerial, and satellite platforms. In the United States, these collections include the U.S. Geological Survey's (USGS) 37-year record of Landsat satellite images (comprising petabytes of data) (USGS, 2011); the NASA Earth Observing System Data and Information System, having multiple data centers and more than 7.5 petabytes of archived imagery (Hyspeed Computing, 2013); and the current NASA systems that record approximately 5 TB of remote-sensing-related data per day (Vatsavai et al., 2012). In addition, new data-capture technologies such as LiDAR are used routinely to produce multiple petabytes of 3D remotely sensed data representing topographic information (Sugumaran et al., 2011). These technologies have galvanized changes in the way remotely sensed data are collected, managed, and analyzed. On the sensor side, great progress has been made in optical, microwave, and hyperspectral remote sensing with (1) spatial resolutions extending from kilometers to submeters, (2) temporal resolutions ranging from weeks to 30 min, (3) spectral resolutions ranging from single bands to hundreds of bands, and (4) radiometric resolutions ranging from 8 to 16 bits. The platform side has also seen rapid development during the past three decades. Satellite and aerial platforms have continued to mature and are producing large quantities of remote-sensing data. Moreover, sensors deployed on unpiloted aerial vehicles (UAVs) have recently begun to produce massive quantities of very-high-resolution data.
... A large chunk of these works [3,18,19] focus on data interpolation, utilizing GPU-based parallelism to accelerate existing interpolation methods [15,17,24]. Apart from these, GPU acceleration has been applied to tasks such as LiDAR point cloud filtering [11], LiDAR data reduction [23] and simulated LiDAR scanning [16]. To the best of our knowledge, there are no existing works dedicated to GPU-accelerated (non-deep) feature extraction for topographical point cloud data. ...
... Developing GIS-embedded hydrological modelling tools that will assist in substantially improving the processes that coastal areas can use to protect their communities from potential natural disasters such as SLR, storm surges or flooding events. LiDAR technology is frequently utilized in real-world process modelling, analysis, simulation and visualization (Oryspayev et al., 2012). Such technology is relied upon due to its ability to support forecasting, planning and decision support stages (Sharifi et al., 2009). ...
Thesis
Full-text available
Coastal climate impact can affect coastal areas in a variety of ways, such as flooding, storm surges, reduction in beach sands and increased beach erosion. While each of these can have major impacts on the operation of coastal drainage systems, this thesis focuses on coastal and riverine flooding in coastal areas. Coastal flood risk varies within Australia, with the northern parts in the cyclone belt most affected and with high levels of risk similar to other Asian countries. However, in Australia, the responsibility for managing coastal areas is shared between the Commonwealth government, Australian states and territories, and local governments. Strategies for floodplain management to reduce and control flooding are best implemented at the land use planning stage. Local governments make local decisions about coastal flood risk management through the assessment and approval of planning permit applications. Statutory planning by local government is informed by policies related to coastal flooding and coastal erosion, and by advice from government departments, agencies, experts and local community experts. The West Gippsland Catchment Management Authority (WGCMA) works with local communities, Victorian State Emergency Services (VCSES), local government authorities (LGAs), and other local organizations to prepare the West Gippsland Flood Management Strategy (WGFMS). The strategy aims at identifying significant flood risks, mitigating those risks, and establishing a set of priorities for implementation of the strategy over a ten-year period. The Bass Coast Shire Council (BCSC) region has experienced significant flooding over the last few decades, causing the closure of roads, landslides and erosion. Wonthaggi was particularly affected during this period, with flooded roads in the worst cases forcing the closure of the northern part of the city. Climate change and increased exposure through the growth of urban population have dramatically increased the frequency and the severity of flood events affecting human populations. Traditionally, while GIS has provided spatial data management, it has had limitations in modelling capability for solving complex hydrology problems such as flood events. Therefore, it has not been relied upon by decision-makers in the coastal management sector. Functionality improvements are therefore required to improve the processing and analytical capabilities of GIS in hydrology and to provide more certainty for decision-makers. This research shows how spatial data (LiDAR, road, building, aerial photo) can be processed primarily by GIS and how, by adopting the spatial analysis routines associated with hydrology, these problems can be overcome. The aim of this research is to refine GIS-embedded hydrological modelling so that it can be used to help communities better understand their exposure to flood risk and give them more control over how to adapt and respond. The research develops a new Spatial Decision Support System (SDSS) to improve the implementation of coastal flooding risk assessment and management in Victoria, Australia. It is a solution integrating a range of approaches, including Light Detection and Ranging (Rata et al., 2014), GIS (Petroselli and sensing, 2012), hydrological models, numerical models, flood risk modelling, and multi-criteria techniques.
Bass Coast Shire Council is an interesting study region for coastal flooding as it involves (i) a high rainfall area and (ii) a major river meeting a coastal area affected by storm surges, with frequent flooding of urban areas. Also, very high-quality Digital Elevation Model (DEM) data is available from the Victorian Government to support first-pass screening of coastal risks from flooding. The methods include using advanced GIS hydrology modelling and LiDAR digital elevation data to determine surface runoff to evaluate the flood risk for BCSC. This methodology addresses the limitations in flood hazard modelling mentioned above and gives a logical basis for estimating tidal impacts on flooding, as well as the impact of changes in atmospheric conditions, including precipitation and sea levels. This study examines how GIS hydrological modelling and LiDAR digital elevation data can be used to map and visualise flood risk in coastal built-up areas in BCSC. While this kind of visualisation is often used for the assessment of flood impacts on infrastructure risk, it has not been utilized in the BCSC. Previous research identified terrestrial areas at risk of flooding using a conceptual hydrological model (Pourali et al., 2014b) that models the flood-risk regions and provides flooding extent maps for the BCSC. It examined the consequences of various components influencing flooding for use in creating a framework to manage flood risk. The BCSC has recognised the benefits of combining these techniques, which allow them to analyse data, deal with the problems, create intuitive visualization methods, and make decisions about addressing flood risk. The SDSS involves a GIS-embedded hydrological model that interlinks data integration and processing systems that interact through a linear cascade. Each stage of the cascade produces results which are input into the next model in a modelling chain hierarchy. The output involves GIS-based hydrological modelling to improve the implementation of coastal flood risk management plans developed by local governments. The SDSS also derives a set of Coastal Climate Change (CCC) flood risk assessment parameters (performance indicators), such as land use, settlement, infrastructure and other relevant indicators for coastal and bayside ecosystems. By adopting the SDSS, coastal managers will be able to systematically compare alternative coastal flood-risk management plans and make decisions about the most appropriate option. By integrating relevant models within a structured framework, the system will promote transparency of policy development and flood risk management. This thesis focuses on extending the spatial data handling capability of GIS to integrate climatic and other spatial data to help local governments with coastal exposure develop programs to adapt to climate change. The SDSS will assist planners to prepare for changing climate conditions. BCSC is a municipal government body with a coastal boundary; it has assisted in the development and testing of the SDSS and derived many benefits from using the SDSS developed as a result of this research. Local governments at risk of coastal flooding that use the SDSS can use the Google Earth data sharing tool to determine appropriate land use controls to manage long-term flood risk to human settlement.
The present research describes an attempt to develop a Spatial Decision Support System (SDSS) to aid decision-makers to identify the proper location of new settlements where additional land development could be located based on decision rules. Also presented is an online decision-support tool that all stakeholders can use to share the results.
... In general, the primary methods for creating mesh models with different scales include simplifying the primitive mesh and refining the rough mesh. In the first method, the following algorithms are used: The vertex decimation algorithm [3][4][5], vertex clustering algorithm [6,7], wavelet transform algorithm [8,9], etc. In the second method, many algorithms are available, including the Loop-subdivision algorithm [10][11][12], butterfly subdivision algorithm [13,14], Point-Normal triangles algorithm [15,16], etc. ...
Article
Full-text available
Triangulated irregular networks (TINs) are widely used in terrain visualization due to their accuracy and efficiency. However, the conventional algorithm for multi-scale terrain rendering, based on TIN, has many problems, such as data redundancy and discontinuities in scale transition. To solve these issues, a method based on a detail-increment model for the construction of a continuous-scale hierarchical terrain model is proposed. First, using the algorithm of edge collapse, based on a quadric error metric (QEM), a complex terrain base model is processed to a most simplified model version. Edge collapse records at different scales are stored as compressed incremental information in order to make the rendering as simple as possible. Then, the detail-increment hierarchical terrain model is built using the incremental information and the most simplified model version. Finally, the square root of the mean minimum quadric error (MMQE), calculated by the points at each scale, is considered the smallest visible object (SVO) threshold that allows for the scale transition with the required scale or the visual range. A point cloud from Yanzhi island is converted into a hierarchical TIN model to verify the effectiveness of the proposed method. The results show that the method has low data redundancy, and no error existed in the topology. It can therefore meet the basic requirements of hierarchical visualization.
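For reference, the quadric error metric (QEM) the abstract relies on is the standard Garland-Heckbert construction: each face contributes the outer product of its plane equation, and an edge's collapse cost is the combined quadric form evaluated at the proposed target position. A compact sketch follows (the function names and the simple midpoint-target convention are assumptions; the paper's exact variant may differ).

```python
import numpy as np

def face_quadric(p0, p1, p2):
    """Fundamental quadric K = q q^T of a triangle's supporting plane, with
    q = (a, b, c, d) taken from the unit-normal plane equation ax+by+cz+d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    q = np.append(n, -np.dot(n, p0))
    return np.outer(q, q)

def edge_collapse_cost(Q_a, Q_b, target):
    """QEM cost of collapsing an edge whose endpoint quadrics are Q_a and Q_b
    into the 3D point `target` (e.g., the edge midpoint)."""
    v = np.append(np.asarray(target, dtype=float), 1.0)
    return float(v @ (Q_a + Q_b) @ v)
```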
... Some authors also use GPGPU to render point clouds [102] or in segmentation processing stages [53]. Furthermore, others have combined the CPU and GPGPU to handle the computational complexity of 3D point cloud processing, such as [103, 52]. ...
Thesis
In recent years, Light Detection And Ranging (LiDAR) has been continually developed and applied in many fields. Improvements in the hardware and methodology of LiDAR data acquisition have significantly increased the data in both volume and complexity. This poses a great challenge: how to effectively handle such a vast amount of complex LiDAR data. Even though significant efforts have gone into developing approaches to handle massive LiDAR data, this remains a hot research topic. This study proposes cloud computing-based approaches to processing a huge volume of spatial LiDAR data. To harness the advantages of cloud computing, an Octree data structure is implemented to index the LiDAR point cloud on the Apache Hadoop framework using the MapReduce parallel programming model. Moreover, a kNN algorithm is used to query the neighbourhoods of LiDAR points through the Apache Solr search engine in the cloud computing environment. Six data sets are employed to experimentally test the capability of the proposed algorithms in this environment. The response time for searching the neighbouring points of a given point is almost real-time. For example, the data set with 2.88 million points is processed in only around 0.4 hours in the cloud computing environment, while the same data set takes about 196 hours to process in the sequential environment. The results of this study prove that a cloud computing environment fully satisfies the requirements for efficiently processing a large volume of LiDAR data. In addition, the cloud also provides advantages such as scalability, cost-efficiency and data safety.
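As a small illustration of the kind of octree indexing the thesis describes (not its actual Hadoop/Solr implementation), the sketch below derives a locational key for a point by descending a fixed number of octree levels; keys sharing a prefix fall in the same branch, so they can serve as partition or sort keys in a MapReduce-style pipeline. The key layout and all names are assumptions.

```python
def octree_key(x, y, z, bounds, depth):
    """Locational key of the octree leaf containing (x, y, z) at `depth`:
    one octant digit (0-7) per level, concatenated into a single integer."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = bounds
    key = 0
    for _ in range(depth):
        xm, ym, zm = (xmin + xmax) / 2, (ymin + ymax) / 2, (zmin + zmax) / 2
        octant = int(x >= xm) | (int(y >= ym) << 1) | (int(z >= zm) << 2)
        key = (key << 3) | octant
        xmin, xmax = (xm, xmax) if x >= xm else (xmin, xm)
        ymin, ymax = (ym, ymax) if y >= ym else (ymin, ym)
        zmin, zmax = (zm, zmax) if z >= zm else (zmin, zm)
    return key

# Example: the leaf key of a single return within a 100 m x 100 m x 500 m tile.
print(octree_key(12.3, 4.5, 101.0, bounds=((0, 0, 0), (100, 100, 500)), depth=6))
```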
... However, in general, because this method does not consider the significance of each nominee point, nor the areas of exact concern for an application, it has low accuracy. Oryspayev et al. (2012) performed LIDAR data reduction using a vertex decimation algorithm and tested parallel computation frameworks for processing the data. The authors used last returns only (LRO) and all returns (AR) points of LIDAR data taken from flat and undulating terrains to validate and verify the algorithm. ...
Article
Full-text available
Light detection and ranging (LIDAR) is a remote sensing method that scans the Earth’s surface with high density to construct the digital elevation model (DEM). In this paper, we present a point cloud reduction model based on two 3D feature extraction techniques, namely the sharp feature detection algorithm and a feature extraction technique based on LIDAR point attributes. These techniques are used as initial selection criteria and are compared with the maximum and minimum elevation criterion, which gives the reduction with the highest accuracy. However, point cloud reduction algorithms consume a great deal of time to generate a reduced file with high accuracy, which prompts the need for a new model that considers the trade-off between processing time and accuracy. The results showed that the proposed model significantly reduced the processing time at the expense of an accuracy reduction of 0.7% and 1.3% for the two techniques, respectively, which is acceptable for realistic applications.
... The effects of LiDAR data density on the accuracy of the generated DEMs, and the extent to which LiDAR data can be reduced while still achieving DEMs with the required accuracy, are studied in [11]. A method of vertex decimation, i.e., the selective removal of points from the LiDAR point cloud that do not convey enough information, was introduced in [12]. Hegeman et al. proposed a method [13] in which each point is considered for deletion based on the z-variance of the point cloud in a small local region. ...
Article
Full-text available
Airborne Light Detection and Ranging (LiDAR) topographic data provide highly accurate digital terrain information, which is used widely in applications like creating flood insurance rate maps, forest and tree studies, coastal change mapping, soil and landscape classification, 3D urban modeling, river bank management, agricultural crop studies, etc. In this paper, we focus mainly on the use of LiDAR data in terrain modeling/Digital Elevation Model (DEM) generation. Technological advancements in building LiDAR sensors have enabled highly accurate and highly dense LiDAR point clouds, which have made possible high resolution modeling of terrain surfaces. However, high density data result in massive data volumes, which pose computing issues. Computational time required for dissemination, processing and storage of these data is directly proportional to the volume of the data. We describe a novel technique based on the slope map of the terrain, which addresses a challenging problem in the area of spatial data analysis: reducing this dense LiDAR data without sacrificing its accuracy. To the best of our knowledge, this is the first ever landscape-driven data reduction algorithm. We also perform an empirical study, which shows that there is no significant loss in accuracy for the DEM generated from a 52% reduced LiDAR dataset generated by our algorithm, compared to the DEM generated from an original, complete LiDAR dataset. For the accuracy of our statistical analysis, we compute the Root Mean Square Error (RMSE) comparing all of the grid points of the original DEM to the DEM generated from the reduced data, instead of comparing a few random control points. In addition, our multi-core data reduction algorithm is highly scalable. We also describe a modified parallel Inverse Distance Weighted (IDW) spatial interpolation method and show that the DEMs it generates are time-efficient and have better accuracy than the ones generated by the traditional IDW method.
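The modified parallel IDW method of the article is not reproduced here. For orientation, a plain single-threaded IDW gridding pass over LiDAR ground points can be sketched as below, where the cell size, the neighbour count k and the power exponent are illustrative choices rather than the authors' settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def idw_grid(points, cell=1.0, k=12, power=2.0):
    """Plain IDW sketch: each DEM cell receives a distance-weighted mean of the
    z-values of its k nearest LiDAR points."""
    pts = np.asarray(points, dtype=float)           # columns: x, y, z
    tree = cKDTree(pts[:, :2])
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max(), cell)
    ys = np.arange(pts[:, 1].min(), pts[:, 1].max(), cell)
    gx, gy = np.meshgrid(xs, ys)
    dist, idx = tree.query(np.c_[gx.ravel(), gy.ravel()], k=k)
    w = 1.0 / np.maximum(dist, 1e-12) ** power      # guard against zero distances
    z = np.sum(w * pts[idx, 2], axis=1) / np.sum(w, axis=1)
    return z.reshape(gx.shape)
```

A parallel variant would typically split the output grid into blocks and interpolate the blocks independently, which is a natural unit of work for multi-core scaling.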
... However, the huge amount of data retrieved by a high-density survey poses several issues for its storage and usage (e.g. Oryspayev et al. 2012). Besides data management concerns, the economic aspect is equally relevant, with prices reaching hundreds of Euros per square kilometer (Lovell et al. 2005; Johansen et al. 2010; Jakubowski et al. 2013). ...
Article
Full-text available
The aim of this work is to define a methodology for the survey and rapid mapping of terraced areas in a complex environment, like the Ligurian one (Northwestern Italy), where a remarkable percentage of the surface is estimated to be terraced and where canopy coverage makes recognition very hard. The methodology's steps are the definition of LiDAR survey parameters, morphometric filtering, and GIS processing for final mapping. Each phase is oriented to provide a reliable terrace mapping that is also practicable in canopy-covered areas, thanks to particular attention to land cover influence. The work considers a case study (Rupinaro basin) close to Cinque Terre, with a mixed land cover (terraces, forest and urbanized areas). The methodology provided encouraging results, detecting 448 ha of terraces, 95% of them located under canopy cover. This finding pointed out that terrace mapping cannot rely only on photo-interpretation, as canopies hamper detection. Mapping of these areas, frequently characterized by abandonment, is crucial for identifying potential trigger factors for slope instabilities. This case study highlighted the importance of a carefully planned production chain that should start from the choice of LiDAR survey parameters, providing the best input for the analysis algorithm and the correct identification of terraces.
... This feature makes it comparatively difficult to design GPU-available parallel algorithms for vector-based geocomputation. Some successful attempts include GPGPU algorithms used for reducing LiDAR data (Oryspayev et al. 2012), estimating roofs' solar potential based on the LiDAR point cloud (Lukač and Žalik 2013), constructing circular cartograms (Tang 2013), and finding flock patterns in spatiotemporal trajectory datasets (Fort et al. 2014). ...
... Denser point clouds however demand more computational resources for efficient processing. This demand has also been addressed by consistent advancements of modern computational frameworks and algorithms for big data-both for efficient storage and retrieval of big geospatial data 58,59 as well as the parallel and distributed computing approaches for efficient processing 16,[60][61][62] . ...
Article
Full-text available
Airborne laser scanning (LiDAR) point clouds over large forested areas can be processed to segment individual trees and subsequently extract tree-level information. Existing segmentation procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the occlusion effect of higher canopy layers. Although understory trees provide limited financial value, they are an essential component of ecosystem functioning by offering habitat for numerous wildlife species and influencing stand development. Here we model the occlusion effect in terms of point density. We estimate the fractions of points representing different canopy layers (one overstory and multiple understory) and also pinpoint the required density for reasonable tree segmentation (where accuracy plateaus). We show that at a density of ~170 pt/m² understory trees can likely be segmented as accurately as overstory trees. Given the advancements of LiDAR sensor technology, point clouds will affordably reach this required density. Using modern computational approaches for big data, the denser point clouds can efficiently be processed to ultimately allow accurate remote quantification of forest resources. The methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
... A bare-earth DEM is often the primary product of coastal surveying projects using TLS. To handle the massive LiDAR data and generate high-resolution DEMs, various parallel computing algorithms based on local or cloud high-performance computers have been developed (Agarwal et al. 2006; Guan and Wu 2010; Xue et al. 2011; Oryspayev et al. 2012; Guan et al. 2013). Numerous software packages for handling LiDAR data are available in the research community (Varela-Gonzalez et al. 2013; Olsen et al. 2012). ...
Article
Full-text available
Terrestrial laser-scanning (TLS) techniques have been proven to be efficient tools for collecting three-dimensional high-density and high-accuracy point clouds for coastal research and resource management. However, TLS collects a massive number of surveying points. The processing and presenting of the large volumes of data sets is always a challenge for research when targeting a large area with high resolution. This article introduces a practical workflow using shell-scripting techniques to chain together tools from the Generic Mapping Tools (GMT), Geographic Resources Analysis Support System (GRASS), and other command-based open-source utilities for automating TLS data processing. TLS point clouds acquired in the beach and dune area near Freeport, Texas, in May 2015 were used for the case study. GMT is an open-source collection of programs designed for manipulating and displaying geographic data sets. Shell scripts for rotating the coordinate system, removing anomalous points, assessing data quality, generating high-accuracy bare-earth digital elevation models (DEMs), and quantifying changes of beach and sand dune features (shoreline, cross-shore section, dune ridge, toe, and volume) are presented in this article. This investigation indicated that GMT provides efficient and robust programs for regridding and filtering massive TLS point-cloud data sets, generating and displaying high-resolution DEMs, and, finally, producing publication-quality maps and graphs. The methods and scripts presented in this article will benefit a large research and application community of geomorphologists, geologists, geophysicists, engineers, and others who need to handle large volumes of topographic data sets and generate high-resolution DEMs.
... As a storage format in computers, raster is the most common form of terrain surface representation, and the vector-based triangulated irregular network (TIN) is a more efficient structure for elevation data with minimal variation [1]. The information extracted from a DEM over a region of interest (such as slope, aspect, curvature, or topographic index) is an essential input to many scientific and engineering applications: flood and drainage modeling, landform analysis, climate and meteorological studies, landscape modeling and visualization, and the creation of maps [2]. A DEM is typically acquired using remote sensing or direct surveying techniques such as photogrammetry, interferometry, laser surveying, and topographic surveying [3]. Because of its highly accurate and dense measurements of the earth's surface, light detection and ranging (LiDAR) has become a widely used technology for the acquisition of digital elevation data. ...
Article
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
... However, in general, this method has low accuracy because it does not consider the significance of each nominee point, nor the areas of exact concern for an application. D. Oryspayev et al. [5] used a vertex decimation algorithm to perform LIDAR data reduction and tested parallel computation frameworks for processing the data. With the aim of validating and verifying the algorithm, the authors used last returns only (LRO) and all returns (AR) points of LIDAR data taken from flat and undulating terrains. ...
Conference Paper
Full-text available
Light detection and ranging (LIDAR) is a remote imaging technology. Currently, it is the most important technology for acquiring high-density elevation points for digital elevation model (DEM) construction. However, the high-density data lead to time and memory consumption problems during data processing. In this paper, we rely on the radial basis function (RBF) method with a Gaussian kernel to carry out LIDAR data reduction, selecting the most important points from the unprocessed data so that the constructed DEMs remain as accurate as possible. The results are compared, in terms of accuracy measured with the Structural Similarity Index (SSIM), against the Multiquadric and TPS interpolation methods. The results show that the Gaussian method is the most accurate, at 5.49%, compared with the Multiquadric and TPS methods.
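The conference paper's own RBF pipeline is not available in this listing. As a point of reference, a Gaussian-kernel RBF surface over a set of retained LiDAR points can be built with SciPy roughly as follows; the function name, the epsilon value and the column layout are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_surface(selected_points, query_xy, epsilon=1.0):
    """Gaussian RBF surface through selected (x, y, z) LiDAR points, evaluated
    at an (m, 2) array of query locations; returns the m interpolated z-values."""
    sel = np.asarray(selected_points, dtype=float)
    rbf = RBFInterpolator(sel[:, :2], sel[:, 2], kernel="gaussian", epsilon=epsilon)
    return rbf(np.asarray(query_xy, dtype=float))
```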
... However, the huge point clouds and the high quality of the models required to represent this type of dataset have been a challenge in this knowledge area. The lack of integration between Geology, Geomatics and Computer Graphics has led researchers either to use non-specific applications to perform interpretations or to adapt interpretative routines to the available software solutions, which many times requires discarding data or even applying techniques for reducing the number of points (Oryspayev et al. 2012). ...
Article
Full-text available
The use of LiDAR-based models for studies of natural outcrops and surfaces has increased in the last few years. This technique has shown potential for representing three-dimensional data digitally, thus increasing the quality and amount of data available for interpretation by geoscientists. In computational terms, researchers face difficulties in handling the huge amount of data acquired by LiDAR systems. It is difficult to visualize the point cloud efficiently and convert it to high-quality digital models (DMs) with specific interpretation tools. Some in-house and commercial software solutions have been developed by research groups and industry, respectively. However, all solutions must treat the large database as the pain point of the project. Outcrop Explorer has been developed to manage large point clouds, to provide interpretation tools, and to allow integration with other applications through data exporting. In terms of software architecture, view-dependent level of detail (LOD) and a hierarchical space-partitioning structure in the form of an octree are integrated in order to optimize data access and to promote proper visualization and navigation in the DM. This paper presents a system developed for the visualization, handling and interpretation of digital models obtained from point clouds of LiDAR surveys. The system was developed considering free graphics resources, the necessities of geoscientists and the limitations of commercial tools for interpretation purposes. It provides an editing tool to remove noise or unnecessary portions of the point cloud and interpretation tools to identify lines and planes, as well as their orientations, and it has different exporting formats. However, being an open-source project, much more collaborative development is necessary.
... To further improve the scalability in handling very large LiDAR datasets researchers have begun to investigate various parallelization techniques to support large-scale LiDAR data processing. Oryspayev et al (2012), for example, developed a parallel algorithm based on multicore CPU and GPU (Graphics Processing Unit) to reduce the size of vertices in a triangulated irregular network (TIN) data generated from LiDAR data. Venugopal and Kannan (2013) used a GPU-based platform to speed up the performance of LiDAR data processing by ray triangle intersection. ...
Article
Full-text available
Light detection and ranging (LiDAR) data are essential for scientific discovery in fields such as the Earth and ecological sciences, environmental applications, and natural disaster response. While collecting LiDAR data over large areas is quite possible, the subsequent processing steps typically involve large computational demands. Efficiently storing, managing, and processing LiDAR data are the prerequisite steps for enabling these LiDAR-based applications. However, handling LiDAR data poses grand geoprocessing challenges due to data and computational intensity. To tackle such challenges, we developed a general-purpose scalable framework coupled with a sophisticated data decomposition and parallelization strategy to efficiently handle ‘big’ LiDAR data collections. The contributions of this research were (1) a tile-based spatial index to manage big LiDAR data in the scalable and fault-tolerant Hadoop distributed file system, (2) two spatial decomposition techniques to enable efficient parallelization of different types of LiDAR processing tasks, and (3) the coupling of existing LiDAR processing tools with Hadoop, so that a variety of LiDAR data processing tasks can be conducted in parallel in a highly scalable distributed computing environment using an online geoprocessing application. A proof-of-concept prototype is presented here to demonstrate the feasibility, performance, and scalability of the proposed framework.
... However, streaming algorithms are unable to reduce the time required for processing because of their inherently sequential processing scheme. A number of recent studies have considered leveraging the power of multicore and/or GPU (shared memory) platforms for processing LiDAR data for efficient DEM modeling (Guan and Wu, 2010;Oryspayev et al., 2012;Sten et al., 2016;Wu et al., 2011), or for 3D visualization (Bernardin et al., 2011;Li et al., 2013;Mateo Lázaro et al., 2014), although shared-memory platforms are also bounded in the amount of memory and the number of processing units. ...
Article
This paper presents a distributed approach that scales up to segment tree crowns within a LiDAR point cloud representing an arbitrarily large forested area. The approach uses a single-processor tree segmentation algorithm as a building block in order to process the data delivered in the shape of tiles in parallel. The distributed processing is performed in a master-slave manner, in which the master maintains the global map of the tiles and coordinates the slaves that segment tree crowns within and across the boundaries of the tiles. A minimal bias was introduced to the number of detected trees because of trees lying across the tile boundaries, which was quantified and adjusted for. Theoretical and experimental analyses of the runtime of the approach revealed a near linear speedup. The estimated number of trees categorized by crown class and the associated error margins as well as the height distribution of the detected trees aligned well with field estimations, verifying that the distributed approach works correctly. The approach enables providing information of individual tree locations and point cloud segments for a forest-level area in a timely manner, which can be used to create detailed remotely sensed forest inventories. Although the approach was presented for tree segmentation within LiDAR point clouds, the idea can also be generalized to scale up processing other big spatial datasets.
... However, the huge point clouds and the high quality of the models required to represent this type of dataset have been a challenge in this knowledge area. The lack of integration between Geology, Geomatics and Computer Graphics has led researchers either to use non-specific applications to perform interpretations or to adapt interpretative routines to the available software solutions, which many times requires discarding data or even applying techniques for reducing the number of points (Oryspayev et al. 2012). ...
Conference Paper
Full-text available
The use of LIDAR-based models for studies of natural outcrops and surfaces has increased in the last few years. This technique has shown potential for representing three-dimensional data digitally, thus increasing the quality and amount of data available for interpretation by geoscientists. In computational terms, researchers face difficulties in handling the huge amount of data acquired by LIDAR systems. It is difficult to visualize the point cloud efficiently and convert it to high-quality digital models (DMs) with specific interpretation tools. Some in-house and commercial software solutions have been developed by research groups and industry, respectively. However, all solutions must treat the large database as the pain point of the project. Outcrop Explorer has been developed to manage large point clouds, to provide interpretation tools, and to allow integration with other applications through data exporting. In terms of software architecture, view-dependent level of detail (LOD) and a hierarchical space-partitioning structure in the form of an octree are integrated in order to optimize data access and to promote proper visualization and navigation in the DM. This paper presents a system developed for the visualization, handling and interpretation of digital models obtained from point clouds of LIDAR surveys. The system was developed considering free graphics resources, the necessities of geoscientists and the limitations of commercial tools for interpretation purposes. It provides an editing tool to remove noise or unnecessary portions of the point cloud and interpretation tools to identify lines and planes, as well as their orientations, and it has different exporting formats. However, being an open-source project, much more collaborative development is necessary.
... Although this method performs very well, it is extremely time-consuming (Lee, 1991). Thus, it is impractical for reducing LiDAR data with huge data volumes, especially in a common computing environment (Oryspayev et al., 2012). ...
Article
A new greedy-based multiquadric method (MQ-G) has been developed to perform LiDAR-derived ground data reduction by selecting a certain amount of significant terrain points from the raw dataset to keep the accuracy of the constructed DEMs as high as possible, while maximally retaining terrain features. In the process of MQ-G, the significant terrain points were selected with an iterative process. First, the points with the maximum and minimum elevations were selected as the initial significant points. Next, a smoothing MQ was employed to perform an interpolation with the selected critical points. Then, the importance of all candidate points was assessed by interpolation error (i.e. the absolute difference between the interpolated and actual elevations). Lastly, the most significant point in the current iteration was selected and used for point selection in the next iteration. The process was repeated until the number of selected points reached a pre-set level or no point was found to have the interpolation error exceeding a user-specified accuracy tolerance. In order to avoid the huge computing cost, a new technique was presented to quickly solve the systems of MQ equations in the global interpolation process, and then the global MQ was replaced with the local one when a certain amount of critical points were selected. Four study sites with different morphologies (i.e. flat, undulating, hilly and mountainous terrains) were respectively employed to comparatively analyze the performances of MQ-G and the classical data selection methods including maximum z-tolerance (Max-Z) and the random method for reducing LiDAR-derived ground datasets. Results show that irrespective of the number of selected critical points and terrain characteristics, MQ-G is always more accurate than the other methods for DEM construction. Moreover, MQ-G has a better ability of preserving terrain feature lines, especially for the undulating and hilly terrains.
... The bulky size of LiDAR point cloud data and its complex file structure (especially for the foreseeable multi-/hyperspectral LiDAR waveform data) impose a certain computational burden. Recently, initiatives toward data compression (Lipuš and Žalik, 2012; Mongus and Žalik, 2011), data structure and file handling (Elseberg et al., 2013), high-performance computing frameworks (Han et al., 2009; Lee et al., 2011) and GPU-based processing (Lukač and Žalik, 2013; Oryspayev et al., 2012) have been addressed and researched. Some other attempts have also been made to use compressed LiDAR data for land cover classification (Toth et al., 2010) and digital 3D modeling (Jang et al., 2011). ...
Article
Full-text available
Distribution of land cover has a profound impact on the climate and environment; mapping the land cover patterns from global, regional to local scales are important for scientists and authorities to yield better monitoring of the changing world. Satellite remote sensing has been demonstrated as an efficient tool to monitor the land cover patterns for a large spatial extent. Nevertheless, the demand on land cover maps at a finer scale (especially in urban areas) has been raised with evidence by numerous biophysical and socio-economic studies. This paper reviews the small-footprint LiDAR sensor - one of the latest high resolution airborne remote sensing technologies, and its application on urban land cover classification. While most of the early researches focus on the analysis of geometric components of 3D LiDAR data point clouds, there has been an increasing interest in investigating the use of intensity data, waveform data and multi-sensor data to facilitate land cover classification and object recognition in urban environment. In this paper, the advancement of airborne LiDAR technology, including data configuration, feature spaces, classification techniques, and radiometric calibration/correction are reviewed and discussed. The review mainly focuses on the LiDAR studies conducted during the last decade with an emphasis on identification of the approach, analysis of pros and cons, investigating the overall accuracy of the technology, and how the classification results can serve as an input for different urban environmental analysis. Finally, several promising directions for future LiDAR research are highlighted, in hope that it will pave the way for the applications of urban environmental modeling and assessment at a finer scale and a greater extent.
... Ortega and Rueda (2010) used CUDA to detect drainage networks for identifying river networks from high-resolution digital elevation models (DEMs). To process massive volumes of Airborne Light Detection And Ranging (LiDAR) data (e.g., vertex decimation for data reduction), scientists have been combining GPUs with CPUs to augment the capabilities of these two types of computing devices (Oryspayev et al., 2011). ...
Article
Visualizing 3D/4D environmental data is critical to understanding and predicting environmental phenomena for relevant decision making. This research explores how to best utilize graphics process units (GPUs) and central processing units (CPUs) collaboratively to speed up a generic geovisualization process. Taking the visualization of dust storms as an example, we developed a systematic 3D/4D geovisualization framework including preprocessing, coordinate transformation interpolation, and rendering. To compare the potential speedup of using GPUs versus that of using CPUs, we have implemented visualization components based on both multi-core CPUs and many-core GPUs. We found that (1) multi-core CPUs and many-core GPUs can improve the efficiency of mathematical calculations and rendering using multithreading techniques; (2) given the same amount of data, when increasing the size of blocks of GPUs for coordinate transformation, the executing time of interpolation and rendering drops consistently after reaching a peak; (3) the best performances obtained by GPU-based implementations in all the three major processes, are usually faster than CPU-based implementations whereas the best performance of rendering with GPUs is very close to that with CPUs; and (4) as the GPU on-board memory limits the capabilities of processing large volume data, preprocessing data with CPUs is necessary when visualizing large volume data which exceed the on-board memory of GPUs. However, the efficiency may be significantly hampered by the relative high-latency of the data exchange between CPUs and GPUs. Therefore, visualization of median size 3D/4D environmental data using GPUs is a better solution than that of using CPUs.
... Spatial processing using CUDA is also present in astrophysical data analysis (Jin et al., 2010). Oryspayev et al. (2012) proposed a GPU-based and multicore CPU-based method for LiDAR data vertex decimation, where the vertices are represented by a triangulated irregular network (TIN). Steinbach and Hammerling (2012) presented a GPU-based acceleration for raster operations during the batch-processing of raster data in GIS. ...
Article
Full-text available
Solar potential estimation using LiDAR data is an efficient approach for finding roofs suitable for photovoltaic system installations. As the amount of LiDAR data increases, non-parallel methods take considerable time to accurately estimate solar potential. Although supercomputing provides a possible solution, it is still too expensive and thus infeasible for general usage. Fortunately, recent graphics processing units (GPUs) can be utilized to ensure fast computations. This paper proposes a novel method for fast solar potential estimation using GPU-based CUDA technology. The method employs LiDAR data, irradiance measurements, multiresolution shadowing from solid objects, and heuristic shadowing from vegetation. Experimental results demonstrate the method's effectiveness in comparison with a multi-core CPU-based approach.
Article
LiDAR products are provided at fine spatial resolutions and the data volume can be huge even for a small study region. Therefore, we have developed a parallel computing toolset that is built on Graphics Processing Units (GPUs) computing techniques to speed up the computational processes on LiDAR products. The toolset provides a set of fundamental processing functions for LiDAR point cloud data, serving as a basic toolkit to derive terrain data products. With this toolset, scientists with limited access to high-end computing facilities can still perform efficient analysis of LiDAR products without dealing with the technical complexity of developing and deploying tools for these products. We have integrated data decomposition methods to handle files that exceed the memory capacity of GPU devices. Preliminary results show that GPU-based implementation yields high speedup ratios and can handle files with a maximum size of 8 GB.
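The data decomposition mentioned in the abstract above can be illustrated with a minimal sketch. The following Python fragment is not the toolset's code; it simply streams a hypothetical raw binary x/y/z point file in chunks sized to a device-memory budget and processes each chunk in turn (the per-chunk function stands in for a GPU kernel launch).

```python
import numpy as np

# Illustrative only: file layout and chunk size are assumptions, not the toolset's.
CHUNK_POINTS = 5_000_000  # points per chunk; tune to the GPU's on-board memory

def process_chunk(xyz: np.ndarray) -> np.ndarray:
    # Stand-in for a GPU kernel launch, e.g. a per-point terrain derivative.
    return xyz[:, 2] - xyz[:, 2].min()

def process_large_cloud(path: str) -> np.ndarray:
    results = []
    with open(path, "rb") as f:
        while True:
            raw = np.fromfile(f, dtype=np.float64, count=CHUNK_POINTS * 3)
            if raw.size == 0:
                break
            results.append(process_chunk(raw.reshape(-1, 3)))
    return np.concatenate(results) if results else np.empty(0)
```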
Conference Paper
Full-text available
3D laser technology is widely used to collect surface information about objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points at a fixed scale. However, the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. At each scale, important points are extracted from the point cloud according to their importance. We apply a perceptual metric, Just-Noticeable-Difference, to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.
Article
Full-text available
Airborne laser scanning (lidar) point clouds can be processed to extract tree-level information over large forested landscapes. Existing procedures typically detect more than 90% of overstory trees, yet they barely detect 60% of understory trees because of the reduced number of lidar points penetrating the top canopy layer. Although understory trees provide limited financial value, they offer habitat for numerous wildlife species and are important for stand development. Here we model tree identification accuracy according to point cloud density by decomposing the lidar point cloud into overstory and multiple understory canopy layers, estimating the fraction of points representing the different layers, and inspecting tree identification accuracy as a function of point density. We show that at a density of about 170 pt/m², understory tree identification accuracy likely plateaus, which we regard as the point density required for reasonable identification of understory trees. Given the advancements of lidar sensor technology, point clouds can feasibly reach the required density to enable effective identification of individual understory trees, ultimately making remote quantification of forest resources more accurate. The layer decomposition methodology can also be adopted for other similar remote sensing or advanced imaging applications such as geological subsurface modelling or biomedical tissue analysis.
Article
In recent years, general-purpose graphics processing units (GP-GPUs) have steadily risen in popularity for remote sensing data processing. Interest has been growing in using hybrid GPU/CPU architectures to realize the full potential of computing devices. This paper studies LiDAR data preprocessing, which is a typical data-intensive remote sensing application. It is proposed to develop an online task scheduler for hybrid GPU/CPU systems using reinforcement learning. At the core of the task scheduler is a Q-learning module that can create the optimal task execution path according to rewards accumulated over time. Constraints and preferences are also encapsulated in the scheduler to support automatic online resource scheduling. Quantitative evaluation on a typical LiDAR data set demonstrates the usefulness and potential of this online task scheduling approach for remote sensing applications.
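The abstract above centers on a Q-learning scheduler. As a rough, hedged sketch of that idea (not the authors' implementation), the tabular update below chooses between hypothetical "cpu" and "gpu" placements for each task, with the reward taken as, for example, the negative task completion time.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for CPU/GPU task placement (illustrative).
ACTIONS = ["cpu", "gpu"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = defaultdict(float)                   # Q[(state, action)] -> expected reward

def choose_action(state):
    # Epsilon-greedy selection over the two placement actions.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Standard Q-learning update; reward could be the negative task runtime.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```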
Article
The increasingly available graphics processing units (GPU) hardware and the emerging general purpose computing on GPU (GPGPU) technologies provide an attractive solution to high-performance geospatial computing. In this study, we have proposed a parallel, primitive-based approach to quadtree construction by transforming a multidimensional geospatial computing problem into chaining a set of generic parallel primitives that are designed for one-dimensional (1D) arrays. The proposed approach is largely data-independent and can be efficiently implemented on GPGPUs. Experiments on 4096 × 4096 and 16384 × 16384 raster tiles have shown that the implementation can complete the quadtree constructions in 13.33 ms and 250.75 ms, respectively, on average on an NVIDIA GPU device. Compared with an optimized serial CPU implementation based on the traditional recursive depth-first search (DFS) tree traversal schema that requires 1191.87 ms on 4096 × 4096 raster tiles, a significant speedup of nearly 90X has been observed. The performance of the GPU-based implementation also suggests that an indexing rate in the order of more than one billion raster cells per second can be achieved on commodity GPU devices.
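One way to see how quadtree construction maps onto one-dimensional array primitives is through Z-order (Morton) keys, which linearize quadrant membership so that generic sorting and scanning primitives can group cells by node. The sketch below is only a much-simplified, CPU-side illustration of that idea in NumPy, not the paper's GPU implementation.

```python
import numpy as np

# Interleave row/column bits into Morton (Z-order) keys; cells sorted by key
# are grouped exactly as a quadtree traversal would visit them.
def morton_keys(rows: np.ndarray, cols: np.ndarray, bits: int = 10) -> np.ndarray:
    keys = np.zeros(rows.shape, dtype=np.int64)
    for b in range(bits):
        keys |= ((cols >> b) & 1) << (2 * b)
        keys |= ((rows >> b) & 1) << (2 * b + 1)
    return keys

n = 1024  # illustrative tile size (10 bits per axis)
r, c = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
order = np.argsort(morton_keys(r.ravel(), c.ravel()))  # Z-order cell ordering
```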
Article
The increasingly available Graphics Processing Units (GPU) hardware resources and the emerging General Purpose computing on GPU (GPGPU) technologies provide an alternative and complementary solution to existing cluster-based high-performance geospatial computing. However, the complexities of the unique GPGPU hardware architectures and the steep learning curve of GPGPU programming have imposed significant technical challenges on the geospatial computing community in developing efficient parallel geospatial data structures and algorithms that can make full use of the hardware capabilities to solve ever-growing, large and complex real-world geospatial problems. In this study, we propose a practical approach to simplifying high-performance geospatial computing on GPGPUs by using parallel primitives. We take quadtree construction on large-scale geospatial rasters as a case study to demonstrate the effectiveness of the proposed approach. Comparing the proposed parallel-primitives-based implementation with a naïve CUDA implementation, a significant reduction in coding complexity and a 10X speedup have been achieved. We believe that GPGPU-based software development using generic parallel primitives can be a first step towards developing geospatial-specific and more efficient parallel primitives for high-performance geospatial computing in both personal and cluster computing environments and can boost the performance of geospatial cyberinfrastructure.
Conference Paper
Lately, acquiring large quantities of three-dimensional (3-D) spatial data, particularly topographic information, has become commonplace with the advent of new technologies and techniques such as laser scanning or light detection and ranging (LiDAR). Though the pace of massive 3-D spatial data collection is accelerating both in the USA and around the globe, the provision of affordable technology for dealing with issues such as processing, management, archival, dissemination, and analysis of the huge data volumes has lagged behind. Single computers and generic high-end computing are not sufficient to process these massive data, and researchers have started to explore other computing environments. Recently, cloud computing environments have shown very promising solutions due to their availability and affordability. The main goal of this paper is to develop a web-based LiDAR data processing framework called the "Cloud Computing-based LiDAR Processing System (CLiPS)" to process massive LiDAR data using a cloud computing environment. The CLiPS framework was implemented using ESRI's ArcGIS Server, Amazon Elastic Compute Cloud (Amazon EC2), and several open-source spatial tools. Applications developed in this project include: 1) preprocessing tools for LiDAR data, 2) generation of large-area Digital Elevation Models (DEMs) in the cloud environment, and 3) user-driven DEM-derived products. We used three different terrain types, LiDAR tile sizes, and EC2 instance types (large, Xlarge, and double Xlarge) to test for time and cost comparisons. Undulating terrain data took more time than the other two terrain types used in this study, and the overall cost for the entire project was less than $100.
Article
Full-text available
The graphics processing unit (GPU) has become an integral part of today's mainstream computing systems. Over the past six years, there has been a marked increase in the performance and capabilities of GPUs. The modern GPU is not only a powerful graphics engine but also a highly parallel programmable processor featuring peak arithmetic and memory bandwidth that substantially outpaces its CPU counterpart. The GPU's rapid increase in both programmability and capability has spawned a research community that has successfully mapped a broad range of computationally demanding, complex problems to the GPU. This effort in general-purpose computing on the GPU, also known as GPU computing, has positioned the GPU as a compelling alternative to traditional microprocessors in high-performance computer systems of the future. We describe the background, hardware, and programming model for GPU computing, summarize the state of the art in tools and techniques, and present four GPU computing successes in game physics and computational biophysics that deliver order-of-magnitude performance gains over optimized CPU applications.
Article
Full-text available
The effects of land cover and surface slope on lidar-derived elevation data were examined for a watershed in the piedmont of North Carolina. Lidar data were collected over the study area in a winter (leaf-off) overflight. Survey-grade elevation points (1,225) for six different land cover classes were used as reference points. Root mean squared error (RMSE) for land cover classes ranged from 14.5 cm to 36.1 cm. Land cover with taller canopy vegetation exhibited the largest errors. The largest mean error (36.1 cm RMSE) was in the scrub-shrub cover class. Over the small slope range (0° to 10°) in this study area, there was little evidence for an increase in elevation error with increased slopes. However, for low grass land cover, elevation errors do increase in a consistent manner with increasing slope. Slope errors increased with increasing surface slope, under-predicting true slope on surface slopes ≥ 2°. On average, the lidar-derived elevation under-predicted true elevation regardless of land cover category. The under-prediction was significant, and ranged up to −23.6 cm under pine land cover.
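For reference, the RMSE reported per land cover class above follows its standard definition (notation mine, not the authors'):

\[ \mathrm{RMSE}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z_i^{\text{lidar}}-z_i^{\text{ref}}\right)^{2}} \]

where \(z_i^{\text{lidar}}\) is the lidar-derived elevation at check point \(i\) and \(z_i^{\text{ref}}\) is the surveyed reference elevation.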
Article
Full-text available
Airborne laser scanning of forests has been shown to provide accurate terrain models and, at the same time, estimates of multiple resource inventory variables through active sensing of three-dimensional (3D) forest vegetation. Brief overviews of airborne laser scanning technology [often referred to as “light detection and ranging” (LIDAR)] and research findings on its use in forest measurement and monitoring are presented. Currently, many airborne laser scanning missions are flown with specifications designed for terrain mapping, often resulting in data sets that do not contain key information needed for vegetation measurement. Therefore, standards and specifications for airborne laser scanning missions are needed to ensure their usefulness for vegetation measurement and monitoring, rather than simply terrain mapping (e.g., delivery of all return data with reflection intensity). Five simple, easily understood LIDAR-derived forest data products are identified that would help ensure that forestry needs are considered when multiresource LIDAR missions are flown. Once standards are developed, there is an opportunity to maximize the value of permanent ground plot remeasurements by also collecting airborne laser data over a limited number of plots each year.
Article
Full-text available
In recent years, three-dimensional (3D) data has become increasingly available, in part as a result of significant technological progress in Light Detection and Ranging (LiDAR). LiDAR provides longitude and latitude information delivered in conjunction with a GPS device, and elevation information generated by a pulse or phase laser scanner, which together provide an effective way of acquiring accurate 3D information about a terrestrial or man-made feature. The main advantages of LiDAR over conventional surveying methods lie in the high accuracy of the data and the relatively little time needed to scan large geographical areas. LiDAR scans provide a vast number of data points that result in especially rich, complex point clouds. Spatial Information Systems (SISs) are critical to the hosting, querying, and analyzing of such spatial data sets. Feature-rich SISs have been well documented. However, the implementation of support for 3D capabilities in such systems is only now being addressed. This paper analyzes shortcomings of current technology and discusses research efforts to provide support for the querying of 3D data records in SISs.
Article
Full-text available
Computer graphics applications routinely generate geometric models consisting of large numbers of triangles. We present an algorithm that significantly reduces the number of triangles required to model a physical or abstract object. The algorithm makes multiple passes over an existing triangle mesh, using local geometry and topology to remove vertices that pass a distance or angle criterion. The holes left by the vertex removal are patched using a local triangulation process. The decimation algorithm has been implemented in a general scientific visualization system as a general network filter. Examples from volume modeling and terrain modeling illustrate the results of the decimation algorithm.
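A rough sketch of the distance-to-plane test that drives this kind of decimation is given below. It is not the published algorithm: the plane here is estimated from the one-ring neighbours by a least-squares fit rather than from area-weighted triangle normals, and the re-triangulation of the resulting hole is omitted.

```python
import numpy as np

# Decide whether a "simple" interior vertex is a removal candidate: measure its
# distance to a plane fitted through its one-ring neighbours and compare to a
# tolerance. Helper names and the tolerance value are illustrative.
def distance_to_fitted_plane(vertex: np.ndarray, ring: np.ndarray) -> float:
    centroid = ring.mean(axis=0)
    _, _, vt = np.linalg.svd(ring - centroid)   # last right-singular vector is
    normal = vt[-1]                             # the least-squares plane normal
    return float(abs(np.dot(vertex - centroid, normal)))

def is_removal_candidate(vertex, ring, tol=0.05):
    return distance_to_fitted_plane(np.asarray(vertex), np.asarray(ring)) < tol

# Example: a vertex barely above the plane of its four neighbours is removable.
ring = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
print(is_removal_candidate([0.5, 0.5, 0.01], ring))   # True for tol = 0.05
```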
Article
Full-text available
In this study, a parallel processing method using a PC cluster and a virtual grid is proposed for the fast processing of enormous amounts of airborne laser scanning (ALS) data. The method creates a raster digital surface model (DSM) by interpolating point data with inverse distance weighting (IDW), and produces a digital terrain model (DTM) by local minimum filtering of the DSM. To make a consistent comparison of performance between the sequential and parallel processing approaches, the means of dealing with boundary data and of selecting interpolation centers were controlled for each processing node in the parallel approach. To test the speedup, efficiency and linearity of the proposed algorithm, actual ALS data of up to 134 million points were processed with a PC cluster consisting of one master node and eight slave nodes. The results showed that parallel processing provides better performance when the computational overhead, the number of processors, and the data size become large. It was verified that the proposed algorithm is a linear time operation and that the products obtained by parallel processing are identical to those produced by sequential processing.
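The two processing steps described above can be sketched compactly. The fragment below is a simplified serial illustration (window size, power, and grid handling are my assumptions), not the parallel PC-cluster implementation.

```python
import numpy as np

# Step 1: inverse distance weighting (IDW) of scattered points to one grid cell.
def idw(cell_xy, pts_xy, pts_z, power=2.0, eps=1e-12):
    d = np.linalg.norm(pts_xy - cell_xy, axis=1)
    w = 1.0 / (d ** power + eps)               # eps guards against a zero distance
    return float(np.sum(w * pts_z) / np.sum(w))

# Step 2: local minimum filtering of the DSM to approximate a ground surface.
def local_minimum_filter(dsm: np.ndarray, window: int = 3) -> np.ndarray:
    pad = window // 2
    padded = np.pad(dsm, pad, mode="edge")
    out = np.empty_like(dsm)
    for i in range(dsm.shape[0]):
        for j in range(dsm.shape[1]):
            out[i, j] = padded[i:i + window, j:j + window].min()
    return out
```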
Article
Full-text available
Traditional field-based lithological mapping can be a time-consuming, costly and challenging endeavour when large areas need to be investigated, where terrain is remote and difficult to access and where the geology is highly variable over short distances. Consequently, rock units are often mapped at coarse-scales, resulting in lithological maps that have generalised contacts which in many cases are inaccurately located. Remote sensing data, such as aerial photographs and satellite imagery are commonly incorporated into geological mapping programmes to obtain geological information that is best revealed by overhead perspectives. However, spatial and spectral limitations of the imagery and dense vegetation cover can limit the utility of traditional remote sensing products. The advent of Airborne Light Detection And Ranging (LiDAR) as a remote sensing tool offers the potential to provide a novel solution to these problems because accurate and high-resolution topographic data can be acquired in either forested or non-forested terrain, allowing discrimination of individual rock types that typically have distinct topographic characteristics. This study assesses the efficacy of airborne LiDAR as a tool for detailed lithological mapping in the upper section of the Troodos ophiolite, Cyprus. Morphometric variables (including slope, curvature and surface roughness) were derived from a 4 m digital terrain model in order to quantify the topographic characteristics of four principal lithologies found in the area. An artificial neural network (the Kohonen Self-Organizing Map) was then employed to classify the lithological units based upon these variables. The algorithm presented here was used to generate a detailed lithological map which defines lithological contacts much more accurately than the best existing geological map. In addition, a separate map of classification uncertainty highlights potential follow-up targets for ground-based verification. The results of this study demonstrate the significant potential of airborne LiDAR for lithological discrimination and rapid generation of detailed lithological maps, as a contribution to conventional geological mapping programmes.
Article
Full-text available
This paper reviews LiDAR ground filtering algorithms used in the process of creating Digital Elevation Models. We discuss critical issues for the development and application of LiDAR ground filtering algorithms, including filtering procedures for different feature types, and criteria for study site selection, accuracy assessment, and algorithm classification. This review highlights three feature types for which current ground filtering algorithms are suboptimal, and which can be improved upon in future studies: surfaces with rough terrain or discontinuous slope, dense forest areas that laser beams cannot penetrate, and regions with low vegetation that is often ignored by ground filters.
Article
Full-text available
This paper discusses the use of Airborne Light Detection And Ranging (LiDAR) equipment for terrain navigation. Airborne LiDAR is a relatively new technology used primarily by the geo-spatial mapping community to produce highly accurate and dense terrain elevation maps. In this paper, the term LiDAR refers to a scanning laser ranger rigidly mounted to an aircraft, as opposed to an integrated sensor system that consists of a scanning laser ranger integrated with Global Positioning System (GPS) and Inertial Measurement Unit (IMU) data. Data from the laser range scanner and IMU will be integrated with a terrain database to estimate the aircraft position and data from the laser range scanner will be integrated with GPS to estimate the aircraft attitude. LiDAR data was collected using NASA Dryden's DC-8 flying laboratory in Reno, NV and was used to test the proposed terrain navigation system. The results of LiDAR-based terrain navigation shown in this paper indicate that airborne LiDAR is a viable technology enabler for fully autonomous aircraft navigation. The navigation performance is highly dependent on the quality of the terrain databases used for positioning and therefore high-resolution (2 m post-spacing) data was used as the terrain reference.
Article
Multiresolution modeling provides an abstraction for representing, manipulating, and visualizing large volumes of spatial data at multiple levels of detail and accuracy. In geographic information systems (GIS), a coarse representation can be used to describe less relevant areas of a terrain, while high resolution can be focused on specific parts of interest. This work presents a method for constructing a sequence of triangular meshes in the context of terrain modeling, where meshes are created through a set of simplification and refinement operations while preserving relevant terrain features. The method has been tested on several terrain data sets. Keywords: surface approximation, refinement methods, mesh simplification, triangulation, multiple levels of detail.
Article
Linear interpolation of irregularly spaced LIDAR elevation data sets is needed to develop realistic spatial models. We evaluated inverse distance weighting (IDW) and ordinary kriging (OK) interpolation techniques and the effects of LIDAR data density on the statistical validity of the linear interpolators. A series of 10 forested 1000‐ha LIDAR tiles on the Lower Coastal Plain of eastern North Carolina was used. An exploratory analysis of the spatial correlation structure of the LIDAR data set was performed. Weighted non‐linear least squares (WNLS) analysis was used to parameterize best‐fit theoretical semivariograms on the empirical data. Tile data were sequentially reduced through random selection of a predetermined percentage of the original LIDAR data set, resulting in data sets with 50%, 25%, 10%, 5% and 1% of their original densities. Cross‐validation and independent validation procedures were used to evaluate root mean square error (RMSE) and kriging standard error (SE) differences between interpolators and across density sequences. Review of errors indicated that LIDAR data sets could withstand substantial data reductions yet maintain adequate accuracy (30 cm RMSE; 50 cm SE) for elevation predictions. The results also indicated that simple interpolation approaches such as IDW could be sufficient for interpolating irregularly spaced LIDAR data sets.
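In compact form, the two interpolators compared above are the IDW estimator and ordinary kriging with a fitted semivariogram. A typical statement (my notation; the abstract does not fix a particular semivariogram model) is

\[ \hat{z}(x_0)=\frac{\sum_{i=1}^{n} d_{i0}^{-p}\, z(x_i)}{\sum_{i=1}^{n} d_{i0}^{-p}}, \qquad \gamma(h)=c_0+c\!\left[\frac{3h}{2a}-\frac{1}{2}\left(\frac{h}{a}\right)^{3}\right],\ 0<h\le a, \]

where \(d_{i0}\) is the distance from sample \(i\) to the prediction location and \(p\) the power; the spherical model on the right (nugget \(c_0\), partial sill \(c\), range \(a\)) is one of the theoretical semivariograms commonly fitted by weighted non-linear least squares, after which the ordinary kriging weights follow from the kriging system built from \(\gamma\).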
Article
Airborne laser scanning represents a new and independent technology for the highly automated generation of digital terrain models (DTM) and surface models. The described technical features of airborne laser scanning outline the present fields of application. The primary application concerns the generation of high-quality topographic DTMs, described by mostly regular grid patterns. It is a unique advantage of airborne laser scanning that it is equally applicable to open terrain as well as to areas which are partly or completely covered by forest or other vegetation. Naturally, the interactive editing effort in the latter case is higher. Another important application of laser scanning concerns the generation of DTMs in coastal areas or wetlands.
Article
Methods and other embodiments associated with performing an in-memory triangulation of an arbitrarily large data set are described. One example method for performing in-memory triangulation of an arbitrarily large data set includes striping a data set into multiple stripes, selecting a first stripe to triangulate, and then performing an in-memory triangulation on the stripe. The method may also include removing certain triangles from the triangulated irregular network produced by the triangulation, merging another stripe with the leftover data, and repeating the process until the arbitrarily large data set has been triangulated piece-by-piece, with the triangulations occurring in memory.
Article
The TopoSys laser scanner system is designed to produce digital elevation models (DEMs) at a maximum accuracy of 0.5 m in x and y and 0.1 m in z. The regular scan pattern and the measurement frequency of 80 000 measurements per second (on average 5 measurements per m2) form the basis for high quality DEMs. The mainly automated data processing makes it possible to generate DEMs of large areas in a short production time. The DEMs produced come into common use as basic data for different applications, some of which are water resources management, shoreline control, planning of utility lines and urban planning (simulation of noise and pollution distributions). The performance of the system is illustrated with the help of DEM sections produced with the TopoSys system.
Article
Digital Elevation Models (DEMs) play an important role in terrain-related applications, and their accuracy is crucial for DEM applications. Many factors affect the accuracy of DEMs, the main ones being the accuracy, density and distribution of the source data, the interpolation algorithm, and the DEM resolution. Generally speaking, the more accurate and the denser the sampled terrain data are, the more accurate the produced DEM will be. Traditional methods such as field surveying and photogrammetry can yield high-accuracy terrain data, but are very time consuming and labour intensive. Moreover, in some situations, such as in densely forested areas, it is impossible to use these methods for collecting elevation data. Light Detection and Ranging (LiDAR) offers high-density data capture. The high-accuracy three-dimensional terrain points prerequisite to generating very detailed high-resolution DEMs offer exciting prospects to DEM builders. However, because there is no sampling density selection for different areas during a LiDAR data collection mission, some terrain may be oversampled, thereby increasing data storage requirements and processing time. Improved efficiency in these terms can accrue if redundant data can be identified and eliminated from the input data set. With a reduction in data, a more manageable and operationally sized terrain dataset for DEM generation is possible (Anderson et al., 2005a). The primary objective of data reduction is to achieve an optimum balance between density of sampling and volume of data, hence optimizing the cost of data collection (Robinson, 1994). Some studies on terrain data reduction have been conducted based on analysis of the effects of data reduction on the accuracy of DEMs and derived terrain attributes. For example, Anderson et al. (2005b) evaluated the effects of LiDAR data density on the production of DEMs at different resolutions. They produced a series of DEMs at different horizontal resolutions along a LiDAR point-density gradient, and then compared each of these DEMs to a reference DEM produced from the original LiDAR data, this having been acquired at the highest available density. Their results showed that higher-resolution DEM generation is more sensitive to data density than lower-resolution DEM generation. It was also demonstrated that LiDAR datasets could withstand substantial data reductions yet still maintain adequate accuracy for elevation predictions (Anderson et al., 2005a). This study explored the effects of LiDAR point density on DEM accuracy and examined the scope for data volume reduction compatible with maintaining efficiency in data storage and processing. Something of the relationship between data density, data file size, and processing time also emerges from this study. The study area (113 km²) falls within the Corangamite Catchment Management Authority (CCMA) region in south-western Victoria, Australia. LiDAR data points were first randomly selected and separated into two datasets: 90% for training data and 10% for check points. The training dataset was then reduced to produce a series of datasets with different data densities, representing 100%, 75%, 50%, 25%, 10%, 5%, and 1% of the original training dataset. The reduced datasets were used to produce corresponding DEMs at 5 m resolution. Results show that there is no significant difference in DEM accuracy if data points are reduced to 50% of the original point density. Processing time for DEM generation can thus be reduced to half of the time needed when using the original dataset.
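A minimal sketch of the density-reduction experiment reads as follows; `build_dem` is a hypothetical stand-in for whatever interpolator produces the 5 m DEM, and the fractions mirror those listed above.

```python
import numpy as np

rng = np.random.default_rng(42)

def thin(points: np.ndarray, fraction: float) -> np.ndarray:
    # Randomly retain the requested fraction of the training points.
    return points[rng.random(len(points)) < fraction]

def rmse(predicted: np.ndarray, observed: np.ndarray) -> float:
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# for f in (1.0, 0.75, 0.5, 0.25, 0.10, 0.05, 0.01):
#     dem = build_dem(thin(training_points, f), resolution=5.0)  # hypothetical
#     print(f, rmse(dem.sample(check_xy), check_z))
```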
Article
One of the main advantages of airborne laser scanner systems is the high degree of detail with which a portion of the land can be mapped. This over-detailed description is caused by the high number of acquired points, which makes it easier to identify objects and to model the topography. However, the large number of collected points becomes redundant in flat regions, where fewer points are needed to describe the surface. An algorithm aimed primarily at reducing the number of points within a TIN model produced from LIDAR data was implemented in C# using ArcObjects and both the 3D and Spatial Analyst ArcGIS extensions. The method is based on the faces of the triangulation, where redundant points are eliminated by a neighborhood vertex importance analysis. The results obtained with different thresholds are presented, and map algebra calculations on rasters created with two generalized subsets are used for evaluation.
Article
This tutorial paper gives an introduction and overview of various topics related to airborne laser scanning (ALS) as used to measure range to and reflectance of objects on the earth surface. After a short introduction, the basic principles of laser, the two main classes, i.e., pulse and continuous-wave lasers, and relations with respect to time-of-flight, range, resolution, and precision are presented. The main laser components and the role of the laser wavelength, including eye safety considerations, are explained. Different scanning mechanisms and the integration of laser with GPS and INS for position and orientation determination are presented. The data processing chain for producing digital terrain and surface models is outlined. Finally, a short overview of applications is given.
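The time-of-flight relations referred to above are, in their simplest pulsed form (standard results, not this tutorial's specific notation),

\[ R=\frac{c\,t}{2},\qquad \Delta R=\frac{c\,\Delta t}{2}, \]

where \(R\) is the range, \(c\) the speed of light, \(t\) the round-trip travel time of the pulse, and \(\Delta R\) the range resolution corresponding to a timing resolution \(\Delta t\).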
Article
A comparison between data acquisition and processing from passive optical sensors and airborne laser scanning is presented. A short overview and the major differences between the two technologies are outlined. Advantages and disadvantages with respect to various aspects are discussed, like sensors, platforms, flight planning, data acquisition conditions, imaging, object reflectance, automation, accuracy, flexibility and maturity, production time and costs. A more detailed comparison is presented with respect to DTM and DSM generation. Strengths of laser scanning with respect to certain applications are outlined. Although airborne laser scanning competes to a certain extent with photogrammetry and will replace it in certain cases, the two technologies are fairly complementary and their integration can lead to more accurate and complete products, and open up new areas of application.
Article
Today, the surface of our planet can be sampled extremely accurately. Also, the need for information about the topography within specified regions is becoming increasingly important in geographic information systems. Meaning can often only be extracted from certain data in combination with digital terrain models. In particular, visualization techniques and image mapping methods in geographic information systems require this kind of information. Very densely sampled grids of height data are now available; they are too dense in many areas of the terrain, since most sampling techniques are nonadaptive. We have developed algorithms to cope with the complexity of such digital terrain models. They analyze each given model and reduce the number of points while preserving the accuracy as well as possible. In our research, we compared three basic approaches and implemented methods to minimize emerging errors. This paper describes the necessary steps for reducing regularly sampled height grids or given triangular meshes to meet specified quality or quantity criteria.
Conference Paper
We present a method for solving the following problem: Given a set of data points scattered in three dimensions and an initial triangular mesh M0, produce a mesh M, of the same topological type as M0 , that fits the data well and has a small number of vertices. Our approach is to minimize an energy function that explicitly models the competing desires of conciseness of representation and fidelity to the data. We show that mesh optimization can be effectively used in at least two applications: surface reconstruction from unorganized points, and mesh simplification (the reduction of the number of vertices in an initially dense mesh of triangles).
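The energy minimized in mesh optimization of this kind is usually written as a sum of competing terms; hedging on exact constants and notation, it has the form

\[ E(K,V)=E_{\text{dist}}(K,V)+E_{\text{rep}}(K)+E_{\text{spring}}(K,V), \]

where \(E_{\text{dist}}\) sums squared distances from the data points to the mesh, \(E_{\text{rep}}\) penalizes the number of vertices (favouring conciseness of representation), and \(E_{\text{spring}}\) is a regularizing term over mesh edges that keeps the optimization well behaved.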
Conference Paper
Cloud computing is increasingly considered an additional computational resource platform for scientific workflows. The cloud offers the opportunity to scale out applications from desktops and local cluster resources. Each platform has different properties (e.g., queue wait times in high-performance systems, virtual machine startup overhead in clouds) and characteristics (e.g., custom environments in the cloud) that make choosing among these diverse resource platforms for a workflow execution a challenge for scientists. Scientists are often faced with deciding resource platform selection trade-offs with limited information on the actual workflows. While many workflow planning methods have explored resource selection or task scheduling, these methods often require fine-scale characterization of the workflow that is onerous for a scientist. In this paper, we describe our early exploratory work in using blackbox characteristics for a cost-benefit analysis of using different resource platforms. In our blackbox method, we use only limited high-level information on the workflow length, width, and data sizes. The length and width are indicative of the workflow duration and parallelism. We compare the effectiveness of this approach to other resource selection models using two exemplar scientific workflows on desktop, local cluster, HPC center, and cloud platforms. Early results suggest that the blackbox model often makes the same resource selections as a more fine-grained whitebox model. We believe the simplicity of the blackbox model can help inform a scientist about the applicability of a new resource platform, such as cloud resources, even before porting an existing workflow.
Article
We study the accuracy of data on some local topographic attributes derived from digital elevation models (DEMs). First, we carry out a test of the precision of four methods for calculating partial derivatives of elevation. We found that the Evans method is the most precise algorithm of this kind. Second, we produce formulae for the root mean square errors of four local topographic variables (gradient, aspect, and horizontal and vertical land-surface curvatures), provided that these variables are evaluated with the Evans method. Third, we demonstrate that mapping is the most convenient and pictorial way for the practical implementation of the formulae derived. A DEM of a part of the Kursk Region (Russia) is used as an example. We find that high errors in data on local topographic variables are typical for flat areas. The results of the study can be used to improve landscape investigations with digital terrain models.
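With \(p=\partial z/\partial x\) and \(q=\partial z/\partial y\) estimated from the DEM (for example by the Evans method), the first two attributes studied above are computed as

\[ G=\arctan\sqrt{p^{2}+q^{2}},\qquad A=\arctan\!\left(\frac{q}{p}\right) \]

(with the usual quadrant handling for aspect); the horizontal and vertical curvatures are likewise functions of the first and second partial derivatives, which is why the error formulae propagate the derivative errors through these expressions.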
Article
For representation of terrain, an efficient alternative to dense grids is the Triangulated Irregular Network (TIN), which represents a surface as a set of non-overlapping contiguous triangular facets of irregular size and shape. The source of digital terrain data is increasingly dense raster models produced by automated orthophoto machines or by direct sensors such as synthetic aperture radar. A method is described for automatically extracting a TIN model from dense raster data. An initial approximation is constructed by automatically triangulating a set of feature points derived from the raster model. The method works by local incremental refinement of this model through the addition of new points until a uniform approximation of specified tolerance is obtained. Empirical results show that substantial savings in storage can be obtained.
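The incremental refinement loop can be sketched briefly; the version below is a simplified serial take on greedy insertion (the tolerance, point cap, and use of SciPy's Delaunay triangulation with linear interpolation are my choices, not the paper's).

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def greedy_tin(dem: np.ndarray, tol: float = 0.5, max_pts: int = 5000):
    """Insert the worst-fit raster cell into the TIN until the error tolerance is met."""
    h, w = dem.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cells = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    z = dem.ravel().astype(float)
    sel = [0, w - 1, (h - 1) * w, h * w - 1]          # start with the four corners
    while len(sel) < max_pts:
        interp = LinearNDInterpolator(Delaunay(cells[sel]), z[sel])
        err = np.nan_to_num(np.abs(interp(cells) - z), nan=0.0)
        worst = int(np.argmax(err))
        if err[worst] < tol:                           # worst vertical error is small
            break
        sel.append(worst)                              # insert the worst-fit cell
    return cells[sel], z[sel]
```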