Article

DEM generation from laser scanner data using adaptive TIN models

Authors: P. Axelsson

... Specifically, when the local neighborhood is defined with a set of points, the typical way is to find a planar surface, calculate geometrical features (e.g., slope or elevation difference), and consider outlying points from the planar surface as non-ground points [41]. With TIN representation, triangular facets approximate local surfaces, and non-ground points are filtered based on slope [42]. In DSM representation, kernel-based morphological operations filter non-ground pixels [43], [48], [50]. ...
... The DTM generated by our proposed method, denoted as "OUR", was benchmarked against five other DTMs originating from different methods. These include three variants of local neighborhood (slope)-based approaches, namely, "LAS" [42], the progressive morphological filter ("PMF") [43], and the simple morphological filter ("SMRF") [48]. The LAS method, developed and implemented through LASTools, employs a TIN approximation of the ground. ...
... In contrast, other DTMs classify it as non-ground [42], [57], [43], [48]. ...
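The excerpts above describe local-neighborhood filtering of ground versus non-ground points. The sketch below is not taken from any of the cited papers; it only illustrates the basic idea of fitting a least-squares plane to a neighborhood and flagging points that rise well above it, with an arbitrary 0.3 m residual threshold.

```python
# Minimal sketch (illustrative thresholds, not a published method): classify
# points in a local neighborhood by fitting a plane with least squares and
# flagging points whose residual above the plane exceeds a threshold.
import numpy as np

def plane_residual_filter(points, residual_threshold=0.3):
    """points: (N, 3) array of x, y, z; returns a boolean mask of likely ground points."""
    xy1 = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    z = points[:, 2]
    # Least-squares plane z = a*x + b*y + c fitted to the whole neighborhood.
    coeffs, *_ = np.linalg.lstsq(xy1, z, rcond=None)
    residuals = z - xy1 @ coeffs
    # Points lying well above the fitted plane are treated as non-ground.
    return residuals < residual_threshold

# Example: a flat neighborhood with one elevated (vegetation-like) point.
pts = np.array([[0, 0, 0.0], [1, 0, 0.1], [0, 1, 0.05], [1, 1, 0.1], [0.5, 0.5, 2.0]])
print(plane_residual_filter(pts))  # last point is flagged as non-ground
```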
Thesis
Full-text available
This dissertation explores the integration of remote sensing and artificial intelligence (AI) in geospatial mapping, specifically through the development of knowledge-based mapping systems. Remote sensing has revolutionized Earth observation by providing data that far surpasses traditional in-situ measurements. Over the last decade, significant advancements in inferential capabilities have been achieved through the fusion of geospatial sciences and AI (GeoAI), particularly with the application of deep learning. Despite its benefits, the reliance on data-driven AI has introduced challenges, including unpredictable errors and biases due to imperfect labeling and the opaque nature of the processes involved. The research highlights the limitations of solely using data-driven AI methods for geospatial mapping, which tend to produce spatially heterogeneous errors and lack transparency, thus compromising the trustworthiness of the outputs. In response, it proposes novel knowledge-based mapping systems that prioritize transparency and scalability. This research has developed comprehensive techniques to extract key Earth and urban features and has introduced a 3D urban land cover mapping system, including a 3D Landscape Clustering framework aimed at enhancing urban climate studies. The developed systems utilize universally applicable physical knowledge of targets, captured through remote sensing, to enhance mapping accuracy and reliability without the typical drawbacks of data-driven approaches. The dissertation emphasizes the importance of moving beyond mere accuracy to consider the broader implications of error patterns in geospatial mappings. It demonstrates the value of integrating generalizable target knowledge, explicitly represented in remote sensing data, into geospatial mapping to address the trustworthiness challenges in AI mapping systems. By developing mapping systems that are open, transparent, and scalable, this work aims to mitigate the effects of spatially heterogeneous errors, thereby improving the trustworthiness of geospatial mapping and analysis across various fields. Additionally, the dissertation introduces methodologies to support urban pathway accessibility and flood management studies through dependable geospatial systems. These efforts aim to establish a robust foundation for informed urban planning, efficient resource allocation, and enriched environmental insights, contributing to the development of more sustainable, resilient, and smart cities.
... These systems provide advantages compared with traditional photogrammetric methods. Thus, airborne LiDAR has been widely used in various applications, such as the reconstruction of digital terrain models (DTMs) [2][3][4][5][6], forest surveying [7,8], power line patrolling [9,10], and 3D city modeling [11,12]. The ground and nonground points in the original LiDAR data must be separated before these applications, which is referred to as point cloud filtering [13,14]. ...
... Surface-based methods construct a surface model that approximates the ground surface and extract points close to the surface as ground points [27]. The common algorithms used for surface construction include the triangulated irregular network (TIN) [2,[28][29][30], cloth simulation [31], and thin plate spline (TPS) [5,13,32]. Progressive TIN densification filtering (PTDF) is used to extract ground points by densifying a TIN constructed from selected seeds. ...
... Progressive TIN densification filtering (PTDF) is used to extract ground points by densifying a TIN constructed from selected seeds. This algorithm was first proposed by Axelsson [2] and achieved the best results among eight methods in an experimental comparison conducted by Sithole and Vosselman [15]. Zhao et al. [28] improved the performance of PTDF in forested areas by optimizing seed selection using the morphological method. ...
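As a rough illustration of the progressive TIN densification idea attributed to Axelsson (2000) in these excerpts, the following sketch densifies a Delaunay TIN built from per-cell lowest points. The grid size and distance threshold are illustrative assumptions, and the angle criteria of the original method are omitted; this is not the LAStools/Terrascan implementation.

```python
# Hedged sketch of progressive TIN densification (PTD).
import numpy as np
from scipy.spatial import Delaunay

def select_seeds(points, cell=20.0):
    """Lowest point per coarse grid cell, used as initial ground seeds."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    seeds = {}
    for i, key in enumerate(map(tuple, keys)):
        if key not in seeds or points[i, 2] < points[seeds[key], 2]:
            seeds[key] = i
    return np.fromiter(seeds.values(), dtype=int)

def ptd_filter(points, cell=20.0, dist_thr=0.5, max_iter=20):
    """Boolean ground mask obtained by iteratively densifying a seed TIN."""
    ground = np.zeros(len(points), dtype=bool)
    ground[select_seeds(points, cell)] = True
    for _ in range(max_iter):
        tri = Delaunay(points[ground, :2])
        ground_z = points[ground, 2]
        candidates = np.where(~ground)[0]
        simplices = tri.find_simplex(points[candidates, :2])
        added = 0
        for idx, s in zip(candidates, simplices):
            if s < 0:                      # outside the current TIN
                continue
            verts = tri.simplices[s]
            p0, p1, p2 = np.column_stack([tri.points[verts], ground_z[verts]])
            normal = np.cross(p1 - p0, p2 - p0)
            if abs(normal[2]) < 1e-9:      # near-vertical facet, skip
                continue
            # Elevation of the facet plane at the candidate's XY position.
            z_plane = p0[2] - (normal[0] * (points[idx, 0] - p0[0]) +
                               normal[1] * (points[idx, 1] - p0[1])) / normal[2]
            # The original method also checks angles to the facet vertices;
            # this sketch keeps only the vertical-distance criterion.
            if abs(points[idx, 2] - z_plane) < dist_thr:
                ground[idx] = True
                added += 1
        if added == 0:
            break
    return ground
```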
Article
Full-text available
The complexity of terrain features poses a substantial challenge in the effective processing and application of airborne LiDAR data, particularly in regions characterized by steep slopes and diverse objects. In this paper, we propose a novel multiscale filtering method utilizing a modified 3D alpha shape algorithm to increase the ground point extraction accuracy in complex terrain. Our methodology comprises three pivotal stages: preprocessing for outlier removal and potential ground point extraction; the deployment of a modified 3D alpha shape to construct multiscale point cloud layers; and the use of a multiscale triangulated irregular network (TIN) densification process for precise ground point extraction. In each layer, the threshold is adaptively determined based on the corresponding α. Points closer to the TIN surface than the threshold are identified as ground points. The performance of the proposed method was validated using a classical benchmark dataset provided by the ISPRS and an ultra-large-scale ground filtering dataset called OpenGF. The experimental results demonstrate that this method is effective, with an average total error and a kappa coefficient on the ISPRS dataset of 3.27% and 88.97%, respectively. When tested in the large scenarios of the OpenGF dataset, the proposed method outperformed four classical filtering methods and achieved accuracy comparable to that of the best of learning-based methods.
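The total error and kappa coefficient quoted in this abstract are standard measures in ground-filtering benchmarks such as the ISPRS dataset. A minimal sketch of how they can be computed from reference and predicted labels is given below; it is not code from the paper.

```python
# Hedged sketch of common ground-filtering accuracy measures:
# Type I/II errors, total error, and Cohen's kappa.
import numpy as np

def filtering_errors(ref_ground, pred_ground):
    """Both arguments are boolean arrays: True = ground, False = non-ground."""
    ref_ground = np.asarray(ref_ground, bool)
    pred_ground = np.asarray(pred_ground, bool)
    a = np.sum(ref_ground & pred_ground)      # ground kept as ground
    b = np.sum(ref_ground & ~pred_ground)     # ground rejected (Type I)
    c = np.sum(~ref_ground & pred_ground)     # non-ground accepted (Type II)
    d = np.sum(~ref_ground & ~pred_ground)    # non-ground rejected
    n = a + b + c + d
    type1 = b / (a + b)
    type2 = c / (c + d)
    total = (b + c) / n
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return type1, type2, total, kappa
```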
... Especially in areas with complex topography, the continuity and inconsistency of topographic undulations and the inconsistency of the density and height of vegetation, particularly the presence of low vegetation, increase the complexity of terrain modeling. Rapid and accurate removal of vegetation areas is a key issue in DEM construction [6]. ...
... To measure the final terrain-modeling accuracy of this method, cloth simulation filtering (CSF), triangulated irregular network (TIN) filtering, and progressive morphological filtering (PMF) were used to model the terrain in the two study areas. The parameter settings of the three methods referred to the references [6,8,38] to optimize their filtering results and, thus, the comparison with our method is reasonable. These three methods were implemented in the software packages CloudCompare, Photoscan, and PCL, respectively. ...
... cal filtering (PMF) were used to model the terrain in the two study areas. The parameter settings of the three methods referred to the references [6,8,38] to optimize their filtering results and, thus, the comparison with our method is reasonable. These three methods were implemented in the software packages CloudCompare, Photoscan, and PCL, respectively. ...
Article
Full-text available
The removal of low vegetation is still challenging in UAV photogrammetry. According to the different topographic features expressed by point-cloud data at different scales, a vegetation-filtering method based on multiscale elevation-variation coefficients is proposed for terrain modeling. First, virtual grids are constructed at different scales, and the average elevation values of the corresponding point clouds are obtained. Second, the amount of elevation change at any two scales in each virtual grid is calculated to obtain the difference in surface characteristics (degree of elevation change) at the corresponding two scales. Third, the elevation variation coefficient of the virtual grid that corresponds to the largest elevation variation degree is calculated, and threshold segmentation is performed based on the relation that the elevation variation coefficients of vegetated regions are much larger than those of terrain regions. Finally, the optimal calculation neighborhood radius of the elevation variation coefficients is analyzed, and the optimal segmentation threshold is discussed. The experimental results show that the multiscale elevation-variation coefficient method can accurately remove vegetation points and preserve ground points in low- and densely vegetated areas. The type I error, type II error, and total error in the study areas range from 1.93 to 9.20%, 5.83 to 5.84%, and 2.28 to 7.68%, respectively. The total error of the proposed method is 2.43–2.54% lower than that of the CSF, TIN, and PMF algorithms in the study areas. This study provides a foundation for the rapid establishment of high-precision DEMs based on UAV photogrammetry.
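A much-simplified two-scale sketch of the elevation-variation idea described in this abstract is given below. It keeps only the fine-versus-coarse grid comparison; the grid sizes and the 0.5 m threshold are assumptions, and the neighborhood-based coefficient analysis of the paper is omitted.

```python
# Simplified two-scale sketch: compare mean elevations of fine and coarse
# virtual grids and flag points where the difference is large (vegetation-like).
import numpy as np

def grid_mean_z(points, cell):
    """Mean elevation per virtual grid cell; returns dict {(ix, iy): mean z}."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    sums, counts = {}, {}
    for key, z in zip(map(tuple, keys), points[:, 2]):
        sums[key] = sums.get(key, 0.0) + z
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

def vegetation_mask(points, fine=1.0, coarse=5.0, threshold=0.5):
    """True for points whose fine-cell mean elevation deviates strongly from the coarse-cell mean."""
    fine_mean = grid_mean_z(points, fine)
    coarse_mean = grid_mean_z(points, coarse)
    fine_keys = map(tuple, np.floor(points[:, :2] / fine).astype(int))
    coarse_keys = map(tuple, np.floor(points[:, :2] / coarse).astype(int))
    return np.array([abs(fine_mean[f] - coarse_mean[c]) > threshold
                     for f, c in zip(fine_keys, coarse_keys)])
```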
... represents the total number of pixels with a valid elevation value; DTM denotes digital terrain model. Kilian et al. (1996): works well in rugged areas with high efficiency; unsuitable for terrains with diverse objects. Surface-based, Kraus and Pfeifer (1998): effective in most terrains; unsatisfactory performance in handling terrain details. Slope-based, Axelsson (2000): works well in flat areas with high efficiency; unsuitable for terrains with abrupt changes. Cluster-based, Tóvári and Pfeifer (2005): effective in terrains with diverse objects; limited by the segmentation performance. Statistic-based, Bartels and Wei (2006a): effective in flat terrains without diverse objects. ... terrain models (DTMs) and the reference DTMs (Hu and Yuan, 2016; Qin et al., 2021, 2023). In addition, several classical classification metrics (e.g., the F1-score) have been employed directly in recent GF studies (Nurunnabi et al., 2021; Li et al., 2022). ...
... The progressive TIN densification (PTD) method (Axelsson, 2000), known for its robustness, has gained significant attention and has been widely adopted in commercial software. However, the PTD method is often time-consuming because of the iterative TIN construction (Chen et al., 2016a). ...
... The "lasground.exe" tool is a closed-source tool for ground extraction, which is based on the PTD algorithm (Axelsson, 2000). ...
... Local neighborhood (slope)-based methods [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33] set a local neighborhood for a point of interest and then classify whether each point is ground or non-ground based on the relative coordinates of the points belonging to the local neighborhood. The point and its local neighborhood could be other types of point cloud representations such as a triangulated irregular network (TIN) and a window or kernel in a digital surface model (DSM). ...
... Specifically, when the local neighborhood is defined with a set of points, the typical way is to find a planar surface, calculate geometrical features (e.g., slope or elevation difference), and consider outlying points from the planar surface as non-ground points [19]. With TIN representation, triangular facets approximate local surfaces, and non-ground points are filtered based on slope [20]. In DSM representation, kernel-based morphological operations filter non-ground pixels [21,26,28]. ...
... The DTM generated by our proposed method, denoted as "OUR", was benchmarked against five other DTMs originating from different methods. These include three variants of local neighborhood (slope)-based approaches, namely, "LAS" [20], the progressive morphological filter ("PMF") [21], and the simple morphological filter ("SMRF") [26]. The LAS method, developed and implemented through LASTools, employs a TIN approximation of the ground. ...
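As a generic illustration of the morphological (PMF/SMRF-style) filtering mentioned in these excerpts, the sketch below applies a grey-scale opening to a DSM raster and flags pixels that rise well above the opened surface. Real implementations grow the window progressively and use slope-dependent thresholds, which are omitted here; window size and threshold are illustrative.

```python
# Hedged sketch of the morphological idea behind PMF/SMRF-style DSM filters.
import numpy as np
from scipy.ndimage import grey_opening

def morphological_nonground(dsm, window=11, dz_threshold=0.5):
    """dsm: 2D array of elevations; returns a boolean mask of non-ground pixels."""
    # The opening removes features smaller than the structuring element.
    opened = grey_opening(dsm, size=(window, window))
    return (dsm - opened) > dz_threshold
```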
Article
Full-text available
Digital terrain model (DTM) creation is a modeling process that represents the Earth’s surface. An aptly designed DTM generation method tailored for intended study can significantly streamline ensuing processes and assist in managing errors and uncertainties, particularly in large-area projects. However, existing methods often exhibit inconsistent and inexplicable results, struggle to clearly define what an object is, and often fail to filter large objects due to their locally confined operations. We introduce a new DTM generation method that performs object-based ground filtering, which is particularly beneficial for urban topography. This method defines objects as areas fully enclosed by steep slopes and grounds as smoothly connected areas, enabling reliable “object-based” segmentation and filtering, extending beyond the local context. Our primary operation, controlled by a slope threshold parameter, simplifies tuning and ensures predictable results, thereby reducing uncertainties in large-area modeling. Uniquely, our method considers surface water bodies in modeling and treats connected artificial terrains (e.g., overpasses) as ground. This contrasts with conventional methods, which often create noise near water bodies and behave inconsistently around overpasses and bridges, making our approach particularly beneficial for large-area 3D urban mapping. Examined on extensive and diverse datasets, our method offers unique features and high accuracy, and we have thoroughly assessed potential artifacts to guide potential users.
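A toy sketch of the "areas enclosed by steep slopes" idea from this abstract is shown below. It is not the published implementation: the slope raster, the single slope threshold, and the largest-connected-component rule are simplifications, and the paper's handling of water bodies and overpasses is omitted.

```python
# Toy sketch of object-based ground filtering on a DSM raster.
import numpy as np
from scipy import ndimage

def object_based_ground(dsm, cell_size=1.0, slope_threshold_deg=45.0):
    """Returns a boolean raster: True where the pixel belongs to the dominant ground region."""
    gy, gx = np.gradient(dsm, cell_size)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    gentle = slope < slope_threshold_deg
    labels, n = ndimage.label(gentle)
    if n == 0:
        return np.zeros_like(gentle)
    # The largest gently sloped connected region is taken as ground; regions
    # fully enclosed by steep slopes end up as separate labels (objects).
    sizes = ndimage.sum(gentle, labels, index=np.arange(1, n + 1))
    ground_label = 1 + int(np.argmax(sizes))
    return labels == ground_label
```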
... The first operation performed was the creation of a TIN (Triangulated Irregular Network) model [69,70] from the point cloud generated by the multibeam acquisition. The TIN model was produced from the point cloud in the open-source SAGA GIS v9.2.0 software using its TIN tools. ...
Article
Full-text available
Coastal and underwater archaeological sites pose significant challenges in terms of investigation, conservation, valorisation, and management. These sites are often at risk due to climate change and various human-made impacts such as urban expansion, maritime pollution, and natural deterioration. However, advances in remote sensing (RS) and Earth observation (EO) technologies applied to cultural heritage (CH) sites have led to the development of various techniques for underwater cultural heritage (UCH) exploration. The aim of this work was the evaluation of an integrated methodological approach using ultra-high-resolution (UHR) bathymetric data to aid in the identification and interpretation of submerged archaeological contexts. The study focused on a selected area of the submerged Archaeological Park of Baia (Campi Flegrei, south Italy) as a test site. The study highlighted the potential of an approach based on UHR digital bathymetric model (DBM) derivatives and the use of machine learning and statistical techniques to automatically extract and discriminate features of archaeological interest from other components of the seabed substrate. The results achieved accuracy rates of around 90% and created a georeferenced vector map similar to that usually drawn by hand by archaeologists.
... The accuracy of these methods depends significantly on the size of the structural element [17]. Interpolation-based methods [6], including Triangulated Irregular Networks (TINs) [18] and grid-based techniques like Kriging and Inverse Distance Weighting (IDW), as well as adaptive methods [19], play a pivotal role in estimating the height of unmeasured areas based on the elevation of nearby measured locations. ...
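For the IDW interpolation mentioned in this excerpt, a minimal sketch is given below; the neighbor count and power parameter are illustrative defaults, not values from the cited work.

```python
# Hedged sketch of inverse distance weighting (IDW) from scattered ground points.
import numpy as np
from scipy.spatial import cKDTree

def idw_grid(ground_xyz, grid_xy, k=8, power=2.0):
    """ground_xyz: (N, 3) ground points; grid_xy: (M, 2) target locations; returns M heights."""
    tree = cKDTree(ground_xyz[:, :2])
    dist, idx = tree.query(grid_xy, k=k)
    dist = np.maximum(dist, 1e-6)            # avoid division by zero at data points
    weights = 1.0 / dist**power
    return np.sum(weights * ground_xyz[idx, 2], axis=1) / np.sum(weights, axis=1)
```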
Article
Full-text available
The combination of Remote Sensing and Deep Learning (DL) has brought about a revolution in converting digital surface models (DSMs) to digital terrain models (DTMs). DTMs are used in various fields, including environmental management, where they provide crucial topographical data to accurately model water flow and identify flood-prone areas. However, current DL-based methods require intensive data processing, limiting their efficiency and real-time use. To address these challenges, we have developed an innovative method that incorporates a physically informed autoencoder, embedding physical constraints to refine the extraction process. Our approach utilizes a normalized DSM (nDSM), which is updated by the autoencoder to enable DTM generation by defining the DTM as the difference between the DSM input and the updated nDSM. This approach reduces sensitivity to topographical variations, improving the model’s generalizability. Furthermore, our framework innovates by using subtractive skip connections instead of traditional concatenative ones, improving the network’s flexibility to adapt to terrain variations and significantly enhancing performance across diverse environments. Our novel approach demonstrates superior performance and adaptability compared to other versions of autoencoders across ten diverse datasets, including urban areas, mountainous regions, predominantly vegetation-covered landscapes, and a combination of these environments.
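The subtractive skip connections described in this abstract can be illustrated with a toy PyTorch module, sketched below under assumed layer sizes and the class name SubtractiveSkipAE. It is not the authors' architecture; it only shows the skip-by-subtraction pattern and the DTM = DSM minus updated nDSM definition quoted above.

```python
# Toy sketch (PyTorch) of subtractive skip connections: encoder features are
# subtracted from, rather than concatenated with, the upsampled decoder features.
import torch
import torch.nn as nn

class SubtractiveSkipAE(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 2, stride=2), nn.ReLU())
        self.dec1 = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, ndsm):
        e1 = self.enc1(ndsm)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        # Subtractive skip connection: difference instead of concatenation.
        return self.dec1(d2 - e1)

# DTM defined as the DSM minus the updated nDSM, as described in the abstract
# (the network here is untrained; shown only for the wiring).
dsm = torch.rand(1, 1, 64, 64)
ndsm = torch.rand(1, 1, 64, 64)
dtm = dsm - SubtractiveSkipAE()(ndsm)
```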
... The improved progressive TIN densification (IPTD) [35] filtering algorithm was applied to the cropped data to conduct ground point classification and point cloud normalization with separated ground points, as follow: ...
Article
Full-text available
The living vegetation volume (LVV) can accurately describe the spatial structure of greening trees and quantitatively represent the relationship between this greening and its environment. Because of the mostly linear distribution and the complex species of street trees, as well as interference from artificial objects, current LVV survey methods are normally limited in their efficiency and accuracy. In this study, we propose an improved methodology based on vehicle-mounted LiDAR data to estimate the LVV of urban street trees. First, a point-cloud-based CSP (comparative shortest-path) algorithm was used to segment the individual tree point clouds, and an algorithm for identifying artificial objects and low shrubs was developed to extract the street trees. Second, a DBSCAN (density-based spatial clustering of applications with noise) algorithm was utilized to remove the branch point clouds, and a bottom-up slicing method combined with the random sample consensus (RANSAC) algorithm was employed to calculate the diameters of the tree trunks and obtain the canopy by comparing the variation in trunk diameters in the vertical direction. Finally, an envelope was fitted to the canopy point cloud using the adaptive AlphaShape algorithm to calculate the LVVs and their ecological benefits (e.g., O2 production and CO2 absorption). The results show that the CSP algorithm had a relatively high overall accuracy in segmenting individual trees (overall accuracy = 95.8%). The accuracies of the tree height and DBH extraction based on vehicle-mounted LiDAR point clouds were 1.66~3.92% (rRMSE) and 4.23~15.37% (rRMSE), respectively. For the plots on Zijin Mountain, the LVV contribution by the maple poplar was the highest (1049.667 m³), followed by the sycamore tree species (557.907 m³), and privet's was the lowest (16.681 m³).
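A hedged sketch of a RANSAC circle fit on a single horizontal trunk slice, in the spirit of the trunk-diameter step described in this abstract, is given below; the tolerance and iteration count are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: RANSAC circle fit on a 2D trunk slice to estimate diameter.
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Centre and radius of the circle through three 2D points (None if collinear)."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    centre = np.array([ux, uy])
    return centre, np.linalg.norm(p1 - centre)

def ransac_trunk_diameter(slice_xy, n_iter=500, tol=0.02, rng=np.random.default_rng(0)):
    """slice_xy: (N, 2) points of one trunk slice; returns the best-fit diameter in the same units."""
    best_inliers, best_radius = 0, None
    for _ in range(n_iter):
        sample = slice_xy[rng.choice(len(slice_xy), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        centre, radius = fit
        # Inliers lie within tol of the candidate circle.
        inliers = np.sum(np.abs(np.linalg.norm(slice_xy - centre, axis=1) - radius) < tol)
        if inliers > best_inliers:
            best_inliers, best_radius = inliers, radius
    return None if best_radius is None else 2.0 * best_radius
```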
... Therefore, we use the radius outlier removal method to perform denoising operation on the point cloud data. Then, we implement ground point classification based on the Progressive TIN densification filtering algorithm [30]. In order to facilitate the subsequent point cloud processing, it is also necessary to normalize the point cloud data according to the DEM (Digital Elevation Model). ...
Article
Full-text available
Banana phenotypic parameters are one of the important elements in the study of banana growth and development. Through the measurement of banana phenotypic parameters, we can obtain information about the growth status, nutritional status and quality indexes of banana plants, and pseudo-stem parameters are significant indicators among banana phenotypic parameters. This research proposes a two-stage approach combining morphological features and deep learning point cloud segmentation to extract banana pseudo-stem parameters. Specifically, in the first step, seed points are extracted using the DBSCAN clustering algorithm, and banana individual plant segmentation is accomplished using the region growing algorithm based on seed points. Its precision, recall and F1-score were 97.73%, 97.36% and 97.54%, respectively. This indicates that the DBSCAN clustering algorithm and the seed point based region growing algorithm can effectively realize the plant count of banana plants and initially realize the individual plant segmentation of banana. The second step is to use PointNet++, PointNet, and DGCNN for pseudo-stem and canopy segmentation of individual banana plants. All three models perform well in segmentation, with PointNet++ performing the best. Its precision, recall, F1-score, Matthews correlation coefficient and Dice coefficient reached 0.9956, 0.9709, 0.9831, 0.9670 and 0.9831. This shows that deep learning has good applicability in segmenting banana plants. From the segmentation results, we measured the banana pseudo-stem circumference and pseudo-stem height. The correlations between the extracted pseudo-stem height and circumference and the measured values were 96.70% and 82.32%, respectively. The above two-stage method of extracting banana pseudo-stem parameters overcomes the difficulties of point cloud individual plant segmentation associated with intensive banana cultivation. It makes the management of individual banana plants possible and provides accurate phenotypic parameter information for banana plantation management. It lays the foundation for further assessment of banana growth and nutritional status.
... The IPC were then normalized to ground level using previously acquired low-density airborne laser scanning (ALS) data, which belong to the open data of the National Land Survey of Finland (National Land Survey of Finland 2022). A triangulated irregular network (TIN) was created from ALS echoes classified as ground (Axelsson 2000), and the TIN model was subtracted from the point heights of the IPC to compute the above-ground heights. ...
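Height normalization as described in this excerpt, subtracting a ground TIN from the point heights, can be sketched as follows. LinearNDInterpolator is used here as a stand-in TIN interpolator; it is not the software used in the cited study.

```python
# Hedged sketch of above-ground height normalization against a ground TIN.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def normalize_heights(points, ground_points):
    """points, ground_points: (N, 3)/(M, 3) arrays; returns above-ground heights (NaN outside the TIN)."""
    # LinearNDInterpolator triangulates the ground XY (a TIN) and interpolates z linearly.
    tin = LinearNDInterpolator(ground_points[:, :2], ground_points[:, 2])
    ground_z = tin(points[:, 0], points[:, 1])
    return points[:, 2] - ground_z
```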
Article
Full-text available
A common task in forestry is to determine the value of a forest property, and timber is the most valuable component of that property. Remotely sensed data collected by an unoccupied aerial vehicle (UAV) are suited for this purpose as most forest properties are of a size that permits the efficient collection of UAV data. These UAV data, when linked to a probability sample of field plots, enable the model-assisted (MA) estimation of the timber value and its associated uncertainty. Our objective was to estimate the value of timber (€/ha) in a 40-ha forest property in Finland. We used a systematic sample of field plots (n = 160) and 3D image point cloud data collected by an UAV. First, we studied the effects of spatial autocorrelation on the variance estimates associated with the timber value estimates produced using a field data-based simple expansion (EXP) estimator. The variance estimators compared were simple random sampling, Matérn, and a variant of the Grafström–Schelin estimator. Second, we compared the efficiencies of the EXP and MA estimators under different sampling intensities. The sampling intensity was varied by subsampling the systematic sample of 160 field plots. In the case of the EXP estimator, the simple random sampling variance estimator produced the largest variance estimates, whereas the Matérn estimator produced smaller variance estimates than the Grafström–Schelin estimator. The MA estimator was more efficient than the EXP estimator, which suggested that the reduction of sampling intensity from 160 to 60 plots is possible without deterioration in precision. The results suggest that the use of UAV data improves the precision of timber value estimates compared to the use of field data only. In practice, the proposed application improves the cost-efficiency of the design-based appraisal of a forest property because expensive field workload can be reduced by means of UAV data.
... To enhance data quality, noise was filtered, and duplicates were removed from the point clouds. Subsequently, the point clouds were classified into ground and non-ground (vegetation) categories using a refined version of the triangulated irregular network (TIN) from Axelsson [35] implemented in LAStools [36]. A digital elevation model (DEM) with a 0.10 m grid resolution was then generated using the weighted linear least squares interpolation-based method [37]. ...
Article
Full-text available
Pine species are a key social and economic component in Mediterranean ecosystems, where insect defoliations can have far-reaching consequences. This study aims to quantify the impact of pine processionary moth (PPM) on canopy structures, examining its evolution over time at the individual tree level using high-density drone LiDAR-derived point clouds. Focusing on 33 individuals of black pine (Pinus nigra), a species highly susceptible to PPM defoliation in the Mediterranean environment, bitemporal LiDAR scans were conducted to capture the onset and end of the major PPM feeding period in winter. Canopy crown delineation performed manually was compared with LiDAR-based methods. Canopy metrics from point clouds were computed for trees exhibiting contrasting levels of defoliation. The structural differences between non-defoliated and defoliated trees were assessed by employing parametric statistical comparisons, including analysis of variance along with post hoc tests. Our analysis aimed to distinguish structural changes resulting from PPM defoliation during the winter feeding period. Outcomes revealed substantive alterations in canopy cover, with an average reduction of 22.92% in the leaf area index for defoliated trees, accompanied by a significant increase in the number of returns in lower tree crown branches. Evident variations in canopy density were observed throughout the feeding period, enabling the identification of two to three change classes using LiDAR-derived canopy density metrics. Manual and LiDAR-based crown delineations exhibited minimal differences in computed canopy LiDAR metrics, showcasing the potential of LiDAR delineations for broader applications. PPM infestations induced noteworthy modifications in canopy morphology, affecting key structural parameters. Drone LiDAR data emerged as a comprehensive tool for quantifying these transformations. This study underscores the significance of remote sensing approaches in monitoring insect disturbances and their impacts on forest ecosystems.
... These initial conditions were assumed to be valid for the August 2015 event in the Passeier valley, since the pre-event precipitation showed wetter than average conditions for the 2 months leading up to the event (see also Zieher et al., 2017). Publicly available airborne laser scanning (ALS) point clouds covering the province of South Tyrol (Geokatalog, 2022) were automatically classified into ground and nonground points using the algorithm proposed by Axelsson (2000). Based on the classified ground points, a digital terrain model (DTM) with a spatial resolution of 1 m was computed, aggregating the mean elevation per raster cell. ...
Article
Full-text available
The development of better, more reliable and more efficient susceptibility assessments for shallow landslides is becoming increasingly important. Physically based models are well‐suited for this, due to their high predictive capability. However, their demands for large, high‐resolution and detailed input datasets make them very time‐consuming and costly methods. This study investigates if a spatially transferable model calibration can be created with the use of parameter ensembles and with this alleviate the time‐consuming calibration process of these methods. To investigate this, the study compares the calibration of the model TRIGRS in two different study areas. The first study area was taken from a previous study where the dynamic physically based model TRIGRS was calibrated for the Laternser valley in Vorarlberg, Austria. The calibrated parameter ensemble and its performance from this previous study are compared with a calibrated parameter ensemble of the model TRIGRS for the Passeier valley in South Tyrol, Italy. The comparison showed very similar model performance and large similarities in the calibrated geotechnical parameter values of the best model runs in both study areas. There is a subset of calibrated geotechnical parameter values that can be used successfully in both study areas and potentially other study areas with similar lithological characteristics. For the hydraulic parameters, the study did not find a transferable parameter subset. These parameters seem to be more sensitive to different soil types. Additionally, the results of the study also showed the importance of the inclusion of detailed information on the timing of landslide initiation in the calibration of the model.
... The progressive TIN densification (PTD) algorithm proposed by Axelsson [36] was initially one of the most commonly used filtering algorithms. This algorithm constructs an initial TIN model based on the initial ground points and iteratively densifies these ground points. ...
Article
Full-text available
Accurately quantifying individual tree parameters is a critical step for assessing carbon sequestration in forest ecosystems. However, it is challenging to gather comprehensive tree point cloud data when using either unmanned aerial vehicle light detection and ranging (UAV-LiDAR) or terrestrial laser scanning (TLS) alone. Moreover, there is still limited research on the effect of point cloud filtering algorithms on the extraction of individual tree parameters from multiplatform LiDAR data. Here, we employed a multifiltering algorithm to increase the accuracy of individual tree parameter (tree height and diameter at breast height (DBH)) extraction with the fusion of TLS and UAV-LiDAR (TLS-UAV-LiDAR) data. The results showed that compared to a single filtering algorithm (improved progressive triangulated irregular network densification, IPTD, or a cloth simulation filter, CSF), the multifiltering algorithm (IPTD + CSF) improves the accuracy of tree height extraction with TLS, UAV-LiDAR, and TLS-UAV-LiDAR data (with R2 improvements from 1% to 7%). IPTD + CSF also enhances the accuracy of DBH extraction with TLS and TLS-UAV-LiDAR. In comparison to single-platform LiDAR (TLS or UAV-LiDAR), TLS-UAV-LiDAR can compensate for the missing crown and stem information, enabling a more detailed depiction of the tree structure. The highest accuracy of individual tree parameter extraction was achieved using the multifiltering algorithm combined with TLS-UAV-LiDAR data. The multifiltering algorithm can facilitate the application of multiplatform LiDAR data and offers an accurate way to quantify individual tree parameters.
... − Triangulated irregular network (TIN)-based refinement: An initial TIN is created based on the points with the lowest elevation in each grid cell. Gradually, other points are added by establishing reference thresholds [21]. This approach may encounter challenges when it comes to detecting discontinuous terrains, such as sharp ridges, and is time-consuming [16]. ...
Article
Full-text available
Airborne Laser Scanning (ALS) point cloud classification in ground and non-ground points can be accurately performed using various algorithms, which rely on a range of information, including signal analysis, intensity, amplitude, echo width, and return number, often focusing on the last return. With its high point density and the vast majority of points (approximately 99%) measured with the first return, filtering LiDAR-UAS data proves to be a more challenging task when compared to ALS point clouds. Various algorithms have been proposed in the scientific literature to differentiate ground points from non-ground points. Each of these algorithms has advantages and disadvantages, depending on the specific terrain characteristics. The aim of this research is to obtain an enhanced Digital Terrain Model (DTM) based on LiDAR-UAS data and to qualitatively and quantitatively compare three filtering approaches, i.e., hierarchical robust, volume-based, and cloth simulation, on a complex terrain study area. For this purpose, two flights over a residential area of about 7.2 ha were taken at 60 m and 100 m, with a DJI Matrice 300 RTK UAS, equipped with a Geosun GS-130X LiDAR sensor. The vertical and horizontal accuracy of the LiDAR-UAS point cloud, obtained via PPK trajectory processing, was tested using Check Points (ChPs) and manually extracted features. A combined approach for ground point classification is proposed, using the results from a hierarchic robust filter and applying an 80% slope condition for the volume-based filtering result. The proposed method has the advantage of representing with accuracy man-made structures and sudden slope changes, improving the overall accuracy of the DTMs by 40% with respect to the hierarchical robust filtering algorithm in the case of a 60 m flight height and by 28% in the case of a 100 m flight height when validated against 985 ChPs.
... This setup resulted in a nominal density of approximately 3700 pulses per m². After the 10 flight lines were merged, the lidar echoes were classified as ground and non-ground with the method proposed by [26]. A triangulated irregular network (TIN) was created from ground lidar echoes. ...
Article
Full-text available
We evaluated the performance of unmanned aerial systems (UAS) airborne light detection and ranging (lidar) data in the species classification of pine, spruce, and broadleaf trees. Classifications were conducted with three machine learning (ML) approaches (multinomial logistic regression, random forest, and multilayer perceptron) using features computed from automatically segmented point clouds that represent individual trees. Trees were segmented from the point cloud using a marker-controlled watershed algorithm, and two types of features were computed for each segment: intensity and texture. Textural features were computed from gray-level co-occurrence matrices built from horizontal cross-sections of the point cloud. Intensity features were computed as the average intensity values within voxels. The classification accuracies were validated on 39 rectangular 30 m x 30 m field plots using leave-one-plot out cross-validation. The results showed only very small differences in the classification performance between the different ML approaches. Intensity features provided greater classification accuracy (kappa 0.73-0.77) than textural features (kappa 0.60-0.64). However, the best classification results (kappa 0.81) were achieved when both intensity and textural features were used. Feature importance in the different ML approaches was also similar. We conclude that the accurate classification of the three tree species considered in this study is possible using single sensor UAS lidar data.
... As the common points of the TLS-based and GPR-derived point clouds were almost distributed on the ground, it was essential to extract the ground points from the TLS-based point clouds before assessing the overall accuracy. A progressive triangular irregular network (TIN) densification (PTD) algorithm was selected to extract the ground points and non-ground points from the TLS-based point clouds using the Terrasolid Suite 2019 software [55][56][57]. ...
Article
Full-text available
Integrated TLS and GPR data can provide multisensor and multiscale spatial data for the comprehensive identification and analysis of surficial and subsurface information, but a reliable systematic methodology associated with data integration of TLS and GPR is still scarce. The aim of this research is to develop a methodology for the data integration of TLS and GPR for detailed, three-dimensional (3D) virtual reconstruction. GPR data and high-precision geographical coordinates at the centimeter level were simultaneously gathered using the GPR system and the Global Navigation Satellite System (GNSS) signal receiver. A time synchronization algorithm was proposed to combine each trace of the GPR data with its position information. In view of the improved propagation model of electromagnetic waves, the GPR data were transformed into dense point clouds in the geodetic coordinate system. Finally, the TLS-based and GPR-derived point clouds were merged into a single point cloud dataset using coordinate transformation. In addition, TLS and GPR (250 MHz and 500 MHz antenna) surveys were conducted in the Litang fault to assess the feasibility and overall accuracy of the proposed methodology. The 3D realistic surface and subsurface geometry of the fault scarp were displayed using the integration data of TLS and GPR. A total of 40 common points between the TLS-based and GPR-derived point clouds were implemented to assess the data fusion accuracy. The difference values in the x and y directions were relatively stable within 2 cm, while the difference values in the z direction had an abrupt fluctuation and the maximum values could be up to 5 cm. The standard deviations (STD) of the common points between the TLS-based and GPR-derived point clouds were 0.9 cm, 0.8 cm, and 2.9 cm. Based on the difference values and the STD in the x, y, and z directions, the field experimental results demonstrate that the GPR-derived point clouds exhibit good consistency with the TLS-based point clouds. Furthermore, this study offers a good future prospect for the integration method of TLS and GPR for comprehensive interpretation and analysis of the surficial and subsurface information in many fields, such as archaeology, urban infrastructure detection, geological investigation, and other fields.
... Nurunnabi et al. (2016a) proposed a segment-based approach using robust locally weighted regression to remove the ground points. Many more methods exist in the literature, including the well-known surface-based (Kraus and Pfeifer, 1998) and progressive densification-based (Axelsson, 2000) approaches. Recently, DL-based methods have been applied frequently. ...
Article
Full-text available
Pole-like object (PLO) detection and segmentation are important in many applications, such as 3D city modelling, urban planning, road assets monitoring, intelligent transportation, road safety, and forest monitoring. Arguably, vehicle-based mobile laser scanning (MLS) is the best on-road data acquisition system, because it is fast, precise and non-invasive. As part of that, laser scanning georeferenced data (i.e., point clouds) provide detailed structural morphology of the scanned objects. However, point clouds are not free from outliers and noise. Critically, many of the object extraction methods that depend on local saliency feature (e.g., normal)-based segmentation use principal component analysis (PCA). PCA can provide the local features but struggles to produce robust results in the presence of outliers and noise. To reduce the influence of outliers on saliency feature estimation and on segmentation, this paper employs Robust distance-based Diagnostic PCA (RD-PCA) coupled with the well-known DBSCAN clustering algorithm. This study contributes to a better understanding of object detection and segmentation by (i) exploring the problems of local saliency feature estimation in the presence of outliers and noise; (ii) understanding problems with PCA and why RD-PCA is important; and (iii) introducing a novel method for PLO detection and segmentation following a robust segmentation approach. The performance of the new algorithm is demonstrated through MLS data acquired in an urban road setup.
... Specifically, the cloth simulation filtering algorithm has frequently been compared with other filtering approaches in previous studies. Serifoglu Yilmaz et al. [48] evaluated the performance of seven commonly used ground-filtering algorithms [45,[49][50][51][52][53][54] for UAV-based point clouds. Their results showed that the cloth simulation filtering algorithm [45] produces the best results since it has the advantage of requiring only a few, easily adjustable parameters. ...
Article
Full-text available
The utilization of remote sensing technologies for archaeology was motivated by their ability to map large areas within a short time at a reasonable cost. With recent advances in platform and sensing technologies, uncrewed aerial vehicles (UAV) equipped with imaging and Light Detection and Ranging (LiDAR) systems have emerged as a promising tool due to their low cost, ease of deployment/operation, and ability to provide high-resolution geospatial data. In some cases, archaeological sites might be covered with vegetation, which makes the identification of below-canopy structures quite challenging. The ability of LiDAR energy to travel through gaps within vegetation allows for the derivation of returns from hidden structures below the canopy. This study deals with the development and deployment of a UAV system equipped with imaging and LiDAR sensing technologies assisted by an integrated Global Navigation Satellite System/Inertial Navigation System (GNSS/INS) for the archaeological mapping of Dana Island, Turkey. Data processing strategies are also introduced for the detection and visualization of underground structures. More specifically, a strategy has been developed for the robust identification of ground/terrain surface in a site characterized by steep slopes and dense vegetation, as well as the presence of numerous underground structures. The derived terrain surface is then used for the automated detection/localization of underground structures, which are then visualized through a web portal. The proposed strategy has shown a promising detection ability with an F1-score of approximately 92%.
... For instance, Xiao et al. (2019) located terrain points from WV-2/3 stereo images by adopting a cloth simulation filtering (CSF) algorithm [20], which was initially proposed by Zhang et al. (2016) for reconstructing DTMs from lidar point clouds [21]. A recent study [22] shows that CSF achieves higher accuracies in identifying ground areas from unmanned aerial vehicle-based point clouds than several other lidar-based point cloud filtering approaches, e.g., curvature-based multiscale curvature classification (MCC) [23], surface-based filtering (FUSION software, Version 4.40), and progressive triangulated irregular network (TIN)-based (LAStools software, Version 1.4) [24] methods. Perko et al. (2019) extended the multi-directional ground filtering method by Meng et al. (2009) [25] to a slope-dependent version (hereinafter referred to as MSD) for searching ground candidates from tri-stereo Pléiades images [26]. ...
Article
Full-text available
ArcticDEM provides the public with an unprecedented opportunity to access very high-spatial resolution digital elevation models (DEMs) covering the pan-Arctic surfaces. As it is generated from stereo-pairs of optical satellite imagery, ArcticDEM represents a mixture of a digital surface model (DSM) over a non-ground areas and digital terrain model (DTM) at bare grounds. Reconstructing DTM from ArcticDEM is thus needed in studies requiring bare ground elevation, such as modeling hydrological processes, tracking surface change dynamics, and estimating vegetation canopy height and associated forest attributes. Here we proposed an automated approach for estimating DTM from ArcticDEM in two steps: (1) identifying ground pixels from WorldView-2 imagery using a Gaussian mixture model (GMM) with local refinement by morphological operation, and (2) generating a continuous DTM surface using ArcticDEMs at ground locations and spatial interpolation methods (ordinary kriging (OK) and natural neighbor (NN)). We evaluated our method at three forested study sites characterized by different canopy cover and topographic conditions in Livengood, Alaska, where airborne lidar data is available for validation. Our results demonstrate that (1) the proposed ground identification method can effectively identify ground pixels with much lower root mean square errors (RMSEs) (<0.35 m) to the reference data than the comparative state-of-the-art approaches; (2) NN performs more robustly in DTM interpolation than OK; (3) the DTMs generated from NN interpolation with GMM-based ground masks decrease the RMSEs of ArcticDEM to 0.648 m, 1.677 m, and 0.521 m for Site-1, Site-2, and Site-3, respectively. This study provides a viable means of deriving high-resolution DTM from ArcticDEM that will be of great value to studies focusing on the Arctic ecosystems, forest change dynamics, and earth surface processes.
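The two-step idea in this abstract, GMM-based ground identification followed by morphological refinement, is sketched below on an assumed per-pixel feature (deviation from a smoothed elevation surface). The feature choice is a stand-in for the imagery-based features used by the authors, and the parameters are illustrative.

```python
# Hedged sketch: two-component Gaussian mixture separating likely ground from
# non-ground pixels, with a morphological opening to clean the mask.
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def gmm_ground_mask(dem, smooth_sigma=10.0, opening_size=3):
    """dem: 2D float array; returns a boolean ground mask."""
    feature = (dem - ndimage.gaussian_filter(dem, smooth_sigma)).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(feature)
    labels = gmm.predict(feature).reshape(dem.shape)
    # The component with the lower mean deviation is taken as ground.
    ground_component = int(np.argmin(gmm.means_.ravel()))
    mask = labels == ground_component
    return ndimage.binary_opening(mask, structure=np.ones((opening_size, opening_size)))
```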
... Fig. 3. Result of filtering isolated points: (a) before filtering; (b) after filtering. At the next stage, ground-surface points were recognized automatically using the Axelsson method [12], and the array of laser returns was divided into levels relative to this surface (Fig. 4). ...
... The DTMs used for height-normalization were generated from the points classified as ground. In study I, the ALS dataset was ground-classified using a slightly modified version of Axelsson (2000) in the Terrascan software (Terrasolid Ltd., Espoo, Finland). In study II, the ALS and ULS datasets were ground-classified with the method by Zhang et al. (2003). ...
Article
Full-text available
The global crises – climate change and biodiversity loss – have created a need for precise and wide-scale information of forests. Airborne laser scanning (ALS) provides a means for collecting such information, as it enables mapping large areas efficiently with a resolution sufficient for object-level information extraction. Deadwood is an important component of the forest environment, as it stores carbon and provides a habitat for a wide variety of species. Mapping deadwood provides information about the valuable areas regarding biodiversity, which can be used in, e.g., conservation and restoration planning. The aim of this thesis was to develop automated methodology for detecting individual fallen and standing dead trees from ALS data. Studies I and II presented a line detection based method for detecting fallen trees and evaluated its performance on a moderate-density ALS dataset (point density approx. 15 points/m2) and a high point density unmanned aerial vehicle borne laser scanning (ULS) dataset (point density approx. 285 points/m2). In addition, the studies inspected the dataset, methodology, and forest structure related factors affecting the performance of the method. The studies found that the length and diameter of fallen trees significantly impact their detection probability, and that the majority of large fallen trees can be identified from ALS data automatically. Furthermore, study I found that the amount and type of undergrowth and ground vegetation, as well as the size of surrounding living trees determine how accurately fallen trees can be mapped from ALS data. Moreover, study II found that increasing the point density of the laser scanning dataset does not automatically improve the performance of fallen tree detection, unless the methodology is adjusted to consider the increase in noise and detail in the point cloud. Study III inspected the feasibility of high-density discrete return ULS data for mapping individual standing dead trees. The individual tree detection method developed in the study was based on a three-step process consisting of individual tree segmentation, feature extraction, and machine learning based classification. The study found that, while some of the large standing dead trees could be identified from the ULS dataset, basing detection on discrete return data and the geometrical properties of trees did not suffice for acquiring applicable deadwood information. Thus, spectral information acquired with multispectral laser scanners or aerial imaging, or full-waveform laser scanning is necessary for detecting individual standing dead trees with a sufficient accuracy. The findings of this thesis contribute to the existing deadwood detection methodology and improve the understanding of factors to take into account when utilizing ALS for detecting dead trees at a single-tree-level. Although remotely sensed deadwood mapping is still far from a resolved topic, these contributions are a step towards operationalizing remotely sensed biodiversity monitoring.
... The same data covered both field data sets. The ALS echoes were classified into ground and vegetation hits through the approach presented by Axelsson (2000). The ground echoes were used to create a digital terrain model (DTM). ...
Article
Full-text available
In Finland, interest in continuous cover forestry (CCF) has increased rapidly in recent years. During those years CCF has been examined from various viewpoints but not from the perspective of forest inventories. This holds especially true for applications based on remote sensing. Conversely, airborne laser scanning (ALS) data have been widely used to predict forest characteristics such as size distribution and vertical forest structure, which are closely related to the forest information needs of CCF. In this study we used the area-based approach to predict a set of stand attributes from ALS data (5 pulses per m2) in a CCF forest management experiment in Katajamäki, eastern Finland. In addition to the CCF stands, the experiment included shelterwood stands and untreated stands. The predicted attributes included volume, biomass, basal area, number of stems, mean diameter, Lorey’s height, dominant height, standing dead wood volume, parameters of the theoretical stem diameter distribution model, understory height and number of understory stems. Our main aim was to test whether the same model could be used across different management systems. The accuracy of the attributes predicted for the CCF stands was compared with the predictions for the other management systems in the same experiment. We also compared and discussed our results in relation to the even-aged stand attribute predictions that were conducted by using separate operational forest data collected from sites surrounding Katajamäki. The results showed that forest data from the different management systems could be combined into a single model of a stand attribute, i.e., ALS metrics were found to be suitable for comparing different management systems in regard to differences in forest structure. The accuracy of the predicted attributes in the CCF plots was comparable to that of the other management alternatives in the experiment. The accuracy was also comparable to that of even-aged forests. The results of this study were promising; the stand attributes of CCF-managed forests could be predicted analogously to those of other management systems. This indicates that for the purposes of forest inventories there may not be a need to stratify forest lands by management system. It should be noted, however, that the study area was relatively small, that the forest stands were harvested in the 1980 s, and that the attributes may not have been completely exhaustive for CCF.
... The acquired point clouds include topographic information (or ground points) and non-ground information, such as canopy, trees and vegetations. There are several approaches presented to capture the ground while filtering out the non-ground points, which include the slope-based filter [49][50][51][52], morphological filter [53][54][55][56], surface-based filter [57,58] and progressive morphological filter [54]. This study employs the cloth simulation filter (CSF; CloudCompare software, v.2.10.2) [59]. ...
Article
Full-text available
High-resolution topographic information of landslide-prone areas plays an important role in accurate prediction and characterization of potential landslides and mitigation of landslides-associated hazards. This study presents an advanced geomorphological surveying system that integrates the light detection and ranging (LiDAR) with an unmanned aerial vehicle (UAV), a multi-rotor aerial vehicle in specific, for prediction, monitoring and forensic analysis of landslides, and for maintenance of debris-flow barriers. The test-flight over a vegetated area demonstrates that the integrated UAV-LiDAR system can provide high-resolution, three-dimensional (3D) LiDAR point clouds below canopy and vegetation in forest environments, overcoming the limitation of aerial photogrammetry and terrestrial LiDAR platforms. An algorithm is suggested to delineate the topographic information from the acquired 3D LiDAR point clouds, and the accuracy and performance of the developed UAV-LiDAR system are examined through field demonstration. Finally, two field demonstrations are presented: the forensic analysis of the recent Gokseong landslide event, and the sediment deposition monitoring for debris-flow barrier maintenance in South Korea. The developed surveying system is expected to contribute to geomorphological field surveys in vegetated, forest environments, particularly in a site that is not easily accessible.
... Compared to filtering methods for multibeam bathymetric PCD, there have been more mature ways available for Airborne LiDAR PCD. For instance, various methods have been employed for Airborne LiDAR PCD processing, including using the difference in slope between adjacent points as a criterion for outlier rejection [11][12][13], employing mathematical morphology methods from images processing [14][15][16], utilizing irregular triangle networks [17,18], deep learning approaches for filtering [19][20][21], and applying physical model-based cloth simulation filtering (CSF) as demonstrated by Zhang et al. [22]. Given the many similarities between these two types of PCD, techniques suitable for filtering Airborne LiDAR PCD can also be applied to that produced by multibeam sonar. ...
Article
Full-text available
The utilization of multibeam sonar systems has significantly facilitated the acquisition of underwater bathymetric data. However, efficiently processing vast amounts of multibeam point cloud data remains a challenge, particularly in terms of rejecting massive outliers. This paper proposes a novel solution by implementing a cone model filtering method for multibeam bathymetric point cloud data filtering. Initially, statistical analysis is employed to remove large-scale outliers from the raw point cloud data in order to enhance its resistance to variance for subsequent processing. Subsequently, virtual grids and voxel down-sampling are introduced to determine the angles and vertices of the model within each grid. Finally, the point cloud data was inverted, and the custom parameters were redefined to facilitate bi-directional data filtering. Experimental results demonstrate that compared to the commonly used filtering method the proposed method in this paper effectively removes outliers while minimizing excessive filtering, with minimal differences in standard deviations from human-computer interactive filtering. Furthermore, it yields a 3.57% improvement in accuracy compared to the Combined Uncertainty and Bathymetry Estimator method. These findings suggest that the newly proposed method is comparatively more effective and stable, exhibiting great potential for mitigating excessive filtering in areas with complex terrain.
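The statistical outlier-removal step mentioned in this abstract is commonly implemented as a k-nearest-neighbour distance test; a minimal sketch under assumed parameters follows, and it is not the paper's cone-model filter itself.

```python
# Hedged sketch of statistical outlier removal based on mean kNN distance.
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_mask(points, k=16, n_sigma=2.0):
    """points: (N, 3); returns a boolean mask of points to keep."""
    tree = cKDTree(points)
    # k + 1 because the nearest neighbour of each point is the point itself.
    dist, _ = tree.query(points, k=k + 1)
    mean_dist = dist[:, 1:].mean(axis=1)
    threshold = mean_dist.mean() + n_sigma * mean_dist.std()
    return mean_dist <= threshold
```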
... Ground points were classified using Terrascan software (version 016.013), which uses an adaptive triangular irregular network densification algorithm [58]. The algorithm settings were optimised to remove only the vegetation cover, leaving the remains of past human activities as intact as possible (Table 2). ...
Article
Full-text available
In our study, we set out to collect a multimodal annotated dataset for remote sensing of Maya archaeology, that is suitable for deep learning. The dataset covers the area around Chactún, one of the largest ancient Maya urban centres in the central Yucatán Peninsula. The dataset includes five types of data records: raster visualisations and canopy height model from airborne laser scanning (ALS) data, Sentinel-1 and Sentinel-2 satellite data, and manual data annotations. The manual annotations (used as binary masks) represent three different types of ancient Maya structures (class labels: buildings, platforms, and aguadas – artificial reservoirs) within the study area, their exact locations, and boundaries. The dataset is ready for use with machine learning, including convolutional neural networks (CNNs) for object recognition, object localization (detection), and semantic segmentation. We would like to provide this dataset to help more research teams develop their own computer vision models for investigations of Maya archaeology or improve existing ones.
... The ALS flight was operated at an altitude of 2,200 m above ground level, which yielded nominal ALS echoes with a density of 0.8/m². The ALS points were classified as ground and non-ground points following the methodology of Axelsson (2000). A 1 m digital terrain model (DTM) was created from the ground points, and the DTM was used to normalize the heights (z values) of the ALS points. ...
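Height normalization against a DTM, as described in this excerpt, can be sketched generically: look up the ground elevation under each point and subtract it from the point's z value. The snippet assumes a regular-grid DTM held in a NumPy array; the function name, grid origin, and cell size are illustrative.

```python
# Generic sketch of height normalization against a gridded DTM.
# Assumes dtm is a 2D array of ground elevations with known origin (x0, y0) and
# cell size; all names and values are illustrative, not tied to the cited workflow.
import numpy as np

def normalize_heights(points, dtm, x0, y0, cell=1.0):
    """Return above-ground heights by subtracting the DTM elevation under each point."""
    cols = ((points[:, 0] - x0) / cell).astype(int)      # grid column under each x
    rows = ((points[:, 1] - y0) / cell).astype(int)      # grid row under each y
    cols = np.clip(cols, 0, dtm.shape[1] - 1)
    rows = np.clip(rows, 0, dtm.shape[0] - 1)
    return points[:, 2] - dtm[rows, cols]                # z minus ground elevation
```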
Article
Full-text available
This study assessed the prediction accuracy of the forest aboveground biomass (AGB) model using remotely sensed data sources (i.e. airborne laser scanning (ALS), RapidEye, Landsat), and the combination of ALS with RapidEye/Landsat using parametric weighted least squares (WLS) regression. We also analysed the AGB model using random forests, extremely randomized trees, and deep learning stacked autoencoder (SAE) network from nonparametric statistics to compare the performance with WLS regression. We also compared the widths of the 95% confidence intervals for estimates of the mean AGB per unit area using the model-based estimator. The study site in the Terai Arc Landscape, Nepal, comprised 14 protected areas extending from the southern part of Nepal to India and encompassed mosaics of continuous dense forest and tall grassland. The ALS data provided the largest prediction accuracy (0.30-0.35 relative root mean squared error (rRMSE)), whereas RapidEye and Landsat had smaller prediction accuracies (0.48-0.54 and 0.47-0.55 rRMSE, respectively) for the estimation of AGB. The combined use of ALS and RapidEye predictors in the AGB model reduced the rRMSE and narrowed the confidence interval compared with ALS alone, but the improvements were minor. The SAE prediction technique provided the largest prediction accuracy, with inputs of combined ALS and RapidEye predictors that yielded an R² of 0.80, an rRMSE of 0.30, and a confidence interval of 176-184 compared to other tested prediction techniques. The SAE prediction technique can become more powerful than other tested prediction techniques if properly adjusted and tuned for accurate forest AGB mapping applications. To our knowledge, this is the first study assessing the performance of the SAE in AGB modelling with a range of hyper-parameter values.
... One well-known surface-based filtering method is the progressive TIN densification (PTD) method, first proposed by Axelsson [16]. In this method, a sparse TIN is created and then densified iteratively. ...
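A heavily simplified sketch of the progressive densification idea is given below: seed the ground set with the lowest point per coarse cell, fit a surface through the current ground points via their Delaunay triangulation, and iteratively accept points lying within a vertical threshold of that surface. This is not Axelsson's full PTD algorithm, which also applies per-triangle angle criteria and other refinements; the cell size, threshold, and iteration count are placeholders.

```python
# Simplified, illustrative progressive-densification sketch (not the full PTD
# algorithm of Axelsson, which also applies angle thresholds per triangle).
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def densify_ground(points, seed_cell=20.0, dz_max=0.5, n_iter=5):
    xy, z = points[:, :2], points[:, 2]
    # Seed: the lowest point in each coarse cell is assumed to be ground.
    kx = np.floor((xy[:, 0] - xy[:, 0].min()) / seed_cell).astype(int)
    ky = np.floor((xy[:, 1] - xy[:, 1].min()) / seed_cell).astype(int)
    cell_id = kx * (ky.max() + 1) + ky                  # unique id per coarse cell
    ground = np.zeros(len(points), dtype=bool)
    for cid in np.unique(cell_id):
        members = np.where(cell_id == cid)[0]
        ground[members[np.argmin(z[members])]] = True
    for _ in range(n_iter):
        # Fit a TIN-like surface through the current ground points (linear
        # interpolation over their Delaunay triangulation) and evaluate it everywhere.
        surf = LinearNDInterpolator(xy[ground], z[ground])
        dz = z - surf(xy)                               # NaN outside the seed hull stays rejected
        new_ground = ground | (np.abs(dz) <= dz_max)    # accept points close to the surface
        if new_ground.sum() == ground.sum():
            break
        ground = new_ground
    return ground
```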
Article
Full-text available
Filtering airborne LiDAR datasets in urban areas is an important step in building digital and smart cities. However, existing filters suffer from poor filtering performance and a heavy computational burden when processing large-scale and complicated urban environments. To tackle this issue, a self-adaptive filtering method based on object-primitive global energy minimization is proposed in this paper. Mode points, the cluster centers of the LiDAR data obtained with a mean shift algorithm, are first acquired to generate a graph referred to here as the "mode graph". By defining an energy function on the mode graph, the filtering process is transformed into iterative global energy minimization. In each iteration, the graph cuts technique is adopted to achieve global energy minimization. Meanwhile, the probability of each point belonging to the ground is updated, leading to a new refined ground surface fitted to the points whose probabilities are greater than 0.5. This process is iterated until two successive fitted ground surfaces are sufficiently close. Four samples with different urban environments were used to verify the effectiveness of the developed filter. Experimental results indicate that the developed filter achieves the best filtering performance: both the total error and the Kappa coefficient are superior to those of the other three classical filtering methods.
... As shown in Figure 7, ICSF achieved the lowest average value and standard deviation of total error, which demonstrates its superiority in terms of accuracy and robustness. Additionally, the progressive TIN densification filtering (PTDF) algorithm proposed by Axelsson [42] performed better than the other comparative methods. PTDF constructs a reference terrain using a TIN and then progressively adds ground points to update the reference terrain, based on the distances between points and the terrain. ...
Article
Full-text available
Ground filtering is an essential step in airborne light detection and ranging (LiDAR) data processing in various applications. The cloth simulation filtering (CSF) algorithm has gained popularity because of its ease of use. However, CSF has limitations in topographically and environmentally complex areas. Therefore, an improved CSF (ICSF) algorithm was developed in this study. ICSF uses morphological closing operations to initialize the cloth and estimates the cloth rigidness to provide a more accurate reference terrain across varying terrain characteristics. Moreover, terrain-adaptive height difference thresholds are developed for better filtering of airborne LiDAR point clouds. The performance of ICSF was assessed using International Society for Photogrammetry and Remote Sensing urban and rural samples and Open Topography forested samples. Results showed that ICSF can improve the filtering accuracy of CSF in samples with various terrain and non-ground object characteristics, while maintaining the ease of use of CSF. In urban and rural samples, ICSF obtained an average total error of 4.03% and outperformed eight other reference algorithms in terms of accuracy and robustness. In forested samples, ICSF was more accurate than well-known filtering algorithms (including the maximum slope, progressive morphology, and cloth simulation filtering algorithms), and performed better with respect to the preservation of steep slopes and discontinuities and to vegetation removal. Thus, the proposed algorithm can be used as an efficient tool for LiDAR data processing.
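The morphological closing used by ICSF to initialize the cloth can be illustrated on a rasterized minimum-elevation grid with SciPy. The sketch below is only a generic demonstration of grey-scale closing on a min-Z raster, not the ICSF implementation; the cell size and structuring-element window are placeholders.

```python
# Generic illustration of morphological closing on a minimum-elevation raster,
# similar in spirit to the initialization step described above (not the ICSF code).
import numpy as np
from scipy import ndimage

def min_z_raster(points, cell=1.0):
    """Rasterize the lowest z per cell; empty cells are crudely filled afterwards."""
    x0, y0 = points[:, 0].min(), points[:, 1].min()
    cols = ((points[:, 0] - x0) / cell).astype(int)
    rows = ((points[:, 1] - y0) / cell).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.inf)
    np.minimum.at(grid, (rows, cols), points[:, 2])     # keep the lowest z per cell
    finite = np.isfinite(grid)
    grid[~finite] = grid[finite].max()                  # crude fill of empty cells
    return grid

def closed_reference_surface(points, cell=1.0, window=5):
    """Grey-scale closing (dilation then erosion) suppresses small pits in the raster."""
    grid = min_z_raster(points, cell)
    return ndimage.grey_closing(grid, size=(window, window))
```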
... To further verify the generalizability of the proposed methods, we also performed wall and protrusion separation in natural-like scenes using three classic ground filtering methods: the maximum slope filter (MSF) [39], the progressive morphology filter (PMF) [40], and the progressive triangular irregular network densification filter (PTDF) [41]. MSF and PMF were implemented in C++, and PTDF was implemented using Terrasolid software. ...
Article
Full-text available
As a critical prerequisite for semantic facade reconstruction, accurately separating wall and protrusion points from facade point clouds is required. The performance of traditional separation methods is severely limited by facade conditions, including wall shapes (e.g., nonplanar walls), wall compositions (e.g., walls composed of multiple noncoplanar point clusters), and protrusion structures (e.g., protrusions without regularity, repetitive, or self-symmetric features). This study proposes a more widely applicable wall and protrusion separation method. The major principle underlying the proposed method is to transform the wall and protrusion separation problem into a ground filtering problem and to separate walls and protrusions using ground filtering methods, since the two problems can be solved using the same prior knowledge, that is, protrusions (nonground objects) protrude from walls (ground). After this transformation, the cloth simulation filter was used as an example to separate walls and protrusions in eight facade point clouds with various characteristics. The proposed method was robust to the facade conditions, with a mean intersection over union of 90.7%, and had substantially higher accuracy compared with the traditional separation methods, including region growing-based, random sample consensus-based, multipass random sample consensus-based, and hybrid methods, with mean intersection over union values of 69.53%, 49.52%, 63.93%, and 47.07%, respectively. Moreover, the proposed method was general, since existing ground filtering methods (including the maximum slope, progressive morphology, and progressive triangular irregular network densification filters) can also perform well.
... The resulting compressed point cloud was then processed with the lasground and las2dem tools that are part of the software. The lasground tool uses an algorithm proposed by Axelsson [55] for ground classification and bare-earth extraction. The las2dem tool uses Delaunay triangulation to convert the LAZ file into a temporary Triangular Irregular Network (TIN) and rasterizes the resulting ground points using linear interpolation [56]. ...
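The TIN-to-raster step described here (Delaunay triangulation of the ground points followed by linear interpolation onto a grid) can be mimicked with SciPy as below. This is only a conceptual stand-in for las2dem, not its implementation, and the grid spacing is an illustrative value.

```python
# Conceptual stand-in for TIN rasterization (Delaunay + linear interpolation),
# not a reimplementation of las2dem. The grid spacing is an illustrative value.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

def rasterize_ground(ground_points, cell=1.0):
    """Interpolate ground point elevations onto a regular grid via their Delaunay TIN."""
    x, y, z = ground_points[:, 0], ground_points[:, 1], ground_points[:, 2]
    interp = LinearNDInterpolator(np.column_stack([x, y]), z)  # builds a Delaunay TIN internally
    gx = np.arange(x.min(), x.max() + cell, cell)
    gy = np.arange(y.min(), y.max() + cell, cell)
    gxx, gyy = np.meshgrid(gx, gy)
    dtm = interp(gxx, gyy)                                     # NaN outside the TIN's convex hull
    return gx, gy, dtm
```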
Article
Full-text available
Due to the ongoing energy transition, new geological exploration technologies are needed to discover mineral deposits containing critical materials such as lithium (Li). The vast majority of European Li deposits are related to Li–Cs–Ta (LCT) pegmatites. A review of the literature indicates that conventional exploration campaigns are dominated by geochemical surveys and related exploration tools. However, other exploration techniques must be evaluated, namely, remote sensing (RS) and geophysics. This work presents the results of the INOVMINERAL4.0 project obtained through alternative approaches to traditional geochemistry that were gathered and integrated into a webGIS application. The specific objectives were to: (i) assess the potential of high-resolution elevation data; (ii) evaluate geophysical methods, particularly radiometry; (iii) establish a methodology for spectral data acquisition and build a spectral library; (iv) compare obtained spectra with Landsat 9 data for pegmatite identification; and (v) implement a user-friendly webGIS platform for data integration and visualization. Radiometric data acquisition using geophysical techniques effectively discriminated pegmatites from host rocks. The developed spectral library provides valuable insights for space-based exploration. Compared with Landsat 8, Landsat 9 data accurately identified known LCT pegmatite targets. The user-friendly webGIS platform facilitates data integration, visualization, and sharing, supporting potential users in similar exploration approaches.
... Grid resolutions were set to 0.05 or 0.1 m for ULS data according to the point density in the specific plots. The Delaunay triangulation (TIN) algorithm was used for spatial interpolation and DTM generation (Axelsson, 2000). ...
Article
Full-text available
Light detection and ranging (LiDAR) data can provide 3D structural information of objects and are ideal for extracting individual tree parameters, and individual tree segmentation (ITS) is a vital step for this purpose. Various ITS methods have emerged for airborne LiDAR scanning (ALS) and unmanned aerial vehicle LiDAR scanning (ULS) data. Here, we propose a new individual tree segmentation method, which couples the classical and efficient watershed algorithm (WS) with the newly developed connection center evolution (CCE) clustering algorithm from pattern recognition. CCE is applied to ITS for the first time and is comprehensively optimized by considering tree structure and point cloud characteristics. Firstly, the amount of data is greatly reduced by mean shift voxelization. Then, the optimal clustering scale is automatically determined from the shapes of the projections in three different directions. We selected five forest plots in Saihanba, China, and 14 public plots in the Alpine region of Europe, with ULS or ALS point cloud densities from 11 to 3295 pts/m². Eleven ITS methods were used for comparison. The accuracy of tree top detection and tree height extraction is estimated by five and two metrics, respectively. The results show that the matching rate (Rmatch) of tree tops is up to 0.92, the coefficient of determination (R²) of tree height estimation is up to 0.94, and the minimum root mean square error (RMSE) is 0.6 m. Our method outperforms the other methods, especially in the broadleaf forest plot on slopes, where its five tree top detection metrics exceeded those of the other algorithms by at least 11% on average. Our ITS method is both robust and efficient and has the potential to be used, especially in coniferous forests, to extract the structural parameters of individual trees for forest management, carbon stock estimation, and habitat mapping.
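The classical watershed stage of such a pipeline can be illustrated on a CHM raster with scikit-image: local maxima serve as treetop markers and the inverted CHM as the flooding relief. The sketch below covers only that stage, not the CCE clustering of the paper, and the smoothing, height, and minimum-distance parameters are placeholders.

```python
# Illustrative watershed crown segmentation on a CHM raster (markers = local maxima).
# This shows only the classical watershed stage, not the CCE clustering of the paper;
# sigma, min_height, and min_distance are placeholder parameters.
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_crowns(chm, min_height=2.0, sigma=1.0, min_distance=5):
    smoothed = ndimage.gaussian_filter(chm, sigma=sigma)          # suppress small canopy noise
    mask = smoothed > min_height                                  # ignore ground / low vegetation
    peaks = peak_local_max(smoothed, min_distance=min_distance, labels=mask.astype(int))
    markers = np.zeros_like(chm, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)        # one marker per detected treetop
    labels = watershed(-smoothed, markers=markers, mask=mask)     # flood the inverted CHM
    return peaks, labels
```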
... The resulting compressed point cloud was then processed with the lasground and las2dem tools that are part of the software. The lasground tool uses an algorithm proposed by Axelsson [55] for ground classification and bare-earth extraction. The las2dem tool uses Delaunay triangulation to convert the LAZ file into a temporary Triangular Irregular Network (TIN) and rasterizes the resulting ground points using linear interpolation [56]. ...
Preprint
Full-text available
Due to the ongoing energy transition, new geological exploration technologies are needed to discover mineral deposits containing critical materials such as lithium (Li). The vast majority of European Li deposits are related to Li–Cs–Ta (LCT) pegmatites. A literature review indicates that conventional exploration campaigns are dominated by geochemical surveys and related exploration tools. However, other exploration techniques must be evaluated, namely remote sensing (RS) and geophysics. This work presents the results of the INOVMINERAL4.0 project obtained through alternative approaches to traditional geochemistry that were gathered and integrated into a webGIS application. The specific objectives were to: (i) assess the potential of high-resolution elevation data; (ii) evaluate geophysical methods, particularly radiometry; (iii) establish a methodology for spectral data acquisition and build a spectral library; (iv) compare obtained spectra with Landsat 9 data for pegmatite identification; and (v) implement a user-friendly webGIS for data integration and visualization. Radiometric data acquisition using geophysical techniques effectively discriminated pegmatites from host rocks. The developed spectral library provided valuable insights for space-based exploration. Compared with Landsat 8, Landsat 9 data accurately identified known LCT pegmatite targets. The user-friendly webGIS facilitated data integration, visualization, and sharing, supporting potential users in similar exploration approaches.
Article
Full-text available
Building detection plays an important role in urban applications and is usually a prerequisite for contour extraction and building modeling. Over the last decades, airborne LiDAR data have been used due to their capability to represent terrestrial surfaces and objects with high geometric quality. In this paper, a novel building detection approach based on geometric/morphological object characteristics is proposed. The proposed strategy is divided into three main stages: 1) selection of candidate points based on height; 2) building detection using a geometric feature (omnivariance) and the K-means clustering algorithm; and 3) refinement based on a majority filter and mathematical morphology. The experiments were conducted using airborne LiDAR datasets with varying point density acquired in different urban environments. The results indicated the robustness of the proposed approach for all datasets and environmental complexities, with an average F-score of around 96%. In addition, the results showed that point density can affect building detection, with better results for higher-density datasets. Compared with related approaches, the proposed strategy achieves better performance in terms of completeness, producing an omission error rate smaller than 3%.
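Omnivariance, the geometric feature named in the abstract, is commonly defined as the cube root of the product of the three eigenvalues of a neighbourhood covariance matrix. The sketch below computes it per point over a k-nearest-neighbour neighbourhood; k is an illustrative choice, not a value from the paper. Clustering such features (e.g., with K-means) would then separate building-like from vegetation-like neighbourhoods, in the spirit of the second stage described above.

```python
# Per-point omnivariance = (l1 * l2 * l3)^(1/3) of the local covariance eigenvalues.
# Neighbourhood size k is an illustrative choice, not a value from the paper.
import numpy as np
from scipy.spatial import cKDTree

def omnivariance(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                  # k nearest neighbours of each point
    feats = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)                  # 3x3 covariance of the neighbourhood
        eigvals = np.clip(np.linalg.eigvalsh(cov), 1e-12, None)  # guard against zero eigenvalues
        feats[i] = np.prod(eigvals) ** (1.0 / 3.0)
    return feats
```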
Article
This paper introduces a novel Bayesian filtering technique for the filtering of ground points in complex terrain and on steep inclines in remote sensing applications. The technique integrates LAStools and statistical techniques, generating a posterior distribution from prior probability and likelihood functions. It is applied to point cloud data derived from UAV aerial images and to data in DSM format. The study shows that the Bayesian method improves the outcome in sloping regions compared with other algorithms such as LAStools, statistical filtering, and CSF. In flat terrain, the CSF approach produced the highest F1 score, while the Bayesian method showed some degradation but still outperformed the statistical and LAStools approaches.
Article
Full-text available
The canopy height model (CHM) derived from LiDAR point cloud data is usually used to accurately identify the position and canopy dimensions of single trees. However, local invalid values (also called data pits) are often encountered during the generation of the CHM, which results in a low-quality CHM and failure to detect treetops. For this reason, this paper proposes an innovative method, called "pixels weighted differential gradient", to filter these data pits accurately and improve the quality of the CHM. First, two characteristic parameters, the gradient index (GI) and the Z-score value (ZV), are extracted from the weighted differential gradient between the pit pixels and their eight neighbors, and GIs and ZVs are then used jointly as criteria for the initial identification of data pits. Secondly, CHMs of different resolutions are merged, using the image processing algorithm developed in this paper, to distinguish canopy gaps from data pits. Finally, potential pits are filtered and filled with a reasonable value. The experimental validation and comparative analysis were carried out in a coniferous forest located in Triangle Lake, United States. The experimental results showed that our method can accurately identify potential data pits and retain the canopy structure information in the CHM. The root-mean-squared error (RMSE) and mean bias error (MBE) of our method are reduced by 26–73% and 28–76%, respectively, compared with six other methods, including the mean filter, Gaussian filter, median filter, pit-free, spike-free, and graph-based progressive morphological filtering (GPMF) methods. The average F1 score of our method improved by approximately 4% to 25% when applied to single-tree extraction.
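The paper's GI/ZV criteria are not reproduced here; the sketch below only illustrates the general neighbour-comparison idea behind pit filtering, flagging CHM pixels that fall far below the median of their eight neighbours and replacing them with that median. The threshold is a placeholder.

```python
# Generic neighbour-based pit detection and filling on a CHM raster.
# This is NOT the paper's GI/ZV method; the threshold is a placeholder.
import numpy as np
from scipy import ndimage

def fill_pits(chm, threshold=2.0):
    # Median of the 8-neighbourhood, excluding the center pixel.
    footprint = np.ones((3, 3), dtype=bool)
    footprint[1, 1] = False
    nbr_median = ndimage.median_filter(chm, footprint=footprint)
    pits = (nbr_median - chm) > threshold             # pixel sits far below its neighbours
    filled = chm.copy()
    filled[pits] = nbr_median[pits]                   # replace pit values with the neighbourhood median
    return filled, pits
```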
Preprint
Full-text available
Developing algorithms for generating accurate Digital Terrain Models (DTMs) of rivers is necessary due to the limitations of traditional field survey methods, which are time-consuming, costly, and do not provide continuous data. The objective of this study was to develop an advanced algorithm for generating high-quality DTMs of rivers using Structure from Motion (SfM) data. A leveling survey was conducted on four cross-sections of the Bokha stream in Icheon City, S. Korea, and an SfM-based DTM was produced using the Pix4Dmapper program and a Phantom 4 multispectral drone. Two vegetation filters (NDVI and ExG) and two morphological filters (ATIN and CSF) were applied to the data, and the best filter combination was identified based on MAE and RMSE analyses. The integration of NDVI and CSF showed the best performance for the vegetated area, while a single application of NDVI showed the lowest MAE for the bare area. The effectiveness of the SfM method in eliminating waterfront vegetation was confirmed, with an overall MAE of 0.299 m and RMSE of 0.375 m. These findings suggest that DTMs of riparian zones can be generated efficiently with a limited budget and time using the proposed methodology.
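NDVI and ExG, the two vegetation indices used above, have standard definitions: NDVI = (NIR - Red)/(NIR + Red), and ExG = 2g - r - b on chromaticity-normalized bands. The sketch below computes both and combines them into a simple vegetation mask; the thresholds are placeholders, not values from the study.

```python
# Standard NDVI and ExG vegetation indices with placeholder thresholds.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

def exg(red, green, blue, eps=1e-6):
    total = red + green + blue + eps                  # chromaticity normalization
    r, g, b = red / total, green / total, blue / total
    return 2.0 * g - r - b

def vegetation_mask(nir, red, green, blue, ndvi_thr=0.3, exg_thr=0.1):
    """Flag pixels as vegetation if either index exceeds its (placeholder) threshold."""
    return (ndvi(nir, red) > ndvi_thr) | (exg(red, green, blue) > exg_thr)
```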
Chapter
Terrain aided navigation systems use barometric altimeters and radio altimeters to measure the terrain elevation profile, and their accuracy degrades due to the limitations of terrain distribution characteristics and data quantity. This paper introduces airborne LiDAR as a measurement sensor for terrain aided navigation systems, establishing point cloud digital elevation maps that are compared with a prior map to obtain the optimal matching position and correct position errors. The proposed method is verified using an actual digital elevation map and real flight data. The results show that the proposed method can effectively improve the terrain elevation detection ability and map matching accuracy of the terrain aided navigation system, and ensure the availability of the system under complex terrain conditions.
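The map-matching step described in this chapter can be illustrated as a brute-force search for the horizontal offset that minimizes the mean squared difference between measured terrain elevations and a prior DEM. The sketch below is illustrative only; operational terrain aided navigation systems typically use recursive estimators rather than an exhaustive search, and the search window and DEM layout are assumptions.

```python
# Brute-force terrain matching sketch: find the horizontal offset that best aligns
# measured elevations with a prior DEM (illustrative; real systems use recursive filters).
import numpy as np

def match_offset(meas_xy, meas_z, dem, x0, y0, cell, search=20, step=1):
    """Search offsets (in DEM cells) and return the one minimizing mean squared elevation error."""
    best, best_off = np.inf, (0, 0)
    cols = ((meas_xy[:, 0] - x0) / cell).astype(int)
    rows = ((meas_xy[:, 1] - y0) / cell).astype(int)
    for di in range(-search, search + 1, step):
        for dj in range(-search, search + 1, step):
            r, c = rows + di, cols + dj
            ok = (r >= 0) & (r < dem.shape[0]) & (c >= 0) & (c < dem.shape[1])
            if ok.sum() < 10:                         # require enough overlapping samples
                continue
            err = np.mean((dem[r[ok], c[ok]] - meas_z[ok]) ** 2)
            if err < best:
                best, best_off = err, (di, dj)
    return best_off, best
```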
Conference Paper
This abstract deals with the combination and complementation of three existing research disciplines, Character Computing, Conversational Theory, and Fuzzy Logic or Fuzzy Classification, so that the vision pursued by researchers in Character Computing, namely computer systems that can autonomously adapt to the behaviour of their human interlocutors, is achieved. This primarily involves chat- and voicebots, also called conversational AI systems. These types of systems only rarely have access to sensor data; as a rule, they only exchange text or voice messages with the user.
Article
A technique for the preliminary processing of UAV-based airborne laser scanning data is proposed. Its main steps, such as filtering the data, determining the calibration parameters of the survey system, and strip adjustment of trajectories and point clouds, are described and analyzed. The high density of the point cloud, which is the basic feature of measurements made at low altitude, is taken into account. The study of the technique was carried out on urban area data obtained with the AGM-MS3 laser scanner. This information served to develop the filtering and relative orientation algorithms. The initial AGM-MS3 data were also analyzed to assess the need for adjustment when the distance between adjacent strips is short. It is concluded that the technique can be used to process data for any territory.