Figure - available from: Remote Sensing
Source publication
Accepting the ecological necessity of a drastic reduction of resource consumption and greenhouse gas emissions in the building industry, the Institute for Lightweight Structures and Conceptual Design (ILEK) at the University of Stuttgart is developing graded concrete components with integrated concrete hollow spheres. These components weigh a fract...
Citations
... Algorithms based on parameterization methods, such as geometric primitives [39] and B-spline surfaces [32], parameterize the surface of point clouds to calculate variations in parameters and estimate the range. To utilize point clouds that include geometric primitive elements or those capable of detecting parameter variations, prior information on global or local parameters is required. ...
This study utilized unmanned aerial systems (UAS) and terrestrial laser scanners (TLS) to develop a 3D numerical model of slope anchors and conduct a comprehensive analysis. Initial data were collected using a UAS with 4K resolution, followed by a second dataset captured 6 months later with 8K resolution after the anchor had been artificially damaged. The model analyzed damage factors such as cracks, destruction, movement, and settlement. Cracks smaller than 0.3 mm were detected with an error margin of ±0.05 mm. The maximum damaged area on the anchor head was within 3% of the designed value, and the volume of damaged regions was quantified. A combined analysis examined elevation differences on the anchor’s irregular bottom surface, yielding an average difference over 20 measurement points that reflects ground adhesion. The rotation angle (<1°) and displacement of the anchor head were also measured. The study successfully extracted quantitative damage data, demonstrating the potential for accurate assessment of anchor performance. The findings highlight the value of integrating UAS and TLS technologies for slope maintenance. By organizing these quantitative metrics into a database, this approach offers a robust alternative to traditional visual inspections, especially for inaccessible facilities, providing a foundation for enhanced safety evaluations.
... In other commercially available software, this information is likewise sparse or nonexistent. This may be an issue if the sphere center coordinates are needed for deformation analysis (Yang et al., 2021), where the stochastic properties of the center coordinates are used for statistically based decisions. The user may also use the points on the sphere and conduct an adjustment independent of the TLS software to obtain a measure for uncertainty estimation. ...
... However, the high number of points on the sphere usually leads to results that are too optimistic (cf. Yang et al., 2021). In any case, such results must be treated as unrealistic. ...
... If correlations are considered, there are cases where they are 10 times larger (e.g., T1_K3); in other cases, the differences are relatively small, of a few µm (e.g., T1_K2 in the y direction). Even if they may be irrelevant for some TLS tasks, these kinds of changes make a difference in the decisions of deformation analysis, as demonstrated by Yang et al. (2021). Next, the more relevant parameter σ̂0 of the complete adjustment is analyzed. ...
High-end Terrestrial Laser Scanners (TLSs) are used for many applications that require the precise geometry of the captured object. Dimensions are frequently extracted directly from the point cloud or from estimated primitives. However, the uncertainty information attributed to each point and the correlations between points are often neglected. Generally, TLS observations may be highly correlated for reasons such as similarities in the surface properties, instrument optical-mechanical misalignments, overlap of laser footprints, or similarities in the measurement environment. The current contribution demonstrates the relevance of correlations in tasks usually performed directly with the point cloud, such as distance measurements between two points, target segmentation based on point clouds (e.g., spheres), and registration. Tests were conducted using the variance-covariance propagation law and elementary error theory for simple distance measurements between highly correlated points (e.g., ρ=0.8). Firstly, simulation results show that precision estimates for measured distances are up to 55% better with correlations than without. The same analysis is done with real data, where the precision estimate improved by 20%; however, degradation is also possible if negative correlations occur. Additionally, the impact of correlations on the sphere-based registration between two TLS station points is shown. The spheres were segmented, and their center coordinates were estimated using different versions of a stochastic model. Finally, they were used in the registration. Conclusions about correlations in TLS point clouds are drawn based on these tasks, which are encountered in almost all TLS applications.
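The 55% figure quoted in the abstract above follows directly from the variance-covariance propagation law. The sketch below reproduces it for a one-dimensional distance between two equally precise points with a correlation of ρ = 0.8; the per-point standard deviation is an assumed illustrative value, not taken from the paper:

```python
import numpy as np

sigma = 1.0  # assumed per-point std (e.g., in mm)
rho = 0.8    # correlation between the two points, as in the abstract

# Jacobian of the distance d = x2 - x1 with respect to (x1, x2)
J = np.array([-1.0, 1.0])

# Stochastic model without correlations: diagonal VCM
Sigma_uncorr = np.diag([sigma**2, sigma**2])
sd_uncorr = np.sqrt(J @ Sigma_uncorr @ J)   # = sigma * sqrt(2)

# Stochastic model with positive correlation: fully populated VCM
Sigma_corr = sigma**2 * np.array([[1.0, rho],
                                  [rho, 1.0]])
sd_corr = np.sqrt(J @ Sigma_corr @ J)       # = sigma * sqrt(2 * (1 - rho))

improvement = 1.0 - sd_corr / sd_uncorr
print(f"improvement: {improvement:.1%}")    # ≈ 55% for rho = 0.8
```

With ρ = 0.8, the shared error largely cancels in the difference, so the distance precision improves by 1 − √(1 − ρ) ≈ 55%; a negative ρ would instead degrade it, consistent with the abstract.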
... The parameter-based methods are used to estimate parametric changes. The types of these parametric changes depend on the definitions of estimated parameters, which can be, for example, real displacements of geometric primitives when parameters are their 3D positions (Yang et al., 2021) or spatial distances between two approximating B-spline surfaces (Harmening et al., 2021). Due to the complexity of natural topographies, a complete and accurate parameterization of these surfaces is still challenging. ...
Multi-temporal acquisitions of 3D point clouds for geomonitoring tasks allow the quantification and analysis of geometric changes of monitored objects by advanced processing algorithms, further revealing the underlying deformation mechanism. Among the numerous approaches proposed in the geoscientific domain for point cloud-based deformation analysis, multiscale model-to-model cloud comparison (M3C2) has been widely applied to quantify the distances between two point clouds with high surface roughness. Deformations under complex topographies, however, remain challenging to quantify accurately and analyze with a statistical significance test when using standard M3C2, because (1) average positions in the cylindrical neighborhoods may deviate from the actual surface and (2) empirical uncertainties represented by local roughness are overestimated in highly variable areas. In addition, the spatial resolution of derived deformations is limited by the original point densities and algorithm limitations. In this article, we propose an alternative called patch-based M3C2, which inherits the basic framework of standard M3C2 for its simplicity. This novel data-driven approach needs neither surface meshing nor the identification of semantic or instance correspondences in point clouds. Lower uncertainty is achieved by generating locally planar patches and projecting measurements onto the associated patch planes, allowing better detection of small deformations in complex 3D topographies. Moreover, patch-based M3C2 can assign a deformation value to any position within the overlapping areas, enabling a higher spatial resolution of deformation analysis. Our approach is demonstrated and evaluated on three datasets. The experimental results indicate that patch-based M3C2 exhibits higher accuracy in distance calculations between two surfaces.
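The geometric core of the patch-based idea can be sketched in a few lines: fit a local plane to each patch and measure the change as the distance between patch centers along the patch normal. This is a strongly simplified illustration on synthetic data, not the authors' implementation; the patch size, roughness level, and 5 mm shift are all assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_patch(pts):
    """Fit a local plane: return centroid and unit normal
    (direction of smallest variance from the SVD)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

# Two synthetic, roughly planar patches (0.5 mm roughness);
# the second epoch is shifted 5 mm along +z
epoch1 = np.column_stack([rng.uniform(0, 1, 200),
                          rng.uniform(0, 1, 200),
                          rng.normal(0, 0.0005, 200)])
epoch2 = epoch1 + np.array([0.0, 0.0, 0.005])

c1, n1 = fit_patch(epoch1)
c2, _ = fit_patch(epoch2)

# Change value: displacement of patch centers along the patch normal
dist = np.dot(c2 - c1, n1)
print(f"{abs(dist) * 1000:.2f} mm")  # recovers the 5 mm shift despite roughness
```

Projecting onto the fitted patch plane averages out the roughness, which is why the patch-based variant attains lower uncertainty than comparing raw neighborhood means.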
... the user can only evaluate the fit quality by a few global indicators, such as the mean error and a standard deviation. This may be an issue if the sphere center coordinates are needed for deformation analysis (Yang et al., 2021), where the stochastic properties of the estimated coordinates are used for statistically based decisions, or in other cases where the georeferencing uncertainty must be taken into account. The user may also use the points on the sphere and conduct an adjustment independent of the TLS software to obtain a measure for the uncertainty estimation, but the high number of points on the sphere usually leads to results that are in most cases too optimistic (cf. ...
... The user may also use the points on the sphere and conduct an adjustment independent of the TLS software in order to obtain a measure for the uncertainty estimation, but usually, the high number of points on the sphere leads to results that are in most cases too optimistic (cf. Yang et al., 2021). The opposite may also be possible; in any case, results will be treated as unrealistic. ...
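The "too optimistic" effect described in these excerpts is easy to reproduce: under an uncorrelated stochastic model, the formal precision of an estimated sphere center shrinks roughly with 1/√n, so densely scanned targets yield implausibly small standard deviations. A hedged sketch on simulated data (the sphere radius, noise level, and algebraic fit are illustrative assumptions, not the cited authors' setup):

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere_points(n, radius=0.0725, noise=0.0005):
    """Simulate n scanned points on a target sphere centered at the origin."""
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return u * radius + rng.normal(0.0, noise, (n, 3))

def center_sd(pts):
    """Algebraic least-squares sphere fit; formal std of the center
    under an uncorrelated (diagonal) stochastic model."""
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts**2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    v = A @ x - b
    s0_sq = (v @ v) / (len(pts) - 4)   # a-posteriori variance factor
    Q = np.linalg.inv(A.T @ A)         # cofactor matrix
    return float(np.sqrt(s0_sq * np.diag(Q)[:3]).mean())

sd_400 = center_sd(sphere_points(400))
sd_40000 = center_sd(sphere_points(40000))
# 100x more points -> formal center std shrinks by about a factor of 10,
# regardless of whether the extra points carry independent information
print(sd_400, sd_40000)
```

Because neighboring points on a real sphere target are correlated (footprint overlap, surface effects), this 1/√n shrinkage overstates the achievable precision, which is exactly why an independent adjustment with a diagonal VCM tends to look unrealistically good.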
... The change happens only if the fully populated SVCM is used, as in case c), where there is an increase in the standard deviations up to the submillimeter level. They may still be considered too small, but these kinds of changes make a difference in the decisions of deformation analysis (Yang et al., 2021). ...
This work presents a method to define a stochastic model in the form of a synthetic variance-covariance matrix (SVCM) for TLS observations. It relies on the elementary error theory defined by Bessel and Hagen at the beginning of the 19th century and adapted for geodetic observations by Pelzer and Schwieger at the end of the 20th century. According to this theory, the different types of errors that affect TLS measurements are classified into three groups: non-correlating, functional correlating, and stochastic correlating errors.
For each group, different types of errors are studied based on the error sources that affect TLS observations. These are classified as instrument-specific errors, environment-related errors, and object surface-related errors. Regarding instrument errors, calibration models for high-end laser scanners are studied. For the propagation medium of TLS observations, the effects of air temperature, air pressure, and the vertical temperature gradient on TLS distances and vertical angles are studied. An approach based on time series theory is used to extract the spatial correlations between observation lines. For the object’s surface properties, the effect of surface roughness and reflectivity on the distance measurement is considered. Both parameters affect the variances and covariances in the stochastic model. For each of the error types, examples based on the author’s own research or the literature are given.
After establishing the model, four different study cases are used to exemplify the utility of a fully populated SVCM. The scenarios include real objects measured under laboratory and field conditions and simulated objects. The first example outlines the results from the SVCM based on a simulated wall with an analysis of the variance and covariance contribution. In the second study case, the role of the SVCM in a sphere adjustment is highlighted. A third study case presents a deformation analysis of a wooden tower. Finally, the fourth example shows how to derive an optimal TLS station point based on the SVCM trace.
All in all, this thesis contributes a new stochastic model based on the elementary error theory in the form of an SVCM for TLS measurements. It may be used for purposes such as analyzing error magnitudes on scanned objects, adjusting surfaces, or finding an optimal TLS station position with regard to predefined criteria.
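The grouping into non-correlating, functional correlating, and stochastic correlating elementary errors can be sketched as three additive contributions to the SVCM. All magnitudes, the influence vector, and the correlation function below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

m = 4  # a small batch of TLS observations (e.g., four distances)

# 1) Non-correlating errors (e.g., range noise): purely diagonal
sigma_nc = 0.5  # mm
Sigma_nc = sigma_nc**2 * np.eye(m)

# 2) Functional correlating errors: one elementary error (e.g., a zero
#    offset) acting on all observations through an influence vector f
f = np.ones((m, 1))   # hypothetical: each distance affected identically
sigma_f = 0.3         # mm
Sigma_f = sigma_f**2 * (f @ f.T)

# 3) Stochastic correlating errors: correlation decaying with the
#    separation between observation lines (assumed exponential model)
sigma_s = 0.2  # mm
R = np.exp(-np.abs(np.subtract.outer(np.arange(m), np.arange(m))) / 2.0)
Sigma_s = sigma_s**2 * R

SVCM = Sigma_nc + Sigma_f + Sigma_s   # fully populated stochastic model
print(np.round(SVCM, 3))
```

The resulting matrix is symmetric and fully populated: the diagonal carries the summed variances, while the off-diagonal terms mix the fully correlated functional part with the distance-dependent stochastic part.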
... The necessary models describing the outflow dynamics are based on those derived in [59]. For both online- and offline-calculated input trajectories, it is advisable to check after each layer whether buoyancy can be avoided and whether the layer height is being achieved to a sufficient degree of accuracy, thus necessitating another control point [60]. ...
... The values for MC were derived from reports [60] and the literature [61]. Some values evaluate the feature without providing information on the individual underlying parameters (marked with [b]). ...
This paper describes a holistic quality model (HQM) and assessment to support decision-making processes in construction. A graded concrete slab serves as an example to illustrate how to consider technical, environmental, and social quality criteria and their interrelations. The evaluation of the design and production process of the graded concrete component shows that it has advantages compared to a conventional solid slab, especially in terms of environmental performance. At the same time, the holistic quality model identifies potential improvements for the technology of graded concrete. It is shown that the holistic quality model can be used to (a) consider the whole life cycle in decision-making in the early phases and, thus, make the complexity of construction processes manageable for quality and sustainability assessments and (b) make visible the interdependencies between different quality and sustainability criteria, helping designers make better-informed decisions regarding overall quality. The results show how different quality aspects can be assessed and how trade-offs become manageable through an understanding of the relationships among characteristics. For this purpose, in addition to the quality assessment of graded concrete, an overview of the interrelations of different quality characteristics is provided. While this article demonstrates how an HQM can support decision-making in design, the validity of the presented evaluation is limited by data availability and methodological challenges, specifically regarding the quantification of interrelations.
... To avoid failure during on-site construction, efforts should be made towards performing a comparison between the dimensional conformance of precast elements in as-designed status and as-built status. Research efforts have covered a wide range of precast elements, such as concrete elements (walls [56,71], columns [153], stairs [71], slabs [45, 79,92], hollow spheres [155], bridge piers [66]), steel structures [47,156], pipes [154,157], and joinery products [152]. Obviously, the key point in dimensional quality inspection is that the fabrication model of precast elements constructed from point cloud data should be compared with the corresponding as-designed BIM model in a common coordinate system in order to identify dimensional discrepancies. ...
As a revolutionary technology, terrestrial laser scanning (TLS) is attracting increasing interest in the fields of architecture, engineering, and construction (AEC), with outstanding advantages such as highly automated, non-contact operation and an efficient large-scale sampling capability. TLS offers a new approach to capturing extremely comprehensive data of the construction environment, providing detailed information for further analysis. This paper presents a systematic review based on scientometric and qualitative analysis to summarize the progress and current status of the topic and to point out promising research efforts. To begin with, a brief overview of TLS is provided. Following the selection of relevant papers through a literature search, a scientometric analysis of the papers is carried out. Then, the major applications are categorized and presented, including (1) 3D model reconstruction, (2) object recognition, (3) deformation measurement, (4) quality assessment, and (5) progress tracking. For widespread adoption and effective use of TLS, the essential problems impacting its effectiveness in application are summarized as follows: workflow, data quality, scan planning, and data processing. Finally, future research directions are suggested, including (1) cost control of hardware and software, (2) improvement of data processing capability, (3) automatic scan planning, (4) integration of digital technologies, and (5) adoption of artificial intelligence.
Purpose
Reality capture technologies, such as laser scanning, photogrammetry and video capture, are revolutionizing the construction industry. The vast field of reality capture has numerous applications in different areas of the construction industry. Therefore, this paper aims to provide a systematic literature review of the current research on the role and the potential of reality capture technology in the construction industry. It highlights the benefits and the challenges of using reality capture technology, especially laser scanning, and discusses the necessary technological infrastructure.
Design/methodology/approach
The systematic literature review adheres to the PRISMA approach. To ensure comprehensive and up-to-date coverage, we searched for journal articles published within the last five years (2019–2023) across two major databases: Scopus and Web of Science.
Findings
The findings revealed the current capabilities and limitations of reality capture techniques applied to tasks such as progress monitoring, quality control, mapping of as-built environments, fault detection, visualization and simulation of construction processes, and integration with building information modeling (BIM). Key challenges included processing large, complex datasets; as-built modeling; handling missing data; noise in captured data; and cost.
Originality/value
There is a need to develop more robust systems for automated reality capture analysis to support construction professionals. This review will serve as a foundation and identify promising future research and development directions at the intersection of reality capture and the construction industry.
The interaction between laser beams and backscattering object surfaces is the fundamental working principle of any Terrestrial Laser Scanning (TLS) system. The optical properties of surfaces such as concrete, metal, and wood, which are commonly encountered in the structural health monitoring of buildings and structures, constitute an important source of systematic and random TLS errors. This paper presents an approach for considering the random errors caused by object surfaces. Two surface properties are considered: roughness and reflectance. Their effects on TLS measurements are modeled stepwise in the form of a so-called synthetic variance-covariance matrix (SVCM) based on the elementary error theory. A line of work on the TLS stochastic model is continued by introducing a new approach for determining the variances and covariances in the SVCM. Real measurements of the cast stone façade elements of a tall building validate this approach and show that the quality of the estimation can be improved with an appropriate SVCM.
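The paper's specific roughness/reflectance model is not reproduced here; as a hedged illustration of the general idea, a simple intensity-based stochastic model of the form σ(I) = a + b/I (coefficients purely hypothetical) assigns larger range variances to weakly reflecting or rough surface patches:

```python
import numpy as np

# Hypothetical coefficients of an intensity-based range-noise model
a = 0.2e-3  # m, noise floor
b = 1.5     # m * (raw intensity unit), decay coefficient

# Recorded raw intensities: bright, medium, and dark/rough surface patches
intensity = np.array([2000.0, 800.0, 150.0])

# Range std grows as the backscattered intensity drops
sigma_range = a + b / intensity
Sigma = np.diag(sigma_range**2)  # surface-related diagonal block of the SVCM
print(sigma_range)
```

Such a diagonal block captures only the variance part of the surface effect; in the SVCM framework it would be complemented by covariance terms for neighboring points on the same surface.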
Over recent decades, 3D point clouds have been a popular data source applied in automatic change detection in a wide variety of applications. Compared with 2D images, using 3D point clouds for change detection can provide an alternative solution offering different modalities and enabling a highly detailed 3D geometric and attribute analysis. This article provides a comprehensive review of point-cloud-based 3D change detection for urban objects. Specifically, in this study, we had two primary aims: (i) to ascertain the critical techniques in change detection, as well as their strengths and weaknesses, including data registration, variance estimation, and change analysis; (ii) to contextualize the up-to-date uses of point clouds in change detection and to explore representative applications of land cover and land use monitoring, vegetation surveys, construction automation, building and indoor investigations, and traffic and transportation monitoring. A workflow following the PRISMA 2020 rules was applied for the search and selection of reviewed articles, with a brief statistical analysis of the selected articles. Additionally, we examined the limitations of current change detection technology and discussed current research gaps between state-of-the-art techniques and engineering demands. Several remaining issues, such as the reliability of datasets, uncertainty in results, and contribution of semantics in change detection, have been identified and discussed. Ultimately, this review sheds light on prospective research directions to meet the urgent needs of anticipated applications.