Article

Automatic Generation of High-Quality Building Models from Lidar Data

Author: Franz Rottensteiner

Abstract

Automating data acquisition for 3D city models is an important research topic in photogrammetry. In addition to techniques that rely on aerial images, generating 3D building models from point clouds provided by light detection and ranging (Lidar) sensors is gaining importance. Progress in sensor technology has triggered this development: airborne laser scanners can deliver dense point clouds with densities of up to one point per square meter. Using this information, it is possible to detect buildings and their approximate outlines, to extract planar roof faces, and to create models that correctly resemble the roof structures. The author presents a method for automatically generating 3D building models from point clouds generated by Lidar sensing technology.


... Approaches for reconstructing 3D city models can be categorized as follows: (1) aerial-image-based techniques (Frère, Vandekerckhove, Moons, & Van Gool, 1998; Früh & Zakhor, 2001, 2003; Grün et al., 1995, 1997; Moons, Frère, Vandekerckhove, & Van Gool, 1998); (2) ground- or airborne-based laser sensors (Poullis & You, 2009; Rottensteiner, 2003; Sun & Salvaggio, 2013); and (3) combinations (i.e., hybrid) of aerial-image-based and ground-based techniques (Abayowa, Yilmaz, & Hardie, 2015; Debevec et al., 1996; Ding, Lyngbaek, & Zakhor, 2008; Lafarge & Mallet, 2012; Mastin, Kepner, & Fisher, 2009; Rottensteiner, 2003; Turlapaty, Gokaraju, Du, Younan, & Aanstoos, 2012; Wang & Neumann, 2009; Yoo, 2013; Yong & Huayi, 2008). Recently, the hybrid approach has become popular because terrestrial laser scanners can improve texture quality and produce detailed structural and optical information (Kremer & Hunter, 2007), whereas aerial images provide texture images that are used to create photorealistic representations. ...
Article
Texture mapping generates photorealistic representations of three‐dimensional (3D) geometric objects and enhances the spatial perception of areas of interest. Over the past two decades, even though various approaches for 3D urban models have been investigated, their use has been limited because of the lack of spatial accuracy, details, and the complex processes. It is difficult to maintain highly detailed texture information without using a hybrid of aerial image and ground‐based imaging techniques, which are costly. Furthermore, it is hard to develop a fully automated process for 3D urban mapping that achieves high spatial accuracy. With regard to the issues, this research aims to develop a semi‐automated process for 3D building models that would help image‐based approaches. It helps acquire qualified texture information and improve the appearance of building façades in a large city. In particular, this research first investigates an optimal overlap of consecutive aerial images that generates sufficient information to texture each façade, thus making this process more cost‐effective. Second, this research develops an application to semi‐automatically build 3D buildings and textured 3D buildings. The application is developed in C++. The textured 3D building models are quantitatively and qualitatively assessed to determine the usability of the semi‐automated process.
... Adding semantic information to the geometric objects of the model can not only improve the effectiveness of specific applications, but also broaden the range of applications of the model [26]. CityGML provides a general geometric model and a semantic model [20], and the use of the surface model makes it suitable for point clouds [27][28][29] and aerial images [30]. As a result, it has become the dominant standard for 3D city models in the geospatial industry [31]. ...
Article
Full-text available
CityGML (City Geography Markup Language) is the most investigated standard in the integration of building information modeling (BIM) and the geographic information system (GIS), and it is essential for digital twin and smart city applications. The new CityGML 3.0 has been released for a while, but it is still not clear whether its new features bring new challenges or opportunities to this research topic. Therefore, the aim of this study is to understand the state of the art of CityGML in BIM/GIS integration and to investigate the potential influence of CityGML3.0 on BIM/GIS integration. To achieve this aim, this study used a systematic literature review approach. In total, 136 papers from Web of Science (WoS) and Scopus were collected, reviewed, and analyzed. The main findings of this review are as follows: (1) There are several challenging problems in the IFC-to-CityGML conversion, including LoD (Level of Detail) mapping, solid-to-surface conversion, and semantic mapping. (2) The ‘space’ concept and the new LoD concept in CityGML 3.0 can bring new opportunities to LoD mapping and solid-to-surface conversion. (3) The Versioning module and the Dynamizer module can add dynamic semantics to the CityGML. (4) Graph techniques and scan-to-BIM offer new perspectives for facilitating the use of CityGML in BIM/GIS integration. These findings can further facilitate theoretical studies on BIM/GIS integration.
... The lack of features, on the other hand, is unavoidable. The generic modelling method should take care of recovering features that were lost during the reconstruction phase [8][9][10][11]. In a generic modelling approach, Sohn et al. (2008) presented a BSP (Binary Space Partitioning) tree for handling the missing-data problem. It proved able to create building models, while the authors also discussed its drawbacks, such as erroneously established topological relationships between modelling features. ...
... The vectors generated from the BSP are referred to as noisy model boundaries. Based on the Minimum Description Length (MDL) principle, the proposed approach gradually corrects them [8][9][10][11][12]. ...
Article
3D city models enable us to gain a better grasp of how various city components interact with one another. Advances in the geosciences now allow for the automatic creation of high-quality, realistic 3D city models. Their use is not limited to visualization and navigation; they also support shadow and solar-potential analysis. Solar radiation analysis is an example of a 3D GIS tool in high demand. Calculating the solar radiation that reaches 3D objects can be simple, but the shadowing effect of nearby buildings is a considerably more challenging issue because some facades or roofs are only partially shadowed. The present study follows two approaches. The first is a visualization (client-side) approach that presents the 3D city models on a website using NodeJS and CesiumJS. The second is an analysis (server-side) approach that computes the solar potential using Python for faster processing, while considering future development aspects.
... Generally, fine-scale building height can be estimated from three types of data: 1) Light Detection and Ranging (LiDAR), 2) radar, and 3) high-resolution optical imagery. LiDAR allows high-accuracy measurements of building height (Baltsavias, 1999), and thus is widely applied to 3D building modelling (Rottensteiner, 2003; Sun and Salvaggio, 2013; Verma et al., 2006). However, the coverage of LiDAR is still limited due to its high acquisition cost. ...
... Moreover, the building height of the whole Shenzhen city acquired from airborne LiDAR data in 2017 was used as the reference to evaluate the quality of the produced building height. Note that airborne LiDAR can provide highly accurate elevation measurements and has been widely adopted as the building height reference data (Bonczak and Kontokosta, 2019;Rottensteiner, 2003;Wang and Li, 2020;Yu et al., 2010). The acquired reference data was originally provided in the vector form, i.e., building footprints with height, and was then converted to its raster form with a spatial resolution of 2.5 m (Fig. 10(b)), for a comparison with the predicted height map. ...
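The reference-data preparation described in this snippet (vector building footprints with heights converted to a 2.5 m raster) can be sketched as follows. This is an illustrative, hedged reconstruction: the function names, the cell-centre sampling, and the even-odd point-in-polygon test are my assumptions; a production pipeline would typically use GDAL or rasterio instead.

```python
import numpy as np

def rasterize_footprints(footprints, bounds, res=2.5):
    """Rasterize (polygon, height) pairs onto a height grid.

    footprints : list of (Nx2 vertex array, height) tuples
    bounds     : (xmin, ymin, xmax, ymax) of the output raster
    Illustrative sketch only -- real workflows use GDAL/rasterio.
    """
    xmin, ymin, xmax, ymax = bounds
    nx = int(np.ceil((xmax - xmin) / res))
    ny = int(np.ceil((ymax - ymin) / res))
    # sample each cell at its centre
    xs = xmin + (np.arange(nx) + 0.5) * res
    ys = ymin + (np.arange(ny) + 0.5) * res
    gx, gy = np.meshgrid(xs, ys)
    height = np.zeros((ny, nx))
    for poly, h in footprints:
        inside = _points_in_polygon(gx.ravel(), gy.ravel(), poly).reshape(ny, nx)
        height[inside] = h
    return height

def _points_in_polygon(px, py, poly):
    """Even-odd rule point-in-polygon test (ray cast along +x)."""
    inside = np.zeros(px.shape, dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if y1 == y2:          # horizontal edge never crosses the ray
            continue
        crosses = ((y1 > py) != (y2 > py)) & \
                  (px < x1 + (py - y1) * (x2 - x1) / (y2 - y1))
        inside ^= crosses
    return inside
```

A 5 m square footprint rasterized over a 10 m extent at 2.5 m resolution, for example, fills the 2 x 2 block of cells whose centres fall inside the polygon.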
Article
Knowledge of building height is critical for understanding the urban development process. High-resolution optical satellite images can provide fine spatial details within urban areas, but they have not been applied to building height estimation over multiple cities, and the feasibility of mapping building height at a fine scale (< 5 m) remains understudied. Multi-view satellite images can describe vertical information of buildings, due to the inconsistent response of buildings (e.g., spectral and structural variations) to different viewing angles, but they have not been employed in deep-learning-based building height estimation. In this context, we introduce high-resolution ZY-3 multi-view images to estimate building height at a spatial resolution of 2.5 m. We propose a multi-spectral, multi-view, and multi-task deep network (called M³Net) for building height estimation, where ZY-3 multi-spectral and multi-view images are fused in a multi-task learning framework. A random forest (RF) method using multi-source features is also carried out for comparison. We select 42 Chinese cities with diverse building types to test the proposed method. Results show that the M³Net obtains a lower root mean square error (RMSE) than the RF, and the inclusion of ZY-3 multi-view images can significantly lower the uncertainty of building height prediction. Comparison with two existing state-of-the-art studies further confirms the superiority of our method, especially the efficacy of the M³Net in alleviating the saturation effect of high-rise building height estimation. Compared to the vanilla single/multi-task models, the M³Net also achieves a lower RMSE. Moreover, the spatial-temporal transferability test indicates the robustness of the M³Net to imaging conditions and building styles. The test of our method on a relatively large area (covering about 14,120 km²) further validates the scalability of our method from the perspectives of both efficacy and quality.
The source code will be made available at https://github.com/lauraset/BuildingHeightModel.
... In data-driven methods, the segmentation of the initial point cloud for the definition of roof faces is the crucial step in modelling to obtain an accurate model [11]. The algorithms most frequently used within data-driven methods for detecting the roof parts in the form of planes are random sample consensus (RANSAC) [29,30], the 3D Hough transform [31,32], and the region-growing algorithm [33]. The defined roof planes, together with the extracted outlines of roof parts, are further used to reconstruct a roof in a 3D environment. ...
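A minimal sketch of the RANSAC plane detection mentioned above, assuming a NumPy point cloud of shape (N, 3); the function name, iteration count, and tolerance are illustrative choices, not taken from the cited works. Real roof-extraction pipelines run this repeatedly, removing each plane's inliers to find successive roof faces.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Fit a dominant plane n.x + d = 0 to a point cloud with RANSAC.

    Returns (normal, d, inlier_mask). Sketch only: a single dominant
    plane, fixed thresholds, no local refinement.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iter):
        # hypothesize a plane from 3 random points
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # score by counting points within `tol` of the plane
        dist = np.abs(points @ normal + d)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers
```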
... The previously published studies on data-driven 3D building modelling from the point cloud were mainly designed for LiDAR data [11,33]. In our case, we decided to use a UAV photogrammetric point cloud, because it has not yet been widely used for 3D building modelling. ...
Article
Full-text available
This paper provides the innovative approach of using a spatial extract, transform, load (ETL) solution for 3D building modelling, based on an unmanned aerial vehicle (UAV) photogrammetric point cloud. The main objective of the paper is to present the holistic workflow for 3D building modelling, emphasising the benefits of using spatial ETL solutions for this purpose. Namely, despite the increasing demands for 3D city models and their geospatial applications, the generation of 3D city models is still challenging in the geospatial domain. Advanced geospatial technologies provide various possibilities for the mass acquisition of geospatial data that is further used for 3D city modelling, but there is a huge difference in the cost and quality of input data. While aerial photogrammetry and airborne laser scanning involve high costs, UAV photogrammetry has brought new opportunities, including for small and medium-sized companies, by providing a more flexible and low-cost source of spatial data for 3D modelling. In our data-driven approach, we use a spatial ETL solution to reconstruct a 3D building model from a dense image matching point cloud which was obtained beforehand from UAV imagery. The results are 3D building models in a semantic vector format consistent with the OGC CityGML standard, Level of Detail 2 (LOD2). The approach has been tested on selected buildings in a simple semi-urban area. We conclude that spatial ETL solutions can be efficiently used for 3D building modelling from UAV data, where the data process model developed allows the developer to easily control and manipulate each processing step.
... Using LiDAR point clouds to model buildings has received attention in the literature [5, 6, 8-10]. However, their focus was on standalone algorithms and on the accuracy of the models, not on running times. ...
... Our system copes with large-scale planning tasks like deployment of 5G networks. To that end, our modeling provides a favorable trade-off between accuracy and scalability, and is different from that of [5, 6, 8-10]. (Figure 1 shows a point-cloud snippet; the real-world distance between two adjacent pixels is 15 centimeters.) Our main contributions are as follows. ...
Conference Paper
Full-text available
Three-dimensional models of buildings have a variety of applications, e.g., in urban planning, for making decision where to locate power lines, solar panels, cellular antennas, etc. Often, 3D models are created from a LiDAR point cloud, however, this presents three challenges. First, to generate maps at a nationwide scale or even for a large city, it is essential to effectively store and process the data. Second, there is a need to produce a compact representation of the result, to avoid representing each building as thousands of points. Third, it is often required to seamlessly integrate computed models with non-geospatial features of the geospatial entities. In this paper, we demonstrate an end-to-end automation of a large-scale 3D-model creation for buildings. The tool compacts the point cloud and allows to effortlessly integrate the results with information stored in a database. The main motivation for our tool is 5G network planning, where antenna locations require careful consideration, given that buildings and trees could obstruct or reflect high-frequency cellular transmissions.
... Ohtake et al. (2004) performed edge detection on triangular meshes by analyzing the principal curvatures and their derivatives [13]. Rottensteiner (2003) applied a region-growing model to normal vectors in the production of 3-dimensional building models [14]. Wang et al. (2013) considered the normal vectors as points on the unit sphere and then segmented planes and other orderly surfaces for detection [15]. ...
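The normals-on-the-unit-sphere idea attributed to Wang et al. (2013) can be illustrated with a toy sketch: estimate per-point normals by local PCA, then greedily group normals whose directions agree. All names, thresholds, and the brute-force neighbour search are my simplifications, not the published algorithm.

```python
import numpy as np

def point_normals(points, k=8):
    """Estimate per-point unit normals by PCA over k nearest neighbours.

    Brute-force neighbour search -- fine for a sketch, too slow for real
    clouds, where a k-d tree (e.g. scipy.spatial.cKDTree) would be used.
    """
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        _, eigvec = np.linalg.eigh(cov)       # eigenvalues ascending
        n = eigvec[:, 0]                      # direction of least variance
        normals[i] = n if n[2] >= 0 else -n   # orient upwards
    return normals

def group_by_direction(normals, angle_deg=10.0):
    """Greedily cluster unit normals whose angle to a cluster centre is small."""
    cos_t = np.cos(np.radians(angle_deg))
    labels = -np.ones(len(normals), dtype=int)
    centers = []
    for i, n in enumerate(normals):
        for j, c in enumerate(centers):
            if n @ c > cos_t:
                labels[i] = j
                break
        else:
            labels[i] = len(centers)
            centers.append(n)
    return labels
```

For a flat horizontal patch, every estimated normal points along the z axis, so all points fall into one direction cluster.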
Article
Full-text available
In this study, a lattice roof model was designed in 3 dimensions using the finite element method in the Ansys package program. This method makes complex engineering applications tractable by providing a solution that can be controlled part by part. The element type of the model, the outside diameter of the truss pipe, the wall thickness, the modulus of elasticity, and the Poisson's ratio are defined as material properties. The material is an isotropic steel. Mechanical and elastic stress analyses were carried out under constant loads of 25,000 and 20,000 kN applied along the Fx, Fy, and Fz beam axes and 2 MPa of pressure on the 3D lattice roof model. Deformations of the model under load were investigated by analyzing the stresses due to the mechanical forces and the elastic stresses along the x, y, and z axes. The analyses showed that the mechanical and elastic stresses in the beam axes increased with the applied load.
... Ohtake et al. (2004) performed edge detection on triangular meshes by analyzing the principal curvatures and their derivatives [9]. Rottensteiner (2003) applied a region-growing model to normal vectors in the production of 3-dimensional building models [10]. Wang et al. (2013) considered the normal vectors as points on the unit sphere, and then segmented planes and other orderly surfaces for detection [11]. ...
Article
Full-text available
St 37 and St 70 steels are materials used in the manufacturing of general building components, produced by processing hot-formed steel further through a cold-drawing process. The finite element method helps to break complex engineering problems into controllable parts and solve them. The roof lattice model simulated in the present study is a 4-surface pyramidal roof with members 4 mm in diameter and 0.5 mm in thickness, designed in 3D in Ansys software using the finite element method. The bottom corner nodes of the roof lattice model were fixed, and the vector stresses were investigated for forces of 65,000 N applied in the Fx and Fz directions and 75,000 N in the Fy direction on the top node truss axes, and for moments of 65,000 N·m applied in the Mx and Mz directions and 75,000 N·m in the My direction on the middle truss nodes. According to the results in Ansys, the vector stresses due to both force and moment effects in the truss axes are higher for the St 70 lattice roof steel than for the St 37 steel.
... The third operation is planarity-based filtering, leveraging the relatively consistent height of buildings in the NDHM. While previous methods have employed co-occurrence-matrix-based [182], [183], eigenvalue-based [184], [185], and entropy-based [162] strategies, we use a more efficient approach to determine local height variations. The fourth operation is boundary refining. We apply a dilation kernel of size K3 to refine LiDAR's underestimated building boundaries. ...
Thesis
Full-text available
This dissertation explores the integration of remote sensing and artificial intelligence (AI) in geospatial mapping, specifically through the development of knowledge-based mapping systems. Remote sensing has revolutionized Earth observation by providing data that far surpasses traditional in-situ measurements. Over the last decade, significant advancements in inferential capabilities have been achieved through the fusion of geospatial sciences and AI (GeoAI), particularly with the application of deep learning. Despite its benefits, the reliance on data-driven AI has introduced challenges, including unpredictable errors and biases due to imperfect labeling and the opaque nature of the processes involved. The research highlights the limitations of solely using data-driven AI methods for geospatial mapping, which tend to produce spatially heterogeneous errors and lack transparency, thus compromising the trustworthiness of the outputs. In response, it proposes novel knowledge-based mapping systems that prioritize transparency and scalability. This research has developed comprehensive techniques to extract key Earth and urban features and has introduced a 3D urban land cover mapping system, including a 3D Landscape Clustering framework aimed at enhancing urban climate studies. The developed systems utilize universally applicable physical knowledge of targets, captured through remote sensing, to enhance mapping accuracy and reliability without the typical drawbacks of data-driven approaches. The dissertation emphasizes the importance of moving beyond mere accuracy to consider the broader implications of error patterns in geospatial mappings. It demonstrates the value of integrating generalizable target knowledge, explicitly represented in remote sensing data, into geospatial mapping to address the trustworthiness challenges in AI mapping systems. 
By developing mapping systems that are open, transparent, and scalable, this work aims to mitigate the effects of spatially heterogeneous errors, thereby improving the trustworthiness of geospatial mapping and analysis across various fields. Additionally, the dissertation introduces methodologies to support urban pathway accessibility and flood management studies through dependable geospatial systems. These efforts aim to establish a robust foundation for informed urban planning, efficient resource allocation, and enriched environmental insights, contributing to the development of more sustainable, resilient, and smart cities.
... On the other hand, building administrators can rely on building reports or on sensor-driven pervasive computing systems to facilitate this process. For example, LiDAR sensors and RGB-D cameras can achieve the automatic generation of building models [36], [37]. ...
... The third operation is planarity-based filtering, leveraging the relatively consistent height of buildings in the NDHM. While previous methods have employed co-occurrence-matrix-based [63], [64], eigenvalue-based [65], [66], and entropy-based [40] strategies, we have used a more efficient approach to determine local height variations. First, our workflow rounds the NDHM to integer values and counts the unique integers within a square kernel (K2) on the NDHM, creating a surface-roughness layer as illustrated in Figure 1. ...
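The unique-integer-count planarity filter described in this snippet can be sketched directly. This is a hedged toy version: the kernel size, edge padding, and rounding details are assumptions rather than the published workflow.

```python
import numpy as np

def surface_roughness(ndhm, k=3):
    """Count distinct integer height values in a k x k window around each cell.

    Mirrors the planarity filter described above: flat roofs yield few
    distinct rounded heights, while trees and building edges yield many.
    """
    h = np.rint(ndhm).astype(int)         # round heights to integers
    pad = k // 2
    padded = np.pad(h, pad, mode='edge')  # replicate borders
    rough = np.empty_like(h)
    rows, cols = h.shape
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + k, j:j + k]
            rough[i, j] = len(np.unique(win))
    return rough
```

A perfectly flat surface scores 1 everywhere; a sloped or vegetated patch scores higher, which is what the subsequent thresholding exploits.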
Article
Full-text available
This study introduces an automated, open-source workflow for large-scale 2D and 3D building mapping using airborne LiDAR data. Uniquely, our workflow operates entirely unsupervised, eliminating the need for any training procedures. We have integrated a specially tailored digital terrain model generation algorithm into our workflow to prevent errors in complex urban landscapes, especially around highways and overpasses. Through fine rasterization of LiDAR point clouds, we've enhanced building-tree differentiation. Additionally, we've reduced errors near water bodies and augmented computational efficiency by introducing a new planarity calculation. Our workflow offers a practical and scalable solution for the mass production of rasterized 2D and 3D building maps from raw airborne LiDAR data. Our method's robustness has been rigorously validated across a diverse dataset in comparison with deep learning-based and hand-digitized products. Through these extensive comparisons, we provide a valuable analysis of building maps generated via different methodologies. We anticipate that our highly scalable building mapping workflow will facilitate the production of reliable 2D and 3D building maps, fostering advances in large-scale urban analysis. The source code for our workflow is publicly accessible at: https://github.com/hunsoosong/airborne-lidar-building-mapping .
... In general, fine-scale building heights applicable to individual residential buildings can be estimated from three types of data [12]: (i) LiDAR, (ii) radio detection and ranging (RADAR), and (iii) high-resolution optical images. LiDAR offers relatively high accuracy [13][14][15], but its coverage area is small. In particular, the acquisition cost of airborne LiDAR is high, while the measurement density of spaceborne LiDAR is not sufficient to cover all houses. ...
Article
Full-text available
With the challenges brought about by the COVID-19 pandemic, China’s real-estate market has been facing new bottlenecks. The solution lies in an in-depth understanding of regional real-estate conditions. In the study of housing, remote sensing technology can help to extract building height as well as to calculate the number of floors and estimate the total amount of housing. It is more efficient and accurate compared to conventional statistical and sampling methods. Remote sensing is widely used in real-estate research and building height estimation, whereas it is less frequently used for the total estimation of urban housing. In this context, we used Chinese satellite GF-7 stereopair images, point of interest (POI) data, and other data combined with the digital surface model (DSM) and shadow methods to calculate the height of residential buildings. An efficient and accurate method system was then established for estimating the total housing and per capita living area (PCLA). According to the calculation of the PCLA of each district in Ningbo City (China), it was found that different regions were suitable for different development paths. Based on this, the driving factor model was derived and the real-estate development potential of Ningbo city was quantitatively analyzed. The results showed that Ningbo City, a first-tier city with a large population inflow, still has potential for real-estate development.
... The methodological approaches proposed for roof modelling also vary. Rottensteiner (2003) proposed detecting planes in the point cloud and then finding the intersections of those planes to model the roof. Other authors proposed applying region-growing methods on a TIN structure (SAMPATH; SHAN, 2010) or on a raster grid (JOCHEM et al., 2012). ...
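Rottensteiner's detect-planes-then-intersect roof modelling hinges on computing the intersection lines (ridges) of adjacent roof planes. A minimal sketch under the plane form n·x + d = 0 follows; the function name and parameterisation are mine, not the paper's.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Line of intersection of planes n1.x + d1 = 0 and n2.x + d2 = 0.

    Returns (point, unit_direction). Assumes the planes are not parallel.
    """
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel")
    # One point on the line: solve the two plane equations plus a third
    # constraint pinning the component along the line direction to zero.
    A = np.stack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([-d1, -d2, 0.0]))
    return point, direction / np.linalg.norm(direction)
```

For two gabled roof faces y + z = 2 and -y + z = 2, for instance, the computed ridge runs along the x axis at height z = 2.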
Article
Full-text available
LiDAR has proved valuable in the analysis of the urban environment, as it captures three-dimensional information for detecting and modelling buildings. The development of terrestrial mobile LiDAR has opened new possibilities for 3D modelling in urban areas. This article presents a semi-automatic methodology for computing three-dimensional building models from point clouds obtained with mobile terrestrial LiDAR. To this end, the point cloud is segmented into blocks of uniform planes by analyzing the variation of point density along the main directions of the facade. This makes it possible to segment the point cloud into planar regions that are subsequently combined to build the 3D model, even when the facade comprises several parallel planes. The main distinguishing feature of the methodology is the use of frequency histograms to segment the point cloud and detect the edges of the facade, saving time compared with traditional methods.
... In other words, the point cloud is segmented at the semantic-information level, that is, point cloud semantic segmentation, hereafter referred to as "point cloud segmentation". In research on automating the creation of polyhedral building models, Rottensteiner [6] used curvature-based segmentation to detect the point cloud over the entire area of the building, then extracted the roof planes and grouped them to complete the modelling. Pu et al. [7] proposed a planar surface-growing algorithm to segment point clouds: a threshold is set, seed points are selected from the point set, and points that satisfy the growth rules are added, segmenting points of the same type. ...
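The planar surface-growing rule described above (seed selection plus a growth test) can be illustrated with a toy sketch. The specific thresholds, the fixed seed normal, and the brute-force distance test are my simplifications; real implementations refit the plane as the region grows and use spatial indexing.

```python
import numpy as np

def grow_planar_region(points, normals, seed, angle_deg=5.0, dist=0.3):
    """Grow one planar region from a seed point.

    A point joins the region if it lies within `dist` of a region point
    and its normal is within `angle_deg` of the seed normal.
    """
    cos_t = np.cos(np.radians(angle_deg))
    in_region = np.zeros(len(points), dtype=bool)
    in_region[seed] = True
    frontier = [seed]
    while frontier:
        i = frontier.pop()
        # growth rules: spatial proximity and normal similarity
        near = np.linalg.norm(points - points[i], axis=1) < dist
        similar = normals @ normals[seed] > cos_t
        new = near & similar & ~in_region
        in_region |= new
        frontier.extend(np.flatnonzero(new))
    return in_region
```

Growing from a seed on one of two parallel, well-separated planes captures exactly that plane's points, since the distance rule never bridges the gap.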
Article
Full-text available
As the construction industry shifts from the construction of new buildings to the maintenance and use of existing buildings, the demand for automated creation of building information models (BIM) is increasing. This paper uses the deep learning network PointNet to perform semantic segmentation on the public S3DIS point cloud data set, assigning point cloud building components of the same type to the same label, and a bounding-box algorithm is used to obtain the outer contour parameters of the segmented point cloud building components. Finally, Dynamo, a Revit plug-in, is used to perform parametric modelling with the obtained parameters and to generate the BIM corresponding to the point cloud data set. The experimental results show that the proposed method can complete the parametric creation of BIM with high completeness, based on efficient segmentation of point clouds.
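The bounding-box step mentioned in the abstract, which extracts outer-contour parameters for parametric modelling, reduces in its simplest axis-aligned form to per-axis extrema. A sketch; the paper's actual parameterisation may differ.

```python
import numpy as np

def bounding_box_params(points):
    """Axis-aligned bounding box of a segmented point-cloud component.

    Returns (origin, size): the minimum corner and the edge lengths,
    the kind of outer-contour parameters a parametric modeller such as
    Dynamo could consume.
    """
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    return lo, hi - lo
```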
... Nowadays, LIDAR scanners are commonly used to capture 3D models of indoor environments for applications like redesign, visualization, monitoring and simulations [21]-[23]. To accurately capture the indoor environment, the LIDAR scanner is placed at multiple locations [24]. In this paper, we used this common approach to capture the 3D environment with a Leica RTC 360 scanner. ...
Article
Full-text available
We present a new, accurate, low-complexity channel modelling methodology for LiFi in realistic indoor scenarios. A LIDAR scanner is used to capture the 3D environment in which the LiFi system is to be deployed. Next, the generated 3D point cloud data is pre-processed to determine the reflectance parameters of the walls and objects in the room. This is easier and more realistic than the manual definition of the environment, which is the current state of the art. As an additional innovation, the complexity of the channel modelling is reduced by: 1) modelling the line-of-sight and initial reflections precisely in the frequency domain; 2) using a well-established analytical model based on the integrating sphere for all higher-order diffuse reflections. All steps together yield a substantially simplified channel modelling approach and model the links between multiple optical frontends and multiple mobile devices realistically. As a validation of our new approach, we compare measurements and simulations in two indoor scenarios: an empty room and a conference room with furniture. Simulations and measurements show excellent agreement, with a mean square error below 3 percent. Moreover, we evaluate the performance of a distributed multiuser multiple-input multiple-output (MIMO) link and find excellent agreement between the model and measurements. Finally, we discuss the fundamental trade-off between complexity and model error, which depends on the scenario.
... Some pipelines for the process of building modeling have been developed in the past, but in some cases the reconstruction process may fail because of scene complexity, even if the generalization rate is high [11]. However, for the majority of these pipelines the input data are ALS point clouds [12][13][14], not UAS point clouds, which are very noisy compared to ALS or TLS point clouds and subject to occlusions caused by vegetation and the incidence angle of the digital camera's optical axis. Even so, the advantages of using UAS images for 3D building modeling are multiple: low cost compared to traditional aerial photogrammetry; low flight altitude, resulting in high-resolution images and a small Ground Sample Distance (GSD) of less than 3 cm; and fast, flexible image acquisition and processing. ...
Article
Full-text available
3D modelling of urban areas is an attractive and active research topic, as 3D digital models of cities are becoming increasingly common for urban management as a consequence of the constantly growing number of people living in cities. Viewed as a digital representation of the Earth’s surface, an urban area modeled in 3D includes objects such as buildings, trees, vegetation and other anthropogenic structures, highlighting the buildings as the most prominent category. A city’s 3D model can be created based on different data sources, especially LiDAR or photogrammetric point clouds. This paper’s aim is to provide an end-to-end pipeline for 3D building modeling based on oblique UAS images only, the result being a parametrized 3D model with the Open Geospatial Consortium (OGC) CityGML standard, Level of Detail 2 (LOD2). For this purpose, a flight over an urban area of about 20.6 ha was conducted with a low-cost UAS, i.e., a DJI Phantom 4 Pro (P4P), at 100 m height. The resulting UAS point cloud for the best scenario, i.e., 45 Ground Control Points (GCP), was processed as follows: filtering to extract the ground points using two algorithms, CSF and terrain-mark; classification, using two methods, based on attributes only and a random forest machine learning algorithm; segmentation using local homogeneity implemented into Opals software; plane creation based on a region-growing algorithm; and plane editing and 3D model reconstruction based on piece-wise intersection of planar faces. The classification performed with ~35% training data and 31 attributes showed that the Visible-band difference vegetation index (VDVI) is a key attribute and 77% of the data was classified using only five attributes. The global accuracy for each modeled building through the workflow proposed in this study was around 0.15 m, so it can be concluded that the proposed pipeline is reliable.
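The VDVI attribute that the classification identified as key is computed directly from the visible bands; a minimal sketch of the standard definition VDVI = (2G − R − B) / (2G + R + B):

```python
import numpy as np

def vdvi(r, g, b, eps=1e-9):
    """Visible-band difference vegetation index per pixel/point.

    r, g, b: arrays (or scalars) of visible-band reflectance; eps avoids
    division by zero on dark pixels. Values near +1 indicate vegetation.
    """
    r, g, b = (np.asarray(c, dtype=float) for c in (r, g, b))
    return (2 * g - r - b) / (2 * g + r + b + eps)
```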
... After calculating the building label image and modelling the outer roof boundary polygon, the next step toward 2D roof modelling is the detection of the inner roof plane boundaries. In the literature (Ameri, 2000; Rottensteiner, 2003), it is suggested to use the Voronoï diagram for this task. However, this solution is unsatisfactory because it creates distortions not only in the actual positions of the plane boundaries, but also in the adjacency relationships between planes. ...
Article
Full-text available
This article suggests a new approach to automatic building footprint modeling using exclusively airborne LiDAR data. The first part of the suggested approach is the filtering of the building point cloud using the bias of the Z-coordinate histogram. This operation aims to detect the points of the roof class within the building point cloud; eight rules for histogram interpretation are suggested. The second part of the suggested approach is the roof modeling algorithm. It starts by detecting the roof planes and calculating their adjacency matrix. The roof plane boundaries are then classified into four categories: (1) outer boundary; (2) inner plane boundaries; (3) roof detail boundaries; and (4) boundaries related to missing planes. Finally, the junction relationships of roof plane boundaries are analyzed to detect the roof vertices. To quantify accuracy, the average values of the correctness and completeness indices are used for both parts: they equal 97.5 and 98.6%, respectively, for the filtering algorithm, and 94.0% for both indices in the modeling approach. These results reflect the high efficacy of the suggested approach.
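The article's eight histogram-interpretation rules are more elaborate, but the core idea — splitting the building point cloud along its Z histogram — can be sketched with a single Otsu-style threshold as a stand-in (an assumption for illustration, not the authors' rules):

```python
import numpy as np

def split_by_z(z, bins=64):
    """Split a building point cloud along Z at an Otsu-style threshold.

    Returns a boolean mask of the upper (roof) class; the threshold is the
    upper edge of the histogram bin maximising the between-class variance.
    """
    z = np.asarray(z, dtype=float)
    hist, edges = np.histogram(z, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)              # lower-class probability mass
    m0 = np.cumsum(p * centers)    # lower-class cumulative mean mass
    mt = m0[-1]                    # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mt * w0 - m0) ** 2 / (w0 * (1.0 - w0))
    k = int(np.nanargmax(var_between[:-1]))
    return z > edges[k + 1]
```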
... However, due to the complexity of urban objects in both spatial and spectral aspects, the application of remote sensing to urban vegetation extraction still faces great challenges [12]. For example, trees are usually surrounded by other urban elements, such as buildings and roads, and some tree canopies might overhang buildings, making it difficult to extract trees accurately [13,14]. In addition, extracting vegetation in building shadows is also difficult using optical images alone. ...
Article
Full-text available
Urban vegetation extraction is very important for urban biodiversity assessment and protection. However, due to the diversity of vegetation types and vertical structure, it is still challenging to extract vertical information of urban vegetation accurately with single remotely sensed data. Airborne light detection and ranging (LiDAR) can provide elevation information with high-precision, whereas hyperspectral data can provide abundant spectral information on ground objects. The complementary advantages of LiDAR and hyperspectral data could extract urban vegetation much more accurately. Therefore, a three-dimensional (3D) vegetation extraction workflow is proposed to extract urban grasses and trees at individual tree level in urban areas using airborne LiDAR and hyperspectral data. The specific steps are as follows: (1) airborne hyperspectral and LiDAR data were processed to extract spectral and elevation parameters, (2) random forest classification method and object-based classification method were used to extract the two-dimensional distribution map of urban vegetation, (3) individual tree segmentation was conducted on a canopy height model (CHM) and point cloud data separately to obtain three-dimensional characteristics of urban trees, and (4) the spatial distribution of urban vegetation and the individual tree delineation were assessed by validation samples and manual delineation results. The results showed that (1) both the random forest classification method and object-based classification method could extract urban vegetation accurately, with accuracies above 99%; (2) the watershed segmentation method based on the CHM could extract individual trees correctly, except for the small trees and the large tree groups; and (3) the individual tree segmentation based on point cloud data could delineate individual trees in three-dimensional space, which is much better than CHM segmentation as it can preserve the understory trees. All the results suggest that two- and three-dimensional urban vegetation extraction could play a significant role in spatial layout optimization and scientific management of urban vegetation.
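The CHM-based delineation can be approximated by a local-maximum detector of the kind used to seed a watershed; this numpy-only sketch (3×3 neighbourhood, hypothetical `min_height` cut-off) stands in for the full watershed segmentation used in the paper:

```python
import numpy as np

def tree_tops(chm, min_height=2.0):
    """Detect candidate tree tops as strict local maxima of a CHM grid.

    A 3x3 neighbourhood test; real pipelines seed a watershed from these
    maxima to delineate individual crowns.
    """
    chm = np.asarray(chm, dtype=float)
    pad = np.pad(chm, 1, constant_values=-np.inf)
    neigh = np.stack([pad[1 + dy:pad.shape[0] - 1 + dy, 1 + dx:pad.shape[1] - 1 + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)])
    return (chm > neigh.max(axis=0)) & (chm >= min_height)
```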
... Considerable efforts have been devoted to data-driven approaches. Aiming to separate buildings from other objects, data-driven studies segment and reconstruct buildings directly from the data without prior building templates [11], [12]. The region growing method randomly specifies a seed point and subsequently measures its similarity with its neighbors to determine a match. ...
Article
Full-text available
Airborne light detection and ranging (LiDAR) data are widely applied in building reconstruction, with studies reporting success in typical buildings. However, the reconstruction of curved buildings remains an open research problem. To this end, we propose a new framework for curved building reconstruction via assembling and deforming geometric primitives. The input LiDAR point clouds are first converted into contours where individual buildings are identified. After recognizing geometric units (primitives) from building contours, we obtain initial models by matching the basic geometric primitives to these primitives. To polish assembly models, we employ a warping field for model refinements. Specifically, an embedded deformation (ED) graph is constructed via downsampling the initial model. Then, the point-to-model displacements are minimized by adjusting node parameters in the ED graph based on our objective function. The presented framework is validated on several highly curved buildings collected by various LiDAR in different cities. The experimental results, as well as accuracy comparison, demonstrate the advantage and effectiveness of our method. The new insight lies in an efficient reconstruction scheme. Moreover, we prove that the primitive-based framework significantly reduces the data storage to 10%-20% of classical mesh models.
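The embedded-deformation (ED) graph mentioned above ties each model point to a few nearby graph nodes. A common weighting from the ED literature (assumed here for illustration, not taken from the paper) is w_j = (1 − d_j/d_max)², normalised over the k nearest nodes:

```python
import numpy as np

def ed_weights(points, nodes, k=4):
    """Per-point influence weights over the k nearest deformation-graph nodes.

    w_j = (1 - d_j / d_max)^2, normalised, with d_max the distance to the
    (k+1)-th nearest node. Assumes the nodes are distinct and len(nodes) > k.
    """
    d = np.linalg.norm(points[:, None, :] - nodes[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    idx = order[:, :k]                               # supporting node indices
    dk = np.take_along_axis(d, order[:, :k + 1], axis=1)
    w = (1.0 - dk[:, :k] / dk[:, k:k + 1]) ** 2
    w /= w.sum(axis=1, keepdims=True)
    return idx, w
```

Adjusting the node transforms then moves every point as a weighted blend of its supporting nodes.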
... Although LiDAR has improved the level of automation in the building detection process [9], the use of raw or interpolated data alone suffers from poor horizontal accuracy of building boundaries [10]. Given the pros and cons of LiDAR and HSRI, it has been suggested that these data be fused to improve the degree of automation and the robustness of automatic building extraction [11], [12]. Data fusion-based methods, using both HSRI and LiDAR data have attracted more attention, but questions remain. ...
Article
Full-text available
Extracting buildings from remotely sensed data is a fundamental task in many geospatial applications. However, this task is resistant to automation due to variability in building shapes and the environmental complexity surrounding buildings. To solve this problem, this paper introduces a novel automatic building extraction method that integrates LiDAR data and high spatial resolution imagery (HSRI) using adaptive iterative segmentation and hierarchical overlay analysis based on data-fusion. An adaptive iterative segmentation method overcomes over- and under-segmentation based on the globalized probability of boundary (gPb) contour detection algorithm. A data-fusion based hierarchical overlay analysis extracts building candidate regions based on segmentation results. A morphological operation optimizes a building candidate region to obtain final building results. Experiments were conducted on the ISPRS Vaihingen benchmark dataset. The extracted building footprints were compared with those extracted using the state-of-the-art methods. Evaluation results show that the proposed method achieved the highest area-based quality compared to results from the other tested methods on the ISPRS website. A detailed comparison with four state-of-the-art methods shows that the proposed method requiring no samples achieves competitive extraction results. Furthermore, the proposed method achieved a completeness of 94.1%, a correctness of 90.3%, and a quality of 85.5% over the whole Vaihingen dataset, indicating that the method is robust, with great potential in practical applications.
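The morphological optimisation step can be illustrated with a small numpy-only closing (dilation followed by erosion) that fills pinholes in a building-candidate mask; production pipelines typically use larger structuring elements:

```python
import numpy as np

def dilate(mask):
    """Binary 3x3 dilation via shifted overlays of a padded mask."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
    return out

def close_regions(mask):
    """Morphological closing (dilation then erosion) to fill small holes;
    erosion is implemented as dilation of the complement."""
    return ~dilate(~dilate(mask.astype(bool)))
```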
... Approaches using a region-growing algorithm: Alharthy and Bethel (2004) and Elaksher and Bethel (2002) developed algorithms that gather all pixels fitting a plane in raster data. Rottensteiner (2003) extracted roof planes by using seed regions and applied a region-growing algorithm in a regularized DSM. The homogeneity relationships between neighbouring points are then evaluated by calculating point normals. ...
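Seed-region growing in a regularised DSM, as described above, can be sketched as a breadth-first growth that accepts neighbours whose surface normal stays close to the seed's normal; the threshold and the gradient-based normal estimate are illustrative choices:

```python
import numpy as np
from collections import deque

def grow_plane(dsm, seed, angle_thresh_deg=10.0):
    """Region-grow a roof plane in a regularised DSM from a seed pixel,
    accepting 4-neighbours whose normal stays within a small angle of the
    seed normal."""
    dzdy, dzdx = np.gradient(dsm)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(dsm)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    seed_n = normals[seed]
    cos_t = np.cos(np.radians(angle_thresh_deg))
    region = np.zeros(dsm.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < dsm.shape[0] and 0 <= nx < dsm.shape[1] and not region[ny, nx]:
                if normals[ny, nx] @ seed_n >= cos_t:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region
```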
Article
Full-text available
Despite the large number of studies conducted during the last three decades on 3D building modelling from Light detection and ranging (Lidar) data, two persistent problems still exist. The first is the loss of small roof details, which not only disappear from the building roof model because their areas are small relative to the point density, but are also treated as undesirable noise during the modelling procedures. The second is that the segmentation algorithms involved do not perform well in the presence of noise in the building point cloud data. These two problems generate undesirable deformation in the final 3D building model. This paper proposes a new automatic approach for detecting and modelling the missing roof details, in addition to improving the building roof segments. In this context, the error map matrix, which contains the deviations of points from their fitting planes, is considered. This matrix is analysed in order to deduce the mask of missing roof details. At this stage, a new numeric factor is defined for estimating the roof segmentation accuracy as well as the validity of the roof segmentation result. Then, the building point cloud is enhanced in order to decrease the negative influence of noise and, consequently, to improve the building roof segments. Finally, the functionality and accuracy of the proposed approach are tested and discussed.
... The analysis of the position of these two 3D lines defines the relationship between the two neighbouring planes. According to Rottensteiner [9], three types of mutual relationship can be defined: intersection, step, and step-intersection (see Fig. 4). Equation (1) allows the type of mutual relationship between two adjacent planes to be determined, where the quantity involved is defined in Fig. 3. ...
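Equation (1) of the cited work is not reproducible here, but the three-way classification can be sketched numerically: sample the common boundary and compare the heights predicted by the two plane equations (planes assumed in the form z = ax + by + c; the tolerance is an illustrative value):

```python
import numpy as np

def plane_relation(p1, p2, boundary_xy, tol=0.2):
    """Classify the relation of two adjacent roof planes along their common
    boundary: 'intersection', 'step', or 'step-intersection'.

    p = (a, b, c) for z = a*x + b*y + c; boundary_xy: (N, 2) boundary samples.
    """
    x, y = boundary_xy[:, 0], boundary_xy[:, 1]
    dz = np.abs((p1[0] - p2[0]) * x + (p1[1] - p2[1]) * y + (p1[2] - p2[2]))
    close = dz <= tol
    if close.all():
        return "intersection"      # planes meet along the whole boundary
    if not close.any():
        return "step"              # a height jump along the whole boundary
    return "step-intersection"     # partly meeting, partly jumping
```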
Conference Paper
Full-text available
Although much effort has been spent in developing a stable algorithm for 3D building modelling from Lidar data, this topic still attracts a lot of attention in the literature. A key task of this problem is the automatic building roof segmentation. Due to the great diversity of building typology, and the noisiness and heterogeneity of point cloud data, the building roof segmentation result needs to be verified/rectified with some geometric constraints before it is used to generate the 3D building models. Otherwise, the generated building model may suffer from undesirable deformations. This paper suggests the generation of 3D building models from Lidar data in two steps. The first step is the automatic 2D building modelling and the second step is the automatic conversion of the 2D building model into a 3D model. This approach allows the 2D building model to be refined before starting the 3D building model generation. Furthermore, this approach allows obtaining the 2D and 3D building models simultaneously. The first step of the proposed algorithm is the generation of the 2D building model. Then, after enhancing and fitting the roof planes, the roof plane boundaries are converted into 3D by analysing the relationships between neighbouring planes. This is followed by the adjustment of the 3D roof vertices. Experiments indicated that the proposed algorithm is accurate and robust in generating 3D building models from Lidar data.
... After detection of roof planes and calculation of the building label image (Figure 1e), the first step toward 2D roof modelling is the detection of the roof plane boundaries. In the literature (Ameri, 2000; Rottensteiner, 2003), it is suggested to use the Voronoï diagram to achieve this task. However, this solution is unsatisfactory because it creates distortions not only in the actual positions of the plane boundaries, but also in the adjacency relationships between planes. ...
Article
Full-text available
Despite the large number of studies and publications during the last three decades on 3D building modelling using Lidar data, the modelling of inner roof plane boundaries needs to be examined in more detail. This paper focuses on the detection and 2D modelling of building inner roof plane boundaries. This operation is an essential junction between roof plane detection and 3D building model generation, and therefore a key procedure in data-driven approaches. For this purpose, roof boundaries are classified into four categories: outer building boundaries, inner roof plane boundaries, roof detail (chimneys and windows) boundaries, and boundaries related to non-detectable roof details. This paper concentrates on the detection and modelling of inner roof plane boundaries and roof detail (chimneys and windows) boundaries. Moreover, it details the modelling procedures step by step, which is rarely done in the literature. The proposed approach starts by analysing the adjacency relationships between roof planes. Then, the inner roof plane boundaries are detected. Finally, the junction relationships between boundaries are analysed before detecting the roof vertices. Once the 2D roof model is calculated, the visual deformations as well as the modelling accuracy are discussed.
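The adjacency analysis that the approach starts with can be sketched from a roof label image (one plane id per pixel, 0 = background, 4-connectivity assumed):

```python
import numpy as np

def adjacency_matrix(labels):
    """Adjacency matrix of roof planes from a label image.

    labels: 2-D int array, 0 = background, 1..n = roof plane ids.
    Two planes are adjacent if their pixels touch horizontally or vertically.
    """
    n = int(labels.max())
    adj = np.zeros((n + 1, n + 1), dtype=bool)
    h = labels[:, :-1] != labels[:, 1:]   # horizontal label changes
    v = labels[:-1, :] != labels[1:, :]   # vertical label changes
    adj[labels[:, :-1][h], labels[:, 1:][h]] = True
    adj[labels[:-1, :][v], labels[1:, :][v]] = True
    adj |= adj.T
    return adj[1:, 1:]  # drop the background row/column
```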
... ALS is sufficient for creating city models at lower LODs (corresponding to CityGML LOD1 and LOD2). To model higher LODs, terrestrial laser scanning (TLS) and mobile laser scanning (MLS) with high point density and geometrical accuracy are required (Baltsavias 1999, Rottensteiner 2003, Gröger and Plümer 2012, Tutzauer and Haala 2015, Tomljenovic et al. 2015). One shortcoming of the use of airborne observations for updating city models is the long update cycle, especially in high-density building areas. ...
Article
Full-text available
Airborne Laser Scanning (ALS) is used to acquire three-dimensional (3D) city model data over large areas. However, because of the long ALS update cycle, building information models (BIM) could be utilized to maintain city models. In this study, we designed, implemented, and evaluated a methodology to formalize the integration of BIM data into city models. CityGML models were created from BIM data and ALS/footprint data based on common modelling guidelines. Both CityGML building models are modelled in a similar way and the relative differences between the models are on the order of decimetres.
... LiDAR has the advantages of high accuracy, rapid acquisition, and high resolution, which are required for large-scale elevation collection and object extraction. External building reconstruction can be achieved using LiDAR alone (Wang and Schenk 2000), or through integration with aerial imagery (Chen et al. 2005; Chen and Teo 2004; Rottensteiner 2003), satellite imagery (Sohn and Dowman 2007), or close-range photogrammetry (Habib, Ghanma and Tait 2004). ...
... The second task in this paper is the regularization of the building boundary after the building detection. In terms of the boundary regularization, multiple methods [32][33][34][35][36][37][38] have been proposed. ...
Article
Full-text available
Aerial images are widely used for building detection. However, the performance of building detection methods based on aerial images alone is typically poorer than that of building detection methods using both LiDAR and image data. To overcome these limitations, we present a framework for detecting and regularizing the boundary of individual buildings using a feature-level-fusion strategy based on features from dense image matching (DIM) point clouds, orthophoto and original aerial images. The proposed framework is divided into three stages. In the first stage, the features from the original aerial image and DIM points are fused to detect buildings and obtain the so-called blob of an individual building. Then, a feature-level fusion strategy is applied to match the straight-line segments from original aerial images so that the matched straight-line segments can be used in the later stage. Finally, a new footprint generation algorithm is proposed to generate the building footprint by combining the matched straight-line segments and the boundary of the blob of the individual building. The performance of our framework is evaluated on a vertical aerial image dataset (Vaihingen) and two oblique aerial image datasets (Potsdam and Lunen). The experimental results reveal 89% to 96% per-area completeness, with accuracy of almost 93% or above. Relative to six existing methods, our proposed method is not only more robust but also obtains performance similar to the methods based on LiDAR and images.
... Other frequently used initial data are LIDAR-sensor data, from which dense point clouds are created and planes detected (Rottensteiner, 2003; Rottensteiner et al., 2007). Point clouds from LIDAR sensors, in contrast with dense clouds created by photogrammetric methods, contain fewer outliers, forming more easily detected geometrical shapes. ...
Article
Full-text available
In this paper a method of detecting buildings in densely populated city areas using a three-dimensional model, produced from aerial images, is described. In addition to detecting the outline of each building, we extract information about its height. The study area is the wider centre of Athens, Greece. Our aim is to extract 3D information for a large area, in minimum time and at minimum cost, in order to support open-source databases such as openstreetmap.org. The proposed methodology consists of three main stages. In the first part of the procedure, aerial images are used to produce a point cloud, using the Semi-Global dense matching algorithm. Next, we classify the objects in the point cloud by remote sensing and photogrammetric methods. The classification results are divided into three main classes: ground, vegetation and buildings. Having detected the buildings and their complexes, we attempt to find the outline of each separate building depending on its level; different levels are considered as different buildings. After detecting individual buildings in the point cloud, a polygon is created around their outline. All polygons were compared with the building polygons available on openstreetmap.org, in order to evaluate the results. The number of levels of 100 buildings, in different parts of the city, was measured manually in order to evaluate the Z-dimension results, and openstreetmap.org was updated with that information. Further updating and combination of the database created in the current process with the one available on openstreetmap.org is still under study.
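The outline-polygon step can be approximated with a convex hull (Andrew's monotone chain); real building outlines need concave detail and regularisation, so this is only a stand-in:

```python
import numpy as np

def _cross(o, a, b):
    """2-D cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_outline(points_xy):
    """Convex hull of 2-D points (Andrew's monotone chain), CCW vertex order."""
    pts = sorted(map(tuple, np.asarray(points_xy, dtype=float)))
    def half_hull(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and _cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain
    lower = half_hull(pts)
    upper = half_hull(reversed(pts))
    return np.array(lower[:-1] + upper[:-1])
```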
... existence of large numbers of points which should be excluded from further processing), frequent lack of data, as well as strong variability of local point density and data accuracy inside a single point set. To a certain extent, the data in this form can be used to recreate the surface of simple objects [5] and, in some cases, also buildings [22] [20]. However, in the case of more complex and varied objects, obtaining satisfactory results is often much more difficult [14]. ...
Article
Full-text available
The technologies of sonar and laser scanning are an efficient and widely used source of spatial information with regard to the underwater and above-ground environment, respectively. The measurement data are usually available in the form of groups of separate points located irregularly in three-dimensional space, known as point clouds. This data model has known disadvantages; therefore, in many applications a different form of representation, i.e. 3D surfaces composed of edges and facets, is preferred for the terrain or seabed surface relief as well as the shapes of various objects. In the paper, the authors propose a new approach to 3D shape reconstruction from both multibeam and LiDAR measurements. It is based on a multiple-step and to some extent adaptive process, in which the chosen set and sequence of particular stages may depend on the current type and characteristic features of the processed data. The processing scheme includes: 1) pre-processing, which may include noise reduction, rasterization and pre-classification; 2) detection and separation of objects for dedicated processing (e.g. steep walls, masts); and 3) surface reconstruction in 3D by point cloud triangulation and with the aid of several dedicated procedures. The benefits of using the proposed methods, including algorithms for detecting various features and improving the regularity of the data structure, are presented and discussed. Several different shape reconstruction algorithms were tested in combination with the proposed data processing methods, and the strengths and weaknesses of each algorithm were highlighted.
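The rasterization stage of the pre-processing described above can be sketched as a max-Z gridding; the cell size and NaN fill for empty cells are illustrative choices:

```python
import numpy as np

def rasterize(points, cell=1.0):
    """Rasterize a 3-D point cloud to a max-Z grid (a common pre-processing
    step before surface reconstruction); empty cells become NaN."""
    xy = points[:, :2]
    lo = xy.min(axis=0)
    ij = np.floor((xy - lo) / cell).astype(int)
    shape = ij.max(axis=0) + 1
    grid = np.full(shape, np.nan)
    for (i, j), z in zip(ij, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid
```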
... In recent years, a number of building detection methodologies have been developed utilising LiDAR data, which uses remote sensing technology to automatically estimate the elevation of objects on the earth's surface (Li and Guan, 2011; Sampath and Shan, 2010; Siddiqui et al., 2013; Wu et al., 2017a). Huang et al. (2017) separate the aims of such works into two main categories: building characteristics extraction (Chen et al., 2005; Rottensteiner, 2003; Rottensteiner and Briese, 2002) and building classification (Abellán and Moral, 2003; Tobergte and Curtis, 2013; Yan et al., 2016). However, only a limited number of studies have been carried out to estimate the floor count in specific buildings. ...
Article
Full-text available
The research presented in this paper addresses a current gap in the availability of building geometry data and provides estimates of individual building characteristics at city scale. Such data are crucial for a wide range of subjects such as modelling building energy consumption as well as regional housing market studies. However, such data are currently not available in the UK. In this work, a new approach was developed to automatically estimate the geometric characteristics of buildings, including height and floor count. A wide range of datasets have been brought together including high-resolution light detection and ranging data to accurately estimate building elevation and to obtain the external dimension of buildings. In the UK, most of the datasets required for this model are available for urban areas, allowing the model to be widely applied both in cities and beyond. The paper presents the results of building height and floor count determined from this model and compares these with the actual data obtained from a survey of 108 representative buildings in the city of Southampton. The results show good accuracy of the model with 97% of the estimates having an error under ±1 floor and an absolute mean error of 0.3 floors. These results provide confidence in utilising this model for future building studies at a city scale.
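The floor-count estimate can be illustrated with a deliberately simple rule; the nominal storey height and roof allowance below are assumed values, not the paper's fitted parameters:

```python
def estimate_floors(building_height_m, storey_height_m=3.0, roof_allowance_m=1.0):
    """Estimate floor count from a LiDAR-derived building height.

    Hypothetical parameters: a nominal storey height and an allowance for the
    roof structure; real models calibrate these against survey data.
    """
    usable = max(building_height_m - roof_allowance_m, 0.0)
    return max(1, round(usable / storey_height_m))
```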
... LiDAR, an active remote sensor, can acquire highly accurate data without an external source of illumination. Since LiDAR sensors collect elevation information, they have been widely applied to detect the objects for which the most important characteristic is height [Rottensteiner 2003;Meng et al. 2009]. For instance, buildings and parking lots that contain the same spectral characteristics but different height levels can be easily distinguished by this type of data. ...
Article
Full-text available
In this study, the fusion of Light Detection and Ranging (LiDAR) and hyperspectral data was used to propose a method for building detection. The number of hyperspectral bands was first reduced from 144 to 8 layers using the Linear Discriminant Analysis (LDA) algorithm to remove highly redundant bands and reduce computational costs. Then, these layers were integrated with 4 layers of heights and intensities obtained from the LiDAR data. The fused layers (12 layers) were applied to a Random Forest (RF) algorithm to extract the boundaries of buildings. Finally, two morphological operators were applied to remove the holes on the buildings’ roofs and repair their boundaries. A comparison was also performed between the results obtained by the proposed method and the reference study in this field [Debes et al. 2014]. The proposed method demonstrated a better accuracy for building detection in a much shorter time compared to the reference method. The values of 97% and 96% were obtained for producer and user accuracies, respectively. Overall, the method presented in this study proved to have a high potential for building extraction.
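The fusion step — stacking the 8 LDA-reduced spectral layers with the 4 LiDAR height/intensity layers into a 12-layer feature set for the Random Forest — reduces to a reshape; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def fuse_layers(spectral_layers, lidar_layers):
    """Stack reduced spectral layers with LiDAR layers into a per-pixel
    feature matrix of shape (n_pixels, n_layers) for a classifier."""
    stack = np.dstack(list(spectral_layers) + list(lidar_layers))
    return stack.reshape(-1, stack.shape[2])
```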
... In the photogrammetry and remote sensing communities, 3D building modelling using ALS and TLS has been longstanding challenges (Vosselman and Suveg, 2001;Rottensteiner, 2003). In photogrammetry and remote sensing, the goal is to produce polygonal meshes with less data compared to LiDAR and the images usually have better visual information (Wang, 2013). ...
Thesis
Indoor navigation is important for various applications such as disaster management, building modelling and safety analysis. In the last decade, the indoor environment has been a focus of extensive research that includes the development of indoor data acquisition techniques, three-dimensional (3D) data modelling and indoor navigation. 3D indoor navigation modelling requires a valid 3D geometrical model that can be represented as a cell complex: a model without any gap or intersection, such that two cells, e.g. a room and a corridor, perfectly touch each other. This research develops a method for 3D topological modelling of an indoor navigation network using a geometrical model of an indoor building environment. To reduce the time and cost of the surveying process, a low-cost non-contact range-based surveying technique was used to acquire indoor building data. This technique is rapid, as it requires less time than others, but the results show inconsistencies in the horizontal angles for short distances in indoor environments. The rangefinder was calibrated using least squares adjustment and a polynomial kernel. A method combining interval analysis and homotopy continuation was developed to model the uncertainty level and minimize the error of the non-contact range-based surveying techniques used in an indoor building environment. Finally, a method of 3D indoor topological building modelling was developed as a base for building models which include 3D geometry, topology and semantic information. The methods developed in this research provide a low-cost, efficient and affordable procedure for developing a disaster management system in the near future.
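The least-squares calibration of the rangefinder with a polynomial model can be sketched as follows (degree and data are illustrative; the thesis additionally uses interval analysis and homotopy continuation):

```python
import numpy as np

def calibrate(measured, reference, degree=2):
    """Fit a polynomial correction mapping raw rangefinder readings to
    reference values by least squares; returns a callable corrector."""
    coeffs = np.polyfit(measured, reference, degree)
    return np.poly1d(coeffs)
```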
Chapter
Building segmentation is essential in infrastructure development, population management, and geological observations. This article targets shallow models, due to their interpretable nature, to assess the contribution of LiDAR data to supervised segmentation. The benchmark data used in this article are published in the NORA MapAI competition for deep learning models. Shallow models are compared with deep learning models based on Intersection over Union (IoU) and Boundary Intersection over Union (BIoU). In the proposed work, boundary masks are generated from the original mask to improve the BIoU score, which relates to the borderline of building shapes. The influence of LiDAR data is tested by training the model with only aerial images in task 1 and a combination of aerial images and LiDAR data in task 2, and then comparing the results. Shallow models outperform deep learning models in IoU by 8% using aerial images only (task 1) and by 2% using combined aerial images and LiDAR data (task 2). In contrast, deep learning models show better performance on BIoU scores. Boundary masks improve BIoU scores by 4% in both tasks. The Light Gradient Boosting Machine (LightGBM) performs better than Random Forest (RF) and Extreme Gradient Boosting (XGBoost).
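The IoU and BIoU metrics the chapter compares models on can be computed as in this minimal NumPy sketch. The erosion-based boundary definition, the band width, and the helper names are assumptions for illustration, not the competition's official implementation:

```python
import numpy as np

def erode(mask, it=1):
    """Binary erosion with a 3x3 cross structuring element (pure NumPy)."""
    m = mask.astype(bool)
    for _ in range(it):
        p = np.pad(m, 1, constant_values=False)
        m = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:])
    return m

def iou(a, b):
    """Intersection over Union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = (a | b).sum()
    return (a & b).sum() / union if union else 1.0

def boundary(mask, width=1):
    """Boundary band: mask pixels within `width` pixels of the mask border."""
    return mask.astype(bool) & ~erode(mask, it=width)

def biou(a, b, width=1):
    """Boundary IoU: IoU restricted to the boundary bands of both masks."""
    return iou(boundary(a, width), boundary(b, width))
```

A perfect prediction scores 1.0 on both metrics; a prediction with the right area but a shifted outline scores noticeably lower on BIoU than on IoU, which is why boundary masks help.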
Article
Introduction. The 3D modeling technology of the urban environment using LiDAR survey data expands the possibilities of urban research. With proper use of various methods, models and algorithms for processing and analyzing LiDAR data, it can significantly facilitate and open up new opportunities for many applications discussed in this paper. The main research objective of the paper is to review methods for analyzing LiDAR survey data in urban studies and to present individual elements of the author's optimization of these methods. Results. LiDAR data obtained from laser scanning of the earth's surface from a vehicle form a three-dimensional terrain model in the form of a point cloud of varying density. The post-processing of such data can branch out into many applications, which are discussed in this paper. Building extraction from a cloud of LiDAR points is performed using complex computational operations, the essence of which is to identify the points of the separate planes of building roofs and then extract these points for 3D building modeling. There are many approaches to building extraction that aim either to improve the quality and accuracy of the extracted models or to speed up the data processing. Finding the optimal solution for 3D modeling of the urban environment is an urgent task in this area of research. Tracking changes in urban buildings involves comparing digital models of urban areas for different time periods in order to obtain the volume of change for each building. In a similar fashion, LiDAR data are used to assess damage to buildings by creating random points on building walls and comparing their displacements before and after the damage. Population estimation using LiDAR data is based on comparing population data for census tracts with data on the number, area and volume of buildings in the same tracts obtained from processed LiDAR data. As a result, the expected population of each individual building can be calculated. Road extraction from LiDAR data is performed by creating an image of the LiDAR laser pulse intensity and then comparing this image with a digital surface model. The article provides an example of a scheme for such road extraction. In addition, methods for extracting and mapping power lines by filtering the corresponding points are also considered. The ability to determine the exact size, slope, and exposure of a building's roof plane also makes it possible to estimate the potential level of solar radiation received by the roof, which can contribute to the optimal placement of solar power plants. Such an assessment may cause some difficulties, which are discussed in the article. The article proposes various optimization solutions for the considered methods, which were partially implemented in the ELiT software. In addition to effective tools for automatic data processing, the ELiT Project also provides an environment for high-quality visualization of results in a standard web-GIS interface. Conclusions. LiDAR data, in combination with efficient algorithms for processing and filtering, greatly facilitate the solution of a number of tasks related to area monitoring and urban planning. In the future, the high accuracy of LiDAR data and the possibility of their visualization in GIS will make it possible to analyze urban development features in order to identify the urban geosystemic properties of the city.
Article
In most Mobile Laser Scanning (MLS) applications, filtering is a necessary step. In this paper, a segmentation-based filtering method is proposed for MLS point cloud, where a segment rather than an individual point is the basic processing unit. In particular, the MLS point clouds in some blocks are clustered into segments by a surface growing algorithm, and then the object segments are detected and removed. A segment-based filtering method is employed to detect the ground segments. The experiment in this paper uses two MLS point cloud datasets to evaluate the proposed method. Experiments indicate that, compared with the classic progressive TIN (Triangulated Irregular Network) densification algorithm, the proposed method is capable of reducing the omission error, the commission error and total error by 3.62%, 7.87% and 5.54% on average, respectively.
Article
Recently, roadside Light Detection and Ranging (LiDAR) has been deployed for different transportation applications such as high-resolution micro-traffic data collection, vehicle–pedestrian safety evaluation, and driver behavior analysis. An ideal LiDAR-enhanced traffic infrastructure system needs multiple LiDAR sensors deployed around intersections and along road segments, generating seamless coverage of intersections or arterials. To obtain continuous and complete traffic data, an integration method is essential for extending the data range and improving the density of scanned points. In this research, an innovative approach based on a GPS mapping method is presented to automatically integrate data collected by different LiDAR sensors in a global coordinate system. In this method, the raw data collected by multiple LiDAR sensors are used as input; at least 4 reference points collected by GPS devices are needed for each LiDAR sensor; then a transformation step is applied to transform all the LiDAR points into the Earth-centered, Earth-fixed coordinate system. After obtaining all the LiDAR points in the global coordinate system, an Iterative Closest Point (ICP) method is used to reduce the errors caused by data collection and calculation. A sensitivity analysis provided the best number of reference points to collect. Finally, data collected at two sites (the Evans & McCarran intersection and the Blue parking lot of the University of Nevada, Reno (UNR)) were selected to verify the method. The testing results showed that the proposed method has a high level of automation and improved accuracy.
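The transformation step from reference point pairs can be illustrated with a least-squares rigid alignment (the Kabsch/SVD method). This is a generic sketch of aligning one sensor's points to global coordinates from ≥4 correspondences, not the authors' exact GPS-mapping implementation, and all names are assumptions:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ src_i + t ≈ dst_i,
    by the Kabsch/SVD least-squares method on corresponding 3-D points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

After this coarse alignment from GPS reference points, an ICP refinement (as in the paper) would iterate nearest-neighbour matching and re-fitting to reduce the residual error.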
Article
In this paper, an approach based on convolutional neural networks (CNNs) is proposed to generate a digital surface model (DSM) from a single high-resolution satellite image. The proposed CNN has an encoder-decoder structure to extract multi-scale features in the encoding part and estimate the height values by up-sampling the extracted abstract features. Then, a filtering approach based on morphological operators is proposed to extract the non-ground pixels from each estimated height image. The final DSM is obtained by integrating the Shuttle Radar Topography Mission (SRTM) elevation model with the extracted non-ground objects. Evaluating the estimated height images indicated 0.219, 0.865, and 2.912 m on average for the log10 error, relative error, and root mean square error (RMSE), respectively. In addition, investigating the final integrated DSM indicated 4.625 m on average for RMSE, demonstrating a promising performance of the proposed approach.
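Morphological filtering of non-ground pixels can be sketched with a white top-hat on the height image: an opening removes objects narrower than the structuring element, and the difference highlights elevated, compact structures. Window size, threshold, and function names are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gray_erode(img, k=3):
    """Grayscale erosion with a k x k square window (pure NumPy)."""
    r = k // 2
    p = np.pad(img, r, mode='edge')
    out = np.full(img.shape, np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def gray_dilate(img, k=3):
    """Grayscale dilation with a k x k square window (pure NumPy)."""
    r = k // 2
    p = np.pad(img, r, mode='edge')
    out = np.full(img.shape, -np.inf)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, p[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def non_ground_mask(height, k=5, thresh=2.0):
    """White top-hat: keep pixels narrower than k cells and taller than thresh."""
    opened = gray_dilate(gray_erode(height, k), k)  # morphological opening
    return (height - opened) > thresh
```

In practice a library routine (e.g. a grey opening from an image-processing package) would replace the explicit loops; they are written out here to keep the sketch dependency-free.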
Article
Building extraction from light detection and ranging (LiDAR) data for 3-dimensional (3D) reconstruction requires accurately classified LiDAR points. In recent years, approaches developed for this classification have mostly been based on gridded LiDAR data. In the gridding process, there is a characteristic point loss which results in reduced height accuracy. The effect of this loss can be eliminated by classifying the raw LiDAR data. In this study, an automatic point-based classification approach using spatial features on raw LiDAR data is proposed for 3D building reconstruction. Hierarchical rules were determined from the spatial features. The spatial features of the LiDAR points, such as height, the local environment, and multi-return, were analyzed, and every LiDAR point was automatically assigned to a class based on these features. The proposed classification approach on raw LiDAR data achieved an overall accuracy of 79.7% in a test site located in Istanbul, Turkey. Finally, 3D building reconstruction was performed using the results of the proposed automatic point-based classification approach.
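Hierarchical point-based rules of the kind described (height plus multi-return attributes) might look like the following toy sketch. The thresholds and class names are assumptions for illustration, not the study's actual rules:

```python
import numpy as np

def classify_points(height_above_ground, num_returns, return_no):
    """Toy hierarchical rules: assign ground / vegetation / building labels
    from per-point attributes. Thresholds are illustrative assumptions."""
    h = np.asarray(height_above_ground, float)
    nr = np.asarray(num_returns, int)
    rn = np.asarray(return_no, int)
    labels = np.full(h.shape, 'ground', dtype=object)
    elevated = h > 2.0
    # Multi-return pulses usually indicate penetrable cover such as vegetation;
    # single-return elevated points are more likely solid roof surfaces.
    veg = elevated & (nr > 1) & (rn < nr)
    labels[veg] = 'vegetation'
    labels[elevated & ~veg] = 'building'
    return labels
```

A real classifier would add neighbourhood features (local planarity, height variance) before the building/vegetation split; the hierarchy here only shows the rule structure.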
Article
Point cloud segmentation is a crucial fundamental step in 3D reconstruction, object recognition and scene understanding. This paper proposes a supervoxel-based point cloud segmentation algorithm following the region-growing principle to address the inaccurate boundaries and non-smooth segments of existing methods. To begin with, the input point cloud is voxelized and then pre-segmented into sparse supervoxels by flow-constrained clustering, considering the spatial distance and local geometry between voxels. Afterwards, plane fitting is applied to the over-segmented supervoxels, and seeds for region growing are selected with respect to the fitting residuals. Starting from pruned seed patches, adjacent supervoxels are merged in a region-growing manner to form the final segments, according to a normalized similarity measure that integrates the smoothness and shape constraints of supervoxels. The parameter values are determined via experimental tests, and the final results show that, by voxelizing and pre-segmenting the point clouds, the proposed algorithm is robust to noise and can obtain smooth segmentation regions with accurate boundaries at high efficiency.
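The plane-fitting-and-residual step used for seed selection can be sketched with an SVD least-squares plane fit: supervoxels with the smallest fitting residual make the most reliable seeds. This is a generic formulation, not the paper's implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points. Returns the unit normal,
    the centroid, and the RMS point-to-plane residual. The normal is the
    direction of the smallest singular value of the centred point matrix."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    rms = s[-1] / np.sqrt(len(pts))
    return normal, centroid, rms

def pick_seed(supervoxels):
    """Seed selection: the supervoxel whose points fit a plane best."""
    residuals = [fit_plane(sv)[2] for sv in supervoxels]
    return int(np.argmin(residuals))
```

Region growing would then merge neighbouring supervoxels whose normals and residuals stay consistent with the seed's plane.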
Article
The application of convolutional neural networks has been shown to significantly improve the accuracy of building extraction from very high-resolution (VHR) remote sensing images. However, there exist so-called semantic gaps among different kinds of buildings due to the large intraclass variance of buildings, and most of the present-day methods are ineffective in extracting various buildings in large areas that cover different scenes, for example, urban villages and high-rise buildings, because existing building extraction strategies are the same for various scenes. With the improvement of the resolution of remote sensing images, it is feasible to improve the image interpretation based on the scene prior. However, this idea has not been fully utilized in building extraction from VHR remote sensing imagery. This study proposes a scene-driven multitask parallel attention convolutional network (MTPA-Net) to resolve these limitations. The proposed approach classifies the input image into multilabel scenes and further separately maps the buildings in pixel level under different scenes. In addition, a simple postprocessing method is applied to integrate the building extraction results and scene prior. Our proposed method does not require multimodel training and the network can learn in an end-to-end manner. The performance of our proposed method is evaluated on a data set that includes various urban and rural scenes with diverse landscapes. The experimental results show that the proposed MTPA-Net outperforms state-of-the-art algorithms by reducing misclassification areas and maintaining improved robustness.
Article
Full-text available
Automated procedures are necessary to cope with the vast amounts of digitized information within the field of cultural heritage. During the last 15 years, digital landscape analysis and detection of cultural heritage monuments developed rapidly especially due to the availability of LiDAR data. With the increasing amount of information, automated procedures are suitable for monitoring and surveying known monuments, as well as detecting unknown monuments. This study measures the state of automated procedures within cultural heritage detection and management, by correlating key terms for LiDAR data with academic citations of their use. Cross-referencing this impact measure with occurrences of "automated procedures" enhances our understanding of best practices. We analyze these results, using the methods of network analysis (NA) with respect to personal, institutional, and financial ties and actors involved in automated monument detection. In addition, a Systematic Literature Review (SLR) using standardized search structures on publications related to "automated monument detection" for LiDAR data from 2000 to 2015 reveals the evolution of the field. The observable trends and patterns within the combined results of (NA) and (SLR) allow for a critical assessment of current research practices. Based on these results we conclude by formulating recommendations for future implementations.
Article
Full-text available
Although many efforts have been made to extract houses from LiDAR (Light Detection and Ranging) data and/or aerial imagery and/or their fusion, little investigation has used co-registration between the orthoimage map and LiDAR data, on the basis of geodetic coordinates, as an element for house extraction. For this reason, this paper first reviews the advances of LiDAR and investigates the advantages and disadvantages of LiDAR systems versus traditional photogrammetry, and then shows that LiDAR technology has not yet resolved all of the problems that remain in traditional photogrammetry, such as texture information and LiDAR point cloud density. A comprehensive comparison of house (feature information) extraction from LiDAR data and from aerial imagery is also presented. It has been widely accepted that fully automatic extraction of houses (feature information in city areas) from LiDAR point clouds remains difficult. Therefore, this paper proposes a human-computer interaction operation for house extraction through the combination of a LiDAR point cloud and orthorectified high-resolution aerial imagery. Real data are utilized to validate the proposed method.
Article
Full-text available
Recent advances in the availability of very high-resolution (VHR) satellite data together with efficient data acquisition and large area coverage have led to an upward trend in their applications for automatic 3-D building model reconstruction which require large-scale and frequent updates, such as disaster monitoring and urban management. Digital Surface Models (DSMs) generated from stereo satellite imagery suffer from mismatches, missing values, or blunders, resulting in rough building shape representations. To handle 3-D building model reconstruction using such low-quality DSMs, we propose a novel automatic multistage hybrid method using DSMs together with orthorectified panchromatic (PAN) and pansharpened data (PS) of multispectral (MS) satellite imagery. The algorithm consists of multiple steps including building boundary extraction and decomposition, image-based roof type classification, and initial roof parameter computation which are prior knowledge for the 3-D model fitting step. To fit 3-D models to the normalized DSM (nDSM) and to select the best one, a parameter optimization method based on exhaustive search is used sequentially in 2-D and 3-D. Finally, the neighboring building models in a building block are intersected to reconstruct the 3-D model of connecting roofs. All corresponding experiments are conducted on a dataset including four different areas of Munich city containing 208 buildings with different degrees of complexity. The results are evaluated both qualitatively and quantitatively. According to the results, the proposed approach can reliably reconstruct 3-D building models, even the complex ones with several inner yards and multiple orientations. Furthermore, the proposed approach provides a high level of automation by limiting the number of primitive roof types and by performing automatic parameter initialization.
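The exhaustive-search model fitting can be illustrated on a 1-D gable-roof cross-section: candidate parameter values are enumerated and the one minimising the error against the nDSM is kept. The primitive parameterization and names here are simplified assumptions, not the paper's full 2-D/3-D procedure:

```python
import numpy as np

def gable_profile(width, eave, ridge):
    """Cross-section heights of a symmetric gable roof over `width` cells,
    rising linearly from the eave height at the edges to the ridge height."""
    x = np.arange(width)
    half = (width - 1) / 2.0
    return eave + (ridge - eave) * (1 - np.abs(x - half) / half)

def fit_gable(ndsm_profile, eave, ridge_candidates):
    """Exhaustive 1-D search: pick the ridge height minimising RMSE to the nDSM."""
    errs = [np.sqrt(np.mean((gable_profile(len(ndsm_profile), eave, r)
                             - ndsm_profile) ** 2)) for r in ridge_candidates]
    return ridge_candidates[int(np.argmin(errs))]
```

The paper's method searches over more parameters (roof type, orientation, heights) in the same grid-enumeration spirit, first in 2-D and then in 3-D.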
Article
Full-text available
Digital Earth frameworks deal with data sets of different types collected from various sources. In order to effectively store, retrieve, and transmit these data sets, efficient multiscale data representations that are compatible with the underlying structure of the Digital Earth framework are required. In this paper, we describe several such techniques and their properties; namely, how to represent data in the multiscale cell hierarchy of a DGGS or in the multiscale hierarchy of a customized wavelet transform. We also discuss how these techniques can be tuned to be applicable to the A3H DGGS.
Article
This letter presents a novel approach to automated extraction of roof planes from airborne light detection and ranging data based on spectral clustering of straight-line segments. The straight-line segments are derived from laser scan lines, and 3-D line geometry analysis is employed to identify coplanar line segments so as to avoid skew lines in plane estimation. Spectral analysis reveals the spectrum of the adjacency matrix formed by the straight-line segments. Spectral clustering is then performed in feature space where the clusters are more prominent, resulting in a more robust extraction of roof planes. The proposed approach has been tested on ISPRS benchmark data sets, with the results showing high quality in terms of completeness, correctness, and geometrical accuracy, thus confirming that the proposed approach can extract roof planes both accurately and efficiently.
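Spectral clustering of an adjacency (affinity) matrix, as used above for coplanar line segments, can be sketched as a Fiedler-vector bipartition. This generic two-cluster version is an illustration of the spectral step only, not the letter's full pipeline:

```python
import numpy as np

def spectral_bipartition(A):
    """Split a graph given by a symmetric affinity matrix A into two clusters
    using the sign of the Fiedler vector (eigenvector of the second-smallest
    eigenvalue) of the unnormalised graph Laplacian L = D - A."""
    A = np.asarray(A, float)
    D = np.diag(A.sum(axis=1))
    L = D - A
    _, v = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
    fiedler = v[:, 1]
    return (fiedler > 0).astype(int)
```

For roof extraction, each node would be a straight-line segment and the affinities would encode coplanarity, so that clusters in the spectral embedding correspond to roof planes; recursive bipartition or k-means on several eigenvectors generalises this to more than two planes.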
Article
Full-text available
In this paper, a new method for the automated generation of 3D building models from directly observed point clouds generated by LIDAR sensors is presented. By a hierarchic application of robust interpolation using a skew error distribution function, the LIDAR points on the terrain are separated from points on buildings and other object classes, and a digital terrain model (DTM) can be computed. Points on buildings then have to be separated from the other off-terrain points, which is accomplished by analyzing the height differences between a digital surface model passing through the original LIDAR points and the digital terrain model. Thus, a building mask is derived, and polyhedral building models are created in these candidate regions in a bottom-up procedure by applying curvature-based segmentation techniques. Intermediate results are presented for a test site located in the City of Vienna.
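The building-mask step (thresholding the DSM-minus-DTM height differences and keeping only sufficiently large regions) can be sketched as follows. The threshold, minimum region size, and the simple 4-connected component filter are illustrative assumptions, not the paper's exact rules:

```python
import numpy as np
from collections import deque

def building_candidates(dsm, dtm, min_height=2.5, min_cells=4):
    """Candidate building regions: cells where DSM - DTM exceeds min_height,
    keeping only 4-connected components with at least min_cells cells."""
    mask = (np.asarray(dsm, float) - np.asarray(dtm, float)) > min_height
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    rows, cols = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:                      # breadth-first flood fill of one component
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if 0 <= ny < rows and 0 <= nx < cols and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        if len(comp) >= min_cells:    # drop tiny regions (noise, single trees)
            for y, x in comp:
                out[y, x] = True
    return out
```

In the paper's pipeline, vegetation would additionally be rejected before modelling; the size filter here only removes isolated off-terrain cells.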
Article
Full-text available
Airborne laser altimetry has become a very popular technique for the acquisition of digital elevation models. The high point density that can be achieved with this technique enables applications of laser data for many other purposes. This paper deals with the construction of 3D models of the urban environment. A three-dimensional version of the well-known Hough transform is used for the extraction of planar faces from the irregularly distributed point clouds. To support the 3D reconstruction, use is made of available ground plans of the buildings. Two different strategies are explored to reconstruct building models from the detected planar faces and segmented ground plans. Whereas the first strategy tries to detect intersection lines and height jump edges, the second one assumes that all detected planar faces should model some part of the building. Experiments show that the second strategy is able to reconstruct more buildings and more details of these buildings, but that it sometimes leads to additional parts of the model that do not exist. When restricted to buildings with rectangular segments of the ground plan, the second strategy was able to reconstruct 83 buildings out of a dataset of 94 buildings.
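A minimal 3-D Hough transform for plane extraction, in the spirit of the approach described, can be sketched as follows: each point votes for all plane orientations through it, and the accumulator cell with the most votes gives the dominant plane. The coarse binning and the brute-force accumulator are simplifications for clarity:

```python
import numpy as np

def hough_planes(points, n_theta=18, n_phi=36, rho_res=0.5):
    """Minimal 3-D Hough transform: for each candidate normal (theta, phi),
    bin the signed distances rho = p . n of all points and keep the
    (normal, rho) cell with the most votes. Returns (normal, rho, votes)."""
    pts = np.asarray(points, float)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    best = (None, None, -1)
    for th in thetas:
        for ph in phis:
            n = np.array([np.sin(th) * np.cos(ph),
                          np.sin(th) * np.sin(ph),
                          np.cos(th)])
            bins = np.round(pts @ n / rho_res).astype(int)
            vals, counts = np.unique(bins, return_counts=True)
            i = counts.argmax()
            if counts[i] > best[2]:
                best = (n, vals[i] * rho_res, int(counts[i]))
    return best
```

Production implementations refine the winning cell by least-squares plane fitting and remove the inliers before searching for the next plane.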
Article
Full-text available
A methodology for evaluating range image segmentation algorithms is proposed. This methodology involves (a) a common set of 40 laser range finder images and 40 structured light scanner images that have manually specified ground truth and (b) a set of defined performance metrics for instances of correctly segmented, missed and noise regions, over- and under-segmentation, and accuracy of the recovered geometry. A tool is used to objectively compare a machine-generated segmentation against the specified ground truth. Four research groups have contributed by evaluating their own algorithms for segmenting a range image into planar patches. Keywords: experimental comparison of algorithms, range image segmentation, low-level processing, performance evaluation. In general, standardized segmentation error metrics are needed to help advance the state-of-the-art: no quantitative metrics are measured on standard test images in most of today's research environments.
Article
Full-text available
Virtual reality applications in the context of urban planning presuppose the acquisition of 3D urban models. Photorealism can only be achieved if the geometry of buildings is represented by a detailed and accurate CAD model and if artificial texture or real-world imagery is additionally mapped to the faces and roofs of the buildings. In the approach presented in this paper, height data provided by airborne laser scanning and existing ground plans of buildings are combined in order to enable automatic data capture through the integration of these different types of information. Afterwards, virtual reality city models are generated by texture processing, i.e. by mapping of terrestrial images. Thus the rapid acquisition of a 3D urban GIS is feasible.
Article
Full-text available
This paper describes two developments in the automatic reconstruction of buildings from aerial images. The first is an algorithm for automatically matching line segments over multiple images. The algorithm employs geometric constraints based on the multi-view geometry together with photometric constraints derived from the line neighbourhood, and achieves a performance of better than 95% correct matches over three views. The second development is a method for automatically computing a piecewise planar reconstruction based on the matched lines. The novelty here is that a planar facet hypothesis can be generated from a single 3D line, using an inter-image homography applied to the line neighbourhood. The algorithm has successfully generated near-complete roof reconstructions from multiple images. This work was carried out as part of the EC IMPACT project, a summary of which is included.
Article
At head of title: Deutsche Geodätische Kommission bei der Bayerischen Akademie der Wissenschaften. Vita. Thesis (Dr.-Ing.)--Rheinische Friedrich-Wilhelms-Universität, 1997. Includes bibliographical references (p. 144-150).
Article
A new method for semi-automatic building extraction, together with a concept for storing building models alongside terrain data in a topographical information system (TIS), is presented. A user interface based on Constructive Solid Geometry is combined with an internal data structure based completely on boundary representation. Each building can be decomposed into a set of simple primitives that are reconstructed individually. After selecting a primitive from a database of common building shapes, the primitive parameters can be modified by interactive measurement in digital images in order to provide approximate values for automatic fine measurement. In all phases, the properties of the boundary models are directly connected to parameter estimation: the parameters of the building primitives are determined in a hybrid adjustment of camera coordinates and fictitious observations of points situated on building faces. Automatic fine measurement is an application of a general framework for object surface reconstruction using hierarchical feature-based object space matching. The integration of object space into the matching process is achieved by the new modeling technique. The management of both building and terrain data in a TIS is based on a unique principle: meta data are managed in a relational database, whereas the actual data are treated as binary large objects. The new method is evaluated in a test project (image scale 1:4500, 70% overlap, 50% side lap). The automatic tool gives results with an accuracy of ±2-5 cm in planimetric position and ±5-10 cm in height.
Dreidimensionale Gebäuderekonstrutkion aus digitalen Oberflächenmodellen und Grundrissen [Three-Dimensional Building Reconstruction from Digital Surface Models and Ground Plans
  • C Brenner
C. Brenner, Dreidimensionale Gebäuderekonstrutkion aus digitalen Oberflächenmodellen und Grundrissen [Three-Dimensional Building Reconstruction from Digital Surface Models and Ground Plans], doctoral dissertation, DGK-C 530, Inst. Photogrammetry, Stuttgart Univ., 2000.
Gebäudeerfassung aus digitalen Oberflächenmodellen [Building Extraction from Digital Surface Models]
  • U Weidner
A New Method for Building Extraction in Urban Areas from High-Resolution Lidar Data, Int'l Archives Photogrammetry and Remote Sensing
  • F Rottensteiner
  • C Briese
3D Building Model Reconstruction from Point Clouds and Ground Plans, Int'l Archives Photogrammetry and Remote Sensing
  • G Vosselman
  • S Dijkman
Urban GIS from Laser Altimeter and 2D Map Data, Int'l Archives Photogrammetry and Remote Sensing
  • N Haala
  • C Brenner
  • K H Anders
Automatic Line Matching and 3D Reconstruction of Buildings from Multiple Views, Int'l Archives Photogrammetry and Remote Sensing
  • C Baillard
Extraktion polymorpher Bildstrukturen und ihre topologische und geometrische Gruppierung [Extraction of Polymorphic Image Structures and their Topologic and Geometric Grouping], doctoral dissertation, DGK-C 502
  • C Fuchs