Project

Point Clouds

Goal: Technology related to managing, processing, and visualizing massive 3D and 3+1D point clouds.

Updates: 0
Recommendations: 0
Followers: 24
Reads: 233

Project log

Matthias Trapp
added a research item
Presentation of research paper "A Non-Photorealistic Rendering Technique for Art-directed Hatching of 3D Point Clouds".
Matthias Trapp
added a research item
Presentation of paper "Interactive Editing of Voxel-Based Signed Distance Fields" presented at the 30th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG 2022).
Matthias Trapp
added a research item
Signed distance functions computed in discrete form from given RGB-D data as regular voxel grids can represent manifold shapes as the zero crossing of a trivariate function; the corresponding meshes can be derived by the Marching Cubes algorithm. However, 3D models automatically reconstructed in this way often contain irrelevant objects or artifacts, such as holes or noise, due to erroneous scan data and error-prone reconstruction processes. This paper presents an approach for interactive editing of signed distance functions, derived from RGB-D data in the form of regular voxel grids, that enables the manual refinement and enhancement of reconstructed 3D geometry. To this end, we combine concepts known from constructive solid geometry, where complex models are created from simple base shapes, with the voxel-based representation of geometry reconstructed from real-world scans. Our approach can be implemented entirely on GPU to enable real-time interaction. Further, we present how to implement high-level operators, such as copy, move, and unification.
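At its core, the CSG-style editing described above reduces to simple per-voxel operations on the distance values: union is a pointwise minimum, and subtraction a maximum against the negated operand. A minimal NumPy sketch of these operators follows; the grid resolution, sphere shapes, and function names are illustrative assumptions, not the paper's GPU implementation:

```python
import numpy as np

def sphere_sdf(grid_pts, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    return np.linalg.norm(grid_pts - center, axis=-1) - radius

def csg_union(a, b):
    # Union of two SDFs is the pointwise minimum.
    return np.minimum(a, b)

def csg_subtract(a, b):
    # Subtracting shape b from shape a: intersect a with the complement of b.
    return np.maximum(a, -b)

# Build a small regular voxel grid over [-1, 1]^3.
n = 32
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
pts = np.stack([x, y, z], axis=-1)

scene = sphere_sdf(pts, np.zeros(3), 0.6)
hole = sphere_sdf(pts, np.array([0.4, 0.0, 0.0]), 0.3)
edited = csg_subtract(scene, hole)  # carve a hole out of the reconstructed shape
```

Because each voxel is processed independently, the same operators map directly onto a per-voxel GPU kernel, which is what makes real-time interaction feasible.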
Matthias Trapp
added a research item
Point clouds or point-based geometry of varying density can nowadays be easily acquired using LiDAR cameras or modern smartphones with LiDAR sensors. We demonstrate how this data can be used directly to create novel artistic digital content using Non-Photorealistic Rendering techniques. We introduce a GPU-based technique for art-directable NPR rendering of 3D point clouds at interactive frame rates. The technique uses either a subset or all of the points to generate oriented, sketchy strokes by taking local curvature and normal information into account. It uses X-Toon textures as part of its parameterization, supports hatching and cross hatching, and is inherently temporally coherent with respect to virtual camera movements. This introduces significant artistic freedom that is underlined by our results, which show that a variety of different sketchy styles such as colored crayons, pencil, pointillism, wax crayons, blue print, and chalk-drawings can be achieved on a wide spectrum of point clouds, i.e., ranging from point clouds sampled from 3D polygonal meshes to iPad-based LiDAR scans.
Matthias Trapp
added 2 research items
Integration and analysis of real-time and historic sensor data provides important insights into the operational status of buildings. There is a need for the integration of sensor data and digital representations of the built environment for furthering stakeholder engagement within the realms of Real Estate 4.0 and Facility Management (FM), especially in a spatial representation context. In this paper, we propose a general system architecture that integrates point cloud data and sensor data for visualization and analysis. We further present a prototypical web-based implementation of that architecture and demonstrate its application for the integration and visualization of sensor data from a typical office building, with the aim to communicate and analyze occupant comfort. The empirical results obtained from our prototypical implementation demonstrate the feasibility of our approach for the provisioning of light-weight software components for the service-oriented integration of Building Information Modeling (BIM), Building Automation Systems (BASs), Integrated Workplace Management Systems (IWMSs), and future Digital Twin (DT) platforms.
Matthias Trapp
added 3 research items
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used to primarily track changes of objects over time for comparison, allowing for routine classification, and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a services-oriented methodology.
Advances versus adaptation of Industry 4.0 practices in Facility Management (FM) have created usage demand for up-to-date digitized building assets. The use of Building Information Modelling (BIM) for FM in the Operation and Maintenance (O&M) stages of the building lifecycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing and forecasting the state of the built environment in real-time. To accommodate this, BIM data needs to be constantly updated with the current state of the built environment. However, generation of as-is BIM data for a digital representation of a building is a labor-intensive process. While some software applications offer a degree of automation for the generation of as-is BIM data, they can be impractical to use for routinely updating digital FM documentation. Current approaches for capturing the built environment using remote sensing and photogrammetry-based methods allow for the creation of 3D point clouds that can be used as basis data for a Digital Twin (DT), along with existing BIM and FM documentation. 3D point clouds themselves do not contain any semantics or specific information about the building components they represent physically, but using machine learning methods they can be enhanced with semantics that would allow for reconstruction of as-is BIM and basis DT data. This paper presents current research and development progress of a service-oriented platform for generation of semantically rich 3D point cloud representations of indoor environments. A specific focus is placed on the reconstruction and visualization of the captured state of the built environment for increasing FM stakeholder engagement and facilitating collaboration. The preliminary results of a prototypical web-based application demonstrate the feasibility of such a platform for FM using a service-oriented paradigm.
We present a method for generating approximate 2D and 3D floor plans derived from 3D point clouds. The plans are approximate boundary representations of built indoor structures. The algorithm slices the 3D point cloud, combines concave primary boundary shape detection and regularization algorithms as well as k-means clustering for detection of secondary boundaries. The algorithm can also generate 3D floor plan meshes by extruding 2D floor plans. The experimental results demonstrate that approximate 2D vector-based and 3D mesh-based floor plans can be efficiently created within a given accuracy for typical indoor 3D point clouds. The described approach allows for generation of up-to-date, on-the-fly floor plan representations. The presented approach is implemented as a client-side web application, thus making it adaptable as a lightweight solution or component for service-oriented use. The generated floor plan approximations can be used as enhancing elements for a multitude of applications across various Architecture, Engineering and Construction domains.
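The slicing-and-clustering step can be illustrated with a small sketch, assuming NumPy and a deterministic mini k-means standing in for the paper's concave boundary detection and regularization stages; all function names are hypothetical:

```python
import numpy as np

def slice_points(points, z_min, z_max):
    # Keep points whose height lies inside a horizontal slab and
    # project them to 2D by dropping the z coordinate.
    mask = (points[:, 2] >= z_min) & (points[:, 2] <= z_max)
    return points[mask, :2]

def kmeans_2d(pts, k, iters=20):
    # Minimal k-means for grouping slice points into boundary clusters;
    # deterministic initialization spreads centers across the point array.
    centers = pts[np.linspace(0, len(pts) - 1, k).astype(int)]
    for _ in range(iters):
        d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        centers = np.array([pts[labels == i].mean(axis=0) for i in range(k)])
    return labels, centers
```

Each resulting cluster approximates a wall segment of the slice; fitting and regularizing line segments per cluster, then extruding them vertically, yields the 2D and 3D floor plan geometry described above.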
Sören Discher
added 2 research items
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
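Level-of-detail schemes of this kind typically refine a spatial hierarchy until each node's point spacing projects below a pixel-error budget on screen. A minimal sketch of that selection logic, assuming a simple dict-based hierarchy and a per-node camera-distance callback; the names and the exact error metric are illustrative, not the system's actual implementation:

```python
import math

def screen_space_error(node_spacing, node_distance, fov_y, viewport_h):
    # Project a node's point spacing to pixels at its distance from the camera.
    return (node_spacing / node_distance) * (viewport_h / (2.0 * math.tan(fov_y / 2.0)))

def select_nodes(node, camera_dist, fov_y, viewport_h, threshold_px):
    # Refine into children while the node's projected spacing is too coarse;
    # leaves are rendered at their native density.
    err = screen_space_error(node["spacing"], camera_dist(node), fov_y, viewport_h)
    if err <= threshold_px or not node["children"]:
        return [node]
    selected = []
    for child in node["children"]:
        selected += select_nodes(child, camera_dist, fov_y, viewport_h, threshold_px)
    return selected
```

Only the selected nodes need to be fetched from out-of-core storage, which is what lets a WebGL client navigate point clouds far larger than client memory.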
Real-time rendering for 3D point clouds allows for interactively exploring and inspecting real-world assets, sites, or regions on a broad range of devices but has to cope with their vastly different computing capabilities. Virtual reality (VR) applications rely on high frame rates (i.e., around 90 fps as opposed to 30-60 fps) and show high sensitivity to any kind of visual artifacts, which are typical for 3D point cloud depictions (e.g., holey surfaces or visual clutter due to inappropriate point sizes). We present a novel rendering system that allows for an immersive, nausea-free exploration of arbitrarily large 3D point clouds on state-of-the-art VR devices such as HTC Vive and Oculus Rift. Our approach applies several point-based and image-based rendering techniques that are combined using a multipass rendering pipeline. The approach does not require deriving generalized, mesh-based representations in a preprocessing step and preserves precision and density of the raw 3D point cloud data. The presented techniques have been implemented and evaluated with massive real-world data sets from aerial, mobile, and terrestrial acquisition campaigns containing up to 2.6 billion points to show the practicability and scalability of our approach.
Sören Discher
added a research item
Today, landscapes, cities, and infrastructure networks are commonly captured at regular intervals using LiDAR or image-based remote sensing technologies. The resulting point clouds, representing digital snapshots of the reality, are used for a growing number of applications, such as urban development, environmental monitoring, and disaster management. Multi-temporal point clouds, i.e., 4D point clouds, result from scanning the same site at different points in time and open up new ways to automate common geoinformation management workflows, e.g., updating and maintaining existing geodata such as models of terrain, infrastructure, buildings, and vegetation. However, existing GIS are often limited by processing strategies and storage capabilities that generally do not scale for massive point clouds containing several terabytes of data. We demonstrate and discuss techniques to manage, process, analyze, and provide large-scale, distributed 4D point clouds. All techniques have been implemented in a system that follows service-oriented design principles, thus maximizing its interoperability and allowing for a seamless integration into existing workflows and systems. A modular service-oriented processing pipeline is presented that uses out-of-core and GPU-based processing approaches to efficiently handle massive 4D point clouds and to reduce processing times significantly. With respect to the provision of analysis results, we present web-based visualization techniques that apply real-time rendering algorithms and suitable interaction metaphors. Hence, users can explore, inspect, and analyze arbitrarily large and dense point clouds. The approach is evaluated based on several real-world applications and datasets featuring different densities and characteristics. Results show that it enables the management, processing, analysis, and distribution of massive 4D point clouds as required by a growing number of applications and systems.
Matthias Trapp
added a research item
We present a set of techniques for the combined and comparative visualization of 3D model geometry extracted from Building Information Models (BIM) and corresponding point clouds. It addresses the steady need to validate, update and combine BIM, in particular based on in-situ captured point clouds, throughout the whole lifecycle of buildings and facilities. To assess the present as-built interior and exterior in comparison to the as-designed or as-documented building representations, our techniques allow for deviation analysis and visualization, which serve as an effective method for enhancing stakeholder engagement. For example, Facility Management (FM) stakeholders can use deviation analysis and visualization to identify, inspect and monitor any spatial alterations both for interior and exterior building parts. Visualized instantaneous deviations can inform stakeholders of further need for investigation; they may not even have architecture, engineering and construction (AEC) expertise or access to BIM software. We describe a prototypical implementation that demonstrates the application of comparative deviation analysis and visualization. Finally, we discuss how the visualization output can provide a tool for a variety of stakeholders to improve applications and workflows for FM.
Rico Richter
added 2 research items
Remote sensing methods, such as LiDAR and image-based photogrammetry, are established approaches for capturing the physical world. Professional and low-cost scanning devices are capable of generating dense 3D point clouds. Typically, these 3D point clouds are preprocessed by GIS and are then used as input data in a variety of applications such as urban planning, environmental monitoring, disaster management, and simulation. The availability of area-wide 3D point clouds will drastically increase in the future due to the availability of novel capturing methods (e.g., driver assistance systems) and low-cost scanning devices. Applications, systems, and workflows will therefore face large collections of redundant, up-to-date 3D point clouds and have to cope with massive amounts of data. Hence, approaches are required that will efficiently integrate, update, manage, analyze, and visualize 3D point clouds. In this paper, we define requirements for a system infrastructure that enables the integration of 3D point clouds from heterogeneous capturing devices and different timestamps. Change detection and update strategies for 3D point clouds are presented that reduce storage requirements and offer new insights for analysis purposes. We also present an approach that attributes 3D point clouds with semantic information (e.g., object class category information), which enables more effective data processing, analysis, and visualization. Out-of-core real-time rendering techniques then allow for an interactive exploration of the entire 3D point cloud and the corresponding analysis results. Web-based visualization services are utilized to make 3D point clouds available to a large community. The proposed concepts and techniques are designed to establish 3D point clouds as base datasets, as well as rendering primitives for analysis and visualization tasks, which allow operations to be performed directly on the point data. Finally, we evaluate the presented system, report on its applications, and discuss further research challenges.
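The change-detection idea for point clouds captured at different timestamps can be sketched by comparing voxel occupancy between two epochs: cells present only in the newer scan are additions, cells present only in the older one are removals. The cell size and function names below are illustrative assumptions, not the system's actual method:

```python
import numpy as np

def voxel_keys(points, cell=0.5):
    # Quantize 3D points into integer voxel-grid coordinates.
    return set(map(tuple, np.floor(points / cell).astype(int)))

def detect_changes(old_pts, new_pts, cell=0.5):
    # Classify voxels by occupancy in the old and new epoch.
    old_v, new_v = voxel_keys(old_pts, cell), voxel_keys(new_pts, cell)
    return {
        "added": new_v - old_v,        # occupied only in the new scan
        "removed": old_v - new_v,      # occupied only in the old scan
        "unchanged": old_v & new_v,    # occupied in both epochs
    }
```

Storing only the "added" and "removed" sets per epoch, rather than full redundant scans, is one way such update strategies can reduce storage requirements.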
Rico Richter
added 5 research items
Vegetation objects represent one main compositional element of digital models of our environment required by a growing number of simulation, analysis, and visualization applications. However, a detailed representation of vegetation in 3D spatial models is generally not feasible due to the lack of up-to-date, object-based, and area-wide tree surveys and computational limits in data acquisition, storage, and visualization regarding vegetation. In this paper, we present an approach for automatic detection, categorization, and visualization of individual trees based on dense 3D point cloud analysis and efficient real-time rendering techniques such as instancing, adaptive tessellation, and vertex displacement. We have evaluated our approach for an urban area and a forest area with about 100 points/m², running real-time visualization on standard desktop hardware. The results indicate that this kind of automatic tree cadastre based on dense 3D point clouds is a practicable and cost-efficient approach to integrate area-wide, object-based vegetation models into virtual 3D landscape and 3D city models and, in particular, significantly enhance their visual appearance and their suitability for computational applications.
Sören Discher
added 2 research items
3D point clouds, which can be generated efficiently using LiDAR or photogrammetric methods, represent an essential category of geodata in the context of geoinformation systems. Real-time rendering techniques for 3D point clouds enable their interactive display and exploration. This contribution shows how semantic information, which allows points to be assigned to buildings, terrain, and vegetation, can be combined with topological information to optimize the appearance of 3D point clouds for specific tasks and use cases. The presented novel rendering technique makes it possible to adjust different rendering styles for different point categories at runtime. Category-based rendering facilitates the highlighting, and thus the recognition, of individual structures and objects; it is realized dynamically using focus-and-context visualization techniques. This makes it easier for users to grasp, analyze, and explore the structure, composition, and overall context of the area represented by a 3D point cloud.
Sören Discher
added 4 research items
The use of high-resolution, spatially overlapping, and multi-temporal 3D point clouds in the context of geoinformation systems places high demands on the performance of the underlying software and hardware systems. To enable efficient and economical work with such data in the face of ever-growing data volumes, we are developing a service-based software and geodata infrastructure that supports the acquisition, updating, and provision of 3D point clouds as a continuous process. In this contribution, we explain the fundamental requirements and the conceptual design of such an infrastructure, which among other things supports the on-demand provision of selected regions of a 3D point cloud based on semantic or temporal attributes.
3D point clouds are a digital representation of our world and used in a variety of applications. They are captured with LiDAR or derived by image-matching approaches to get surface information of objects, e.g., indoor scenes, buildings, infrastructures, cities, and landscapes. We present novel interaction and visualization techniques for heterogeneous, time variant, and semantically rich 3D point clouds. Interactive and view-dependent see-through lenses are introduced as exploration tools to enhance recognition of objects, semantics, and temporal changes within 3D point cloud depictions. We also develop filtering and highlighting techniques that are used to dissolve occlusion to give context-specific insights. All techniques can be combined with an out-of-core real-time rendering system for massive 3D point clouds. We have evaluated the presented approach with 3D point clouds from different application domains. The results show the usability and how different visualization and exploration tasks can be improved for a variety of domain-specific applications.
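A see-through lens of this kind boils down to a per-point visibility test: points that project inside the lens region and lie in front of a chosen focus depth are culled, so the occluded geometry behind them becomes visible. A minimal sketch under those assumptions; the names and the exact test are illustrative, not the paper's implementation:

```python
import numpy as np

def see_through_filter(pts_screen, depths, lens_center, lens_radius, focus_depth):
    # Hide points that project inside the lens circle AND lie in front of
    # the focus depth, revealing what they would otherwise occlude.
    inside = np.linalg.norm(pts_screen - lens_center, axis=1) < lens_radius
    occluding = depths < focus_depth
    return ~(inside & occluding)  # boolean keep-mask per point
```

Because the mask depends on the current view's screen positions and depths, it has to be re-evaluated per frame, which is why such lenses are naturally view-dependent.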
Fig. 1: Example of a massive 3D point cloud consisting of indoor and outdoor scans, explored with a see-through lens to inspect the occluded interior of the building in the context of the overall scan.
Jürgen Döllner
added a project reference
Jürgen Döllner
added a project goal
Technology related to managing, processing, and visualizing massive 3D and 3+1D point clouds.