
Real-Time Volumetric Tests Using Layered Depth Images



Presentation of Research Paper "Real-Time Volumetric Tests Using Layered Depth Images"
Real-Time Volumetric Tests
Using Layered Depth Images
Hasso Plattner Institute,
University of Potsdam,
Matthias Trapp, Jürgen Döllner
EG 2008 :: 17th April
Real-Time Volumetric Tests Using Layered Depth Images :: Matthias Trapp 2
Layered Depth Images on GPU
Volumetric Parity Test
Volumetric Test…
…determines whether a 3D point is inside or outside a given volume
Areas of application:
Generalized clipping,
Rendering with hybrid styles,
GPU collision detection,…
Performed in shader program
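In one dimension, the test reduces to counting how many boundary surfaces lie at or behind the query point along a ray; each such surface toggles the parity, and an odd count means inside. A minimal CPU sketch of this principle (the function `parity_test` and the slab example are illustrative, not from the paper):

```python
def parity_test(point_depth, surface_depths, init_parity=False):
    """Toggle parity for every boundary surface at or beyond the point.

    point_depth    -- depth of the query point along the ray
    surface_depths -- depths at which the ray pierces the volume boundary
    init_parity    -- False = outside (even number of surfaces behind)
    """
    parity = init_parity
    for d in surface_depths:
        if point_depth <= d:
            parity = not parity
    return parity

# A solid slab occupying depths [0.3, 0.6] along the ray:
slab = [0.3, 0.6]
print(parity_test(0.45, slab))  # True: inside the slab
print(parity_test(0.10, slab))  # False: in front of it
print(parity_test(0.90, slab))  # False: behind it
```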
Real-Time Volumetric Tests
Ingredients: Data structure + algorithm
Volume representation of polygonal shapes
Hardware accelerated data structure
Should give a sufficient approximation
Layered Depth Image
Volumetric test for arbitrary 3D points
Applicable in shader programs,
Fast and efficient implementation
Volumetric Parity Test
Layered Depth Images on GPU
Layered Depth Images (LDI)
[Shade 1998]
GPU-friendly representation of LDIs:
Depth maps = layers of unique depth complexity
3D texture or 2D texture array of depth maps
Texture format: 32-bit floating point
LDI Example
Depth Layers LDI = (LDI0, …, LDI7)
3D LDI Texture Space [0,1]³
Non-Convex Polygonal Mesh S with d = 7
3D World Space ℝ³
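The mapping between the two spaces is the unit-cube normalization over the shape's axis-aligned bounding box; a sketch (the function name and the example box are hypothetical):

```python
def world_to_ldi(p, aabb_min, aabb_max):
    """Map a world-space point into LDI texture space [0,1]^3,
    assuming the LDI was peeled over the shape's axis-aligned
    bounding box spanning aabb_min..aabb_max."""
    return tuple((pc - lo) / (hi - lo)
                 for pc, lo, hi in zip(p, aabb_min, aabb_max))

# A mesh bounded by (-2, 0, -2)..(2, 4, 2):
print(world_to_ldi((0.0, 2.0, 1.0), (-2, 0, -2), (2, 4, 2)))  # (0.5, 0.5, 0.75)
```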
Depth-Peeling to 3D Texture
1. Scale shape into unit volume
2. Set orthographic projection, adjust near/far planes
3. Determine depth complexity of shape
4. Create and initialize 3D texture (LDI)
5. Depth-peel shape [Everitt 2001]
Render-to-texture (slice of 3D texture)
Use linear depth buffer values [Lapidous 1999]
uniform sampler3D LDI;
uniform int pass;
varying float linearDepth;

void main(void)
{   // 2nd depth test: peel away fragments at or in front of the layer
    // extracted in the previous pass
    if((pass > 0) && (linearDepth <= texelFetch3D(LDI, ivec3(gl_FragCoord.xy, pass - 1), 0).x))
        discard;
    gl_FragDepth = linearDepth;
}

GLSL fragment shader for the 2nd depth test (SM4)
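The per-pixel effect of the peeling loop can be mimicked on the CPU: each pass keeps the nearest fragment strictly behind the previously extracted layer. A sketch under that assumption (`depth_peel` and its arguments are illustrative):

```python
def depth_peel(fragment_depths, num_layers, far=1.0):
    """Emulate per-pixel depth peeling: pass k keeps the nearest
    fragment strictly behind the layer extracted in pass k-1.

    fragment_depths -- unordered linear depths of all fragments at one pixel
    Returns num_layers depths; 'far' fills layers past the depth complexity.
    """
    layers = []
    prev = -1.0  # no previous layer before the first pass
    for _ in range(num_layers):
        behind = [d for d in fragment_depths if d > prev]  # 2nd depth test
        nearest = min(behind) if behind else far           # regular z-test
        layers.append(nearest)
        prev = nearest
    return layers

print(depth_peel([0.7, 0.2, 0.5], num_layers=4))  # [0.2, 0.5, 0.7, 1.0]
```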
Volumetric Parity Test (VPT)
“to determine if a point is inside or outside
a complex 3D volume represented by an LDI”
Given: an arbitrary 3D point
Requested: its Boolean parity
1. Transform the point into LDI texture space
2. Perform ray-marching through the depth maps
Ray-Marching in LDI Texture Space
1. Construct a ray
2. Sample from each slice
3. Compare depth values
Figure: ray R through texture space from (Ts, Tt, 0) to (Ts, Tt, 1); the parity pT toggles between 0 and 1 at each depth-layer crossing.
Efficient Shader Implementation
bool volumetricParityTestSM4(
in vec3 T, // point in LDI texture space
in sampler3D LDI, // layered depth image
in int depth, // number of LDI slices
in bool initParity) // initial parity; true = outside
{
bool parity = initParity;
// for each texture layer do
for(int i = 0; i < depth; i++)
{ // depth test: does layer i lie at or behind the point?
if(T.z <= texelFetch3D(LDI, ivec3(ivec2(T.st * vec2(textureSize3D(LDI, 0).st)), i), 0).x)
parity = !parity; // swap parity
}
return parity;
}
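A CPU reference of the same marching loop, with the texel lookup abstracted to a per-texel list of layer depths (function name and data are illustrative):

```python
def volumetric_parity_test(t, ldi, init_parity=True):
    """CPU reference of the VPT: t is a point in LDI texture space,
    ldi[i] is the linear depth stored at the point's texel in slice i.
    init_parity True = outside."""
    parity = init_parity
    for layer_depth in ldi:
        if t[2] <= layer_depth:   # depth test against layer i
            parity = not parity   # swap parity
    return parity

# Two-layer LDI of a solid slab spanning depths [0.25, 0.75] at this texel:
ldi = [0.25, 0.75]
print(volumetric_parity_test((0.5, 0.5, 0.5), ldi))  # False: inside
print(volumetric_parity_test((0.5, 0.5, 0.9), ldi))  # True: outside
```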
Memory consumption
Depth-peeling is costly
Limited number of LDI layers
VPT is fill-limited
Under-sampling artifacts
Aliasing artifacts
Conclusion & Future Work
Take away:
Volumetric test in real-time
Based on LDI
Different applications
Sampling artifacts
Future work:
LDI compression
Optimal viewpoint selection
Solve sampling artifacts
Ray-LDI intersection test
Main References
[Lapidous 1999]
LAPIDOUS E., JIAO G.: Optimal Depth Buffer for Low-Cost Graphics Hardware. In HWWS '99 (New York, NY, USA, 1999), ACM, pp. 67–73.
[Shade 1998]
SHADE J., GORTLER S., HE L.-W., SZELISKI R.: Layered Depth Images. In SIGGRAPH '98 (New York, NY, USA, 1998), ACM, pp. 231–242.
[Everitt 2001]
EVERITT C.: Interactive Order-Independent Transparency. Tech. rep., NVIDIA Corporation, 2001.