Figure 11: Dual-depth relief textures. The combined use of front and back depth layers produces tight bounds for an object representation.
Source publication
This paper presents a technique for mapping relief textures onto arbitrary polygonal models in real time. In this approach, the mapping of the relief data is done in tangent space. As a result, it can be applied to polygonal representations of curved surfaces producing correct self-occlusions, interpenetrations, shadows and per-pixel lighting effect...
Contexts in source publication
Context 1
... texture-mapped polygons. The borders of the two polygons are shown on the left. This method, however, has not been extended to arbitrary surfaces. ElHelw and Yang [ElHelw and Yang 2003] used cylindrical versions of relief textures (i.e., cylindrical images with depth measured along the normal directions to the cylinder) to render images of endoscopic simulations. They create inside-looking-outside renditions by warping the cylindrical textures according to the viewer's position and by texture-mapping the result onto a reference cylinder. Their technique cannot be generalized to arbitrary surfaces.
We exploit the programmability of modern graphics hardware to effectively render surface details onto arbitrary polygonal surfaces. Since the rendering is performed using fragment shaders, we can also perform per-pixel shading and compute shadows. Thus, the color texture originally used to store pre-computed diffuse shading can be discarded and replaced by a normal map. Any 2D texture can be mapped onto the resulting representation. Figure 6 shows a relief texture represented by its corresponding depth and normal maps. The depth map is quantized and represented using the alpha channel of the RGBα texture used to store the normal map. This way, a single 32-bit-per-texel texture suffices to represent the structure of a relief texture. We normalize the height values to the [0, 1] range. Figure 7 shows the representation (cross-section) of such a height-field surface. From top to bottom, the depth values vary from 0.0 to 1.0.
The process of mapping relief data to a polygonal surface can be conceptually understood as follows. For each fragment to be rendered:
• compute the viewing direction (VD) as the vector from the viewer to the 3D position of the point on the polygonal surface;
• transform VD to the tangent space (defined by the tangent, normal and bi-normal vectors) associated with the current fragment;
• use VD' (the transformed VD) and A, the (s, t) texture coordinates of the fragment, to compute B, the (u, v) texture coordinates where the ray reaches the depth value 1.0 (see Figure 7);
• compute the intersection between VD' and the height-field surface using a binary search starting with A and B;
• perform the shading of the fragment using the attributes (e.g., normal, depth, color, etc.) associated with the texture coordinates of the computed intersection point.
This process is illustrated in Figure 7. Point A has an associated depth equal to zero, while B has depth equal to 1.0. At each step, one computes the midpoint of the current interval and assigns it the average depth and texture coordinates of the endpoints. In the example shown in Figure 7, the circle marked "1" represents the first midpoint. The averaged texture coordinates are used to access the depth map. If the stored depth is smaller than the computed one, the point along the ray is inside the height-field surface, as in the case of point 1 in Figure 7. The binary search proceeds with one endpoint inside and the other outside the surface. In the example shown in Figure 7, the numbers indicate the order in which the midpoints are obtained. In practice, we have found that eight steps of binary subdivision are sufficient to produce very satisfactory results. This is equivalent to subdividing the depth range of the height field into 2^8 = 256 equally spaced intervals.
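For concreteness, the following is a minimal CPU-side C++ sketch of the binary search step described above. The texel fetch is replaced by a hypothetical sampleDepthMap function (an analytic stand-in for the bilinear depth-map lookup); the struct and function names are illustrative assumptions, not the paper's shader code.

```cpp
#include <cmath>

// Illustrative texture-coordinate pair (s, t) / (u, v).
struct Vec2 { float u, v; };

// Stand-in for a bilinear fetch from the depth map stored in the alpha
// channel of the relief texture; returns a depth in [0, 1]. In the actual
// fragment shader this would be a texture lookup.
float sampleDepthMap(const Vec2& tc)
{
    return 0.5f + 0.25f * std::sin(20.0f * tc.u) * std::cos(20.0f * tc.v);
}

// Binary search for the ray/height-field intersection in texture space.
// A is the entry point of the tangent-space view ray (depth 0) and B the
// point where the ray reaches depth 1.0; eight steps correspond to
// 2^8 = 256 equally spaced depth intervals.
Vec2 binarySearch(Vec2 A, float depthA, Vec2 B, float depthB, int steps = 8)
{
    for (int i = 0; i < steps; ++i) {
        Vec2  mid      { 0.5f * (A.u + B.u), 0.5f * (A.v + B.v) };
        float midDepth = 0.5f * (depthA + depthB);

        if (sampleDepthMap(mid) < midDepth) {
            // Stored depth is smaller than the ray depth: the midpoint is
            // inside the height field, so the hit lies in the near half.
            B = mid;  depthB = midDepth;
        } else {
            // Midpoint is still above the surface: keep the far half.
            A = mid;  depthA = midDepth;
        }
    }
    return B;  // texture coordinates used to fetch the normal for shading
}
```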
Other researchers have used 64 axis-aligned, equally spaced 2D texture slices to render displacement maps using 3D textures [Meyer and Neyret 1998; Kautz and Seidel 2001]. The reader should also notice that our approach takes advantage of texture interpolation. Thus, while in techniques based on 3D texture mapping one may see in between slices, our technique does not suffer from this problem. As the depth map is treated and accessed as a texture, texture filtering (e.g., bilinear) guarantees that the height-field surface will be continuous. As a result, the proposed technique can be used to produce extreme close-up views of the surface without noticeable artifacts (see Figures 16 and 17).
The binary search procedure just described may lead to incorrect results if the viewing ray intersects the height-field surface at more than one point, as illustrated in Figure 8. In this example, the first midpoint has a depth value smaller than the one retrieved from the depth map. Since the point is above the height-field surface, the binary search would continue going deeper into the bounding box and find point 3 as the intersection, which is clearly incorrect. In order to avoid missing the first intersection, we start the process with a linear search. Beginning at point A, we step along the AB line at increments of δ times the length of AB, looking for the first point inside the surface (Figure 9). If the graphics card supports shader model 3.0, δ varies from fragment to fragment as a function of the angle between VD' and the interpolated surface normal at the fragment. As this angle grows, the value of δ decreases. In our current implementation, no more than 32 steps are taken along the segment AB. Notice that since the linear search does not involve any dependent texture accesses, this process is very fast, as we can make several texture fetches in parallel. Once the first point under the height-field surface has been identified, the binary search starts using the last point outside the surface and the current one. In this case, a smaller number of binary subdivisions is needed. For example, if the depth interval between two linearly searched points is 1/8, a six-step binary search will be equivalent to subdividing the interval into 512 (2^3 × 2^6) equally spaced intervals.
Rendering shadows is a visibility problem [Williams 1978]. Therefore, a similar procedure can be used to determine whether a fragment is lit or in shade. In this case, we check if the light ray intersects the height-field surface between point C and the actual point being shaded (Figure 10). In case an intersection exists, the point must be in shade. Notice that there is no need to find the actual intersection point, but simply to decide whether such an intersection exists, which can also be done using a similar strategy. Figures 14 and 19(c) show examples of relief renderings containing self-shadowing.
This section introduces an extension to relief textures that uses two layers of depth information. Such an extension, called dual-depth relief textures, can be used to produce approximate representations for opaque, closed-surface objects using only one relief-mapped polygon. As one tries to sample an object using a single relief texture, not enough information will be available to produce a proper reconstruction. In particular, no information will exist about what lies behind the object (Figure 11, left).
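Continuing the same hedged C++ sketch (reusing Vec2, sampleDepthMap and binarySearch from the block above), the combined linear/binary search and the boolean shadow query described in this excerpt could look roughly as follows; step counts and helper names are assumptions for illustration, not the original implementation.

```cpp
// Linear search along A->B for the first sample below the height field,
// followed by a short binary refinement of the bracketing interval.
Vec2 findFirstIntersection(Vec2 A, Vec2 B,
                           int linearSteps = 32, int binarySteps = 6)
{
    Vec2  prev      = A;
    float prevDepth = 0.0f;

    for (int i = 1; i <= linearSteps; ++i) {
        float t = float(i) / float(linearSteps);             // delta * i
        Vec2  p { A.u + t * (B.u - A.u), A.v + t * (B.v - A.v) };

        if (sampleDepthMap(p) < t) {
            // First point under the surface found: refine between the last
            // point outside (prev) and the current point inside (p).
            return binarySearch(prev, prevDepth, p, t, binarySteps);
        }
        prev = p;  prevDepth = t;
    }
    return B;  // no intersection found along the segment
}

// Shadow query from the light-ray entry point C towards the shaded point P
// (at depth depthP): only the existence of an occluder matters, not where
// the intersection lies.
bool inShadow(Vec2 C, Vec2 P, float depthP, int steps = 32)
{
    for (int i = 1; i < steps; ++i) {
        float t = float(i) / float(steps);
        Vec2  q { C.u + t * (P.u - C.u), C.v + t * (P.v - C.v) };
        if (sampleDepthMap(q) < t * depthP)
            return true;   // the light ray enters the height field first
    }
    return false;
}
```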
In these cases, inverse rendering techniques may extend the ends of these surfaces, forming "skins" [McMillan 1997]. The occurrence of skins can be eliminated with the use of one extra layer of depth that represents the back of the object (Figure 11, center). The combined effect of the two depth layers produces a much tighter boundary for the object (Figure 11, right) and leads to better-quality renderings. Notice that this representation is not exactly a layered-depth image (LDI) [Shade et al. 1998]: the two layers of depth are computed as orthographic distances measured with respect to one of the faces of the depth bounding box, and no color information is stored. Moreover, the second depth layer is not used directly for rendering, but for constraining the search for ray-height-field intersections. Like other impostor techniques, this representation is not intended to be seen from arbitrary viewpoints. However, we show that it can be used over a quite large range of angles.
The two depth maps and the normals can be stored in a single texture. Since all normals are unit length, we can store only the x and y components in the normal map, using the other two channels to represent the two depth layers. The z component of the normal can be recovered in the shader as z = √(1 − (x² + y²)). Figure 12 shows dual-depth maps for two models: angel (top) and Christ (bottom). The depth values of both layers are defined with respect to the same reference plane. In Figure 12, the maps on the left represent the front of the object, while the ones on the right represent the back surface. The rendering process using two depth layers is similar to what was described in Section 3. In this case, however, a point is considered inside the represented object if front depth ≤ point depth ≤ back depth ...
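As a small illustration of the storage scheme and inside test described above, the following self-contained C++ fragment assumes a hypothetical texel layout with the normal's x and y components in two channels and the front/back depth layers in the other two; the type and function names are ours, not the paper's.

```cpp
#include <algorithm>
#include <cmath>

// Assumed per-texel layout of a dual-depth relief texture: two channels for
// the normal's x and y components, two channels for the two depth layers,
// both measured from the same reference plane.
struct DualDepthTexel { float nx, ny, frontDepth, backDepth; };

// Recover the z component of the unit-length normal from its x and y
// components (clamped to avoid a negative argument from quantization).
inline float normalZ(const DualDepthTexel& t)
{
    return std::sqrt(std::max(0.0f, 1.0f - (t.nx * t.nx + t.ny * t.ny)));
}

// A ray sample at rayDepth is considered inside the represented object when
// it lies between the front and back depth layers.
inline bool insideObject(const DualDepthTexel& t, float rayDepth)
{
    return t.frontDepth <= rayDepth && rayDepth <= t.backDepth;
}
```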
Similar publications
Efficiently calculating accurate soft shadows cast by area light sources remains a difficult problem. Ray tracing based approaches are subject to noise or banding, and most other accurate methods either scale poorly with scene geometry or place restrictions on geometry and/or light source size and shape. Beam tracing is one solution which has his...
Realistic rendering of underwater scenes has been a subject of increasing importance in modern real-time 3D applications, such as open-world 3D games, which constantly present the user with opportunities to submerge oneself in an underwater environment. Crucial to the accurate recreation of these environments are the effects of caustics and godr...
When rendering materials represented by high frequency geometry such as hair, smoke or clouds, standard shadow mapping or shadow volume algorithms fail to produce good self shadowing results due to aliasing. Moreover, in all of the aforementioned examples, properly approximating self shadowing is crucial to getting realistic results. To cope...
Gaseous phenomena such as clouds, fog, and mist have been difficult to render in realistic monoscopic imaging environments. Such phenomena are transparent, cast shadows, have dynamic behavior, and are of variable density. This paper describes a method based on splatting, billboarding, and alpha-blending that works well in a realistic real-time ster...
Real-time rendering of a complete 3D scene with hatching strokes is an important direction in the NPR field. In this paper, a comprehensive solution is presented to render complicated scenes with pen-and-ink style in real time. With the help of powerful programmable graphics hardware, our real-time system includes many features such as hatching, continuous t...
Citations
... A suite of techniques has been proposed as approximations to displacement mapping, including parallax mapping [KTI*01], relief mapping [OBM00,POC05], view-dependent displacement mapping (VDM) [WWT*03], and generalized displacement maps (GDM) [WTL*04, YZX*04]. These methodologies are fundamentally geometric and reliant on heightfields, overlooking the reflective effects emanating from the intricate material geometry. ...
Neural reflectance models are capable of reproducing the spatially‐varying appearance of many real‐world materials at different scales. Unfortunately, existing techniques such as NeuMIP have difficulties handling materials with strong shadowing effects or detailed specular highlights. In this paper, we introduce a neural appearance model that offers a new level of accuracy. Central to our model is an inception‐based core network structure that captures material appearances at multiple scales using parallel‐operating kernels and ensures multi‐stage features through specialized convolution layers. Furthermore, we encode the inputs into frequency space, introduce a gradient‐based loss, and apply it adaptively according to the progress of the learning phase. We demonstrate the effectiveness of our method using a variety of synthetic and real examples.
... Relief mapping: Another implemented approach to represent surface details without increasing geometry complexity is Relief Mapping [119]. ...
... The relief mapping is compared to the 125000 triangle grid. Based on recommended values, the following relief mapping parameters were tested: linear search steps: 15, binary search steps: 7, depth threshold: 0.996 [119]. ...
... 9.: A single quad rendered from the same perspective with: (a) Normal mapping, (b) Relief mapping [119]. ...
Humanity has always strived to learn more about the origins of our neighboring celestial bodies. With the help of modern rover systems, unknown areas are explored through scientific measurements. With increasingly better sensors, this data becomes more extensive and complex, creating an evident need for new and improved tools. These tools should support the scientists in the collaborative analysis of the recorded measurements. Scientists from different disciplinary backgrounds work together on this analysis. Exploring the data can be made more efficient with the help of intuitive visualization, interaction, and collaborative tools. At the same time, misunderstandings among the experts can be minimized. This thesis investigates how modern augmented reality approaches can support the process of collaborative rover data analysis. Three main aspects are considered: the three-dimensional visualization of high-resolution terrain data, the visualization and interaction with rover data, and the integration of multi-user collaboration tools for the collaborative discussion. A mobile augmented reality device, the Microsoft HoloLens 2, is used to input, output, and process the data. In order to evaluate the implemented visualization and interaction concepts, an expert interview and several experiments for a user study are prepared in this work. Due to the current COVID-19 pandemic restrictions, both interview and user study could not be conducted. Based on promising informal preliminary user tests, potential improvements of the presented concepts are discussed.
... Partial-occlusion effects emerge naturally from ray casting. One possible way of performing ray casting against an RGB-D image is to use a fragment-shader-based ray-height-field intersection solution performed in texture space [20,21]. This would, however, require casting a cone of rays for each individual output pixel, which would hurt performance. ...
We present a real-time technique for simulating accommodation and low-order aberrations (e.g., myopia, hyperopia, and astigmatism) of the human eye. Our approach models the corresponding point spread function, producing realistic depth-dependent simulations. Real-time performance is achieved with the use of a novel light-gathering tree data structure, which allows us to approximate the contributions of over 300 samples per pixel under 6 ms per frame. For comparison, with the same time budget, an optimized ray tracer exploring specialized hardware acceleration traces two samples per pixel. We demonstrate the effectiveness of our approach through a series of qualitative and quantitative experiments on images with depth from real environments. Our results achieved SSIM values ranging from 0.94 to 0.99 and PSNR ranging from 32.4 to 43.0 in objective evaluations, indicating good agreement with the ground truth.
... A billboard, or planar impostor, is made of one single plane, and its whole appearance is encapsulated in (the maps of) its material, with its perceived shape being expressed by its silhouette, reproduced using transparency. The extreme simplicity of a billboard's geometry allows investing more resources in shading, with its associated material containing information about the normal field of the original geometry, and even the depth component leveraged by relief mapping techniques [POC05]. ...
Dense dynamic aggregates of similar elements are frequent in natural phenomena and challenging to render under full real-time constraints. The optimal representation to render them changes drastically depending on the distance at which they are observed, ranging from sets of detailed textured meshes for near views to point clouds for distant ones. Our multiscale representation uses impostors to achieve the mid‐range transition from mesh‐based to point‐based scales. To ensure a visual continuum, the impostor model should match as closely as possible the mesh on one side, and reduce to a single pixel response that equals point rendering on the other. In this paper, we propose a model based on rich spherical impostors, able to combine precomputed as well as dynamic procedural data, and offering seamless transitions from close instanced meshes to distant points. Our approach is architectured around an on‐the‐fly discrimination mechanism and intensively exploits the rough spherical geometry of the impostor proxy. In particular, we propose a new sampling mechanism to reconstruct novel views from the precomputed ones, together with a new conservative occlusion culling method, coupled with a two‐pass rendering pipeline leveraging early‐Z rejection. As a result, our system scales well and is even able to render sand, while supporting completely dynamic stackings.
... • Sound as a Volume Texture: There are many works on creating deformed surfaces using different types of maps and modifiers integration to represent a genuine detailed surface extending the concept of volumetric texture [20], displacement and bump maps [21]. • 3D Fabrication of Sound: Therefore, sound-structure texture explores ways to model and fabricate 2D images [22] using maps with Mantaflow physics [23], creating a volumetric surface that generates fine detailed features [24]. The prototypes explore different printable materials, e.g. ...
... Approximations: Instead of accurate calculation, approximations for intersection points or shading information can be exploited to further accelerate ray tracing. For instance, Policarpo et al. (2005) proposed an efficient surface-ray-intersection algorithm for heightfields that is based on a combination of uniform and binary search instead of an exact calculation. Though this can cause visible artifacts, the problem was later solved through relaxed cone stepping (Policarpo and Oliveira 2007). ...
... Since an exact calculation of intersection points between a ray and a surface patch of the heightfield is time-consuming, we approximate the intersection using uniform and binary search (Policarpo et al. 2005). An advantage of this method is the abstraction from the real structure of the patch. ...
Visualizing spatial data within 3D terrain facilitates the exploration and comprehension of a large amount of information together with its natural frame of reference. The design of suitable visualizations, however, is challenging due to the continuously increasing complexity of today's data sets and the multifaceted requirements of the data, the task at hand, and the available hardware. Typically, visualizations are tailored to a fixed set of demands. However, scalability of visualizations in this regard has recently become a highly relevant topic.
The goal of this thesis is to provide scalable visualizations that flexibly adapt to changing demands. To this end, a new systematization of visual designs towards the dimensionality of the data representation is proposed and a comprehensive study of available design choices for varying data aspects, including numerical data, uncertainty, and terrain models, is conducted. On that basis, novel concepts that allow scaling the visualization according to the task at hand are developed. On the one hand, using the different, interchangeably applicable design options enables the prioritization of the relevant information depending on the current task. The application of Focus&Context on data in 3D terrain, on the other hand, allows emphasizing information based on its spatial and attributive characteristics. By utilizing a perception-based evaluation strategy, it becomes feasible to assess whether the visual design communicates the relevant information most prominently.
Eventually, the described concepts have been implemented into a sophisticated framework called TedaVis, whose flexible and modular architecture enables scalability regarding different hardware conditions. This tool facilitates the interactive design and application of dynamic visualizations of spatial data in 3D terrain in the context of avionic applications.
... The software running on each Odroid receives the color and depth images from the remote system to produce a relief map. The rendering software then renders a viewport at 720p for each projector by calculating the offset of its projector in this relief map [35], at ~10 FPS. This image is subsequently mapped and projected onto the drone surface. ...
... One nice benefit of this method is that the hologram appears rendered in the correct location regardless of the exact location of the projection surface. Colour and depth data of the remote head are applied to an OpenGL rectangle which is rendered in 3D using a relief mapping shader, in a method similar to Policarpo et al. [35]. Figure 5 shows the remote capture room. ...
LightBee is a novel "hologrammatic" telepresence system featuring a self-levitating light field display. It consists of a drone that flies a projection of a remote user's head through 3D space. The movements of the drone are controlled by the remote user's head movements, offering unique support for non-verbal cues, especially physical proxemics. The light field display is created by a retro-reflective sheet that is mounted on the cylindrical quadcopter. 45 smart projectors, one per 1.3 degrees, are mounted in a ring, each projecting a video stream rendered from a unique perspective onto the retroreflector. This creates a light field that naturally provides motion parallax and stereoscopy without requiring a headset or stereo glasses. LightBee allows multiple local users to experience their own unique and correct perspective of the remote user's head. The system is currently one-directional: 2 small cameras mounted on the drone allow the remote user to observe the local scene.
... Relief mapping [11] is a texture mapping technique. It begins with a linear search to determine the interval where the first intersection is located. ...
Per-pixel Revolution Mapping is an image-based modeling and rendering (IBMR) technique. It consists of creating virtual 3D objects without polygonal meshes. This technique uses a single RGBA texture that stores the data needed to generate the revolved surface. The main problem with this technique is that the 3D revolved models are rendered without realistic surface wrinkles. In this paper, we present an improvement that enhances the realism of the 3D revolved models by combining revolution mapping and bump mapping. In order to synchronize real-time depth scaling of the microrelief with the resulting shading, we have added a scaling factor that makes it possible to have a realistic depth animation. This new technique creates very convincing 3D models with realistic-looking surface wrinkles and allows rendering at interactive frame rates.
... Frames with missing packets are discarded. The colour and depth data are applied to an OpenGL rectangle which is rendered in 3D using a relief mapping shader, in a method similar to Policarpo et al. [28]. This rotates the video based on the angular distance from the center of the projector cluster (and thus the center of the ZED camera) to the associated projector, up to ± 10º. ...
For telepresence to support the richness of multiparty conversations, it is important to convey motion parallax and stereoscopy without head-worn apparatus. TeleHuman2 is a "hologrammatic" telepresence system that conveys full-body 3D video of interlocutors using a human-sized cylindrical light field display. For rendering, the system uses an array of projectors mounted above the heads of participants in a ring around a retroreflective cylinder. Unique angular renditions are calculated from streaming depth video captured at the remote location. Projected images are retro-reflected into the eyes of local participants, at 1.3º intervals providing angular renditions simultaneously for left and right eyes of all onlookers, which conveys motion parallax and stereoscopy without head-worn apparatus or head tracking. Our technical evaluation of the angular accuracy of the system demonstrates that the error in judging the angle of a remote arrow object represented in TeleHuman2 is within 1 degree, and not significantly different from similar judgments of a collocated arrow object.
... Relief mapping [22] is a texture mapping technique. It performs an image space search to determine the interval where the first intersection is located between the viewing ray and a 2D depth map. ...