Chapter

Progressive Rendering Using Multi-frame Sampling


Abstract

This chapter presents an approach that distributes sampling over multiple consecutive frames, thereby enabling sampling-based, real-time rendering techniques to be implemented in a straightforward, less complex way on most graphics hardware and systems. This systematic, extensible schema allows developers to handle the increasing implementation complexity of advanced real-time rendering techniques effectively, while improving responsiveness and reducing required hardware resources.
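As a concrete illustration of this schema, the following TypeScript sketch shows the per-frame accumulation that multi-frame sampling relies on; `renderFrame` and `sampleKernel` are hypothetical placeholders for an arbitrary sampling-based effect pass and its kernel, not names from the chapter.

```ts
// Minimal sketch of multi-frame accumulation: each frame contributes one
// sample per pixel, and the running average converges over consecutive frames.
type Framebuffer = Float32Array;

function accumulate(accum: Framebuffer, frame: Framebuffer, frameIndex: number): void {
  // Weighted average: after n frames, every frame has contributed 1/n.
  const weight = 1 / (frameIndex + 1);
  for (let i = 0; i < accum.length; ++i) {
    accum[i] = accum[i] * (1 - weight) + frame[i] * weight;
  }
}

// Usage (hypothetical): restart accumulation whenever the camera or scene changes.
// let frameIndex = 0;
// accumulate(accumBuffer, renderFrame(sampleKernel[frameIndex]), frameIndex++);
```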


... The low performance of mobile graphics cards renders most of the real-time approximations of today's game engines too expensive for use on the web. In this work, we use progressive rendering [Limberger et al. 2016] for interactive, yet physically-based rendering on the web. Progressive rendering distributes the rendering workload across multiple frames while showing intermediate results to the user. ...
... In comparison to rendering the effects within one frame, intermediate results can be shown to the user, which allows smoother interaction with the scene. [Limberger et al. 2016] took this idea one step further and showed how to implement order-independent transparency, depth of field, screen-space ambient occlusion, soft-shadows and more. This was later used to produce high-quality renderings of geo-referenced data [Limberger et al. 2017]. ...
... To render penumbras using progressive rendering, [Limberger et al. 2016] have shown that repeated sampling of a light source over multiple frames yields believable results. We use this approach in order to approximate physically correct shadowing. ...
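A minimal sketch of the per-frame light sampling mentioned in this excerpt, assuming a disk-shaped area light lying in the XZ plane; the golden-angle spiral is just one possible distribution and is an assumption, not necessarily what the cited work uses. Each frame renders ordinary hard shadows from one jittered light position, and accumulating the frames approximates the penumbra.

```ts
interface Vec3 { x: number; y: number; z: number; }

// Returns the light position to use for a given frame of the multi-frame sequence.
function lightSampleForFrame(center: Vec3, radius: number,
                             frameIndex: number, frameCount: number): Vec3 {
  // Golden-angle spiral: distributes frameCount samples evenly over the disk.
  const goldenAngle = Math.PI * (3 - Math.sqrt(5));
  const r = radius * Math.sqrt((frameIndex + 0.5) / frameCount);
  const phi = frameIndex * goldenAngle;
  return {
    x: center.x + r * Math.cos(phi),
    y: center.y,                      // disk assumed to lie in the XZ plane
    z: center.z + r * Math.sin(phi),
  };
}
```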
Conference Paper
This paper presents a progressive rendering approach that enables rendering of static 3D scenes, lit by physically-based environment and area lights. Multi-frame sampling strategies are used to approximate elaborate lighting that is refined while showing intermediate results to the user. The presented approach enables interactive yet high-quality rendering in the web and runs on a wide range of devices including low-performance hardware such as mobile devices. An open-source implementation of the described techniques using TypeScript and WebGL 2.0 is presented and provided. For evaluation, we compare our rendering results to both a path tracer and a physically-based rasterizer. Our findings show that the approach approximates the lighting and shadowing of the path-traced reference well while being faster than the compared rasterizer.
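One common way to distribute such lighting samples across frames is a low-discrepancy sequence; the Halton sketch below is an illustrative assumption, not necessarily the sequence used in the paper.

```ts
// Radical-inverse Halton sequence: well-distributed points in [0, 1) that can
// be consumed one per frame, so intermediate results are usable early and the
// accumulation converges without clumping.
function halton(index: number, base: number): number {
  let f = 1;
  let result = 0;
  let i = index + 1; // skip the degenerate zero sample
  while (i > 0) {
    f /= base;
    result += f * (i % base);
    i = Math.floor(i / base);
  }
  return result;
}

// Frame n could use the 2D point (halton(n, 2), halton(n, 3)) to pick a
// direction on the environment hemisphere or a position on an area light.
```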
... Even though this approach is well-known [Fuchs et al. 1985; Haeberli and Akeley 1990], it is neglected, or at least could be better exploited, in most of today's rendering systems. In this work, optimized sampling strategies for common rendering effects are discussed in detail to ease the adoption of efficient and responsive high-quality rendering [Limberger et al. 2016]. Therefore, underlying sampling characteristics such as (1) number of samples, (2) spatial or value-based distribution, (3) sample regularity and completeness, and (4) temporal convergence constraints, w.r.t. ...
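The sketch below illustrates characteristics (1) to (3) under stated assumptions: a fixed-size, stratified 2D kernel that is shuffled so that all strata are eventually covered (completeness) without visible regularity during convergence. It is an illustration only, not the kernel generation of the cited work.

```ts
// Builds gridSize² jittered samples in [0, 1)², one per stratum, then shuffles
// them so consecutive frames draw from well-separated strata.
function stratifiedShuffledKernel(gridSize: number): Array<[number, number]> {
  const samples: Array<[number, number]> = [];
  for (let y = 0; y < gridSize; ++y) {
    for (let x = 0; x < gridSize; ++x) {
      samples.push([(x + Math.random()) / gridSize, (y + Math.random()) / gridSize]);
    }
  }
  // Fisher-Yates shuffle.
  for (let i = samples.length - 1; i > 0; --i) {
    const j = Math.floor(Math.random() * (i + 1));
    [samples[i], samples[j]] = [samples[j], samples[i]];
  }
  return samples; // frame n uses samples[n % samples.length]
}
```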
Poster
In a rendering environment of comparatively sparse interaction, e.g., digital production tools, image synthesis and its quality do not have to be constrained to single frames. This work analyzes strategies for highly economical rendering of state-of-the-art effects using progressive multi-frame sampling in real time. By distributing and accumulating samples of sampling-based rendering techniques (e.g., anti-aliasing, order-independent transparency, physically-based depth of field and shadowing, ambient occlusion, reflections) over multiple frames, images of very high quality can be synthesized with unequaled resource efficiency.
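For progressive anti-aliasing, one widely used strategy is to shift the projection by a sub-pixel offset each frame so that accumulation acts as a pixel filter; the sketch below assumes a column-major (OpenGL/WebGL-style) perspective projection matrix and is an illustration, not the poster's implementation.

```ts
// Returns a copy of the projection matrix jittered by a sub-pixel offset.
// `sample` is a 2D kernel sample in [0, 1)², width/height are the viewport size.
function jitterProjection(proj: Float32Array, sample: [number, number],
                          width: number, height: number): Float32Array {
  const jittered = new Float32Array(proj); // keep the original matrix intact
  // Sub-pixel offset in [-0.5, 0.5) pixels, converted to NDC units (2 / size).
  const dx = ((sample[0] - 0.5) * 2) / width;
  const dy = ((sample[1] - 0.5) * 2) / height;
  // Elements 8 and 9 are multiplied by view-space z (which becomes -w in clip
  // space), so after the perspective divide the image shifts by the sub-pixel
  // offset; the sign is irrelevant for a symmetric jitter pattern.
  jittered[8] += dx;
  jittered[9] += dy;
  return jittered;
}
```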
Conference Paper
Information cartography services provided via web-based clients using real-time rendering do not always necessitate a continuous stream of updates in the visual display. This paper shows how progressive rendering by means of multi-frame sampling and frame accumulation can introduce high-quality visual effects using robust and straightforward implementations. To this end, (1) a suitable rendering loop is described, (2) WebGL limitations are discussed, and (3) an adaptation of THREE.js featuring progressive anti-aliasing, screen-space ambient occlusion, and depth of field is detailed. Furthermore, sampling strategies are discussed and rendering performance is evaluated, emphasizing the low per-frame costs of this approach.
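A possible shape of such a rendering loop, sketched for a browser environment; `renderSample` and `present` are hypothetical callbacks, and the multi-frame count of 64 is an arbitrary assumption.

```ts
// Progressive loop: renders one sample pass per animation frame until the
// accumulation has converged, then idles; interaction restarts accumulation.
function startProgressiveLoop(renderSample: (frame: number) => void,
                              present: () => void,
                              multiFrameCount = 64): () => void {
  let frame = 0;

  const invalidate = () => { frame = 0; }; // call on camera or scene changes

  const loop = () => {
    if (frame < multiFrameCount) {         // skip work once converged
      renderSample(frame++);
      present();
    }
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);

  return invalidate; // the caller wires this to interaction events
}
```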
Conference Paper
Rendering objects transparently gives additional insight in complex and overlapping structures. However, traditional techniques for the rendering of transparent objects such as alpha blending are not very well suited for the rendering of multiple transparent objects in dynamic scenes. Screen door transparency is a technique to render transparent objects in a simple and efficient way: no sorting is required and intersecting polygons can be handled without further preprocessing. With this technique, polygons are rendered through a mask: only where the mask is present, pixels are set. However, artifacts such as incorrect opacities and distracting patterns can easily occur if the masks are not carefully designed. The requirements on the masks are considered. Next, three algorithms are presented for the generation of pixel masks. One algorithm is designed for the creation of small (e.g. 4×4) masks. The other two algorithms can be used for the creation of larger masks (e.g. 32×32). For each of these algorithms, results are presented and discussed.
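A minimal sketch of the screen-door test with a classic 4×4 Bayer (ordered-dither) mask; the mask choice is an assumption for illustration, since the paper discusses several mask-generation algorithms.

```ts
// Classic 4×4 Bayer index matrix; values 0..15 give evenly spread thresholds.
const BAYER_4X4 = [
  [ 0,  8,  2, 10],
  [12,  4, 14,  6],
  [ 3, 11,  1,  9],
  [15,  7, 13,  5],
];

// A fragment at pixel (x, y) with opacity alpha is kept only where the mask
// threshold lies below alpha, so the fraction of set pixels approximates alpha.
function passesScreenDoor(x: number, y: number, alpha: number): boolean {
  const threshold = (BAYER_4X4[y & 3][x & 3] + 0.5) / 16; // in (0, 1)
  return alpha > threshold;
}
```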
Conference Paper
Shadow maps are a widely used shadowing technique in real-time graphics. One major drawback of their use is that they cannot be filtered in the same way as color textures, typically leading to severe aliasing. This paper introduces variance shadow maps, a new real-time shadowing algorithm. Instead of storing a single depth value, we store the mean and mean squared of a distribution of depths, from which we can efficiently compute the variance over any filter region. Using the variance, we derive an upper bound on the fraction of a shaded fragment that is occluded. We show that this bound often provides a good approximation to the true occlusion, and can be used as an approximate value for rendering. Our algorithm is simple to implement on current graphics processors and solves the problem of shadow map aliasing with minimal additional storage and computation.
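The variance test described here can be summarized in a few lines; the following sketch assumes depth moments already filtered from the shadow map and a small minimum variance to avoid numerical issues.

```ts
// Chebyshev upper bound used by variance shadow maps: given the filtered
// moments E[d] (mean) and E[d²] (meanSq) and a receiver depth t, it bounds
// P(depth >= t) and is used directly as the visibility (light) factor.
function chebyshevVisibility(mean: number, meanSq: number, t: number,
                             minVariance = 1e-5): number {
  if (t <= mean) {
    return 1; // receiver is no farther than the average occluder: fully lit
  }
  const variance = Math.max(meanSq - mean * mean, minVariance);
  const d = t - mean;
  return variance / (variance + d * d);
}
```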
Article
Stochastic transparency provides a unified approach to order-independent transparency, antialiasing, and deep shadow maps. It augments screen-door transparency using a random sub-pixel stipple pattern, where each fragment of transparent geometry covers a random subset of pixel samples of size proportional to alpha. This results in correct alpha-blended colors on average, in a single render pass with fixed memory size and no sorting, but introduces noise. We reduce this noise by an alpha correction pass, and by an accumulation pass that uses a stochastic shadow map from the camera. At the pixel level, the algorithm does not branch and contains no read-modify-write loops, other than traditional z-buffer blend operations. This makes it an excellent match for modern massively parallel GPU hardware. Stochastic transparency is very simple to implement and supports all types of transparent geometry, able to mix hair, smoke, foliage, windows, and transparent cloth in a single scene without coding for special cases.
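A sketch of the per-fragment coverage decision underlying this idea, using a plain pseudo-random source; real implementations typically use a per-pixel hash or precomputed mask table rather than `Math.random`.

```ts
// Each transparent fragment covers every sub-pixel sample independently with
// probability alpha, so the expected covered count is alpha * sampleCount and
// resolving the samples yields correct alpha blending on average.
function stochasticCoverageMask(alpha: number, sampleCount: number,
                                random: () => number = Math.random): boolean[] {
  const mask: boolean[] = [];
  for (let s = 0; s < sampleCount; ++s) {
    mask.push(random() < alpha);
  }
  return mask;
}
```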
Article
Shadow maps are probably the most widely used means for the generation of shadows, despite their well-known aliasing problems. In this paper we introduce perspective shadow maps, which are generated in normalized device coordinate space, i.e., after perspective transformation. This results in a significant reduction of shadow map aliasing with almost no overhead. We correctly treat light source transformations and show how to include all objects that cast shadows in the transformed space. Perspective shadow maps can directly replace standard shadow maps for interactive hardware-accelerated rendering as well as in high-quality, offline renderers.
Article
We present a novel technique for rendering depth of field that addresses difficult overlap cases, such as close, but out-of-focus, geometry in the near-field. Such scene configurations are not managed well by state-of-the-art post-processing approaches since essential information is missing due to occlusion. Our proposed algorithm renders the scene from a single camera position and computes a layered image using a single pass by constructing per-pixel lists. These lists can be filtered progressively to generate differently blurred representations of the scene. We show how this structure can be exploited to generate depth of field in real-time, even in complicated scene constellations.