Conference Paper

Interactive Rendering of Translucent Objects

Max-Planck-Institut für Informatik, Saarbrücken, Germany
DOI: 10.1109/PCCGA.2002.1167862 Conference: 10th Pacific Conference on Computer Graphics and Applications (PG 2002), 9-11 October 2002, Beijing, China
Source: DBLP


This paper presents a rendering method for translucent objects in which viewpoint and illumination can be modified at interactive rates. In a preprocessing step, the impulse response to light impinging on each surface point is computed and stored in two ways: the local effect on nearby surface points is modeled as a per-texel filter kernel applied to a texture map representing the incident illumination, while the global response (i.e., light shining through the object) is stored as vertex-to-vertex throughput factors for the object's triangle mesh. During rendering, the illumination map for the object is computed according to the current lighting situation and filtered by the precomputed kernels. The illumination map is also used to derive the incident illumination at the vertices, which is then distributed to the other vertices via the vertex-to-vertex throughput factors. The final image is obtained by combining the local and global responses. We demonstrate the performance of our method on several models.
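The two precomputed responses described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the map resolution, kernel size, vertex count, and all numeric values are invented stand-ins, whereas the real method fills the per-texel kernels and throughput factors from a physically based preprocessing step.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Hypothetical stand-ins (shapes and values are illustrative only) ---
H, W = 8, 8                                # illumination-map resolution
K = 3                                      # per-texel kernel size
V = 16                                     # number of mesh vertices

illum = rng.random((H, W))                 # incident illumination map
kernels = rng.random((H, W, K, K)) * 0.1   # "precomputed" per-texel filter kernels
throughput = rng.random((V, V)) * 0.01     # "precomputed" vertex-to-vertex factors
np.fill_diagonal(throughput, 0.0)          # no self-throughput

def local_response(illum, kernels):
    """Local effect: apply each texel's own filter kernel to the
    illumination map (spatially varying filter, brute-force for clarity)."""
    pad = K // 2
    padded = np.pad(illum, pad, mode="edge")
    out = np.empty_like(illum)
    for y in range(H):
        for x in range(W):
            window = padded[y:y + K, x:x + K]
            out[y, x] = np.sum(window * kernels[y, x])
    return out

def global_response(throughput, vertex_irradiance):
    """Global effect: distribute incident vertex irradiance to all other
    vertices via the throughput factors (light shining through the object)."""
    return throughput @ vertex_irradiance

vertex_E = rng.random(V)                   # irradiance sampled at the vertices
final_local = local_response(illum, kernels)
final_global = global_response(throughput, vertex_E)
```

The final image would combine both terms per shading point; here they are simply left as two arrays to show the structure of the computation.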



Available from: Philippe Bekaert
    • "Several interactive methods based on Jensen's model have been presented. Lensch et al. [7] and Hao et al. [8] used precomputed information and estimated multiple scattering with vertex to vertex light transfer. However, both of them ignored single scattering which is necessary for physically-correct rendering. "
    ABSTRACT: We present a novel technique for rendering translucent objects in real time. Our technique is based on translucent shadow maps with a new adaptive sampling strategy, in which the hierarchy levels and positions for sampling are selected according to the size of the target object. With this optimization, less texture storage is required than in previous methods. Compared with previous methods, our technique provides a way to choose a proper sampling pattern and is applicable to a wider range of object sizes. Implemented on the GPU, the presented technique can render translucent objects in animated scenes where lights and material parameters vary in real time, without any lengthy preprocessing.
    Preview · Conference Paper · Jan 2009
    • "Based on the investigations on the related literature, we understand that there are still challenges for real-time rendering of deformable translucent objects. Most of existing approaches suffer from either efficiency issue such as [14] [13], or lacking photo-realistic effects like [10] [17], in terms of deformable models. This paper describes our efforts on boosting the performance and quality with a new approximate image-space approach. "
    ABSTRACT: Although many works have addressed interactive and realistic rendering of translucent materials, efficient processing of deformable models remains a challenging problem. In this paper we introduce an approximate image-space approach for real-time rendering of deformable translucent models that takes into account diffuse multiple sub-surface scattering. We decompose the process into two stages, Gathering and Scattering, corresponding to the computations for incident and exiting irradiance, respectively. We derive a simplified all-frequency illumination model for gathering the incident irradiance, which is amenable to deformable models using two auxiliary textures. We introduce two modes for efficient accomplishment of the view-dependent scattering. We implement our approach by fully exploiting the capabilities of graphics processing units (GPUs). Our implementation achieves visually plausible results and real-time frame rates for deformable models on commodity desktop PCs.
    Full-text · Conference Paper · Jan 2006
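The gathering/scattering decomposition in the abstract above might be sketched roughly as follows. This is only a structural illustration under stated assumptions: a Lambertian gather stands in for the paper's simplified all-frequency illumination model, and an exponential distance falloff stands in for its diffuse multiple-scattering term; all names and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32  # surface samples visible in the current pass (illustrative)

positions = rng.random((N, 3))                         # sample positions
normals = rng.standard_normal((N, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light_dir = np.array([0.0, 0.0, 1.0])                  # directional light

def gather(normals, light_dir):
    """Gathering stage: incident irradiance per surface sample
    (Lambertian stand-in for the all-frequency model)."""
    return np.maximum(normals @ light_dir, 0.0)

def scatter(positions, irradiance, x_out, sigma_tr=4.0):
    """Scattering stage: exiting radiance at x_out as a distance-weighted
    sum of incident irradiance (exponential falloff stand-in for the
    diffuse multiple-scattering term)."""
    r = np.linalg.norm(positions - x_out, axis=1)
    return np.sum(irradiance * np.exp(-sigma_tr * r))

E = gather(normals, light_dir)            # incident irradiance per sample
L_out = scatter(positions, E, x_out=positions[0])
```

In the actual method both stages run per pixel on the GPU, with the gathered irradiance held in the two auxiliary textures the abstract mentions.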
    • "Our goal is to render deformable, translucent objects at interactive rates under varying lighting and viewing conditions (see figure 1 and color page for examples). Lensch et al. [16] proposed a method, which is interactive, but only for rigid objects with fixed, possibly inhomogeneous, subsurface scattering properties. The method by Hao et al. [5] renders translucent but rigid objects in real-time. "
    ABSTRACT: A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image-space, and which is even amenable to implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.
    Full-text · Article · Mar 2005 · Computer Graphics Forum
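As a rough illustration of importance sampling for a local subsurface-scattering profile, the sketch below draws sample offsets around the shading point via inverse-CDF sampling of an exponential radial falloff. The falloff and its parameter are assumptions for illustration, not the scattering model or sampling scheme of the paper above.

```python
import math
import random

def sample_scatter_offset(sigma_tr, rng=random):
    """Importance-sample an offset around the shading point for a radial
    falloff ~ exp(-sigma_tr * r) (illustrative choice). The radius comes
    from the exponential distribution's inverse CDF; the angle is uniform.
    Returns (dx, dy, pdf_r)."""
    u1, u2 = rng.random(), rng.random()
    r = -math.log(1.0 - u1) / sigma_tr        # inverse CDF of exponential
    phi = 2.0 * math.pi * u2
    pdf = sigma_tr * math.exp(-sigma_tr * r)  # 1-D radial pdf at r
    return r * math.cos(phi), r * math.sin(phi), pdf

random.seed(42)
dx, dy, pdf = sample_scatter_offset(sigma_tr=2.0)
```

Dividing each sample's contribution by `pdf` keeps the estimator unbiased while concentrating samples where the falloff (and thus the contribution) is largest.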