Conference Paper

Real-time environment map interpolation

Tsinghua Univ., Beijing, China
DOI: 10.1109/ICIG.2004.116 · In proceedings of: Third International Conference on Image and Graphics (ICIG 2004)
Source: IEEE Xplore

ABSTRACT Environment mapping, or reflection mapping, is widely used in the game and film industries to give objects the appearance of being realistically lit by their surroundings. For moving objects, both direct frame-by-frame computation of environment maps and correspondence-based interpolation are impractical for real-time applications because of their large computational cost. To sidestep this problem, "fake" environment mapping with a fixed, pre-generated environment image is commonly used, but such an approximation is clearly inadequate for a highly reflective object whose environment changes continually as it moves. In this paper, we present an approach that sparsely samples the environment maps of a moving object and rapidly interpolates between them for high performance. Two techniques are introduced for fast environment map interpolation that avoid recomputing scene shading. The first uses scene geometry to guide the interpolation; the second reconstructs geometry from depth-buffer values to avoid the inefficiencies caused by complex scene geometry. Both techniques can easily be implemented in graphics hardware, and test results show that they achieve significant speedups over frame-by-frame environment map computation with little loss in visual quality.
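As an illustration of the first technique, the sketch below forward-warps two sparsely sampled lat-long environment maps, each with a per-texel depth buffer, to a new object position and blends them by distance. This is a minimal CPU sketch under assumed conventions, not the paper's hardware implementation: the function names, the equirectangular parameterization, the nearest-texel splatting, and the inverse-distance blend are all our assumptions.

    import numpy as np

    def dir_grid(h, w):
        # Unit view directions for an equirectangular (lat-long) environment map.
        theta = (np.arange(h) + 0.5) / h * np.pi          # polar angle in [0, pi]
        phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi      # azimuth in [0, 2*pi)
        t, p = np.meshgrid(theta, phi, indexing="ij")
        return np.stack([np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p),
                         np.cos(t)], axis=-1)             # shape (h, w, 3)

    def warp(env, depth, src_pos, dst_pos):
        # Forward-warp one environment map from src_pos to dst_pos using
        # per-texel depth, in the spirit of geometry-assisted interpolation.
        h, w, _ = env.shape
        d = dir_grid(h, w)
        hit = src_pos + depth[..., None] * d              # world-space hit points
        v = hit - dst_pos                                 # directions from the new center
        v /= np.linalg.norm(v, axis=-1, keepdims=True)
        theta = np.arccos(np.clip(v[..., 2], -1.0, 1.0))
        phi = np.arctan2(v[..., 1], v[..., 0]) % (2.0 * np.pi)
        yi = np.minimum((theta / np.pi * h).astype(int), h - 1)
        xi = np.minimum((phi / (2.0 * np.pi) * w).astype(int), w - 1)
        out = np.zeros_like(env)
        out[yi, xi] = env                                 # nearest-texel splat; holes stay empty
        return out

    def interpolate_env(env0, depth0, p0, env1, depth1, p1, p):
        # Blend the two warped maps, weighted by distance to the sample points
        # (assumes p0 != p1 so the weights are well defined).
        d0, d1 = np.linalg.norm(p - p0), np.linalg.norm(p - p1)
        t = d0 / (d0 + d1)                                # 0 near p0, 1 near p1
        return (1.0 - t) * warp(env0, depth0, p0, p) + t * warp(env1, depth1, p1, p)

A real-time version would run the warp as a rendering pass on the GPU, which is consistent with the abstract's claim that both techniques can be implemented in graphics hardware; the per-texel depths here play the role of the depth-buffer geometry in the second technique.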

  • Conference Paper: Plenoptic modeling: an image-based rendering system
    ABSTRACT: Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the "plenoptic function" of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
    Proceedings of the 22nd annual conference on Computer graphics and interactive techniques; 01/1995 (a schematic statement of the plenoptic function follows this list)
  • Conference Paper: Plenoptic sampling.
    ABSTRACT: This paper studies the problem of plenoptic sampling in image-based rendering (IBR). From a spectral analysis of light field signals and using the sampling theorem, we mathematically derive the analytical functions to determine the minimum sampling rate for light field rendering. The spectral support of a light field signal is bounded by the minimum and maximum depths only, no matter how complicated the spectral support might be because of depth variations in the scene. The minimum sampling rate for light field rendering is obtained by compacting the replicas of the spectral support of the sampled light field within the smallest interval. Given the minimum and maximum depths, a reconstruction filter with an optimal and constant depth can be designed to achieve anti-aliased light field rendering. Plenoptic sampling goes beyond the minimum number of images needed for anti-aliased light field rendering. More significantly, it utilizes the scene depth information to determine the minimum sampling curve in the joint image and geometry space. The minimum sampling curve quantitatively describes the relationship among three key elements in IBR systems: scene complexity (geometrical and textural information), the number of image samples, and the output resolution. Therefore, plenoptic sampling bridges the gap between image-based rendering and traditional geometry-based rendering. Experimental results demonstrate the effectiveness of our approach.
    Proceedings of the 27th annual conference on Computer graphics and interactive techniques; 01/2000 (the minimum-sampling relation is restated after this list)
  • Article: Predicting the future position of a moving target
    ABSTRACT: Three experiments were conducted to examine the processes underlying the prediction of the future position of a moving target. A target moved horizontally across a computer screen at constant velocity, disappearing partway across the screen, and subjects responded when they estimated the target would have passed a point on the far side of the screen, had it continued on its path. The first experiment demonstrated that visual tracking of the target is not necessary for successful position estimation. In the second experiment, the time over which the prediction was made, rather than the interval for which the target was visible, the distance over which the prediction was made, or the velocity of the target, was found to affect performance. Finally, performance was affected neither by markers placed parallel to the target's trajectory nor by gratings that masked portions of its path. The previous literature suggests that the spatial interval over which predictions are made is the important variable; we find that temporal factors are the major determinants of prediction.
    Perception 02/1991; 20(1):5-16.
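The "plenoptic function" cited in the plenoptic modeling entry above can be stated compactly; the following restatement is ours, not quoted from either paper. In the full form of Adelson and Bergen it returns the radiance visible from any viewpoint, in any direction, at any wavelength and time:

    P = P(V_x, V_y, V_z, \theta, \phi, \lambda, t)

McMillan and Bishop's system works with the time- and wavelength-reduced 5D form P(V_x, V_y, V_z, \theta, \phi), sampled once per color channel.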
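The minimum-sampling result in the plenoptic sampling entry can likewise be restated schematically; exact constants depend on the paper's Fourier conventions, so this is a hedged paraphrase rather than a quotation. With focal length f and scene depth bounded by z_min and z_max, the spectral support of the light field lies between the lines \Omega_t = f \Omega_v / z_max and \Omega_t = f \Omega_v / z_min, so packing the sampling replicas without overlap bounds the camera spacing roughly by

    \Delta t_max \propto 1 / ( f B_v ( 1/z_min - 1/z_max ) )

where B_v is the highest angular frequency retained in each image, and the optimal constant-depth reconstruction filter is placed at the depth z_c with 1/z_c = (1/z_min + 1/z_max)/2.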