Conference Paper

An Image-Based Scene Representation and Rendering Framework Incorporating Multiple Representations

Conference: Proceedings of the Vision, Modeling, and Visualization Conference 2003 (VMV 2003), München, Germany, November 19-21, 2003
Source: DBLP

ABSTRACT A variety of image-based scene representations, such as light fields, concentric mosaics, panoramas, and omnidirectional video, have been proposed in recent years. These image-based scene representations provide photorealistic interactive user navigation in a 3D scene. As the trade-off between acquisition complexity, freedom of movement, and rendering quality differs among the techniques, the most efficient scene representation and rendering technique should be selected with respect to scene content and complexity. This paper proposes splitting a scene into partial representations that are adapted to local requirements. Besides meaningful restrictions to user movement, the transition between different image-based scene representations is addressed to provide an efficient image-based walkthrough for large and complex scenes. We identify rendering parameters that achieve a seamless transition between different representations and present results for stitching together concentric mosaics, omnidirectional video, and light fields.
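The selection-and-transition idea can be sketched as follows. This is a minimal illustration, not the paper's actual method: the linear cross-fade, the `transition_width` parameter, and the per-pixel blending interface are all assumptions made for the sketch.

```python
# Hypothetical sketch: cross-fade the rendered output near the boundary
# between two partial scene representations (e.g. a concentric mosaic
# region and a light-field region). All names and the linear blend are
# illustrative assumptions, not the paper's interface.

def blend_weight(distance_into_new_region, transition_width=1.0):
    """Cross-fade weight in [0, 1]: 0 while still inside the old region,
    1 once the viewer has moved transition_width into the new region."""
    t = distance_into_new_region / transition_width
    return max(0.0, min(1.0, t))

def render_transition(pixel_old, pixel_new, w):
    """Blend two per-pixel renderings of the same view, one from each
    representation, with weight w for the new representation."""
    return tuple((1.0 - w) * a + w * b for a, b in zip(pixel_old, pixel_new))
```

A renderer in the transition zone would evaluate both representations for the same viewpoint and mix them, so the switch is gradual rather than a visible pop.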

    ABSTRACT: Interactive walkthrough applications require detailed 3D models to give users a sense of immersion in an environment. Traditionally, these models are built using computer-aided design tools to define geometry and material properties, but creating detailed models is time-consuming, and it is difficult to reproduce all geometric and photometric subtleties of real-world scenes. Computer vision attempts to alleviate this problem by extracting geometry and photometric properties from images of real-world scenes; however, these models are still limited in the amount of detail they recover. Image-based rendering generates novel views by resampling a set of images of the environment without relying on an explicit geometric model, but current techniques limit the size and shape of the environment and do not lend themselves to walkthrough applications. In this paper, we define a parameterization of the 4D plenoptic function that is particularly suitable for interactive walkthroughs and a method for its sampling and reconstruction. Our main contributions are: 1) a parameterization of the 4D plenoptic function that supports walkthrough applications in large, arbitrarily shaped environments; 2) a simple and fast capture process for complex environments; and 3) an automatic algorithm for reconstruction of the plenoptic function.
    Proceedings of the 28th annual conference on Computer graphics and interactive techniques; 01/2001
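This abstract does not give the paper's specific coordinates, but as a general illustration of how such a 4D reduction of the plenoptic function works for walkthroughs: fixing wavelength and time and constraining the eye to a plane of constant height \(z_0\) leaves four free parameters,

\[
P_4(x, y, \theta, \phi) \;=\; P\bigl(x, y, z_0, \theta, \phi\bigr),
\]

where \((x, y)\) is the eye position in the plane of motion and \((\theta, \phi)\) is the viewing direction.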
    ABSTRACT: Image-based rendering is a powerful new approach for generating real-time photorealistic computer graphics. It can provide convincing animations without an explicit geometric representation. We use the "plenoptic function" of Adelson and Bergen to provide a concise problem statement for image-based rendering paradigms, such as morphing and view interpolation. The plenoptic function is a parameterized function for describing everything that is visible from a given point in space. We present an image-based rendering system based on sampling, reconstructing, and resampling the plenoptic function. In addition, we introduce a novel visible surface algorithm and a geometric invariant for cylindrical projections that is equivalent to the epipolar constraint defined for planar projections.
    Proceedings of the 22nd annual conference on Computer graphics and interactive techniques; 01/1995
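For reference, the plenoptic function of Adelson and Bergen mentioned above is the seven-dimensional radiance function

\[
P = P(\theta, \phi, \lambda, t, V_x, V_y, V_z),
\]

the intensity of light of wavelength \(\lambda\) arriving at viewpoint \((V_x, V_y, V_z)\) from direction \((\theta, \phi)\) at time \(t\). For a static scene observed at a fixed wavelength, it reduces to the five-dimensional function \(P(V_x, V_y, V_z, \theta, \phi)\) that image-based rendering systems sample and reconstruct.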
    ABSTRACT: In this paper, we present a novel image-based rendering technique, which we call manifold hopping. Our technique provides users with perceptually continuous navigation by using only a small number of strategically sampled manifold mosaics or multiperspective panoramas. Manifold hopping has two modes of navigation: moving continuously along any manifold, and discretely between manifolds. An important feature of manifold hopping is that significant data reduction can be achieved without sacrificing output visual fidelity, by carefully adjusting the hopping intervals. A novel view along the manifold is rendered by locally warping a single manifold mosaic using a constant depth assumption, without the need for accurate depth or feature correspondence. The rendering errors caused by manifold hopping can be analyzed in the signed Hough ray space. Experiments with real data demonstrate that we can navigate smoothly in a virtual environment with as little as 88k data compressed from 11 concentric mosaics.
    International Journal of Computer Vision 10/2002; 50(2):185-201. DOI:10.1023/A:1020398016678
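The constant-depth warp at the heart of manifold hopping can be sketched in a simplified planar setting. This is an illustrative assumption-laden sketch, not the paper's algorithm: it re-renders a single panorama captured at the origin for a displaced viewpoint by assuming all scene points lie on a circle of fixed radius.

```python
import math

# Hypothetical sketch of a constant-depth warp: given a viewer displaced
# by (px, py) from the panorama's capture point at the origin, map each
# viewing direction to the source column angle in the original panorama,
# assuming every scene point lies on the circle |X| = depth.
# The single-panorama planar setup is an illustrative simplification.

def warp_direction(px, py, view_angle, depth=10.0):
    """Return the panorama column angle (as seen from the origin) that
    the displaced viewer's ray at view_angle would sample."""
    dx, dy = math.cos(view_angle), math.sin(view_angle)
    # Intersect the ray (px, py) + s*(dx, dy) with the circle |X| = depth.
    b = px * dx + py * dy
    s = -b + math.sqrt(b * b - (px * px + py * py) + depth * depth)
    hx, hy = px + s * dx, py + s * dy
    return math.atan2(hy, hx)  # direction of the hit point from the origin
```

With zero displacement the warp is the identity, and as the assumed depth grows large the warp again approaches the identity, which is why the constant-depth approximation degrades gracefully for distant scenes.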

