Image-Based Rendering

Foundations and Trends® in Computer Graphics and Vision 01/2006; 2(3). DOI: 10.1561/0600000012
Source: DBLP


Image-based rendering (IBR) is unique in that it requires computer graphics, computer vision, and image processing to join forces toward a common goal, namely photorealistic rendering through the use of images. IBR as an area of research has been around for about ten years, and substantial progress has been achieved in effectively capturing, representing, and rendering scenes. In this article, we survey the techniques used in IBR. Our survey shows that representations and rendering techniques can differ radically, depending on design decisions related to ease of capture, use of geometry, accuracy of geometry (if used), number and distribution of source images, degrees of freedom for virtual navigation, and expected scene complexity.

  • ABSTRACT: Image-based methods have proved to render scenes more efficiently than geometry-based approaches, mainly because of one of their most important advantages: complexity bounded by the image resolution rather than by the number of primitives. Furthermore, due to their parallel and discrete nature, they are highly suitable for GPU implementations. On the other hand, during the last few years point-based graphics has emerged as a promising complement to other representations. However, with the continuous increase of scene complexity, solutions for directly processing and rendering point clouds are in demand. In this paper, algorithms for efficiently rendering large point models using image reconstruction techniques are proposed. Except for the projection of samples onto screen space, the reconstruction time is bounded only by the screen resolution. The method is also extended to interpolate other primitives, such as lines and triangles. In addition, no extra data structure is required, making the strategy memory-efficient.
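The core idea in the abstract above — after projecting samples to the screen, reconstruction cost depends only on image resolution, not on point count — can be sketched as follows. This is an illustrative toy, not the paper's algorithm: the pinhole projection, the z-buffered splatting, and the neighbour-averaging hole fill (which, for brevity, wraps at image borders) are all assumptions made here.

```python
import numpy as np

def project_points(points, colors, f, width, height):
    """Splat 3-D points (camera coordinates, z > 0) into a z-buffered
    colour image using a simple pinhole model with focal length f."""
    depth = np.full((height, width), np.inf)
    image = np.zeros((height, width, 3))
    mask = np.zeros((height, width), dtype=bool)
    for (x, y, z), c in zip(points, colors):
        u = int(round(f * x / z + width / 2))
        v = int(round(f * y / z + height / 2))
        if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
            depth[v, u] = z       # keep only the nearest sample per pixel
            image[v, u] = c
            mask[v, u] = True
    return image, mask

def fill_holes(image, mask, passes=8):
    """Screen-space reconstruction: repeatedly average the filled
    4-neighbours into empty pixels. Each pass touches every pixel once,
    so the cost is bounded by the screen resolution alone."""
    image, mask = image.copy(), mask.copy()
    for _ in range(passes):
        if mask.all():
            break
        acc = np.zeros_like(image)
        cnt = np.zeros(mask.shape)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            m = np.roll(mask, (dy, dx), axis=(0, 1))   # wraps at borders
            i = np.roll(image, (dy, dx), axis=(0, 1))
            acc += i * m[..., None]
            cnt += m
        newly = (~mask) & (cnt > 0)
        image[newly] = acc[newly] / cnt[newly][:, None]
        mask |= newly
    return image
```

A usage sketch: splat a sparse cloud into a small frame, then let the fill passes dilate colour into the gaps left between projected samples.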
  • ABSTRACT: Virtual worlds seek to provide an online setting where users can interact in a shared environment. Popular virtual worlds such as Second Life and World of Warcraft, however, rely on share-nothing data and strict partitioning as much as possible. They translate a large world into many tiny worlds. This partitioning conflicts with the intended goal of a virtual world by greatly limiting interaction and reducing the shared experience. We present Meru, an architecture for scalable, federated virtual worlds. Meru's key insight is that, compared to traditional distributed object systems, virtual world objects have the additional property of being embedded in a three-dimensional geometry. By leveraging this geometric information in messaging and caching, Meru can allow uncongested virtual world objects to pass messages with 800 times the throughput of Second Life while also gracefully scaling to handle the congestion of ten thousand active senders. Unlike virtual worlds today, Meru achieves this performance without any partitioning, maintaining a single, seamless world.
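The general idea of geometry-aware messaging described above — delivering a message only to objects near the sender, so uncongested regions of the world never see the traffic — can be illustrated with a minimal spatial-hash sketch. This is purely hypothetical: the `SpatialBus` class and its API are invented for illustration and are not Meru's actual protocol.

```python
import collections

class SpatialBus:
    """Toy geometry-aware message bus. Objects register with a 3-D
    position; send() scans only the grid cells within `radius` of the
    sender, so delivery cost scales with local density, not world size.
    (Illustrative sketch only -- not how Meru actually works.)"""

    def __init__(self, cell=10.0):
        self.cell = cell
        # grid cell -> list of (name, position, inbox)
        self.cells = collections.defaultdict(list)

    def _key(self, pos):
        return tuple(int(c // self.cell) for c in pos)

    def register(self, name, pos):
        inbox = []
        self.cells[self._key(pos)].append((name, pos, inbox))
        return inbox

    def send(self, sender_pos, radius, msg):
        r = int(radius // self.cell) + 1
        kx, ky, kz = self._key(sender_pos)
        delivered = 0
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                for dz in range(-r, r + 1):
                    for name, pos, inbox in self.cells.get(
                            (kx + dx, ky + dy, kz + dz), []):
                        # exact distance check inside the coarse cell scan
                        if sum((a - b) ** 2
                               for a, b in zip(pos, sender_pos)) <= radius ** 2:
                            inbox.append(msg)
                            delivered += 1
        return delivered
```

The design point this sketch makes: because the bus indexes receivers by position, a burst of senders in one region congests only that region's cells, while the rest of the (unpartitioned) world is untouched.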
  • ABSTRACT: This paper describes a method of generating a new view sequence for view-based outdoor navigation. View-based navigation approaches have been shown to be effective, but they have a drawback: a view sequence for the route to be navigated is needed beforehand. This is an issue especially for navigation in an open space where numerous potential routes exist; it is almost impossible to capture view sequences for all of the routes by actually moving along them. We therefore develop a method of generating a view sequence for arbitrary routes from an omnidirectional view sequence taken along a limited movement. The method is based on visual odometry-based map generation and image-to-image morphing using homography. The effectiveness of the method is validated by view-based localization experiments.
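Homography-based image-to-image morphing, as mentioned in the abstract above, can be sketched as inverse warping followed by a cross-dissolve. This is a minimal illustration assuming a known 3×3 homography and nearest-neighbour sampling; it is not the paper's implementation, and the function names are invented here.

```python
import numpy as np

def warp_homography(image, H, out_shape):
    """Inverse-warp `image` by homography H (mapping output pixel
    coordinates to source coordinates) with nearest-neighbour sampling.
    Pixels that map outside the source image are left black."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    src = H @ pts
    src = src / src[2]                      # dehomogenize
    u = np.round(src[0]).astype(int)
    v = np.round(src[1]).astype(int)
    h_in, w_in = image.shape[:2]
    valid = (0 <= u) & (u < w_in) & (0 <= v) & (v < h_in)
    out = np.zeros((h_out * w_out,) + image.shape[2:], dtype=image.dtype)
    out[valid] = image[v[valid], u[valid]]
    return out.reshape((h_out, w_out) + image.shape[2:])

def morph(img_a, img_b, H_ab, t):
    """Warp img_b into img_a's frame via H_ab, then cross-dissolve:
    t = 0 gives img_a, t = 1 gives the warped img_b."""
    warped = warp_homography(img_b, H_ab, img_a.shape[:2])
    return (1.0 - t) * img_a + t * warped
```

Sampling intermediate values of `t` along an estimated route would then produce the synthesized in-between views; estimating the homography itself (e.g. from tracked features) is outside this sketch.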
