
A Generalization Approach for 3D Viewing Deformations of Single-Center Projections



Presentation of Research Paper "A Generalization Approach for 3D Viewing Deformations of Single-Center Projections"
A Generalization Approach
for 3D Viewing Deformations
of Single-Center Projections
Matthias Trapp, Jürgen Döllner
3rd International Conference on Computer Graphics Theory and Applications
22 - 25 January, 2008, Funchal, Madeira - Portugal
Computer Graphics Systems Group,
Hasso-Plattner Institute,
University of Potsdam
Introduction & Basic Concepts
Generalization Concept
Mission: Unified Rendering Technique
Unify techniques for:
Non-planar projections
2D lens effects
Image warping
Implementation requirements:
Real-time visualization
Large scene rendering
Single center of projection (SCOP)
Basic Concept Overview
Dynamic cube map
Screen-aligned quad
Fragment shader
3-Phase rendering:
1. Create/Update dynamic cube map
2. Setup projection shader
3. Render screen-aligned quad
Main characteristics:
Image-based approach
Fully hardware accelerated
[Figure: screen-aligned quad in normalized coordinates, corners (-1, -1) to (1, 1), origin O = (0, 0); each fragment F_st = (s, t) is mapped to a sampling vector into the cube map.]
Basic Concept Details
Define projection function: v = δ(s, t)
Apply camera orientation: v′ = R(lookTo, lookUp) · v − offset
Sample from cube map: C_st = cubeMap(v′)
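The last step, the cube-map lookup, can be illustrated on the CPU: the hardware selects the face on the axis with the largest absolute component of the sampling vector and projects the vector onto that face. A minimal Python sketch (illustration only; per-face sign conventions are simplified compared to the OpenGL specification):

```python
def cube_face(v):
    """Select the cube-map face hit by sampling vector v = (x, y, z):
    the face on the axis with the largest absolute component."""
    x, y, z = v
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x >= 0 else '-x'
    if ay >= az:
        return '+y' if y >= 0 else '-y'
    return '+z' if z >= 0 else '-z'

def face_uv(v):
    """Project v onto the selected face to get 2D coordinates in [-1, 1].
    (Per-face sign flips of the real GL lookup are omitted here.)"""
    face = cube_face(v)
    x, y, z = v
    m = max(abs(x), abs(y), abs(z))
    if face in ('+x', '-x'):
        return face, (z / m, y / m)
    if face in ('+y', '-y'):
        return face, (x / m, z / m)
    return face, (x / m, y / m)

print(face_uv((2.0, 0.5, -0.5)))  # ('+x', (-0.25, 0.25))
```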
Example: Cylindrical Projection
Projection function:
Horizontal FOV: 360°, Vertical FOV: 60°
[Figure: cylindrical projection geometry; the horizontal angle runs from -α/2 to α/2, and fragment F_st = (s, t) is mapped onto the cylinder.]
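The cylindrical mapping can be reproduced host-side; the formula below mirrors the horizontal branch of the cylindrical projection shader shown later in the deck:

```python
import math

def cylindrical_distortion(s, t, fov_h=360.0, fov_v=60.0):
    """Map fragment coordinates (s, t) in [-1, 1] to a cube-map sampling
    vector on a cylinder (same math as the shader's horizontal branch)."""
    angle = math.radians(fov_h / 2.0)
    return (math.cos(s * angle),
            t * math.tan(math.radians(fov_v / 2.0)),
            math.sin(s * angle))

# s sweeps the full 360° panorama; t scales height by tan(FOVv / 2).
print(cylindrical_distortion(0.0, 0.0))  # (1.0, 0.0, 0.0)
```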
Example: Spherical Projections
Projection function:
Viewport truncation:
[Figure: spherical projection; pole at (0, 1, 0), fragment (s, t) mapped onto the sphere.]
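As an illustration, one common spherical mapping (equirectangular style) sends s to azimuth and t to elevation; this is a plausible instance of such a projection function, not necessarily the paper's exact formula:

```python
import math

def spherical_distortion(s, t, fov_h=360.0, fov_v=180.0):
    """Map (s, t) in [-1, 1] to a unit sampling vector on the sphere:
    s -> azimuth phi, t -> elevation theta (equirectangular-style)."""
    phi = s * math.radians(fov_h / 2.0)
    theta = t * math.radians(fov_v / 2.0)
    return (math.cos(theta) * math.sin(phi),
            math.sin(theta),
            math.cos(theta) * math.cos(phi))

# The result is always unit length, so it can be used directly
# as a cube-map sampling vector.
```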
Optimization: Normal Maps
For static projection functions
Store normalized cube map sampling vectors
Using Render-To-Texture (RTT)
Floating point texture precision
Normal Maps
[Figure: normal maps for an OMNIMAX and a horizontal cylindrical projection.]
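The optimization amounts to a bake pass: evaluate the static projection function once per texel and store the normalized sampling vectors at floating-point precision. A stdlib-only Python sketch, with the cylindrical mapping standing in for an arbitrary static projection:

```python
import math

def cylindrical(s, t, fov_h=360.0, fov_v=60.0):
    """Stand-in static projection function (cylindrical, as shown earlier)."""
    a = math.radians(fov_h / 2.0) * s
    return (math.cos(a), t * math.tan(math.radians(fov_v / 2.0)), math.sin(a))

def bake_normal_map(width, height, distortion=cylindrical):
    """Render-to-texture stand-in: fill a width x height grid with
    normalized cube-map sampling vectors at floating-point precision."""
    texels = []
    for j in range(height):
        for i in range(width):
            # texel center -> normalized device coordinates in [-1, 1]
            s = 2.0 * (i + 0.5) / width - 1.0
            t = 2.0 * (j + 0.5) / height - 1.0
            x, y, z = distortion(s, t)
            n = math.sqrt(x * x + y * y + z * z)
            texels.append((x / n, y / n, z / n))
    return texels
```

At render time the projection shader then reduces to two texture fetches per fragment: read the baked vector, then sample the cube map with it.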
Introduction & Basic Concepts
Generalization Concept
Generalization Concept Overview
Tile Screen
Feature Map
Normal Map
Final Rendering
∀ i,j: f_render(T_ij)
N_st = f_normal(A_st)
C_st = f_sample(N_st)
Projection Tile Screen - Example
Final Rendering
Projection Tile Screens
Projection tile screen (PTS) = set of projection tiles
Projection tile = set of tile features
Tile Feature:
Tile Screen
Generating Feature Maps
Feature-map rendering:
1. Set up render-to-texture
2. Set up orthogonal projection
3. Encode feature properties as color values:
4. Render tiles successively
Cube map sampling vectors:
Calculated using fragment shader
Vector derived by:
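Step 3, encoding properties as color values, typically packs a [-1, 1] range into color channels. The sketch below shows such a pack/unpack round trip through an 8-bit RGB texel (an illustrative encoding; the paper's exact channel layout is not given here):

```python
def encode_rgb8(v):
    """Pack a vector with components in [-1, 1] into an 8-bit RGB texel."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in v)

def decode_rgb8(rgb):
    """Unpack in the fragment shader's fashion: c / 255 * 2 - 1."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in rgb)

texel = encode_rgb8((0.0, 1.0, -1.0))
print(texel)  # (128, 255, 0)
# Decoding recovers the vector only up to 8-bit quantization error
# (~1/255 per channel) -- one reason floating-point texture
# precision is used for the baked normal maps.
```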
Projection Tile Extensions
Problems:
PTS is hard to model and control
Triangulation influences interpolation
Does not cover all possible tile shapes
No hard transitions between tiles
Extension: replace the regular grid by a triangulated planar mesh ("triangle soup")
Enables hard transitions between tiles
Enables the use of modeling tools
Introduction & Basic Concepts
Generalization Concept
Dynamic Cube Maps
Single-Pass: needs DX10-compatible hardware
Evaluate the scene only once
Geometry shader multiplies each primitive
Projects primitives to the cube map faces
Rasterization to six texture layers in parallel
Multi-Pass: most compatible approach
Evaluate scene six times
RTT to each cube face
Runtime optimizations:
Omit the whole cube map update
Omit updates of individual cube map faces
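The multi-pass path and the per-face update optimization can be sketched as follows; `render_face` is a hypothetical callback standing in for one render-to-texture pass, and the face orientations follow the standard OpenGL cube-map convention:

```python
# Standard OpenGL cube-map face orientations: (look direction, up vector).
CUBE_FACES = {
    '+x': ((1, 0, 0), (0, -1, 0)),
    '-x': ((-1, 0, 0), (0, -1, 0)),
    '+y': ((0, 1, 0), (0, 0, 1)),
    '-y': ((0, -1, 0), (0, 0, -1)),
    '+z': ((0, 0, 1), (0, -1, 0)),
    '-z': ((0, 0, -1), (0, -1, 0)),
}

def render_cube_map(render_face, dirty=None):
    """Multi-pass update: one 90° perspective pass per cube face.
    'dirty' lets the caller omit faces whose content did not change
    (the per-face update optimization above)."""
    for face, (look, up) in CUBE_FACES.items():
        if dirty is None or face in dirty:
            render_face(face, look, up)
```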
Main Shader
uniform samplerCube samplerCubeMap;
uniform vec3 lookTo;
uniform vec3 lookUp;
uniform vec3 offset;
//function prototypes
vec3 distortion(vec2 texCoords);
vec3 correctLookTo(vec3 samplingVector);
void main(void)
{
    //apply distortion function
    vec3 distortionVector = distortion(gl_TexCoord[0].st);
    //lookTo transformation
    distortionVector = correctLookTo(distortionVector) - offset;
    //cube map lookup
    gl_FragColor = textureCube(samplerCubeMap, distortionVector);
}
Shader main entry point
Cylindrical Projection Shader
uniform int panoramaMode;
uniform float FOVhorz;
uniform float FOVvert;
const int HORIZONTAL = 1;
vec3 distortion(in vec2 textureCoords)
{
    vec3 lookUp = vec3(0.0, 0.0, 0.0);
    if(panoramaMode == HORIZONTAL)
    {
        float angle = radians(FOVhorz / 2.0);
        float x = cos(textureCoords.s * angle);
        float y = textureCoords.t * tan(radians(FOVvert / 2.0));
        float z = sin(textureCoords.s * angle);
        lookUp = vec3(x, y, z);
    } else {
        float angle = radians(FOVvert / 2.0);
        float x = textureCoords.s * tan(radians(FOVhorz / 2.0));
        float y = cos(textureCoords.t * angle);
        float z = sin(textureCoords.t * angle);
        lookUp = vec3(x, y, z);
    }
    return lookUp;
}
Projection function
Introduction & Basic Concepts
Generalization Concept
Non-Planar Projection Surfaces
Horizontal FOV: 360°, Vertical FOV: 90°
Normal Map
Final Rendering
Non-Planar Projection Surface
Using Custom Normal Maps
Normal Map (Tangent Space)
Normal Map (Unit Space)
Final Rendering
Horizontal FOV: 90°, Vertical FOV: 60°
Combinations of Projections
Lens Effects
Horizontal FOV: 180°, Vertical FOV: 135°
Final Rendering
Normal Map
Compound Eye
[Figure: compound-eye tile screen composed of tiles T_i1, T_i2, T_i3, T_i4.]
Horizontal FOV: 120°, Vertical FOV: 60°
Introduction & Basic Concepts
Generalization Concept
Rendering quality depends on:
Cube map resolution
Tessellation of tile screen
Undersampling / Oversampling
Dynamic cube map can be costly
Interpolation artifacts caused by contrary tessellation
Haik Lorenz, Jürgen Döllner,
Dynamic Mesh Refinement on GPU using Geometry Shaders,
WSCG 2008 (to appear)
Take aways:
General concept for SCOP distortions:
Non-planar projections
2D lenses with arbitrary shapes
Image warping and distortions
Applicable in real-time for large scenes
Controllable via projection tile screens
Important: resolution of cube map and tessellation of PTS
Future work:
Improve rendering quality
Develop graphical user interface for PTS
Shift PTS tessellation to GPU
Q & A
Thank You.
Matthias Trapp
Computer Graphics Systems Group
Prof. Dr. Jürgen Döllner
Research group 3D-Geoinformation