
A Generalization Approach for 3D Viewing Deformations of Single-Center Projections

Authors: Matthias Trapp, Jürgen Döllner

Abstract

Presentation of the research paper "A Generalization Approach for 3D Viewing Deformations of Single-Center Projections".
A Generalization Approach
for 3D Viewing Deformations
of Single-Center Projections
Matthias Trapp, Jürgen Döllner
3rd International Conference on Computer Graphics Theory and Applications
22-25 January 2008, Funchal, Madeira, Portugal
Computer Graphics Systems Group,
Hasso-Plattner Institute,
University of Potsdam
Outline
Introduction & Basic Concepts
Generalization Concept
Implementation
Applications
Conclusions
Mission: Unified Rendering Technique
Unify techniques for:
Non-planar projections
2D lens effects
Image warping
Implementation requirements:
Real-time visualization
Large scene rendering
Single center of projection (SCOP)
[Figure: single center of projection (SCOP) vs. multiple centers of projection (MCOP)]
Basic Concept Overview
Components:
Dynamic cube map
Screen-aligned quad
Fragment shader
3-Phase rendering:
1. Create/Update dynamic cube map
2. Setup projection shader
3. Render screen-aligned quad
Main characteristics:
Image-based approach
Fully hardware accelerated
[Figure: screen-aligned quad with origin O = (0, 0) and texture coordinates F_st = (s, t) ranging over [-1, 1]²; cube map centered at O with sampling vector S = (x, y, z)]
Basic Concept Details
Define the projection function (reconstruction below)
Apply the camera orientation
Sample from the cube map
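The three formulas on this slide were images and did not survive extraction; the following is a minimal reconstruction inferred from the main shader in the Implementation section, with notation assumed:

S = \delta(s, t), \quad (s, t) \in [-1, 1]^2
S' = R_{\mathrm{cam}}\, S - o
C_{st} = \mathrm{cube}(S')

Here \delta is the projection (distortion) function, R_cam applies the camera orientation (correctLookTo in the shader), o is the offset, and cube() denotes the cube map lookup.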
Example: Cylindrical Projection
Projection function: horizontal and vertical variants (reconstructed below)
Horizontal FOV: 360°, Vertical FOV: 60°
[Figure: viewport mapping F_st = (s, t) to the cylindrical angles θ ∈ [-α/2, α/2] and φ ∈ [-β/2, β/2]]
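The formulas themselves were images on the slide; the reconstruction below is read directly off the cylindrical projection shader in the Implementation section, with α the horizontal FOV, β the vertical FOV, and (s, t) ∈ [-1, 1]²:

\delta_{\mathrm{horz}}(s, t) = \big( \cos(s\,\tfrac{\alpha}{2}),\; t \tan\tfrac{\beta}{2},\; \sin(s\,\tfrac{\alpha}{2}) \big)
\delta_{\mathrm{vert}}(s, t) = \big( s \tan\tfrac{\alpha}{2},\; \cos(t\,\tfrac{\beta}{2}),\; \sin(t\,\tfrac{\beta}{2}) \big)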
Example: Spherical Projections
Projection function: maps (s, t) to spherical angles θ and φ (a hedged sketch follows below)
Viewport truncation: only the portion of the sphere covered by the viewport is rendered
[Figure: spherical projection geometry with sampling vector V, radius r, angles θ and φ, and viewport points A, B, C]
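The spherical formulas were likewise lost in extraction. As an assumption only (not necessarily the paper's exact variant), a standard angular-fisheye distortion() in the style of the cylindrical shader shown later could look as follows; the uniform FOV and the truncation test are hypothetical:

uniform float FOV; // assumed: full opening angle of the spherical projection

vec3 distortion(in vec2 textureCoords)
{
    // radial distance and azimuth of the fragment in the viewport plane
    float r     = length(textureCoords);
    float theta = atan(textureCoords.t, textureCoords.s);
    // viewport truncation: fragments outside the unit circle are not part of the projection
    if (r > 1.0) discard;
    // map the radius to the polar angle
    float phi = r * radians(FOV / 2.0);
    // spherical-to-Cartesian conversion; -z is the viewing direction
    return vec3(sin(phi) * cos(theta),
                sin(phi) * sin(theta),
               -cos(phi));
}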
Optimization: Normal Maps
For static projection functions
Store normalized cube map sampling vectors
Using Render-To-Texture (RTT)
Floating point texture precision
[Figure: normal maps for the OMNIMAX and horizontal cylindrical projections]
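A minimal sketch of this optimization: the distortion() function is replaced by a single fetch from the precomputed floating-point normal map. The sampler name and the [-1, 1] to [0, 1] coordinate remapping are assumptions:

uniform sampler2D samplerNormalMap; // floating-point texture storing precomputed sampling vectors

vec3 distortion(in vec2 textureCoords)
{
    // remap viewport coordinates from [-1, 1] to [0, 1] texture space
    vec2 st = textureCoords * 0.5 + 0.5;
    // the stored vector replaces the per-fragment evaluation of the projection function
    return normalize(texture2D(samplerNormalMap, st).xyz);
}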
Generalization Concept Overview
[Figure: pipeline from tile screen to final rendering (stages A-D)]
A. Tile screen: projection tiles T_ij with tile features E_ij (e.g., tile T_11 with features E_11, E_12, E_21, E_22)
B. Feature map A_st: rendered tile by tile, ∀ i,j: f_render(T_ij)
C. Normal map: N_st = f_normal(A_st)
D. Final rendering: C_st = f_sample(N_st)
Projection Tile Screen - Example
[Figure: example projection tile screen and the resulting final rendering]
Projection Tile Screens
Projection tile screen (PTS) = set of projection tiles
Projection tile = set of tile features
Tile feature: illustrated in the figure below
[Figure: tile screen with tile T_11 and tile features E_11, E_12, E_21, E_22]
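In set notation, a restatement of the two definitions above (symbols assumed):

\mathit{PTS} = \{ T_{ij} \}, \qquad T_{ij} = \{ E_1, \ldots, E_n \}

where each E_k is a tile feature.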
Generating Feature Maps
Feature-map rendering:
1. Set up render-to-texture
2. Set up an orthogonal projection
3. Encode feature properties (e.g., angles) as color values
4. Render tiles successively
Cube map sampling vectors:
Calculated using a fragment shader
Vector derived from the encoded feature values (a hedged sketch follows below)
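A hedged sketch of this derivation, assuming the feature map encodes two angles in its red and green channels; the sampler name, the encoding, and the decoding ranges are all assumptions rather than the paper's exact scheme:

uniform sampler2D samplerFeatureMap; // feature map with angles encoded as colors

vec3 distortion(in vec2 textureCoords)
{
    // remap viewport coordinates from [-1, 1] to [0, 1] texture space
    vec2 st = textureCoords * 0.5 + 0.5;
    // decode the two angles from the color channels (assumed encoding)
    vec2 encoded = texture2D(samplerFeatureMap, st).rg;
    float theta = (encoded.x - 0.5) * radians(360.0);
    float phi   = (encoded.y - 0.5) * radians(180.0);
    // derive the cube map sampling vector from the decoded angles
    return vec3(cos(phi) * sin(theta),
                sin(phi),
                cos(phi) * cos(theta));
}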
Projection Tiles Extensions
Limitations:
PTS is hard to model and control
Triangulation influences interpolation
Does not cover all possible tile shapes
No hard transitions between tiles
Improvements:
Regular-grid triangulated planar mesh ("triangle soup")
Enables hard transitions between tiles
Enables the use of modeling tools
Dynamic Cube Maps
Single-pass: requires DX10-compatible hardware
Evaluates the scene only once
Geometry shader replicates each primitive
Projects primitives onto the cube map faces
Rasterization into six texture layers in parallel (sketched after this list)
Multi-pass: most compatible approach
Evaluates the scene six times
RTT to each cube face
Runtime optimizations:
Omit the whole cube map update
Omit individual cube map face updates
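A minimal sketch of the single-pass replication step, assuming a layered cube map render target and the GL_EXT_geometry_shader4 extension; the per-face matrix uniform is hypothetical:

#extension GL_EXT_geometry_shader4 : enable

uniform mat4 faceViewProjection[6]; // assumed: one view-projection matrix per cube face

void main()
{
    // replicate the incoming triangle once per cube map face
    for (int face = 0; face < 6; ++face)
    {
        gl_Layer = face; // route this copy to the matching texture layer
        for (int i = 0; i < 3; ++i)
        {
            gl_Position = faceViewProjection[face] * gl_PositionIn[i];
            EmitVertex();
        }
        EndPrimitive();
    }
}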
Main Shader
uniform samplerCube samplerCubeMap;
uniform vec3 lookTo;
uniform vec3 lookUp;
uniform vec3 offset;

// function prototypes
vec3 distortion(vec2 texCoords);
vec3 correctLookTo(vec3 samplingVector);

void main(void)
{
    // apply the distortion (projection) function
    vec3 distortionVector = distortion(gl_TexCoord[0].st);
    // apply the camera orientation (lookTo transformation)
    distortionVector = correctLookTo(distortionVector) - offset;
    // final color via cube map lookup
    gl_FragColor = textureCube(samplerCubeMap, distortionVector);
}
Shader main entry point
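The slides only declare correctLookTo() as a prototype; a plausible implementation, assuming it builds an orthonormal basis from the lookTo/lookUp uniforms and rotates the sampling vector into it (basis construction and sign conventions are assumptions):

// hypothetical implementation of the correctLookTo() prototype
vec3 correctLookTo(vec3 samplingVector)
{
    vec3 forward = normalize(lookTo);
    vec3 side    = normalize(cross(forward, normalize(lookUp)));
    vec3 up      = cross(side, forward);
    // change of basis: x -> side, y -> up, z -> forward
    return samplingVector.x * side
         + samplingVector.y * up
         + samplingVector.z * forward;
}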
Cylindrical Projection Shader
uniform int panoramaMode;
uniform float FOVhorz;
uniform float FOVvert;

const int HORIZONTAL = 1;

vec3 distortion(in vec2 textureCoords)
{
    vec3 samplingVector = vec3(0.0, 0.0, 0.0);
    if (panoramaMode == HORIZONTAL)
    {
        // horizontal panorama: s maps to the angle around the cylinder axis
        float angle = radians(FOVhorz / 2.0);
        float x = cos(textureCoords.s * angle);
        float y = textureCoords.t * tan(radians(FOVvert / 2.0));
        float z = sin(textureCoords.s * angle);
        samplingVector = vec3(x, y, z);
    }
    else
    {
        // vertical panorama: t maps to the angle around the cylinder axis
        float angle = radians(FOVvert / 2.0);
        float x = textureCoords.s * tan(radians(FOVhorz / 2.0));
        float y = cos(textureCoords.t * angle);
        float z = sin(textureCoords.t * angle);
        samplingVector = vec3(x, y, z);
    }
    return samplingVector;
}
Projection function
Non-Planar Projection Surfaces
Horizontal FOV: 360°, Vertical FOV: 90°
[Figure: non-planar projection surface, its normal map, and the final rendering]
Using Custom Normal Maps
Horizontal FOV: 90°, Vertical FOV: 60°
[Figure: normal map in tangent space, normal map in unit space, and the final rendering]
Combinations of Projections
Lens Effects
Horizontal FOV: 180°, Vertical FOV: 135°
[Figure: normal map and final rendering of the lens effect]
Compound Eye
Horizontal FOV: 120°, Vertical FOV: 60°
[Figure: compound-eye layout with projection tiles T_i1, T_i2, T_i3, T_i4]
Limitations
Rendering quality depends on:
Cube map resolution
Tessellation of tile screen
Undersampling / Oversampling
Dynamic cube map updates can be costly
Interpolation artifacts caused by contrary tessellation
Haik Lorenz, Jürgen Döllner: Dynamic Mesh Refinement on GPU using Geometry Shaders. WSCG 2008 (to appear).
Conclusions
Takeaways:
General concept for SCOP distortions:
Non-planar projections
2D lenses with arbitrary shapes
Image warping and distortions
Applicable in real-time for large scenes
Controllable via projection tile screens
Important: resolution of cube map and tessellation of PTS
Future work:
Improve rendering quality
Develop graphical user interface for PTS
Shift PTS tessellation to GPU
Q & A
Thank You.
Contact:
Matthias Trapp
matthias.trapp@hpi.uni-potsdam.de
Computer Graphics Systems Group
Prof. Dr. Jürgen Döllner
www.hpi.uni-potsdam.de/3d
Research group 3D-Geoinformation
www.3dgi.de