Conference Paper

3D screen-space widgets for non-linear projection.

DOI: 10.1145/1101389.1101433
Conference: Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia 2005, Dunedin, New Zealand, November 29 - December 2, 2005
Source: DBLP

ABSTRACT Linear perspective is a good approximation to the format in which the human visual system conveys 3D scene information to the brain. Artists expressing 3D scenes, however, create nonlinear projections that balance their linear perspective view of a scene with elements of aesthetic style, layout and relative importance of scene objects. Manipulating the many parameters of a linear perspective camera to achieve a desired view is not easy. Controlling and combining multiple such cameras to specify a nonlinear projection is an even more cumbersome task. This paper presents a direct interface, where an artist manipulates in 2D the desired projection of a few features of the 3D scene. The features represent a rich set of constraints which define the overall projection of the 3D scene. Desirable properties of local linear perspective and global scene coherence drive a heuristic algorithm that attempts to interactively satisfy the given constraints as a weight-averaged projection of a minimal set of linear perspective cameras. This paper shows that 2D feature constraints are a direct and effective approach to control both the 2D layout of scene objects and the conceptually complex, high dimensional parameter space of nonlinear scene projection.
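
The core operation the abstract describes, expressing a nonlinear projection as a weight-averaged combination of a small set of linear perspective cameras, can be illustrated with a minimal sketch. The sketch below assumes each camera is supplied as a 4x4 view-projection matrix and that per-point blending weights are already available (in the paper they are derived from the 2D feature constraints by a heuristic algorithm); the function names and sample matrices are placeholders, not the authors' implementation.

```python
import numpy as np

def project(point, view_proj):
    """Project a 3D point through a 4x4 view-projection matrix (homogeneous divide)."""
    p = view_proj @ np.append(point, 1.0)
    return p[:3] / p[3]

def blended_projection(point, view_projs, weights):
    """Weight-averaged projection of a point through several linear perspective cameras."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalise so weights sum to 1
    projections = np.array([project(point, m) for m in view_projs])
    return (w[:, None] * projections).sum(axis=0)     # per-axis weighted average

# Hypothetical example with two cameras: an identity "camera" and a simple
# perspective matrix looking down -z.  Weights would normally vary per point.
if __name__ == "__main__":
    identity = np.eye(4)
    persp = np.array([[1.0, 0.0,  0.0,  0.0],
                      [0.0, 1.0,  0.0,  0.0],
                      [0.0, 0.0, -1.0, -0.2],
                      [0.0, 0.0, -1.0,  0.0]])
    p = np.array([0.5, 0.25, -2.0])
    print(blended_projection(p, [identity, persp], weights=[0.3, 0.7]))
```

Varying the weights smoothly across the scene is what keeps local regions close to a single linear perspective while the projection as a whole remains globally coherent.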

  • ABSTRACT: Viewing data sampled on complicated geometry, such as a human pelvis or helix, is hard. A single camera view is not sufficient to view different parts of such a complicated dataset. Multiple views are needed in order to completely examine the structure. In this paper, we present a general toolkit consisting of a set of versatile widgets, namely the unwrap, the clipping, the fisheye, and the panorama widgets, each of which encapsulates a variety of complicated camera placement issues and then combines these several camera views into a single view in real time. These non-linear views give a more complete visualization of the structure without modifying the underlying geometry of the dataset. Multiple widgets can be combined to facilitate better understanding of the underlying structure. Index Terms: Non-linear perspective, user interfaces, visualization, camera control, rendering.
  • ABSTRACT: We present a new stereoscopic compositing technique that combines volumetric output from several stereo camera rigs. Unlike previous multi-rigging techniques, our approach does not require objects rendered with different stereo parameters to be clearly separable to prevent visual discontinuities. We accomplish this by casting not straight rays (aligned with a single viewing direction) but curved rays, which results in a smooth blend between the viewing parameters of the stereo rigs in the user-defined transition area. Our technique offers two alternative methods for defining the shapes of the cast rays. The first method avoids depth distortion in the transition area by guaranteeing monotonic behavior of the stereoscopic disparity function, while the second provides the user with artistic control over the influence of each rig in the transition area. To ensure practical usability, we efficiently solve key performance issues in the ray casting (e.g. locating cell-ray intersections and traversing rays within a cell) with a highly parallelizable quadtree-based spatial data structure, constructed in the parameterized curvilinear space, to match the shape definition of the cast rays.
    Proceedings of the Symposium on Digital Production; 07/2013
  • ABSTRACT: 3D transformation widgets allow constrained manipulations of 3D objects and are commonly used in many 3D applications for fine-grained manipulation. Since traditional transformation widgets have mainly been designed for mouse-based systems, they are not user-friendly for multitouch screens. There is little research on how to use the extra input bandwidth of multitouch screens to ease constrained transformation of 3D objects. This paper presents a small set of multitouch gestures which offers seamless control of manipulation constraints (i.e., axis or plane) and modes (i.e., translation, rotation or scaling). Our technique does not require any complex manipulation widgets, only candidate axes, which are for visualization rather than direct manipulation. Such a design not only minimizes visual clutter but also tolerates imprecise touch-based inputs. To further expand our axis-based interaction vocabulary, we introduce intuitive touch gestures for relative manipulations, including snapping and borrowing axes of another object. A preliminary evaluation shows that our technique is more effective than a direct adaptation of standard transformation widgets to the tactile paradigm. © 2012 Wiley Periodicals, Inc.
    Computer Graphics Forum 05/2012; 31(2pt3):651-660.
