Figure 2. Relative depth cue strengths. The vertical axis indicates that when smaller changes are needed, the given cue is stronger.

Source publication
Conference Paper
Full-text available
We describe a new interactive system for 3D design review, built to exploit the visual perception cue of motion parallax, in order to enhance shape perception and aesthetic evaluation. Traditional CAD applications typically use "bookmarked" static views for design evaluation. In our system, we replace static views with moving "shots" interspersed w...

Context in source publication

Context 1
... While some applied researchers focus on creating taxonomies of viewpoint motion control techniques [Bowman et al. 1999; Bowman et al. 1997], others have studied the low-level neuroscience of 3D motion cues [Peuskens et al. 2004]. In particular, Cutting and Vishton [1995] studied the relative strengths of nine depth cues including occlusion, relative size, relative density, height in the visual field, aerial perspective, motion parallax (i.e., motion perspective), binocular disparities, convergence, and accommodation (see Figure 2). With the exception of occlusion, motion parallax ranks highly compared to other types of depth cues within distances relevant to design reviews. ...

Similar publications

Conference Paper
Full-text available
Labels effectively convey co-referential relations between textual and visual elements and are a powerful tool to support learning tasks. Therefore, almost all illustrations in scientific or technical documents employ a large number of labels. This paper introduces a novel approach to integrate internal and external labels into projections of compl...

Citations

... Several researchers utilize cinematographic concepts and rules to model visually pleasing camera movements. Burtnyk et al.'s StyleCam [7] and ShowMotion [8] systems provide examples of such techniques. Lino et al. [38] further expand the use of cinematography to scenes featuring actors and interactions between them. ...
Article
Full-text available
We present a method for producing documentary-style content using real-time scientific visualization. We introduce molecumentaries, i.e., molecular documentaries featuring structural models from molecular biology, created through adaptable methods instead of the rigid traditional production pipeline. Our work is motivated by the rapid evolution of scientific visualization and its potential in science dissemination. Without some form of explanation or guidance, however, novices and lay-persons often find it difficult to gain insights from the visualization itself. We integrate such knowledge using the verbal channel and provide it alongside an engaging visual presentation. To realize the synthesis of a molecumentary, we provide technical solutions along two major production steps: (1) preparing a story structure and (2) turning the story into a concrete narrative. In the first step, we compile information about the model from heterogeneous sources into a story graph. We combine local knowledge with external sources to complete the story graph and enrich the final result. In the second step, we synthesize a narrative, i.e., story elements presented in sequence, using the story graph. We then traverse the story graph and generate a virtual tour, using automated camera and visualization transitions. We turn texts written by domain experts into verbal representations using text-to-speech functionality and provide them as a commentary. Using the described framework, we synthesize fly-throughs with descriptions: automatic ones that mimic a manually authored documentary or semi-automatic ones which guide the documentary narrative solely through curated textual input.
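The abstract above describes compiling a story graph and traversing it to obtain a linear tour. The sketch below illustrates that second step in a minimal, hypothetical form; the node fields, traversal order, and all names are assumptions for illustration, not the authors' molecumentary implementation.

```python
# Hypothetical sketch: turning a story graph into a linear tour of
# (camera focus, narration) steps. Node fields and traversal order are
# assumptions for illustration, not the molecumentary system's actual code.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class StoryNode:
    name: str                 # e.g. a molecular structure or substructure
    narration: str            # text to be spoken over this shot
    children: List["StoryNode"] = field(default_factory=list)

def synthesize_tour(root: StoryNode) -> List[Tuple[str, str]]:
    """Depth-first traversal of the story graph, emitting one tour step per node."""
    tour, stack = [], [root]
    while stack:
        node = stack.pop()
        tour.append((node.name, node.narration))   # camera target + commentary
        stack.extend(reversed(node.children))      # preserve left-to-right order
    return tour

# Usage:
# root = StoryNode("virus capsid", "The capsid encloses the genome.",
#                  [StoryNode("pentamer", "Each pentamer consists of five proteins.")])
# for focus, text in synthesize_tour(root):
#     print(focus, "->", text)
```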
... Their system was designed to create highly stylized animations of 3D scenes, such as commercials or feature films. In their follow-up work, Burtnyk et al. [7] introduced a method for presenting 3D models by using a dynamic camera. Their system ShowMotion used various cinematic transitions to maximize the visual appeal of the presented model. ...
Preprint
Full-text available
We present a method for producing documentary-style content using real-time scientific visualization. We produce molecumentaries, i.e., molecular documentaries featuring structural models from molecular biology. We employ scalable methods instead of the rigid traditional production pipeline. Our method is motivated by the rapid evolution of interactive scientific visualization, which shows great potential in science dissemination. Without some form of explanation or guidance, however, novices and lay-persons often find it difficult to gain insights from the visualization itself. We integrate such knowledge using the verbal channel and provide it alongside an engaging visual presentation. To realize the synthesis of a molecumentary, we provide technical solutions along two major production steps: 1) preparing a story structure and 2) turning the story into a concrete narrative. In the first step, information about the model from heterogeneous sources is compiled into a story graph. Local knowledge is combined with remote sources to complete the story graph and enrich the final result. In the second step, a narrative, i.e., story elements presented in sequence, is synthesized using the story graph. We present a method for traversing the story graph and generating a virtual tour, using automated camera and visualization transitions. Texts written by domain experts are turned into verbal representations using text-to-speech functionality and provided as a commentary. Using the described framework, we synthesize automatic fly-throughs with descriptions that mimic a manually authored documentary. Furthermore, we demonstrate a second scenario: guiding the documentary narrative by textual input.
... Besides free flight control, Balakrishnan et al. [9] and Burtnyk et al. [10] let the operator interactively steer a camera along a purely virtual, predefined path, while also aiming to combine collision-free motion with simpler, more intuitive interactions. Moreover, Mirhosseini et al. [3] also suggest navigation along a constrained path. ...
Conference Paper
Full-text available
For a variety of applications, remote navigation of an unmanned aerial vehicle (UAV) along a flight trajectory is an essential task. For instance, during search and rescue missions in outdoor scenes, an important goal is to ensure safe navigation. Assessed by the remote operator, this could mean avoiding collisions with obstacles, but also avoiding hazardous flight areas. State-of-the-art approaches enable navigation along trajectories but do not allow for indirect manipulation during motion. In addition, they suggest using egocentric views, which could limit understanding of the remote scene. With this work we introduce a novel indirect manipulation method, based on gravitational law, to recover safe navigation in the presence of hazardous flight areas. The indirect character of our method supports manipulation at far distances where common direct manipulation methods typically fail. We combine it with an immersive exocentric view to improve understanding of the scene. We designed three flavors of our method and compared them during a user study in a simulated scene. While this method presents a first step towards a more extensive navigation interface, as future work we plan experiments in dynamic real-world scenes.
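The abstract mentions an indirect manipulation method "based on gravitational law" that steers the path away from hazardous areas. The sketch below shows one generic, hedged interpretation of such a force-based adjustment: hazard centers repel nearby waypoints with an inverse-square falloff. The formula, constants, and function names are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (not the authors' method): pushing path waypoints away from
# hazardous areas with an inverse-square repulsion, loosely analogous to a
# gravitational force law. All constants and names are assumptions.
import numpy as np

def repel_waypoint(waypoint, hazards, strength=5.0, min_dist=1e-3):
    """Return the 3D waypoint displaced away from each hazard center."""
    p = np.asarray(waypoint, dtype=float)
    offset = np.zeros(3)
    for h in hazards:
        d = p - np.asarray(h, dtype=float)
        dist = max(np.linalg.norm(d), min_dist)
        offset += strength * d / dist**3   # magnitude ~ 1/dist^2, directed away from the hazard
    return p + offset

# Usage: new_wp = repel_waypoint((0.0, 5.0, 0.0), hazards=[(1.0, 5.0, 0.0), (-2.0, 4.0, 1.0)])
```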
... Some 3D browsers provide a viewpoint menu offering a choice of viewpoints [27], [3]. Authors of 3D scenes can place several viewpoints (typically one for each POI) so that users can navigate from viewpoint to viewpoint simply by selecting a menu item. ...
Conference Paper
A 3D bookmark in a networked virtual environment (NVE) provides a navigation aid, allowing the user to move quickly from their current viewpoint to a bookmarked viewpoint by simply clicking on the bookmark. In this paper, we first validate the positive impact that 3D bookmarks have in easing navigation in a 3D scene. Then, we show that, in the context of an NVE that streams content on demand from server to client, navigating with bookmarks leads to lower rendering quality at the bookmarked viewpoint, due to lower locality of data. We then investigate how prefetching the 3D data at the bookmarks and precomputing the visible faces at the bookmarks help to improve the rendering quality.
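The paper above proposes prefetching the data visible from each bookmark so that a jump renders at full quality. A minimal, hypothetical sketch of that caching idea is shown below; the class, the fetch callback, and the threading scheme are assumptions, not the paper's system.

```python
# Illustrative sketch (not the paper's system): prefetching the faces visible
# from each bookmarked viewpoint in the background, so that jumping to a
# bookmark can render immediately from cache. Names and APIs are hypothetical.
import threading

class BookmarkPrefetcher:
    def __init__(self, fetch_faces):
        self.fetch_faces = fetch_faces   # callable: bookmark -> visible-face data
        self.cache = {}                  # bookmarks must be hashable keys

    def prefetch(self, bookmarks):
        """Fetch visible-face data for each bookmark on a background thread."""
        def worker():
            for bm in bookmarks:
                if bm not in self.cache:
                    self.cache[bm] = self.fetch_faces(bm)
        threading.Thread(target=worker, daemon=True).start()

    def jump_to(self, bookmark):
        """Return cached data if available, otherwise fetch synchronously."""
        if bookmark not in self.cache:
            self.cache[bookmark] = self.fetch_faces(bookmark)
        return self.cache[bookmark]
```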
... We now discuss each one of them. As we investigate orbiting for complex scenes, such as multi-scale environments, and dynamic geometry, such as CAD editing or animation, we do not consider pre-computation [3,9,17]. ...
Conference Paper
Full-text available
In this paper we describe a new orbiting algorithm, called SHOCam, which enables simple, safe and visually attractive control of a camera moving around 3D objects. Compared with existing methods, SHOCam provides a more consistent mapping between the user's interaction and the path of the camera by substantially reducing variability in both camera motion and look direction. Also, we present a new orbiting method that prevents the camera from penetrating object(s), making the visual feedback -- and with it the user experience -- more pleasing and also less error prone. Finally, we present new solutions for orbiting around multiple objects and multi-scale environments.
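SHOCam's abstract mentions preventing the orbiting camera from penetrating the object. The sketch below shows one simple, generic way to achieve a comparable guarantee by clamping the orbit distance to the object's bounding sphere plus a margin; it is an illustrative assumption, not SHOCam's actual algorithm.

```python
# Minimal sketch of keeping an orbiting camera outside an object by clamping its
# distance to the object's bounding sphere. Generic illustration, not SHOCam.
import math

def orbit_camera(center, bounding_radius, yaw, pitch, distance, margin=0.5):
    """Return a camera position orbiting `center`, never closer than the bounding sphere."""
    safe_distance = max(distance, bounding_radius + margin)
    x = center[0] + safe_distance * math.cos(pitch) * math.sin(yaw)
    y = center[1] + safe_distance * math.sin(pitch)
    z = center[2] + safe_distance * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Usage: eye = orbit_camera(center=(0, 0, 0), bounding_radius=2.0,
#                           yaw=0.8, pitch=0.3, distance=1.5)
```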
... Another possible solution is to select a set of possible points of view, and constrain the navigation to these points, which can be connected by means of pre-computed paths (Hanson and Wernert, 1997). Choosing the points of view that describe a complex environment is itself a complex task (Scott et al., 2003), and authoring is usually necessary (Burtnyk et al., 2002) (Burtnyk et al., 2006). Following this approach, image-based techniques have been used to remove limitations on scene complexity and rendering quality for interactive applications. ...
Article
Full-text available
The remote visualization and navigation of 3D data directly inside the web browser is becoming a viable option, due to the recent efforts in standardizing the components for 3D rendering on the web platform. Nevertheless, handling complex models may be a challenge, especially when a more generic solution is needed to handle different cases. In particular, archeological and architectural models are usually hard to handle, since their navigation can be managed in several ways, and a completely free navigation may be misleading and not realistic. In this paper we present a solution for the remote navigation of these datasets in a WebGL component. The navigation has two possible modes: the "bird's eye" mode, where the user is able to see the model from above, and the "first person" mode, where the user can move inside the structure. The two modalities are linked by a point of interest, which helps the user to control the navigation in an intuitive fashion. Since the terrain may not be flat, and the architecture may be complex, it is necessary to handle these issues, possibly without implementing complex mesh-based collision mechanisms. Hence, a complete navigation is obtained by storing the height and collision information in an image, which provides a very simple source of data. Moreover, the same image-based approach can be used to store additional information that could enhance the navigation experience. The method has been tested in two complex test cases, showing that a simple yet powerful interaction can be obtained with limited pre-processing of data.
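The abstract describes storing height and collision information in an image instead of using mesh-based collision tests. The sketch below shows a minimal, hypothetical version of that lookup, assuming an RGBA image whose red channel encodes terrain height and whose alpha channel marks blocked areas; the encoding, class, and parameters are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): constraining a first-person
# camera using height and collision data stored in an image, assuming an RGBA
# texture where R encodes terrain height and A marks walkable (255) vs. blocked (0).
from PIL import Image

class ImageBasedNavigation:
    def __init__(self, path, world_size, max_height, eye_height=1.7):
        self.img = Image.open(path).convert("RGBA")
        self.world_size = world_size     # world-space extent covered by the image
        self.max_height = max_height     # world height encoded by a pixel value of 255
        self.eye_height = eye_height

    def _sample(self, x, z):
        # Map world (x, z) to pixel coordinates, clamping at the borders.
        u = min(max(x / self.world_size, 0.0), 1.0) * (self.img.width - 1)
        v = min(max(z / self.world_size, 0.0), 1.0) * (self.img.height - 1)
        return self.img.getpixel((int(u), int(v)))

    def try_move(self, pos, new_xz):
        """Return the new camera position, rejecting moves into blocked pixels."""
        r, g, b, a = self._sample(*new_xz)
        if a < 128:                      # low alpha marks a collision area
            return pos                   # stay where we are
        height = (r / 255.0) * self.max_height
        return (new_xz[0], height + self.eye_height, new_xz[1])

# Usage: nav = ImageBasedNavigation("height_collision.png", world_size=100.0, max_height=20.0)
#        pos = nav.try_move(pos, (pos[0] + dx, pos[2] + dz))
```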
... Although sessions can be replayed in chronological order (video streams plus modelling events), it makes more sense to offer different interface techniques that cater for clustered data management. The ShowMotion technique might be a useful extension to interact with design review data (Burtnyk et al., 2006). This technique supports navigating through "temporal thumbnails": moving 3D cuts based on cinematic visual transitions. ...
Thesis
Full-text available
Physical prototypes and scale models play an important role in engineering design processes, especially in the field of industrial design. Such models are typically used to explore and discuss design concepts in various stages, from initial idea generation to manufacturing. Over the last decade, augmented reality technologies have been developed to assist prototyping in design. We can employ head-mounted displays, projectors, or handheld video-mixing solutions, as well as 3D printing to “enrich” physical models with features, materials, and behaviour. The outcome of this project is a design support methodology, entitled interactive augmented prototyping methodology (IAP-M), which utilizes augmented reality as an instrument to support design reviews. The instrument, a projection and recording unit, was patented and licensed to a techno starter for further valorisation. Although the instrumentation is central in this study, it requires procedures and methods to be applied in design processes. The insights that lead to IAP-M originated from empirical studies and design inclusive research, and were implemented in a collection of demonstrators.
... It affects inexperienced as well as experienced users [18,9] and is a particular problem in multiscale environments [31]. Moreover, navigation techniques can permit generating views of 3DGeoVEs with ineffective view properties [14], which can arise in the form of unsteady and discontinuous camera motion, awkward viewing angles that present the model in poor light or miss important features, unwanted views, "clunky or visually jarring" views (e.g., due to frequent mouse clutching), distracting transitions, and a low or inconsistent level of visual and interactive quality [38,25,10,9,11,14]. Furthermore, we can observe suboptimal efficiency of navigation techniques regarding human (e. ...
... Completing a single navigation task often requires a collection of navigation techniques, resulting in frequent control switches, in particular when using standard navigation techniques [18,24]. Each control switch divides the action into separate chunks, leading to a higher separation into subtasks and a higher cognitive load, higher time consumption, inefficient movement trajectories, and ineffective view properties [38,10,11,24]. Many navigation techniques map input device DOFs to camera DOFs in real time, requiring real-time 3D rendering, although this is not crucial for many applications using V3DCMs [17,24]. ...
... It is a general principle that can be found in a multitude of approaches and is applied successfully to improve navigation [30,9,1]. Previous approaches have limited the camera's position to viewpoints [47], 1D trajectories [38,30,17,11], 1D trajectories connected through graphs [46,1,41], 2D surfaces [26,10,32], and 3D volumes [5]. Moreover, approaches imposed constraints on the camera's velocity [38,51] and orientation [41]. ...
Article
Full-text available
Virtual 3D city models serve as integration platforms for complex geospatial and georeferenced information and as a medium for effective communication of spatial information. In order to explore these information spaces, navigation techniques for controlling the virtual camera are required to facilitate wayfinding and movement. However, navigation is not a trivial task and many available navigation techniques do not support users effectively and efficiently with their respective skills and tasks. In this article, we present an assisting, constrained navigation technique for multiscale virtual 3D city models that is based on three basic principles: users point to navigate, users are led by suggestions, and the exploitation of semantic, multiscale, hierarchical structurings of city models. The technique particularly supports users with low navigation and virtual camera control skills but is also valuable for experienced users. It supports exploration, search, inspection, and presentation tasks, is easy to learn and use, supports orientation, is efficient, and yields effective view properties. In particular, the technique is suitable for interactive kiosks and mobile devices with a touch display and low computing resources and for use in mobile situations where users only have restricted resources for operating the application. We demonstrate the validity of the proposed navigation technique by presenting an implementation and evaluation results. The implementation is based on service-oriented architectures, standards, and image-based representations and allows exploring massive virtual 3D city models particularly on mobile devices with limited computing resources. Results of a user study comparing the proposed navigation technique with standard techniques suggest that the proposed technique provides the targeted properties, and that it is more advantageous to novice than to expert users.
... Moreover, their camera path lacks smoothness, which leads to a shaky effect in the resulting videos. Both Burtnyk et al. [2006] and Andújar et al. [2004] used smooth paths, modeled by Bézier and Hermite polynomial curves, respectively. Similarly, we use Catmull-Rom splines to interpolate our keyviews and obtain a smooth path. ...
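The excerpt above names Catmull-Rom splines as the interpolation scheme for keyviews. The sketch below is the textbook uniform Catmull-Rom formulation applied to keyview positions; it is a generic illustration (orientation interpolation omitted), not the cited authors' code.

```python
# Minimal sketch of uniform Catmull-Rom interpolation between camera keyviews.
# Textbook formulation, not the cited authors' code; keyviews are 3D positions.
import numpy as np

def catmull_rom_segment(p0, p1, p2, p3, t):
    """Evaluate the Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def sample_camera_path(keyviews, samples_per_segment=30):
    """Return a smooth list of camera positions passing through all keyviews."""
    pts = [np.asarray(p, dtype=float) for p in keyviews]
    # Duplicate the endpoints so the path starts and ends exactly at the first/last keyview.
    pts = [pts[0]] + pts + [pts[-1]]
    path = []
    for i in range(1, len(pts) - 2):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            path.append(catmull_rom_segment(pts[i - 1], pts[i], pts[i + 1], pts[i + 2], t))
    path.append(pts[-2])
    return path

# Usage: path = sample_camera_path([(0, 2, 5), (3, 2, 4), (5, 3, 0), (4, 2, -4)])
```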
Article
Full-text available
Online galleries of 3D models typically provide two ways to preview a model before the model is downloaded and viewed by the user: (i) by showing a set of thumbnail images of the 3D model taken from representative views (or keyviews); (ii) by showing a video of the 3D model as viewed from a moving virtual camera along a path determined by the content provider. We propose a third approach called preview streaming for mesh-based 3D objects: by streaming and showing parts of the mesh surfaces visible along the virtual camera path. This article focuses on the preview streaming architecture and framework and presents our investigation into how such a system would best handle network congestion. We present three basic methods: (a) stop-and-wait, where the camera pauses until sufficient data is buffered; (b) reduce-speed, where the camera slows down in accordance with the reduced network bandwidth; and (c) reduce-quality, where the camera continues to move at the same speed but fewer vertices are sent and displayed, leading to lower mesh quality. We further propose two advanced methods: (d) keyview-aware, which trades off mesh quality and camera speed appropriately depending on how close the current view is to the keyviews, and (e) adaptive-zoom, which improves visual quality by moving the virtual camera away from the original path. A user study reveals that our keyview-aware method is preferred over the basic methods. Moreover, the adaptive-zoom scheme compares favorably to the keyview-aware method, showing that path adaptation is a viable approach to handling bandwidth variation.
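The keyview-aware method described above trades camera speed against mesh quality depending on how close the current view is to a keyview. The sketch below illustrates that trade-off in a minimal, hypothetical policy; the thresholds, factors, and function name are assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's algorithm) of a keyview-aware trade-off:
# near a keyview we hold quality and slow the camera; far from keyviews we keep
# speed and accept lower mesh quality. All constants are made up for illustration.
def keyview_aware_policy(distance_to_keyview, available_bandwidth, required_bandwidth,
                         near_threshold=2.0):
    """Return (speed_factor, quality_factor), each in (0, 1]."""
    if available_bandwidth >= required_bandwidth:
        return 1.0, 1.0                      # no congestion: full speed, full quality
    deficit = available_bandwidth / required_bandwidth
    if distance_to_keyview < near_threshold:
        # Close to a keyview: preserve quality, absorb the deficit by slowing down.
        return max(deficit, 0.1), 1.0
    # Far from keyviews: keep the camera moving, reduce the number of vertices sent.
    return 1.0, max(deficit, 0.1)

# Usage: speed, quality = keyview_aware_policy(distance_to_keyview=0.8,
#                                              available_bandwidth=2.0,
#                                              required_bandwidth=5.0)
```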
... While very useful, we believe that static viewpoints often do not show the "3D-ness" of virtual objects; as Andy van Dam put it, "if it ain't moving, it ain't 3D". Therefore, inspired by [Tod04, BKFK06], we performed an evaluation of static vs. animated views in 3D Web user interfaces [Jan12]. We found that all users clearly preferred navigating in 3D using a menu with animated viewpoints over static ones (not a single user disabled animated views during the study). ...
... StyleCam uses camera surfaces that spatially constrain the viewing camera, animation clips that allow for visually appealing transitions between different camera surfaces, and a simple interaction technique that permits the user to seamlessly and continuously move between spatial control of the camera and temporal control of the animated transitions (see Figure 6). Burtnyk et al. also describe ShowMotion [BKFK06], an interactive system for 3D design review of CAD models. Their system replaces traditional "bookmarked" static views with moving "shots" interspersed with cinematic visual transitions. ...
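Several of the excerpts above replace instant jumps between bookmarked views with animated camera transitions. The sketch below shows one standard way to implement such a transition: linear interpolation of position plus quaternion slerp for orientation, with an ease-in/ease-out curve. It is a generic technique, not the code of StyleCam or ShowMotion.

```python
# Sketch of an animated transition between two bookmarked viewpoints: linear
# interpolation of position plus quaternion slerp for orientation, eased with a
# smoothstep curve. Generic technique, not the cited systems' code.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the short way around
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def animate_transition(view_a, view_b, t):
    """view_a/view_b are (position, quaternion) pairs; t in [0, 1]."""
    t = t * t * (3 - 2 * t)       # smoothstep ease-in/ease-out
    pos = (1 - t) * np.asarray(view_a[0], float) + t * np.asarray(view_b[0], float)
    rot = slerp(view_a[1], view_b[1], t)
    return pos, rot

# Usage: pos, rot = animate_transition(((0, 2, 5), (0, 0, 0, 1)),
#                                      ((4, 2, -3), (0, 0.707, 0, 0.707)), t=0.5)
```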