Article

Improving VIP viewer gaze estimation and engagement using adaptive dynamic anamorphosis


Abstract

Anamorphosis for 2D displays can provide viewer-centric perspective viewing, enabling 3D appearance, eye contact and engagement, by adapting dynamically in real time to a single moving viewer's viewpoint, but at the cost of distorted viewing for other viewers. We present a method for constructing non-linear projections as a combination of anamorphic rendering for selected objects and normal perspective rendering for the rest of the scene. Our study defines a scene consisting of five characters, with one of these characters selectively rendered in anamorphic perspective. We conducted an evaluation experiment and demonstrate that the tracked viewer-centric imagery for the selected character results in improved gaze and engagement estimation. Critically, this is achieved without sacrificing the other viewers' viewing experience. In addition, we present findings on the perception of gaze direction for regularly viewed characters located off-center from the origin, where perceived gaze shifts increasingly from alignment to misalignment as the distance between viewer and character increases. Finally, we discuss different viewpoints and the spatial relationship between objects.
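The combination described above, anamorphic projection for a selected character and ordinary perspective for everything else, amounts to choosing a center of projection per object. Below is a minimal Python sketch of that idea, assuming a planar display at z = 0; the names, scene and coordinates are illustrative, not the authors' implementation.

```python
import numpy as np

def project_to_display(point, eye, display_z=0.0):
    """Intersect the ray from `eye` through `point` with the plane z = display_z."""
    direction = point - eye
    t = (display_z - eye[2]) / direction[2]          # parameter along the ray
    hit = eye + t * direction
    return hit[:2]                                    # 2D position on the display

def render_positions(scene, tracked_eye, canonical_eye, selected):
    """Anamorphic projection for the selected object, normal perspective otherwise."""
    screen = {}
    for name, vertices in scene.items():
        eye = tracked_eye if name == selected else canonical_eye
        screen[name] = [project_to_display(np.asarray(v, float), eye) for v in vertices]
    return screen

# Hypothetical usage: one character follows the tracked viewer, the rest do not.
scene = {"character_A": [(0.0, 1.7, -2.0)], "character_B": [(1.0, 1.7, -2.5)]}
print(render_positions(scene, tracked_eye=np.array([0.5, 1.6, 1.2]),
                       canonical_eye=np.array([0.0, 1.6, 1.5]), selected="character_A"))
```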


Article
Full-text available
Collaborative Mixed Reality (MR) systems are at a critical point in time as they are soon to become more commonplace. However, MR technology has only recently matured to the point where researchers can focus deeply on the nuances of supporting collaboration, rather than needing to focus on creating the enabling technology. In parallel, but largely independently, the field of Computer Supported Cooperative Work (CSCW) has focused on the fundamental concerns that underlie human communication and collaboration over the past 30-plus years. Since MR research is now on the brink of moving into the real world, we reflect on three decades of collaborative MR research and try to reconcile it with existing theory from CSCW, to help position MR researchers to pursue fruitful directions for their work. To do this, we review the history of collaborative MR systems, investigating how the common taxonomies and frameworks in CSCW and MR research can be applied to existing work on collaborative MR systems, exploring where they have fallen behind, and looking for new ways to describe current trends. Through identifying emergent trends, we suggest future directions for MR, and also find where CSCW researchers can explore new theory that more fully represents the future of working, playing and being with others.
Conference Paper
Full-text available
For telepresence to support the richness of multiparty conversations, it is important to convey motion parallax and stereoscopy without head-worn apparatus. TeleHuman2 is a "hologrammatic" telepresence system that conveys full-body 3D video of interlocutors using a human-sized cylindrical light field display. For rendering, the system uses an array of projectors mounted above the heads of participants in a ring around a retroreflective cylinder. Unique angular renditions are calculated from streaming depth video captured at the remote location. Projected images are retro-reflected into the eyes of local participants, at 1.3° intervals providing angular renditions simultaneously for left and right eyes of all onlookers, which conveys motion parallax and stereoscopy without head-worn apparatus or head tracking. Our technical evaluation of the angular accuracy of the system demonstrates that the error in judging the angle of a remote arrow object represented in TeleHuman2 is within 1 degree, and not significantly different from similar judgments of a collocated arrow object.
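One detail of such a light field display can be sketched simply: mapping a viewer's angular position around the cylinder to the nearest discrete angular rendition, assuming the 1.3° spacing quoted above. The function and constants below are illustrative only, not the TeleHuman2 code.

```python
import math

ANGULAR_STEP_DEG = 1.3   # rendition spacing quoted in the abstract

def rendition_index(eye_xy, cylinder_center=(0.0, 0.0)):
    """Pick the angular rendition nearest to a viewer's eye, given its
    horizontal position relative to the cylinder axis (illustrative only)."""
    dx = eye_xy[0] - cylinder_center[0]
    dy = eye_xy[1] - cylinder_center[1]
    angle_deg = math.degrees(math.atan2(dy, dx)) % 360.0
    return round(angle_deg / ANGULAR_STEP_DEG) % int(360.0 / ANGULAR_STEP_DEG)

# Two eyes ~6.5 cm apart at 1 m from the axis land in different renditions,
# which is what yields stereo without glasses or head tracking.
print(rendition_index((1.0, -0.0325)), rendition_index((1.0, 0.0325)))
```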
Article
Full-text available
User engagement (UE) and its measurement have been of increasing interest in human-computer interaction (HCI). The User Engagement Scale (UES) is one tool developed to measure UE, and has been used in a variety of digital domains. The original UES consisted of 31 items and purported to measure six dimensions of engagement: aesthetic appeal, focused attention, novelty, perceived usability, felt involvement, and endurability. A recent synthesis of the literature questioned the original six-factor structure. Further, the ways in which the UES has been implemented in studies suggests there may be a need for a briefer version of the questionnaire and more effective documentation to guide its use and analysis. This research investigated and verified a four-factor structure of the UES and proposed a Short Form (SF). We employed contemporary statistical tools that were unavailable during the UES' development to re-analyze the original data, consisting of 427 and 779 valid responses across two studies, and examined new data (N=344) gathered as part of a three-year digital library project. In this paper we detail our analyses, present a revised long and short form (SF) version of the UES, and offer guidance for researchers interested in adopting the UES and UES-SF in their own studies.
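For readers adopting a short-form questionnaire like the UES-SF, scoring typically reduces to averaging items within each factor and then averaging the factors. The sketch below illustrates only that arithmetic; the item-to-subscale mapping shown is a placeholder, so consult the published UES-SF documentation for the actual items and any reverse-coded ones.

```python
# Minimal sketch of scoring a four-factor short-form questionnaire.
SUBSCALES = {
    "focused_attention":   ["FA1", "FA2", "FA3"],
    "perceived_usability": ["PU1", "PU2", "PU3"],
    "aesthetic_appeal":    ["AE1", "AE2", "AE3"],
    "reward":              ["RW1", "RW2", "RW3"],
}

def score_ues_sf(responses):
    """responses: dict mapping item id -> rating (e.g. 1-5 Likert)."""
    scores = {name: sum(responses[i] for i in items) / len(items)
              for name, items in SUBSCALES.items()}
    scores["overall"] = sum(scores.values()) / len(scores)
    return scores

example = {"FA1": 4, "FA2": 3, "FA3": 4, "PU1": 5, "PU2": 4, "PU3": 5,
           "AE1": 3, "AE2": 3, "AE3": 4, "RW1": 4, "RW2": 5, "RW3": 4}
print(score_ues_sf(example))
```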
Conference Paper
Full-text available
We give an overview of engagement in human-agent interaction. We discuss the different definitions of engagement in human and social science, specify how they relate to certain other concepts, and give an overview of the high level behaviour that is often associated with engagement. This work serves to position our future research on engagement in human-agent interaction.
Article
Full-text available
Two biases in perceived gaze direction have been observed when eye and head orientation are not aligned. An overshoot effect indicates that perceived gaze direction is shifted away from head orientation (i.e., a repulsive effect), whereas a towing effect indicates that perceived gaze direction falls in between head and eye orientation (i.e., an attraction effect). In the 1960s, three influential papers were published on the effect of head orientation on perceived gaze direction (Gibson and Pick, 1963; Cline, 1967; Anstis et al., 1969). Throughout the years, the results of two of these (Gibson and Pick, 1963; Cline, 1967) have been interpreted differently by a number of authors. In this paper, we critically discuss potential sources of confusion that have led to differential interpretations of both studies. At first sight, the results of Cline (1967), despite having been a major topic of discussion, unambiguously seem to indicate a towing effect, whereas Gibson and Pick's (1963) results seem to be the most ambiguous, although they have never been questioned in the literature. To shed further light on this apparent inconsistency, we repeated the critical experiments reported in both studies. Our results indicate an overshoot effect in both studies.
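The overshoot/towing distinction can be captured with a single signed coefficient relating perceived gaze to head and eye orientation. The toy model below is our schematic illustration of the two biases, not the analysis used in the paper.

```python
def predicted_gaze_direction(eye_in_head_deg, head_deg, k):
    """Schematic one-parameter account of perceived gaze direction. The physical
    gaze direction is head orientation plus eye-in-head rotation; k > 0 shifts
    the percept away from head orientation (overshoot / repulsion), -1 < k < 0
    pulls it towards the head (towing / attraction), and k = 0 is veridical."""
    true_gaze = head_deg + eye_in_head_deg
    return true_gaze + k * (true_gaze - head_deg)

# A 30 deg head turn with the eyes fixating straight ahead (eye-in-head = -30 deg):
print(predicted_gaze_direction(-30, 30, k=0.2))   # overshoot: percept pushed past 0
print(predicted_gaze_direction(-30, 30, k=-0.3))  # towing: percept between eyes and head
```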
Article
Full-text available
A classical or static anamorphic image requires a specific, usually highly oblique view direction, from which the observer can see the anamorphosis in its correct form. This paper explains dynamic anamorphosis, which adapts itself to the changing position of the observer so that wherever the observer moves, he sees the same undeformed image. Dynamically changing the anamorphic deformation in concert with the movement of the observer requires the system to track the 3D position of the observer's eyes and to recompute the anamorphic deformation in real time. This is achieved using computer vision methods consisting of face detection and tracking of the 3D position of the selected observer. An application of this system of dynamic anamorphosis in the context of an interactive art installation is described. We show that anamorphic deformation is also useful for improving eye contact in videoconferencing. Other possible applications involve novel user interfaces where the user can freely move and observe perspectively undeformed images.
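The per-frame computation behind dynamic anamorphosis is an image warp driven by the tracked eye position. A simplified planar sketch, with geometry and names of our own choosing rather than the paper's implementation:

```python
import numpy as np

def anamorphic_lookup(display_xy, eye, plane_point, plane_u, plane_v):
    """For a pixel at `display_xy` on the physical screen (plane z = 0), return
    the (u, v) coordinate on a virtual image plane that the viewer at `eye`
    sees through that pixel. Re-evaluating this per frame as `eye` moves is
    the essence of dynamic anamorphosis (a simplified planar sketch)."""
    pixel = np.array([display_xy[0], display_xy[1], 0.0])
    d = pixel - eye                                   # viewing ray through the pixel
    n = np.cross(plane_u, plane_v)                    # virtual plane normal
    t = np.dot(plane_point - eye, n) / np.dot(d, n)   # ray-plane intersection
    hit = eye + t * d
    rel = hit - plane_point
    return (np.dot(rel, plane_u) / np.dot(plane_u, plane_u),
            np.dot(rel, plane_v) / np.dot(plane_v, plane_v))

# Virtual canvas one meter behind the screen, axis-aligned; viewer off to the side.
eye = np.array([0.4, 0.0, 1.0])
print(anamorphic_lookup((0.1, 0.2), eye,
                        plane_point=np.array([0.0, 0.0, -1.0]),
                        plane_u=np.array([1.0, 0.0, 0.0]),
                        plane_v=np.array([0.0, 1.0, 0.0])))
```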
Conference Paper
Full-text available
A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made on tackling this challenging task. As with many topics in Computer Graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye-gaze, during the expression of emotion or during conversation, and how they are synthesised in Computer Graphics and Robotics.
Article
Full-text available
In this paper, we present TeleHuman, a cylindrical 3D display portal for life-size human telepresence. The TeleHuman 3D videoconferencing system supports 360 degree motion parallax as the viewer moves around the cylinder and, optionally, stereoscopic 3D display of the remote person. We evaluated the effect of perspective cues on the conveyance of nonverbal cues in two experiments using a one-way telecommunication version of the system. The first experiment focused on how well the system preserves gaze and hand pointing cues. The second experiment evaluated how well the system conveys 3D body postural information. We compared 3 perspective conditions: a conventional 2D view, a 2D view with 360 degree motion parallax, and a stereoscopic view with 360 degree motion parallax. Results suggest the combined presence of motion parallax and stereoscopic cues significantly improved the accuracy with which participants were able to assess gaze and hand pointing cues, and to instruct others on 3D body poses. The inclusion of motion parallax and stereoscopic cues also led to significant increases in the sense of social presence and telepresence reported by participants.
Article
Full-text available
The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) the relative orientations of eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but with the adopted method of Video Based Reconstruction this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.
Conference Paper
Full-text available
One of the major problems of users' interaction with Embodied Conversational Agents (ECAs) is to have the conversation last more than a few seconds: after being amused and intrigued by the ECAs, users may rapidly discover the restrictions and limitations of the dialog systems; they may perceive the repetition of the ECAs' animation; they may find the behaviors of ECAs to be inconsistent and implausible; and so on. We believe that some special links, or bonds, have to be established between users and ECAs during interaction. It is our view that showing and/or perceiving interest is the necessary premise to establish a relationship. In this paper we present a model of an ECA able to establish, maintain and end the conversation based on its perception of the level of interest of its interlocutor.
Conference Paper
Full-text available
Viewing data sampled on complicated geometry, such as a helix or a torus, is hard because a single camera view can only encompass a part of the object. Either multiple views or non-linear projection can be used to expose more of the object in a single view, however, specifying such views is challenging because of the large number of parameters involved. We show that a small set of versatile widgets can be used to quickly and simply specify a wide variety of such views. These widgets are built on top of a general framework that in turn encapsulates a variety of complicated camera placement issues into a more natural set of parameters, making the specification of new widgets, or combining multiple widgets, simpler. This framework is entirely view-based and leaves intact the underlying geometry of the dataset, making it applicable to a wide range of data types.
Conference Paper
Full-text available
MultiView is a new video conferencing system that supports collaboration between remote groups of people. MultiView accomplishes this by being spatially faithful. As a result, MultiView preserves a myriad of nonverbal cues, including gaze and gesture, in a way that should improve communication. Previous systems fail to support many of these cues because a single camera perspective warps spatial characteristics in group-to-group meetings. In this paper, we present a formal definition of spatial faithfulness. We then apply a metaphor-based design methodology to help us specify and evaluate MultiView's support of spatial faithfulness. We then present results from a low-level user study to measure MultiView's effectiveness at conveying gaze and gesture perception. MultiView is the first practical solution to spatially faithful group-to-group conferencing, one of the most common applications of video conferencing.
Conference Paper
Full-text available
A technique is presented for deforming solid geometric models in a free-form manner. The technique can be used with any solid modeling system, such as CSG or B-rep. It can deform surface primitives of any type or degree: planes, quadrics, parametric surface patches, or implicitly defined surfaces, for example. The deformation can be applied either globally or locally. Local deformations can be imposed with any desired degree of derivative continuity. It is also possible to deform a solid model in such a way that its volume is preserved. The scheme is based on trivariate Bernstein polynomials, and provides the designer with an intuitive appreciation for its effects.
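The deformation itself is a weighted blend of lattice control points with trivariate Bernstein weights. A minimal sketch, assuming an axis-aligned lattice (the published scheme handles general parallelepiped lattices and continuity control):

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(point, origin, S, T, U, control):
    """Free-form deformation sketch: express `point` in lattice coordinates
    (s, t, u) spanned by axes S, T, U, then blend the displaced control points
    `control[i][j][k]` with Bernstein weights. Lattice handling is simplified
    (orthogonal axes assumed)."""
    l, m, n = len(control) - 1, len(control[0]) - 1, len(control[0][0]) - 1
    rel = np.asarray(point, float) - origin
    s = np.dot(rel, S) / np.dot(S, S)
    t = np.dot(rel, T) / np.dot(T, T)
    u = np.dot(rel, U) / np.dot(U, U)
    out = np.zeros(3)
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                out += w * np.asarray(control[i][j][k], float)
    return out

# A 2x2x2 lattice over the unit cube; moving one corner bends everything inside
# the cube smoothly (values here are only an example).
P = [[[np.array([float(i), float(j), float(k)]) for k in range(2)]
      for j in range(2)] for i in range(2)]
P[1][1][1] = np.array([1.3, 1.3, 1.0])           # displace one control point
print(ffd((0.5, 0.5, 0.5), np.zeros(3),
          np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0]), P))
```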
Conference Paper
Full-text available
This paper describes the CAVE (CAVE Automatic Virtual Environment) virtual reality/scientific visualization system in detail and demonstrates that projection technology applied to virtual-reality goals achieves a system that matches the quality of workstation screens in terms of resolution, color, and flicker-free stereo. In addition, this format helps reduce the effect of common tracking and system latency errors. The off-axis perspective projection techniques we use are shown to be simple and straightforward. Our techniques for doing multi-screen stereo vision are enumerated, and design barriers, past and current, are described. Advantages and disadvantages of the projection paradigm are discussed, with an analysis of the effect of tracking noise and delay on the user. Successive refinement, a necessary tool for scientific visualization, is developed in the virtual reality context. The use of the CAVE as a one-to-many presentation device at SIGGRAPH '92 and Supercomputing '92 for computational science data is also mentioned.
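The off-axis projection used by such head-tracked screens can be computed from the tracked eye position and the screen corners, in the spirit of the generalized perspective projection listed in the references below. A sketch with illustrative geometry; the full method also rotates world space into the screen's basis and translates by the eye position.

```python
import numpy as np

def off_axis_frustum(eye, lower_left, lower_right, upper_left, near, far):
    """Off-axis perspective parameters for a planar screen given the tracked
    eye position and three screen corners (a sketch, not the CAVE code)."""
    vr = lower_right - lower_left; vr /= np.linalg.norm(vr)   # screen right axis
    vu = upper_left - lower_left;  vu /= np.linalg.norm(vu)   # screen up axis
    vn = np.cross(vr, vu);         vn /= np.linalg.norm(vn)   # screen normal towards eye
    d = -np.dot(vn, lower_left - eye)                         # eye-to-screen distance
    # Frustum extents on the near plane, scaled from the screen-plane extents.
    left   = np.dot(vr, lower_left  - eye) * near / d
    right  = np.dot(vr, lower_right - eye) * near / d
    bottom = np.dot(vu, lower_left  - eye) * near / d
    top    = np.dot(vu, upper_left  - eye) * near / d
    return left, right, bottom, top, near, far   # e.g. feed to glFrustum

# Example: 2 m x 1.5 m wall screen, eye 0.3 m right of center and 1 m away.
eye = np.array([0.3, 0.0, 1.0])
print(off_axis_frustum(eye, np.array([-1.0, -0.75, 0.0]), np.array([1.0, -0.75, 0.0]),
                       np.array([-1.0, 0.75, 0.0]), near=0.1, far=100.0))
```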
Conference Paper
Full-text available
Applications with 3D models are now becoming more common on tabletop displays. Displaying 3D objects on tables, however, presents problems in the way that the 3D virtual scene is presented on the 2D surface; different choices in the way the projection is designed can lead to distorted images and difficulty interpreting angles and orientations. To investigate these problems, we studied people's ability to judge object orientations under different projection conditions. We found that errors increased significantly as the center of projection diverged from the observer's viewpoint, showing that designers must take this divergence into consideration, particularly for multi-user tables. In addition, we found that a neutral center of projection combined with parallel projection geometry provided a reasonable compromise for multi-user situations.
Article
Full-text available
This paper concerns the benefits of presenting abstract data in 3D. Two experiments show that motion cues combined with stereo viewing can substantially increase the size of the graph that can be perceived. The first experiment was designed to provide quantitative measurements of how much more (or less) can be understood in 3D than in 2D. The 3D display used was configured so that the image on the monitor was coupled to the user's actual eye positions (and it was updated in real-time as the user moved) as well as being in stereo. Thus the effect was like a local "virtual reality" display located in the vicinity of the computer monitor. The results from this study show that head-coupled stereo viewing can increase the size of an abstract graph that can be understood by a factor of three; using stereo alone provided an increase by a factor of 1.6 and head coupling alone produced an increase by a factor of 2.2. The second experiment examined a variety of motion cues provided by head coupl...
Article
Full-text available
"Fish tank virtual reality" refers to the use of a standard graphics workstation to achieve real-time display of three-dimensional scenes using stereopsis and dynamic head-coupled perspective. Fish tank VR has a number of advantages over head-mounted immersion VR which make it more practical for many applications. After discussing the characteristics of fish tank VR, we describe a set of three experiments conducted to study the benefits of fish tank VR over a traditional workstation graphics display. These experiments tested user performance under two conditions: (a) whether or not stereoscopic display was used and (b) whether or not the perspective display was coupled dynamically to the positions of a user's eyes. Subjects using a comparison protocol consistently preferred headcoupling without stereo over stereo without head-coupling. Error rates in a tree tracing task similar to one used by Sollenberger and Milgram showed an order of magnitude improvement for headcoupled stereo over ...
Article
Head gaze, or the orientation of the head, is a very important attentional cue in face-to-face conversation. Some subtleties of the gaze can be lost in common teleconferencing systems, because a single perspective warps spatial characteristics. A recent random hole display is a potentially interesting display for group conversation, as it allows multiple stereo viewers in arbitrary locations, without the restriction of conventional autostereoscopic displays on viewing positions. We represented a remote person as an avatar on a random hole display. We evaluated this system by measuring the ability of multiple observers with different horizontal and vertical viewing angles to accurately and simultaneously judge which targets the avatar is gazing at. We compared three perspective conditions: a conventional 2D view, a monoscopic perspective-correct view, and a stereoscopic perspective-correct view. In the latter two conditions, the random hole display shows three and six views simultaneously. Although the random hole display does not provide a high-quality view, because it has to distribute display pixels among multiple viewers, the different views are easily distinguished. Results suggest the combined presence of perspective-correct and stereoscopic cues significantly improved the effectiveness with which observers were able to assess the avatar's head gaze direction. This motivates the need for stereo in future multiview displays.
Article
We propose a new video conferencing system that uses an array of cameras to capture a remote user and then shows the video of that person on a spherical display. This telepresence system has two key advantages: (i) it can capture a near-correct image for any potential observer viewing direction because the cameras surround the user horizontally; and (ii) with view-dependent graphical representation on the spherical display, it is possible to tell where the remote user is looking from any viewpoint, whereas flat displays are visible only from the front. As a result, the display can more faithfully represent the gaze of the remote user. We evaluate this system by measuring the ability of observers to accurately judge which targets the actor is gazing at in two experiments. Results from the first experiment demonstrate the effectiveness of the camera array and spherical display system, in that it allows observers at multiple observing positions to accurately tell at which targets the remote user is looking. The second experiment further compared a spherical display with a planar display and provided detailed reasons for the improvement of our system in conveying gaze. We found two linear models for predicting the distortion introduced by misalignment of capturing cameras and the observer's viewing angles in video conferencing systems. Those models might enable a correction for this distortion in future display configurations.
Article
We report on two experiments that investigate the influence of display type and viewing angle on how people place their trust during avatar-mediated interaction. By monitoring advice seeking behavior, our first experiment demonstrates that if participants observe an avatar at an oblique viewing angle on a flat display, they are less able to discriminate between expert and non-expert advice than if they observe the avatar face-on. We then introduce a novel spherical display and a ray-traced rendering technique that can display an avatar that can be seen correctly from any viewing direction. We expect that a spherical display has advantages over a flat display because it better supports non-verbal cues, particularly gaze direction, since it presents a clear and undistorted viewing aspect at all angles. Our second experiment compares the spherical display to a flat display. Whilst participants can discriminate expert advice regardless of display, a negative bias towards the flat screen emerges at oblique viewing angles. This result emphasizes the ability of the spherical display to be viewed qualitatively similarly from all angles. Together the experiments demonstrate how trust can be altered depending on how one views the avatar.
Article
Gaze, attention, and eye contact are important aspects of face-to-face communication, but some subtleties can be lost in videoconferencing because participants look at a single planar image of the remote user. We propose a low-cost cylindrical videoconferencing system that preserves gaze direction by providing perspective-correct images for multiple viewpoints around a conference table. We accomplish this by using an array of cameras to capture a remote person, and an array of projectors to present the camera images onto a cylindrical screen. The cylindrical screen reflects each image to a narrow viewing zone. The use of such a situated display allows participants to see the remote person from multiple viewing directions. We compare our system to three alternative display configurations. We demonstrate the effectiveness of our system by showing it allows multiple participants to simultaneously tell where the remote person is placing their gaze.
Conference Paper
This paper presents a method for the detection and recognition of social interactions in a day-long first-person video of a social event, like a trip to an amusement park. The location and orientation of faces are estimated and used to compute the line of sight for each face. The context provided by all the faces in a frame is used to convert the lines of sight into locations in space to which individuals attend. Further, individuals are assigned roles based on their patterns of attention. The roles and locations of individuals are analyzed over time to detect and recognize the types of social interactions. In addition to patterns of face locations and attention, the head movements of the first person can provide additional useful cues to their attentional focus. We demonstrate encouraging results on detection and recognition of social interactions in first-person videos captured from multiple days of experience in amusement parks.
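One common way to turn several per-face lines of sight into a shared locus of attention is a least-squares closest point to the sight lines. The sketch below shows that estimator as an illustration; it is not necessarily the one used in the paper.

```python
import numpy as np

def attended_point(face_positions, gaze_directions):
    """Least-squares point closest to a set of sight lines: one way to convert
    per-face lines of sight into a shared 3D locus of attention (a sketch)."""
    A = np.zeros((3, 3)); b = np.zeros(3)
    for p, d in zip(face_positions, gaze_directions):
        d = np.asarray(d, float); d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)        # projector onto the line's normal space
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Two people on either side of a walkway both looking towards the same spot.
print(attended_point([(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
                     [(1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)]))
```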
Article
We study interaction modalities for mobile devices (smartphones and tablets) that rely on a camera-based head tracking. This technique defines new possibilities for input and output interaction. For output, by computing the position of the device according to the user's head, it is for example possible to realistically control the viewpoint on a 3D scene (Head-Coupled Perspective, HCP). This technique improves the output interaction bandwidth by enhancing the depth perception and by allowing the visualization of large workspaces (virtual window). For input, head movement can be used as a means of interacting with a mobile device. Moreover such an input modality does not require any additional sensor except the built-in front-facing camera. In this paper, we classify the interaction possibilities offered by head tracking on smartphones and tablets. We then focus on the output interaction by introducing several applications of HCP on both smartphones and tablets and by presenting the results of a qualitative user experiment.
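A rough flavour of camera-based head-coupled perspective on a handheld device: estimate the head's lateral offset and distance from the detected face box, then offset the virtual camera accordingly. All constants (camera field of view, assumed physical face width) and names below are placeholders, not the system described above.

```python
import math

def head_coupled_camera(face_box, frame_size, fov_h_deg=60.0, face_width_m=0.16):
    """Rough head position from a front-camera face box, for head-coupled
    perspective on a handheld device (illustrative constants only)."""
    x, y, w, h = face_box
    frame_w, frame_h = frame_size
    # Distance from apparent face size: a face of known width subtends w pixels.
    focal_px = (frame_w / 2) / math.tan(math.radians(fov_h_deg) / 2)
    distance = face_width_m * focal_px / w
    # Lateral / vertical offsets from the face center's displacement in the image.
    cx = x + w / 2 - frame_w / 2
    cy = y + h / 2 - frame_h / 2
    return (cx / focal_px * distance, -cy / focal_px * distance, distance)

# Face detected slightly left of center in a 640x480 preview frame.
print(head_coupled_camera(face_box=(200, 180, 120, 120), frame_size=(640, 480)))
```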
Article
Existing models of gaze motion for character animation simulate human movements, incorporating anatomical, neurophysiological, and functional constraints. While these models enable the synthesis of humanlike gaze motion, they only do so in characters that conform to human anatomical proportions, causing undesirable artifacts such as cross-eyedness in characters with non-human or exaggerated human geometry. In this paper, we extend a state-of-the-art parametric model of human gaze motion with control parameters for specifying character geometry, gaze dynamics, and performative characteristics in order to create an enhanced model that supports gaze motion in characters with a wide range of geometric properties that is free of these artifacts. The model also affords “staging effects” by offering softer functional constraints and more control over the appearance of the character's gaze movements. An evaluation study showed that the model, compared with the state-of-the-art model, creates gaze motion with fewer artifacts in characters with non-human or exaggerated human geometry while retaining their naturalness and communicative accuracy.
Conference Paper
We present a framework for the direct rendering of multiperspective images. We treat multiperspective imaging systems as devices for capturing a smoothly varying set of rays, and we show that under an appropriate parametrization, multiperspective images can be characterized as continuous manifolds in ray space. We use a recently introduced class of General Linear Cameras (GLC), which describe all 2D linear subspaces of rays, as primitives for constructing multiperspective images. We show that GLCs, when constrained by an appropriate set of rules, can be laid out to tile the image plane and, hence, generate arbitrary multiperspective renderings. Our framework can easily render a broad class of multiperspective images, such as multiperspective panoramas, neocubist style renderings, and faux-animations from still-life scenes. We also show a method to minimize distortions in multiperspective images by uniformly sampling rays on a sampling plane even when they do not share a common origin.
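A GLC is spanned by three generator rays; under a two-plane parametrization, every ray of the camera is the same affine combination of the generators' intersections with both planes. A small sketch of that construction with made-up generator rays:

```python
import numpy as np

def glc_ray(rays, alpha, beta):
    """Interpolate a General Linear Camera ray. Each generator ray is given as
    a pair of 2D points: where it crosses plane z = 0 and where it crosses
    plane z = 1. The ray at affine coordinates (alpha, beta, 1-alpha-beta) is
    the blend of both crossing points (an illustrative sketch)."""
    gamma = 1.0 - alpha - beta
    w = np.array([alpha, beta, gamma])
    p0 = w @ np.array([r[0] for r in rays])      # affine blend on plane z = 0
    p1 = w @ np.array([r[1] for r in rays])      # affine blend on plane z = 1
    origin = np.append(p0, 0.0)
    direction = np.append(p1, 1.0) - origin
    return origin, direction

# Three generator rays whose directions do not converge to a single pinhole, so
# the blended rays form a genuinely multiperspective camera.
rays = [((0.0, 0.0), (0.1, 0.0)),
        ((1.0, 0.0), (1.0, 0.2)),
        ((0.0, 1.0), (-0.1, 1.0))]
print(glc_ray(rays, alpha=0.3, beta=0.3))
```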
Article
The perception of gaze plays a crucial role in human-human interaction. Gaze has been shown to matter for a number of aspects of communication and dialogue, especially for managing the flow of the dialogue and participant attention, for deictic referencing, and for the communication of attitude. When developing embodied conversational agents (ECAs) and talking heads, modeling and delivering accurate gaze targets is crucial. Traditionally, systems communicating through talking heads have been displayed to the human conversant using 2D displays, such as flat monitors. This approach introduces severe limitations for an accurate communication of gaze since 2D displays are associated with several powerful effects and illusions, most importantly the Mona Lisa gaze effect, where the gaze of the projected head appears to follow the observer regardless of viewing angle. We describe the Mona Lisa gaze effect and its consequences in the interaction loop, and propose a new approach for displaying talking heads using a 3D projection surface (a physical model of a human head) as an alternative to the traditional flat surface projection. We investigate and compare the accuracy of the perception of gaze direction and the Mona Lisa gaze effect in 2D and 3D projection surfaces in a five-subject gaze perception experiment. The experiment confirms that a 3D projection surface completely eliminates the Mona Lisa gaze effect and delivers very accurate gaze direction that is independent of the observer's viewing angle. Based on the data collected in this experiment, we rephrase the formulation of the Mona Lisa gaze effect. The data, when reinterpreted, confirms the predictions of the new model for both 2D and 3D projection surfaces. Finally, we discuss the requirements on different spatially interactive systems in terms of gaze direction, and propose new applications and experiments for interaction in human-ECA and human-robot settings made possible by this technology.
Article
We present a set of algorithms and an associated display system capable of producing correctly rendered eye contact between a three-dimensionally transmitted remote participant and a group of observers in a 3D teleconferencing system. The participant's face is scanned in 3D at 30Hz and transmitted in real time to an autostereoscopic horizontal-parallax 3D display, displaying him or her over more than a 180° field of view observable to multiple observers. To render the geometry with correct perspective, we create a fast vertex shader based on a 6D lookup table for projecting 3D scene vertices to a range of subject angles, heights, and distances. We generalize the projection mathematics to arbitrarily shaped display surfaces, which allows us to employ a curved concave display surface to focus the high speed imagery to individual observers. To achieve two-way eye contact, we capture 2D video from a cross-polarized camera reflected to the position of the virtual participant's eyes, and display this 2D video feed on a large screen in front of the real participant, replicating the viewpoint of their virtual self. To achieve correct vertical perspective, we further leverage this image to track the position of each audience member's eyes, allowing the 3D display to render correct vertical perspective for each of the viewers around the device. The result is a one-to-many 3D teleconferencing system able to reproduce the effects of gaze, attention, and eye contact generally missing in traditional teleconferencing systems.
Article
An important component of many virtual reality systems is head-tracked stereo display. The head-tracker enables the image rendering system to produce images from a viewpoint location that dynamically tracks the viewer's head movement, creating a convincing 3D illusion. A viewer can achieve intricate hand/eye coordination on virtual objects if virtual and physical objects can be registered to within a fraction of a centimeter. Computer graphics has traditionally been concerned with forming the correct image on a screen. When this goal is expanded to forming the correct pair of images on the viewer's retinas, a number of additional physical factors must be taken into account. This paper presents the general steps that must be taken to achieve accurate high resolution head-tracked stereo display on a workstation CRT: the need for predictive head-tracking, the dynamic optical location of the viewer's eyepoints, physically accurate stereo perspective viewing matrices, and corrections for refractive and curvature distortions of glass CRTs. Employing these steps, a system is described that achieves sub-centimeter virtual to physical registration.
Article
Perception of gaze direction depends not only on the position of the irises within the looker's eyes but also on the orientation of the looker's head. A simple analysis of the geometry of gaze direction predicts this dependence. This analysis is applied to explain the Wollaston effect, the Mona Lisa effect, and the newly presented Mirror gaze effect. In an experiment synthetic faces were used in which the position of the iris and the angle of head rotation were varied. Different groups of subjects judged iris position, head rotation, and gaze direction of the same stimuli. The results illustrate how cues of iris location and head orientation interact to determine perceived gaze direction.
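A toy version of the geometric reading described above combines head turn with an eyes-in-head rotation inferred from iris position. It is only a schematic illustration of why identical irises read differently under different head orientations, not the paper's exact model.

```python
def gaze_angle_from_picture(head_turn_deg, iris_offset_frac, max_eye_turn_deg=45.0):
    """Toy reading of gaze from a portrait: the head contributes its turn and
    the eyes contribute a rotation proportional to how far the irises sit from
    the center of the visible eye opening (iris_offset_frac in [-1, 1])."""
    return head_turn_deg + iris_offset_frac * max_eye_turn_deg

# Centered irises in a frontal head read as "looking at me" from every viewing
# position, because a flat picture presents the same cues to every observer
# (the Mona Lisa effect).
print(gaze_angle_from_picture(head_turn_deg=0.0, iris_offset_frac=0.0))
# Wollaston-style shift: identical irises paired with a turned head read differently.
print(gaze_angle_from_picture(head_turn_deg=20.0, iris_offset_frac=0.0))
```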
Article
Since its introduction, the Nintendo Wii remote has become one of the world's most sophisticated and common input devices. Combining its impressive capability with a low cost and high degree of accessibility make it an ideal platform for exploring a variety of interaction research concepts. The author describes the technology inside the Wii remote, existing interaction techniques, what's involved in creating custom applications, and several projects ranging from multiobject tracking to spatial augmented reality that challenge the way its developers meant it to be used.
Article
"Painting is an activity, and the artist will therefore tend to see what he paints rather than to paint what he sees." E.H. Gombrich. While general trends in computer graphics continue to drive towards more photorealistic imagery, increasing attention is also being devoted to painterly renderings of computer generated scenes. Whereas artists using traditional media almost always deviate from the confines of a precise linear perspective view, digital artists struggle to transcend the standard pin-hole camera model in generating an envisioned image of a three dimensional scene. More specifically, a key limitation of existing camera models is that they inhibit the artistic exploration and understanding of a subject, which is essential for expressing it successfully. Past experiments with non-linear perspectives have primarily focused on abstract mathematical camera models for raytracing, which are both noninteractive and provide the artist with little control over seeing what he wants to see. We address this limitation with a cohesive, interactive approach for exploring nonlinear perspective projections. The approach consists of a new camera model and a toolbox of interactive local and global controls for a number of properties, including regions of interest, distortion, and spatial relationship. Furthermore, the approach is incremental, allowing non-linear perspective views of a scene to be built gradually by blending and compositing multiple linear perspectives. In addition to artistic non-photorealistic rendering, our approach has interesting applications in conceptual design and scientific visualization.
Article
Artistic rendering is an important research area in Computer Graphics, yet relatively little attention has been paid to the projective properties of computer generated scenes. Motivated by the surreal storyboard of an animation in production---Ryan---this paper describes interactive techniques to control and render scenes using nonlinear projections. The paper makes three contributions. First, we present a novel approach that distorts scene geometry such that when viewed through a standard linear perspective camera, the scene appears nonlinearly projected. Second, we describe a framework for the interactive authoring of nonlinear projections defined as a combination of scene constraints and a number of linear perspective cameras. Finally, we address the impact of nonlinear projection on rendering and explore various illumination effects. These techniques, implemented in Maya and used in the production of the animation Ryan, demonstrate how geometric and rendering effects resulting from nonlinear projections can be seamlessly introduced into current production pipelines.
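The first contribution, distorting geometry so a standard linear camera yields a nonlinear projection, can be sketched as project-through-one-camera, unproject-through-another. The matrices and camera placement below are illustrative, and the published system blends many cameras under scene constraints.

```python
import numpy as np

def perspective(f=1.0, near=0.1, far=100.0):
    """Simple symmetric perspective matrix (column-vector convention)."""
    return np.array([[f, 0, 0, 0],
                     [0, f, 0, 0],
                     [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
                     [0, 0, -1, 0]])

def translate(t):
    M = np.eye(4); M[:3, 3] = -np.asarray(t, float)   # world -> camera translation
    return M

def distort_vertex(v, primary, secondary):
    """Displace a vertex so the primary (rendering) camera sees it where the
    secondary camera would: project through the secondary view-projection, then
    pull the clip-space point back through the primary camera's inverse."""
    clip = secondary @ np.append(v, 1.0)
    back = np.linalg.inv(primary) @ clip
    return (back / back[3])[:3]

P = perspective()
primary = P @ translate([0.0, 0.0, 5.0])      # camera at z = +5 looking down -z
secondary = P @ translate([2.0, 0.0, 5.0])    # a second viewpoint shifted right
print(distort_vertex(np.array([0.0, 0.0, 0.0]), primary, secondary))
```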
Article
We describe a new approach for simulating apparent camera motion through a 3D environment. The approach is motivated by a traditional technique used in 2D cel animation, in which a single background image, which we call a multiperspective panorama, is used to incorporate multiple views of a 3D environment as seen from along a given camera path. When viewed through a small moving window, the panorama produces the illusion of 3D motion. In this paper, we explore how such panoramas can be designed by computer, and we examine their application to cel animation in particular. Multiperspective panoramas should also be useful for any application in which predefined camera moves are applied to 3D scenes, including virtual reality fly-throughs, computer games, and architectural walk-throughs.
Depth precision visualized
  • Reed
Interactive three dimensional displays on handheld devices. US Patent Office
  • K Mitchell
Generalized perspective projection
  • Kooima
Head tracking for desktop VR displays using the Wii remote
  • J C Lee
Vision: A computational investigation into the human representation and processing of visual information
  • D Marr