Conference Paper

First- or Third-Person Hearing? A Controlled Evaluation of Auditory Perspective on Embodiment and Sound Localization Performance

... For instance, users can become giants [10], have novel body parts [125] or even adopt non-humanoid avatars [11,72]. There is also growing attention to how audio in VR, such as changes to the avatar's voice [19,23,25,61] or the presence of footstep sounds [65], impacts the immersive experience. Similarly, audio is gaining attention in remote collaboration contexts [116,129]. ...
Preprint
We introduce Audio Personas, enabling users to "decorate" themselves with body-anchored sounds in audio augmented reality. Like outfits, makeup, and fragrances, audio personas offer an alternative yet dynamic channel to augment face-to-face interactions. For instance, one can set their audio persona as rain sounds to reflect a bad mood, bee sounds to establish personal boundaries, or a playful "woosh" sound to mimic passing by someone like a breeze. To instantiate the concept, we implemented a headphone-based prototype with multi-user tracking and audio streaming. Our formative study with designers revealed that audio personas were preferred in public and semi-public-private spaces for managing social impressions (e.g., personality) and signaling current states (e.g., emotions). Our preregistered in-lab study with 64 participants showed that audio personas influenced how participants formed impressions. Individuals with positive audio personas were rated as more socially attractive, more likable, and less threatening than those with negative audio personas.
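The abstract does not detail the prototype's audio pipeline, but the core of body-anchored sound is rendering another person's sound source at their tracked position. A minimal, hypothetical sketch of that step (the function name, attenuation model, and coordinate convention are assumptions, not from the paper):

```python
import numpy as np

def persona_gain_and_azimuth(listener_pos, listener_fwd, wearer_pos, rolloff=1.0):
    """Distance gain and listener-relative azimuth for a body-anchored sound.
    Sketch only: the attenuation model and coordinate convention are assumed."""
    offset = np.asarray(wearer_pos, float) - np.asarray(listener_pos, float)
    dist = np.linalg.norm(offset)
    gain = 1.0 / (1.0 + rolloff * dist)        # simple inverse-distance attenuation
    fx, fz = listener_fwd[0], listener_fwd[2]  # horizontal (x, z) plane, y is up
    ox, oz = offset[0], offset[2]
    azimuth = np.degrees(np.arctan2(fx * oz - fz * ox, fx * ox + fz * oz))
    return gain, azimuth

# A wearer about 2 m away, 45 deg off the listener's facing direction
print(persona_gain_and_azimuth([0, 0, 0], [0, 0, 1], [1.4, 0, 1.4]))
```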
Article
Full-text available
In virtual reality, the avatar, the user's digital representation, is an important element that can drastically influence the immersive experience. In this paper, we focus especially on the use of “dissimilar” avatars, i.e., avatars diverging from the real appearance of the user, whether or not they preserve an anthropomorphic aspect. Various studies have reported that dissimilar avatars can have diverse positive impacts on the user experience, for example in terms of interaction, perception, or behaviour. However, given the sparsity and multi-disciplinary character of research related to dissimilar avatars, the field tends to lack a common understanding and methodology, hampering the establishment of novel knowledge on this topic. In this paper, we propose to address these limitations by discussing: (i) a methodology for characterizing dissimilar avatars, (ii) their impacts on the user experience, (iii) their different fields of application, and finally, (iv) future research directions on this topic. Taken together, we believe that this paper can support future research related to dissimilar avatars and help designers of VR applications leverage dissimilar avatars appropriately.
Conference Paper
Full-text available
Modern games make creative use of first- and third-person perspectives (FPP and TPP) to allow the player to explore virtual worlds. Traditionally, FPP and TPP are seen as distinct concepts. Yet, Virtual Reality (VR) allows for flexibility in choosing perspectives. We introduce the notion of a perspective continuum in VR, which relates technically to the camera position and conceptually to how users perceive their environment in VR. A perspective continuum enables adapting and manipulating the sense of agency and involvement in the virtual world. This flexibility of perspectives broadens the design space of VR experiences through deliberately manipulating perception. In a study, we explore users' attitudes, experiences and perceptions while controlling a virtual character from the two known perspectives. Statistical analysis of the empirical results shows the existence of a perspective continuum in VR. Our findings can be used to design experiences based on shifts of perception.
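Since the continuum is technically tied to camera position, one way to picture it is a camera blended between the avatar's eye point (FPP) and a point behind and above the avatar (TPP). A minimal sketch; the offsets and the linear blend are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def continuum_camera(eye_pos, view_dir, blend, max_back=2.0, max_up=0.5):
    """Camera position along an FPP-TPP continuum (offsets hypothetical).
    blend = 0.0 -> at the avatar's eyes (first person);
    blend = 1.0 -> behind and above the avatar (third person)."""
    eye = np.asarray(eye_pos, float)
    fwd = np.asarray(view_dir, float) / np.linalg.norm(view_dir)
    return eye - fwd * (blend * max_back) + np.array([0.0, blend * max_up, 0.0])

# Halfway along the continuum: 1 m behind and 0.25 m above the eye point
print(continuum_camera([0, 1.7, 0], [0, 0, 1], blend=0.5))  # [ 0.   1.95 -1.  ]
```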
Article
Full-text available
To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers, and virtual reality (VR) glasses are used to present visual information. Both technologies have been shown to be suitable for research. However, the combination of the two, Ambisonics and VR glasses, might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using either 1st-, 3rd-, 5th- or 11th-order Ambisonics, with and without visual information. Results showed that the localization error is larger with 1st-order Ambisonics than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by about 2° on average, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this HMD-induced shift, but could not fully compensate for it. While the localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when using Ambisonics sound reproduction, but visual information can compensate for most of the effects. As such, most use cases for VR will be unaffected by these shifts in the perceived location of the auditory stimuli.
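For readers unfamiliar with Ambisonics orders: a first-order (B-format) encoder uses four spherical-harmonic channels, while higher orders add more channels and thus sharper spatial resolution. A minimal horizontal encode/decode sketch under common conventions (FuMa-style W weighting, basic sampling decode); these conventions are assumptions for illustration, not the study's exact setup:

```python
import numpy as np

def encode_first_order(signal, azimuth_deg, elevation_deg=0.0):
    """Encode a mono signal into first-order B-format channels (W, X, Y, Z)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)             # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)  # front-back
    y = signal * np.sin(az) * np.cos(el)  # left-right
    z = signal * np.sin(el)               # up-down
    return np.stack([w, x, y, z])

def decode_to_ring(bformat, speaker_azimuths_deg):
    """Basic sampling decode of horizontal B-format to a loudspeaker ring."""
    w, x, y, _ = bformat
    az = np.radians(np.asarray(speaker_azimuths_deg, float))[:, None]
    return 0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)

# A 1 kHz tone from 30 deg azimuth, decoded to an 8-speaker ring
fs = 48000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
feeds = decode_to_ring(encode_first_order(tone, 30), np.arange(0, 360, 45))
```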
Conference Paper
Full-text available
Advances in tracking technology and wireless headsets enable walking as a means of locomotion in Virtual Reality. When exploring virtual environments larger than room-scale, it is often desirable to increase users' perceived walking speed, for which we investigate three methods. (1) Ground-Level Scaling increases users' avatar size, allowing them to walk farther. (2) Eye-Level Scaling enables users to walk through a World in Miniature while maintaining a street-level view. (3) Seven-League Boots amplifies users' movements along their walking path. We conduct a study comparing these methods and find that users feel most embodied using Ground-Level Scaling and consequently increase their stride length. Using Seven-League Boots, unlike the other two methods, diminishes positional accuracy at high gains, and users modify their walking behavior to compensate for the lack of control. We conclude with a discussion of each technique's strengths and weaknesses and the types of situations for which each might be appropriate.
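The Seven-League Boots idea, amplifying movement along the walking path, can be sketched as a per-frame translation gain applied only to the displacement component along the walking direction (names and the gain value are hypothetical, not the paper's code):

```python
import numpy as np

def amplified_step(prev_phys, curr_phys, virt_pos, walk_dir, gain=3.0):
    """Apply a translation gain along the walking direction only, leaving
    lateral and vertical movement unamplified (sketch of the technique)."""
    d = np.asarray(walk_dir, float) / np.linalg.norm(walk_dir)
    delta = np.asarray(curr_phys, float) - np.asarray(prev_phys, float)
    along = np.dot(delta, d) * d   # displacement along the walking path
    across = delta - along         # lateral / vertical remainder
    return np.asarray(virt_pos, float) + gain * along + across

# Walking 1 cm forward per frame with gain 3 advances the avatar 3 cm
print(amplified_step([0, 1.7, 0.00], [0, 1.7, 0.01], [0, 1.7, 0], [0, 0, 1]))
```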
Conference Paper
Full-text available
Orientation is an emerging issue in cinematic Virtual Reality (VR), as viewers may fail to locate points of interest. Recent strategies to tackle this research problem have investigated the role of cues, specifically diegetic sound effects. In this paper, we examine the use of sound spatialization for orientation purposes, namely by studying different spatialization conditions ("none", "partial", and "full" spatial manipulation) of multitrack soundtracks. We performed a between-subject mixed-methods study with 36 participants, aided by Cue Control, a tool we developed for dynamic spatial sound editing and data collection/analysis. Based on existing literature on orientation cues in 360° video and theories on human listening, we discuss situations in which the spatialization was more effective (namely, "full" spatial manipulation both when using only music and when combining music and diegetic effects), and how this can be used by creators of 360° videos.
Article
Full-text available
To achieve accurate spatial auditory perception, subjects typically require personal head-related transfer functions (HRTFs) and the freedom for head movements. Loudspeaker-based virtual sound environments allow for realism without individualized measurements. To study audio-visual perception in realistic environments, the combination of spatially tracked head-mounted displays (HMDs), also known as virtual reality glasses, and virtual sound environments may be valuable. However, HMDs were recently shown to affect the subjects' HRTFs and thus might influence sound localization performance. Furthermore, due to limitations in the reproduction of visual information on the HMD, audio-visual perception might be influenced. Here, a sound localization experiment was conducted both with and without an HMD and with a varying amount of visual information provided to the subjects. Furthermore, errors in interaural time and level differences (ITDs and ILDs) as well as spectral perturbations induced by the HMD were analyzed and compared to the perceptual localization data. The results showed a reduction in localization accuracy when the subjects were wearing an HMD and when they were blindfolded. The HMD-induced error in azimuth localization was found to be larger in the left than in the right hemisphere. When visual information of the limited set of source locations was provided, the localization error induced by the HMD was found to be negligible. Presenting visual information of hand location and room dimensions led to better sound localization performance compared to the condition with no visual information. The addition of possible source locations further improved the localization accuracy. Adding pointing feedback in the form of a virtual laser pointer improved the accuracy of elevation perception but not of azimuth perception.
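The ITDs analyzed in such studies can be compared against a simple analytical baseline. A sketch of the classic spherical-head (Woodworth) approximation; the head radius and sound speed here are textbook defaults, not the paper's values:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Spherical-head (Woodworth) ITD approximation for a far-field source;
    azimuth 0 deg is straight ahead, valid up to 90 deg to the side."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (np.sin(theta) + theta)

# A source at 90 deg gives roughly 0.66 ms ITD for an average-sized head
print(woodworth_itd(90) * 1e3)
```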
Conference Paper
Full-text available
Knowing the locations of spatially distributed objects is important in many different scenarios (e.g., driving a car and being aware of other road users). In particular, it is critical for preventing accidents with objects that come too close (e.g., cyclists or pedestrians). In this paper, we explore how peripheral cues can shift a user's attention towards spatially distributed out-of-view objects. We identify a suitable technique for visualization of these out-of-view objects and explore different cue designs to advance this technique to shift the user's attention. In a controlled lab study, we investigate non-animated peripheral cues with audio stimuli and animated peripheral cues without audio stimuli. Further, we look into how users identify out-of-view objects. Our results show that shifting the user's attention takes only about 0.86 seconds on average when animated stimuli are used, while shifting attention with non-animated stimuli takes an average of 1.10 seconds.
Article
Full-text available
Inside virtual reality, users can embody avatars that are collocated with them and seen from a first-person perspective. When doing so, participants have the feeling that their own body has been substituted by the self-avatar, and that the new body is the source of their sensations. Embodiment is complex, as it includes not only body ownership over the avatar, but also agency, co-location, and external appearance. Despite the multiple variables that influence it, the illusion is quite robust, and it can be produced even if the self-avatar is of a different age, size, gender, or race from the participant's own body. Embodiment illusions are therefore the basis for many social VR experiences and a current active research area in the community. Researchers are interested both in the body manipulations that can be accepted and in how different self-avatars produce different attitudinal, social, perceptual, and behavioral effects. However, findings suggest that although embodiment is strongly associated with performance and reactions inside virtual reality, the extent to which the illusion is experienced varies between participants. In this paper, we review the questionnaires used in past experiments and propose a standardized embodiment questionnaire based on 25 questions that are prevalent in the literature. We encourage future virtual reality experiments that include first-person virtual avatars to administer this questionnaire in order to evaluate the degree of embodiment.
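In practice, administering such a questionnaire comes down to averaging Likert responses into subscale scores. A sketch with a hypothetical item grouping; the actual 25-item wording and item-to-subscale mapping are defined in the paper itself:

```python
import numpy as np

# Hypothetical item-to-subscale mapping for illustration only;
# the real grouping comes from the standardized questionnaire.
SUBSCALES = {"ownership": [0, 1, 2], "agency": [3, 4], "self_location": [5, 6]}

def score_embodiment(responses, reverse_items=(), scale_max=7):
    """Average 1..scale_max Likert responses per subscale,
    flipping reverse-coded items first."""
    r = np.asarray(responses, dtype=float)
    for i in reverse_items:
        r[i] = (scale_max + 1) - r[i]
    return {name: float(r[items].mean()) for name, items in SUBSCALES.items()}

print(score_embodiment([6, 5, 7, 4, 5, 3, 2], reverse_items=(5,)))
```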
Article
Full-text available
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
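Whether individualized or generic, an HRTF is applied the same way at render time: the mono source is convolved with the left- and right-ear head-related impulse responses (HRIRs) for the target direction. A minimal SciPy sketch; the HRIR data themselves would come from a measured set, such as the KEMAR database cited further below:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal by convolution with per-ear HRIRs."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))  # normalize to avoid clipping

# `hrir_left` / `hrir_right` are the measured impulse responses for the
# desired azimuth/elevation, selected from a generic or individual HRTF set.
```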
Article
Full-text available
Empirical research on the bodily self has shown that body representation is malleable and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR), we assessed the effects of manipulating multisensory feedback (full-body control and visuo-tactile congruence) and visual perspective (first- and third-person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when given the possibility of alternating between first- and third-person perspective at will. Our results support that illusory ownership of a virtual body can be achieved in both first- and third-person perspectives under congruent visuo-motor-tactile conditions. However, subjective body ownership and reaction to threat were generally stronger for the first-person perspective and the alternating condition than for the third-person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences.
Article
Full-text available
Current design of virtual reality (VR) applications relies essentially on transposing the user's viewpoint into a first-person perspective (1PP). Within this context, our research aims to compare the impact and the potentialities enabled by integrating the third-person perspective (3PP) into immersive virtual environments (IVEs). Our empirical study assesses the sense of presence, the sense of embodiment, and the performance of users confronted with a series of tasks representing a case of potential use for the video game industry. Our results do not reveal significant differences in the sense of spatial presence with either point of view. Nonetheless, they provide evidence confirming the relevance of using the first-person perspective to induce a sense of embodiment toward a virtual body, especially in terms of self-location and ownership. However, no significant differences were observed concerning the sense of agency. Concerning users' performance, our results demonstrate that the first-person perspective enables more accurate interactions, while the third-person perspective provides better space awareness.
Article
Full-text available
People generally show greater preference for members of their own racial group compared to racial out-group members. This type of ‘in-group bias’ is evident in mimicry behaviors. We tend to automatically mimic the behaviors of in-group members, and this behavior is associated with interpersonal sensitivity and empathy. However, mimicry is reduced when interacting with out-group members. Although race is considered an unchangeable trait, it is possible using embodiment in immersive virtual reality to engender the illusion in people of having a body of a different race. Previous research has used this technique to show that after a short period of embodiment of White people in a Black virtual body their implicit racial bias against Black people diminishes. Here we show that this technique powerfully enhances mimicry. We carried out an experiment with 32 White (Caucasian) female participants. Half were embodied in a White virtual body and the remainder in a Black virtual body. Each interacted in two different sessions with a White and a Black virtual character, in counterbalanced order. The results show that dyads with the same virtual body skin color expressed greater mimicry than those of different color. Importantly, this effect occurred depending on the virtual body’s race, not participants’ actual racial group. When embodied in a Black virtual body, White participants treat Black as their novel in-group and Whites become their novel out-group. This reversed in-group bias effect was obtained regardless of participants’ level of implicit racial bias. We discuss the theoretical and practical implications of this surprising psychological phenomenon.
Article
Full-text available
Mental health problems are inseparable from the environment. With virtual reality (VR), computer-generated interactive environments, individuals can repeatedly experience their problematic situations and be taught, via evidence-based psychological treatments, how to overcome difficulties. VR is moving out of specialist laboratories. Our central aim was to describe the potential of VR in mental health, including a consideration of the first 20 years of applications. A systematic review of empirical studies was conducted. In all, 285 studies were identified, with 86 concerning assessment, 45 theory development, and 154 treatment. The main disorders researched were anxiety (n = 192), schizophrenia (n = 44), substance-related disorders (n = 22) and eating disorders (n = 18). There are pioneering early studies, but the methodological quality of studies was generally low. The gaps in meaningful applications to mental health are extensive. The most established finding is that VR exposure-based treatments can reduce anxiety disorders, but there are numerous research and treatment avenues of promise. VR was found to be a much-misused term, often applied to non-interactive and non-immersive technologies. We conclude that VR has the potential to transform the assessment, understanding and treatment of mental health problems. The treatment possibilities will only be realized if, with the user experience at the heart of design, the best immersive VR technology is combined with targeted translational interventions. The capability of VR to simulate reality could greatly increase access to psychological therapies, while treatment outcomes could be enhanced by the technology's ability to create new realities. VR may merit the level of attention given to neuroimaging.
Article
Full-text available
This study evaluates several methods for reporting the perceived location of real sound sources. It is well known that the method used for collecting judgments in auditory-localization experiments has a strong influence on the accuracy of a subject's response. Previous work on auditory-localization tasks revealed that egocentric pointing methods (which are based on a body-centered coordinate system) allow for more accurate judgments than verbal reporting or exocentric pointing techniques (which are based on a 2D or 3D reporting device). Three different egocentric methods are compared: the most commonly applied "manual pointing" and "head pointing" methods, and the "proximal pointing" method, which requires participants to indicate the apparent direction by pointing in the proximal region of the head with a marker held at the fingertips. The first two methods involve a rotation of the participant's body, whereas the third involves only movements of the arm(s) and hand(s) with a fixed head. Sound stimuli were presented randomly over 24 loudspeakers uniformly distributed on the upper hemisphere around the subject. The merits of the different methods are compared and discussed with regard to localization errors and practical considerations. Although they show similar trends, each method affects pointing accuracy in a specific way. The proximal pointing method, for example, is more accurate for sources located at high elevation angles. However, at rear locations close to the median plane, an increased bias appears due to difficulties in performing the motor task required to reach these positions. The proximal pointing method shows faster response times, which may be advantageous when planning 3D sound localization experiments.
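Whatever the reporting method, localization error is typically quantified as the great-circle angle between the true and the reported directions. A short sketch (coordinate convention assumed: azimuth 0° straight ahead, elevation 0° on the horizontal plane):

```python
import numpy as np

def direction(azimuth_deg, elevation_deg):
    """Unit vector for a direction on the sphere around the listener."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def angular_error_deg(true_az, true_el, resp_az, resp_el):
    """Great-circle angle between the true source and the pointed direction."""
    c = np.clip(np.dot(direction(true_az, true_el),
                       direction(resp_az, resp_el)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

# A 10 deg azimuth miss at 60 deg elevation is only ~5 deg of arc
print(angular_error_deg(0, 60, 10, 60))
```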
Conference Paper
Full-text available
Contemporary digital game developers offer a variety of games for the diverse tastes of their customers. Although the gaming experience often depends on one's preferences, the same may not apply to the level of immersion. It has been debated whether the player perspective can influence the level of the player's involvement with the game. The aim of this study was to investigate whether interacting with a game in a first-person perspective is more immersive than playing in the third-person point of view (POV). To test this, participants played a role-playing game in either mode, named their preferred perspective, and subjectively evaluated their immersive experience. The results showed that people were more immersed in the gameplay when viewing the game world through the eyes of the character, regardless of their preferred perspectives.
Article
Full-text available
The use of Virtual Reality (VR) in sports training is now widely studied with the goal of transferring motor skills learned in virtual environments (VEs) to real practice. However, precision motor tasks that require high accuracy have rarely been studied in the context of VEs, especially on Large Screen Image Display (LSID) platforms. An example of such a motor task is the basketball free throw, where the player has to throw a ball into a 46 cm wide basket placed 4.2 m away. In order to determine the best VE training conditions for this type of skill, we proposed and compared three training paradigms. These training conditions combined different user perspectives, first-person (1PP) and third-person (3PP), with the presence or absence of visual guidance. We analysed the performance of eleven amateur subjects who performed series of free throws in a real environment and in an immersive 1:1 scale environment under the proposed conditions. The results show that ball speed at the moment of release in 1PP was significantly lower than in the real world, supporting the hypothesis that distance is underestimated in large-screen VEs. However, ball speed in the 3PP condition was more similar to the real condition, especially when combined with guidance feedback. Moreover, when guidance information was provided, the subjects released the ball at a higher, and closer to optimal, position (5-7% higher compared to no-guidance conditions). This type of information contributes to a better understanding of the impact of visual feedback on the motor performance of users who wish to train motor skills using immersive environments. Moreover, it can be used by exergame designers who wish to develop coaching systems that transfer motor skills learned in VEs to real practice.
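The reported link between release speed and perceived distance follows directly from projectile motion: for a given release angle, the speed needed to cover the throw distance is fixed. A worked sketch with illustrative free-throw numbers (the paper's exact geometry and release parameters are not reproduced here):

```python
import numpy as np

def required_release_speed(distance, height_gain, angle_deg, g=9.81):
    """Speed for a projectile released at `angle_deg` to travel `distance`
    horizontally while ending `height_gain` above the release point."""
    th = np.radians(angle_deg)
    return np.sqrt(g * distance**2
                   / (2 * np.cos(th)**2 * (distance * np.tan(th) - height_gain)))

# Basket 4.2 m away, rim ~1.05 m above the release point, 52 deg release angle
print(required_release_speed(4.2, 1.05, 52))  # ~7.3 m/s
```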
Conference Paper
Full-text available
Third-Person Perspective (3PP) viewpoints have the potential to expand how one perceives and acts in a virtual environment. They offer increased awareness of the posture and surroundings of the virtual body compared to First-Person Perspective (1PP). From another standpoint, however, 3PP can be considered less effective for inducing a strong sense of embodiment in a virtual body. Following an experimental paradigm based on full-body motion capture and immersive interaction, this study investigates the effect of perspective and of visuomotor synchrony on the sense of embodiment. It provides evidence supporting a high sense of embodiment in both 1PP and 3PP during engaging motor tasks, as well as guidelines for choosing the optimal perspective depending on the location of targets.
Article
Full-text available
Many people fail to save what they need to for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating the value of future rewards can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards, but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones.
Article
Full-text available
In 2 studies, the Inclusion of Other in the Self (IOS) Scale, a single-item, pictorial measure of closeness, demonstrated alternate-form and test-retest reliability; convergent validity with the Relationship Closeness Inventory (E. Berscheid et al., 1989), the R. J. Sternberg (1988) Intimacy Scale, and other measures; discriminant validity; minimal social desirability correlations; and predictive validity for whether romantic relationships were intact 3 months later. Also identified and cross-validated were (1) a 2-factor closeness model (Feeling Close and Behaving Close) and (2) longevity-closeness correlations that were small for women vs. moderately positive for men. Five supplementary studies showed convergent and construct validity with marital satisfaction and commitment, with a reaction-time (RT)-based cognitive measure of closeness in married couples, and with intimacy and attraction measures in stranger dyads following laboratory closeness-generating tasks. In 3 final studies, most subjects interpreted IOS Scale diagrams as depicting interconnectedness.
Article
Full-text available
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
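G*Power itself is a standalone GUI program, but the same a priori power computations can be reproduced in code; a sketch using Python's statsmodels as a stand-in (statsmodels is not part of G*Power):

```python
from statsmodels.stats.power import TTestIndPower

# A priori sample size: medium effect (Cohen's d = 0.5), alpha = .05,
# 80% power, two-sided independent-samples t-test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, alternative='two-sided')
print(round(n_per_group))  # ~64 participants per group
```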
Article
Full-text available
During an out-of-body experience (OBE), the experient seems to be awake and to see his body and the world from a location outside the physical body. A closely related experience is autoscopy (AS), which is characterized by the experience of seeing one's body in extrapersonal space. Yet, despite great public interest and many case studies, systematic neurological studies of OBE and AS are extremely rare and, to date, no testable neuroscientific theory exists. The present study describes phenomenological, neuropsychological and neuroimaging correlates of OBE and AS in six neurological patients. We provide neurological evidence that both experiences share important central mechanisms. We show that OBE and AS are frequently associated with pathological sensations of position, movement and perceived completeness of one's own body. These include vestibular sensations (such as floating, flying, elevation and rotation), visual body-part illusions (such as the illusory shortening, transformation or movement of an extremity) and the experience of seeing one's body only partially during an OBE or AS. We also find that the patient's body position prior to the experience influences OBE and AS. Finally, in five patients, brain damage or brain dysfunction is localized to the temporo-parietal junction (TPJ). These results suggest that the complex experiences of OBE and AS represent paroxysmal disorders of body perception and cognition (or body schema). The processes of body perception and cognition, and the unconscious creation of central representation(s) of one's own body based on proprioceptive, tactile, visual and vestibular information, as well as their integration with sensory information of extrapersonal space, are a prerequisite for rapid and effective action with our surroundings. Based on our findings, we speculate that ambiguous input from these different sensory systems is an important mechanism of OBE and AS, and thus of the intriguing experience of seeing one's body in a position that does not coincide with its felt position. We suggest that OBE and AS are related to a failure to integrate proprioceptive, tactile and visual information with respect to one's own body (disintegration in personal space) and to a vestibular dysfunction leading to an additional disintegration between personal (vestibular) space and extrapersonal (visual) space. We argue that both disintegrations (personal; personal-extrapersonal) are necessary for the occurrence of OBE and AS, and that they are due to a paroxysmal cerebral dysfunction of the TPJ in a state of partially and briefly impaired consciousness.
Article
In Virtual Reality, a number of studies have assessed the influence of avatar appearance, avatar control and user point of view on the Sense of Embodiment (SoE) towards a virtual avatar. However, such studies tend to explore each factor in isolation. This paper aims to better understand the inter-relations among these three factors by conducting a subjective matching experiment. In the presented experiment (n=40), participants had to match a given "optimal" SoE avatar configuration (realistic avatar, full-body motion capture, first-person point of view), starting from a "minimal" SoE configuration (minimal avatar, no control, third-person point of view), by iteratively increasing the level of each factor. The choices of the participants provide insights into their preferences and their perception of the three factors considered. Moreover, the subjective matching procedure was conducted in the context of four different interaction tasks, with the goal of covering a wide range of actions an avatar can perform in a virtual environment (VE). The paper also describes a baseline experiment (n=20) which was used to define the number and order of the different levels for each factor prior to the subjective matching experiment (e.g., different degrees of realism ranging from abstract to personalised avatars for the visual appearance). The results of the subjective matching experiment show, first, that point of view and control levels were consistently increased by users before appearance levels when it comes to enhancing the SoE. Second, several configurations were identified with an SoE equivalent to that felt in the optimal configuration, though these varied between tasks. Taken together, our results provide valuable insights about which factors to prioritize in order to enhance the SoE towards an avatar in different tasks, and about configurations that lead to a fulfilling SoE in VEs.
Conference Paper
How do people appropriate their virtual hand representation when interacting in virtual environments? To answer this question, we conducted an experiment studying the sense of embodiment when interacting with three different virtual hand representations, each providing a different degree of visual realism but sharing the same control mechanism. The main experimental task was a pick-and-place task in which participants had to grasp a virtual cube and place it at an indicated position while avoiding an obstacle (brick, barbed wire or fire). An additional task was considered in which participants had to perform a potentially dangerous operation with their virtual hand: placing it close to a virtual spinning saw. Both qualitative measures and questionnaire data were gathered in order to assess the sense of agency and ownership towards each virtual hand. Results show that the sense of agency is stronger for less realistic virtual hands, which also present less mismatch between the participant's actions and the animation of the virtual hand. In contrast, the sense of ownership is increased for the human virtual hand, which provides a direct mapping between the degrees of freedom of the real and virtual hand.
Article
An extensive set of head-related transfer function (HRTF) measurements of a Knowles Electronics Mannequin for Acoustic Research (KEMAR) has recently been completed. The measurements consist of the left and right ear impulse responses from a Realistic Optimus Pro 7 loudspeaker mounted 1.4 m from the KEMAR. Maximum length (ML) pseudorandom binary sequences were used to obtain the impulse responses at a sampling rate of 44.1 kHz. In total, 710 different positions were sampled at elevations from -40 deg to +90 deg. These data are being made available to the research community on the Internet via anonymous FTP and the World Wide Web.
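ML-sequence measurements exploit the sequence's nearly ideal autocorrelation: circularly cross-correlating one steady-state period of the recorded output with the excitation sequence recovers the impulse response. A minimal sketch (assumes a ±1-valued sequence and one full period of the response):

```python
import numpy as np

def mls_impulse_response(recording, mls):
    """Recover an impulse response from a maximum-length-sequence measurement
    by circular cross-correlation (recording: one period of the steady-state
    response to the periodic MLS; mls: the +/-1-valued excitation sequence)."""
    spectrum = np.fft.fft(recording) * np.conj(np.fft.fft(mls))
    return np.fft.ifft(spectrum).real / (len(mls) + 1)
```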
Article
Illusions have historically been of great use to psychology for what they can reveal about perceptual processes. We report here an illusion in which tactile sensations are referred to an alien limb. The effect reveals a three-way interaction between vision, touch and proprioception, and may supply evidence concerning the basis of bodily self-identification.
Article
Two studies were performed to investigate the sense of presence within stereoscopic virtual environments as a function of the addition or absence of auditory cues. The first study examined the presence or absence of spatialized sound, while the second study compared the use of nonspatialized sound to spatialized sound. Sixteen subjects were allowed to navigate freely throughout several virtual environments and for each virtual environment, their level of presence, the virtual world realism, and interactivity between the participant and virtual environment were evaluated using survey questions. The results indicated that the addition of spatialized sound significantly increased the sense of presence but not the realism of the virtual environment. Despite this outcome, the addition of a spatialized sound source significantly increased the realism with which the subjects interacted with the sound source, and significantly increased the sense that sounds emanated from specific locations within the virtual environment. The results suggest that, in the context of a navigation task, while presence in virtual environments can be improved by the addition of auditory cues, the perceived realism of a virtual environment may be influenced more by changes in the visual rather than auditory display media. Implications of these results for presence within auditory virtual environments are discussed.
Article
I report an illusion in which individuals experience that they are located outside their physical bodies and looking at their bodies from this perspective. This demonstrates that the experience of being localized within the physical body can be determined by the visual perspective in conjunction with correlated multisensory information from the body.
Conference Paper
The paper reports the results of two experiments, each investigating the sense of presence within visual and auditory virtual environments. The variables for the studies included the presence or absence of head tracking, the presence or absence of stereoscopic cues, the geometric field of view (GFOV) used to design the visual display, the presence or absence of spatialized sound, and the addition of spatialized versus non-spatialized sound to a stereoscopic display. In both studies, subjects were required to navigate a virtual environment and to complete a questionnaire designed to ascertain the level of presence experienced by the participant within the virtual world. The results indicated that the reported level of presence was significantly higher when head tracking and stereoscopic cues were provided, with more presence associated with a 50- and 90-degree GFOV compared to a narrower 10-degree GFOV. Further, the addition of spatialized sound significantly increased one's sense of presence in the virtual environment; on the other hand, it did not increase the apparent realism of that environment.