Conference Paper

The Impact of a Self-Avatar on Cognitive Load in Immersive Virtual Reality


Abstract

The use of a self-avatar inside an immersive virtual reality system has been shown to have important effects on presence, interaction and perception of space. Based on studies from linguistics and cognition, in this paper we demonstrate that a self-avatar may aid the participant's cognitive processes while immersed in a virtual reality system. In our study, participants were asked to memorise pairs of letters, perform a spatial rotation exercise and then recall the pairs of letters. As a between-subjects factor they either had an avatar or not, and as a within-subjects factor they were either instructed to keep their hands still or were free to move them. We found that participants who both had an avatar and were allowed to move their hands had significantly higher letter-pair recall. There was no significant difference between the other three conditions. Further analysis showed that participants who were allowed to move their hands, but could not see a self-avatar, usually did not move their hands or stopped moving them after a short while. We argue that an active self-avatar may alleviate the mental load of the spatial rotation exercise and thus improve letter recall. The results are further evidence of the importance of an appropriate self-avatar representation in immersive virtual reality.
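The study is a 2×2 mixed design (avatar as a between-subjects factor, hand movement as a within-subjects factor) with letter-pair recall as the dependent measure. The following is a minimal sketch of how such recall data could be organised and inspected; the records, column names, and the simple between-groups comparison are illustrative assumptions, not the authors' actual statistical procedure.

```python
# Minimal sketch: long-format data for a 2x2 mixed design
# (avatar: between-subjects, hands: within-subjects), outcome = recall.
# All records and the t-test below are hypothetical illustrations.
import pandas as pd
from scipy import stats

df = pd.DataFrame([
    {"pid": 1, "avatar": True,  "hands": "free",  "recall": 7},
    {"pid": 1, "avatar": True,  "hands": "still", "recall": 5},
    {"pid": 2, "avatar": True,  "hands": "free",  "recall": 6},
    {"pid": 2, "avatar": True,  "hands": "still", "recall": 4},
    {"pid": 3, "avatar": False, "hands": "free",  "recall": 4},
    {"pid": 3, "avatar": False, "hands": "still", "recall": 5},
    {"pid": 4, "avatar": False, "hands": "free",  "recall": 5},
    {"pid": 4, "avatar": False, "hands": "still", "recall": 4},
])

# Cell means for the four avatar x hands combinations.
print(df.groupby(["avatar", "hands"])["recall"].mean())

# Illustrative between-groups comparison restricted to the free-hands trials,
# the cell where the paper reports the recall advantage.
free = df[df["hands"] == "free"]
t, p = stats.ttest_ind(free.loc[free["avatar"], "recall"],
                       free.loc[~free["avatar"], "recall"])
print(f"avatar vs. no avatar (hands free): t = {t:.2f}, p = {p:.3f}")
```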


... However, incongruent distractions (ID) are more likely to cause significant disruptions because they conflict with the user's task goals, forcing the brain to process unrelated information [76]. This mismatch between the distraction and the task increases cognitive load [43] and makes it more difficult to maintain focus, thereby increasing the likelihood of BIP [62]. These distractions can negatively impact users' reaction times [34], as their attention is forced to shift between real-world and virtual stimuli [74]. ...
... As users experience BIP, their cognitive resources are redirected, which may result in slower reaction times [62]. Bracken et al. [5] found that reaction times increased during BIP events, showing that secondary task reaction time effectively measures disruptions in presence and attention. ...
... In MR environments, managing both virtual and real-world stimuli requires significant cognitive effort, particularly when distractions are present. Cognitive load theory [65] suggests that distractions increase extraneous cognitive load; as a result, when a distraction occurs, users have fewer resources available to maintain presence in the VE [62]. This reflects the user's need to allocate mental resources across multiple competing sources of information. ...
Preprint
Full-text available
Distractions in mixed reality (MR) environments can significantly influence user experience, affecting key factors such as presence, reaction time, cognitive load, and Break in Presence (BIP). Presence measures immersion, reaction time captures user responsiveness, cognitive load reflects mental effort, and BIP represents moments when attention shifts from the virtual to the real world, breaking immersion. However, the effects of distractions on these elements remain insufficiently explored. To address this gap, we have presented a theoretical model to understand how congruent and incongruent distractions affect all these constructs. We conducted a within-subject study (N=54) where participants performed image-sorting tasks under different distraction conditions. Our findings show that incongruent distractions significantly increase cognitive load, slow reaction times, and elevate BIP frequency, with presence mediating these effects.
... One must employ strategies to offset the cognitive load provided by these tests to succeed. Humans naturally tend to rely on gestures for such recall, and studies have shown that the inability to gesture directly inhibits recall [24,60]. In the realm of VR, the natural view of the body is occluded by the HMD; however, such occlusions can be rectified with the involvement of an avatar. ...
... Peck & Tutar found that performance on Stroop tests was positively correlated to embodiment of an avatar (e.g., self-location) [49], suggesting that avatars may affect working memory. Steed et al. also investigated the effect of embodiment on cognitive tests and working memory [60]. Participants who had an avatar and had agency over their avatar performed better on their respective cognitive test. ...
... Based on prior work, we expect that there will be no significant differences between the three conditions in terms of measured subjective agency. Yet, previous research has determined that agency is conducive to embodiment [32,60,64]. Assuming that subjectively perceived agency and embodiment are related measures of body-ownership illusions, the relationship between agency and embodiment should be positively correlated. ...
Article
Full-text available
Control over an avatar in virtual reality can improve one's perceived sense of agency and embodiment towards their avatar. Yet, the relationship between control on agency and embodiment remains unclear. This work aims to investigate two main ideas: (1) the effectiveness of currently used metrics in measuring agency and embodiment and (2) the relationship between different levels of control on agency, embodiment, and cognitive performance. To do this, we conducted a between-participants user study with three conditions on agency (n = 57). Participants embodied an avatar with one of three types of control (i.e., Low - control over head only, Medium - control over head and torso, or High - control over head, torso, and arms) and completed a Stroop test. Our results indicate that the degree of control afforded to participants impacted their embodiment and cognitive performance but, as expected, could not be detected in the self-reported agency scores. Furthermore, our results elucidated further insights into the relationship between control and embodiment, suggesting potential uncanny valley-like effects. Future work should aim to refine agency measures to better capture the effect of differing levels of control and consider other methodologies to measure agency.
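Since this study (and the Peck & Tutar work cited above it) uses a Stroop test as the cognitive-performance measure, the sketch below shows a generic way such trials can be generated and the classic interference score computed. The colour set, trial counts, and scoring are generic illustrations, not the protocol of the cited work.

```python
# Minimal sketch of a Stroop-style trial list and interference score.
# Colours, trial counts, and scoring are generic illustrations only.
import random
from statistics import mean

COLOURS = ["red", "green", "blue", "yellow"]

def make_trials(n_per_type=20, seed=0):
    """Build a shuffled list of congruent and incongruent colour-word trials."""
    rng = random.Random(seed)
    trials = []
    for congruent in (True, False):
        for _ in range(n_per_type):
            word = rng.choice(COLOURS)
            ink = word if congruent else rng.choice([c for c in COLOURS if c != word])
            trials.append({"word": word, "ink": ink, "congruent": congruent})
    rng.shuffle(trials)
    return trials

def interference(results):
    """results: iterable of dicts with keys 'congruent', 'correct', 'rt_ms'."""
    inc = [r["rt_ms"] for r in results if not r["congruent"] and r["correct"]]
    con = [r["rt_ms"] for r in results if r["congruent"] and r["correct"]]
    return mean(inc) - mean(con)  # classic Stroop interference, in ms
```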
... The results of [6] arrive at the conclusion that VR exposure therapies have "a large effect size compared to control conditions and an equal effect size to that of in-vivo exposure" (p. 35) and that "these results were consistent for different disorders" (p. 35); see also [7] for a broader, more differentiated analysis. VR is also used successfully in medical rehabilitation treatments and, according to comparative clinical studies, leads to better recovery results than conventional therapy approaches for, amongst others, balance disorders and disorders of the upper limb functions [8]. ...
... All participants were represented in the virtual Expo Hall by their own avatar in order to optimize presence and minimize cognitive load, as was suggested by [35]. The avatars representing the participants were anthropomorphic, genderless robots. ...
... The interaction through a virtual body in VR is shown in multiple studies. Steed et al. [12] conducted a study in which participants were asked to memorize pairs of letters; they then performed a spatial rotation task (rotating a shown arrangement of blocks to match one of four shown arrangements) to distract them, and afterwards had to recall these pairs of letters. The study found that a visible self-avatar, compared to a no-avatar condition, reduces mental load when doing the spatial rotation task and thus improves letter recall. ...
... The study found that a visible self-avatar, compared to a no-avatar condition, reduces mental load when doing the spatial rotation task and thus improves letter recall. Similar positive effects of avatars are stated by Mohler et al. [12], where participants who saw a fully articulated and tracked representation of themselves made more accurate judgments of absolute egocentric distance to locations compared to participants seeing no avatar. ...
Conference Paper
Virtual Reality (VR) has emerged as a powerful tool in the industry, offering benefits such as enhanced learning outcomes, improved skill acquisition, and deeper engagement across various fields [1]. However, the absence of a standardized approach and guidelines for VR system operation poses a challenge when implementing interaction types and user representations in VR environments. This paper focuses on investigating the impact of controller visualization on precision, accuracy, and immersion within VR environments. By identifying the optimal and most comfortable user representation, which involves a combination of input devices and visual displays, we aim to address this challenge. Previous research has not sufficiently explored the interplay between these interaction methods and user ownership, task performance and usability in immersive virtual environments for different tasks. To bridge this gap, we conducted a comprehensive user study, comparing three VR user representations: visualized controllers with physical controllers, visualized animated hands with physical controllers, and visualized animated hands with hand tracking. Participants evaluated each setup, and we employed qualitative feedback and quantitative metrics, including task completion times and error rates, to evaluate their experiences. Our findings highlight that hand tracking may have limitations in terms of usability but excels in generating a strong sense of body ownership compared to the alternative options. Notably, our analysis revealed minimal or no significant difference between the use of visualized hands or controllers when controllers were employed as the input device. Thus, the choice of the best user representation largely depends on personal preference, while the most effective operation mode varies based on the specific task executed in the VR environment. By leveraging these insights, the industry can harness the full potential of VR technology, driving greater productivity, efficiency, and overall success.
... Both the realms of "embodiment science" and the full body illusion (FBI) paradigm, inspired by the principles of the rubber hand illusion (RHI), which entails experimentally induced illusory ownership over a synthetic or virtual body, serve as principal conceptual frameworks aimed at comprehending the impact of self-avatars on their users (Furlan and Spagnolli 2021; Kilteni et al. 2012; Maselli and Slater 2013; Petkova et al. 2011; Spanlang et al. 2014). Their therapeutic application has demonstrated enhancements in users' cognitive abilities (Steed et al. 2016), haptic performance (Maselli et al. 2016), and self-recognition/identification (Gonzalez-Franco et al. 2020). Within this context, advanced technologies are being explored for the purpose of modifying (e.g., structuring, augmenting, or replacing) the attributes of BSC and altering the subjective experience of inhabiting a body. ...
Article
Full-text available
The application of advanced embodied technologies, particularly virtual reality (VR), has been suggested as a means to induce the full-body illusion (FBI). This technology is employed to modify different facets of bodily self-consciousness, which involves the sense of inhabiting a physical form, and is influenced by cognitive inputs, affective factors like body dissatisfaction, individual personality traits and suggestibility. Specifically, VR-based Mirror Exposure Therapies are used for the treatment of anorexia nervosa (AN). This study aims to investigate whether the “Big Five” personality dimensions, suggestibility, body dissatisfaction and/or body mass index can act as predictors for FBI, either directly or acting as a mediator, in young women of similar gender and age as most patients with AN. The FBI of 156 healthy young women immersed in VR environment was induced through visuomotor and visuo-tactile stimulations, and then assessed using the Avatar Embodiment Questionnaire, comprising four dimensions: Appearance, Ownership, Response, and Multi-Sensory. Data analysis encompassed multiple linear regressions and SPSS PROCESS macro’s mediation model. The findings revealed that the “Big Five” personality dimensions did not directly predict FBI in healthy young women, but Openness to experience, Agreeableness, and Neuroticism exerted an indirect influence on some FBI components through the mediation of suggestibility.
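The analysis described above is regression-based mediation (multiple linear regressions plus the SPSS PROCESS macro). As a rough illustration of the underlying logic only, the sketch below estimates a single-mediator indirect effect with ordinary least squares on simulated data; the variable names (openness, suggestibility, embodiment) are taken from the abstract, but the data, path coefficients, and the omission of PROCESS-style bootstrapping and covariates are assumptions.

```python
# Minimal sketch of a single-mediator model X -> M -> Y estimated with OLS:
#   a-path: M = a*X,  b-path: Y = b*M + c'*X,  indirect effect = a*b.
# The simulated data and effect sizes are hypothetical; the cited study used
# the SPSS PROCESS macro with bootstrap confidence intervals, omitted here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 156
openness = rng.normal(size=n)                           # X (predictor)
suggestibility = 0.4 * openness + rng.normal(size=n)    # M (mediator)
embodiment = 0.5 * suggestibility + rng.normal(size=n)  # Y (outcome)

X = sm.add_constant(openness)
a = sm.OLS(suggestibility, X).fit().params[1]           # X -> M

XM = sm.add_constant(np.column_stack([suggestibility, openness]))
b = sm.OLS(embodiment, XM).fit().params[1]              # M -> Y, controlling X

print(f"indirect effect (a*b) = {a * b:.3f}")
```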
... Therefore, a complete evaluation of virtual embodiment based on full-body immersive VR avatars using cutting-edge VR equipment and wearable kinesthetic devices is necessary, which impacts individuals, performance, and social interaction (Steed et al., 2016; Pan and Steed, 2019; Bujić et al., 2021; Beyea et al., 2023). To contribute to this line of research, we constructed a full-body motion tracking system using multiple motion trackers (HTC VIVE) and a pair of current state-of-the-art wearable kinesthetic gloves (HaptX), which allowed users to control avatars in VR. ...
Article
Full-text available
Enhancing the experience of virtual reality (VR) through haptic feedback could benefit applications from leisure to rehabilitation and training. Devices which provide more realistic kinesthetic (force) feedback appear to hold more promise than their simpler vibrotactile counterparts. However, our understanding of kinesthetic feedback on virtual embodiment is still limited due to the novelty of appropriate kinesthetic devices. To contribute to the line of this research, we constructed a wearable system with state-of-the-art kinesthetic gloves for avatar full-body control, and conducted a between-subjects study involving an avatar self-touch task. We found that providing a kinesthetic sense of touch substantially strengthened the embodiment illusion in VR. We further explored the ability of these kinesthetic gloves to present virtual objects haptically. The gloves were found to provide useful haptic cues about the basic 3D structure and stiffness of objects for a discrimination task. This is one of the first studies to explore virtual embodiment by employing state-of-the-art kinesthetic gloves in full-body VR.
... The research evaluated spatial awareness and the "sense of being" among the participants by observing their behaviour and responses in the simulations. "Self-Avatar for Cognition" is one of the papers whose studies focused on users' cognitive skills during experimentation in virtual reality [27]. Similar to Argelaguet's virtual embodiment experiment, one of the scenarios in the simulation introduced a self-avatar for the user to perform a task; the study examined the participants' skills in VR and observed that cognitive abilities were enhanced by having a user avatar to perform the interaction. ...
... The representation of the body through avatars, or self-avatars, holds importance for users [77], particularly in immersive technologies like VR [73]. We define self-avatars as avatars wherein "users are embodied by a virtual avatar that is co-located with the user's body" [20], and embodiment refers to the phenomenon where users "coexist with a virtual avatar, experiencing it from a first-person perspective" [50]. ...
Preprint
Full-text available
Matching avatar characteristics to a user can impact sense of embodiment (SoE) in VR. However, few studies have examined how participant demographics may interact with these matching effects. We recruited a diverse and racially balanced sample of 78 participants to investigate the differences among participant groups when embodying both demographically matched and unmatched avatars. We found that participant ethnicity emerged as a significant factor, with Asian and Black participants reporting lower total SoE compared to Hispanic participants. Furthermore, we found that user ethnicity significantly influences ownership (a subscale of SoE), with Asian and Black participants exhibiting stronger effects of matched avatar ethnicity compared to White participants. Additionally, Hispanic participants showed no significant differences, suggesting complex dynamics in ethnic-racial identity. Our results also reveal significant main effects of matched avatar ethnicity and gender on SoE, indicating the importance of considering these factors in VR experiences. These findings contribute valuable insights into understanding the complex dynamics shaping VR experiences across different demographic groups.
... A particular aspect to achieve the feeling of immersion is the realism that the simulation provides [23]. Consequently, to improve the subjective experience of humans in simulated environments, it is relevant to have realistic digital humans, or avatars, in them [24]. An avatar is the representation of the self in a digital world [5], [25]. ...
Article
Full-text available
Photo-realistic avatar is a modern term referring to the digital asset that represents a human in advanced computer graphics systems such as video games and simulation tools. These avatars utilize the advances in graphics technologies in both software and hardware aspects. While photo-realistic avatars are increasingly used in industrial simulations, representing human factors, such as human workers' psychophysiological states, remains a challenge. This article addresses this issue by introducing the concept of MetaStates, which are the digitization and representation of the psychophysiological states of a human worker in the digital world. The MetaStates influence the physical representation and performance of a digital human worker while performing a task. To demonstrate this concept, this study presents the development of a photo-realistic avatar enhanced with multi-level graphical representations of psychophysiological states relevant to Industry 5.0. This approach represents a major step forward in the use of digital humans for industrial simulations, allowing companies to better leverage the benefits of the Industrial Metaverse in their daily operations and simulations while keeping human workers at the center of the system.
... VR's spatial navigation reduces the cognitive load in programming learning, surpassing traditional text-based methods [12]. Additionally, VR provides a sense of selfpresence [22], fostering an embodied-cognitive learning experience [18,28] that encourages more intuitive interaction with the content [31], potentially improving overall learning outcomes [1]. Therefore, I propose GeoBotsVR, an easily accessible and easy-to-play VR puzzle-solving/racing game designed to foster interest and motivation in robotics, electronics, and embedded systems of players through an enjoyable experience. ...
Conference Paper
Full-text available
This article introduces GeoBotsVR, an easily accessible virtual reality game that combines elements of puzzle-solving with robotics learning and aims to cultivate interest and motivation in robotics, programming, and electronics among individuals with limited experience in these domains. The game allows players to build and customize a two-wheeled mobile robot using various robotic components and use their robot to solve various procedurally-generated puzzles in a diverse range of environments. An innovative aspect is the inclusion of a repair feature, requiring players to address randomly generated electronics and programming issues with their robot through hands-on manipulation. GeoBotsVR is designed to be immersive, replayable, and practical application-based, offering an enjoyable and accessible tool for beginners to acquaint themselves with robotics. The game simulates a hands-on learning experience and does not require prior technical knowledge, making it a potentially valuable resource for beginners to get an engaging introduction to the field of robotics.
... A differentiating factor for avatar embodiment in VR as compared to non-immersive media is the first-person perspective [36], which makes studying scale-related asymmetry particularly interesting when one of the players is using immersive VR. Some of the documented effects of embodiment include enhanced sensory experiences such as haptics [20]; cognition can also be enhanced when using an avatar [44,54]; the self-avatar follower effect [21], by which the avatar movements can influence the participants; or the Proteus effect [61]. This type of plasticity allows easy modifications to the body schema and body ownership, first demonstrated by the rubber hand illusion [10]. This effect was then reproduced in large mannequins [16,39] and it has since been found quite easy to substitute a real body with a virtual one [50,53]. ...
... The effects of avatars have been described by researchers. Steed et al. [1] suggested that the use of avatars that follow the user's movements can reduce the cognitive load of certain tasks in the VR space. People around the world have been using VR social networking services, such as VRChat, where users enjoy interacting with other users using avatars that they have selected and edited to their liking. ...
Conference Paper
Full-text available
Although the implementation of "Adaptive Virtual Reality" is becoming feasible, understanding the main effects of its realization on users based on cognitive models is essential. Here, as the first step, we first describe a model of the flow of information obtained by actual human perception through avatars in virtual reality (VR) and the resulting human reactions, and confirm the validity of the user models proposed so far. We also consider the degree of immersion predicted due to the integration of multimodal information. The cognitive processes of VR experiences are largely categorized into "perception and recognition of information (attention, memory, and decision making)" and "perception-based physical actions and interactions with VR objects". Based on this, we describe a cognitive model of VR experiences. In addition, as examples of the discrepancies in sensory perception experienced in real/VR spaces, we briefly describe the phenomena that occur in communication. We describe the cognitive models for these phenomena and qualitatively consider the degree to which sensory information obtained from the real/VR space affects the degree of chunk activation. The intensity of human sensation is expressed as a logarithm according to the Weber-Fechner law, suggesting that human senses can distinguish differences even with weak sensory information. We argue that the "slightly different from the real world" sense felt in VR content is caused by such slight differences in sensory information. Overall, we advance the cognitive understanding of the immersive experience particularly in the VR space, and qualitatively describe the possibility of designing highly immersive VR content that is adapted to each individual.
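The Weber-Fechner relation invoked above can be stated compactly. A standard formulation (the constant k and the threshold intensity I_0 are modality-dependent and left unspecified here) is:

```latex
% Weber's law: the just-noticeable difference grows in proportion to intensity
\frac{\Delta I}{I} = k_w \quad (k_w \ \text{constant})
% Integrating yields Fechner's logarithmic scale of perceived magnitude S
S = k \,\ln\!\left(\frac{I}{I_0}\right)
% I   : physical stimulus intensity
% I_0 : detection threshold
% k   : modality-dependent constant
```

On this logarithmic scale, even small relative deviations of rendered sensory input from its real-world counterpart can remain discriminable, which is the basis of the "slightly different from the real world" sensation discussed in the abstract.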
... As a result of science fiction, the term came to refer to digital representations of people in virtual environments such as personifications in online communities and video games. The use of avatars to induce a sense of embodiment in VR settings has been shown to have various effects on users' immersiveness, physical social interaction, and spatial cognition [3,4]. ...
Article
Full-text available
The ANA Avatar XPRIZE was a four-year competition to develop a robotic “avatar” system to allow a human operator to sense, communicate, and act in a remote environment as though physically present. The competition featured a unique requirement that judges would operate the avatars after less than one hour of training on the human–machine interfaces, and avatar systems were judged on both objective and subjective scoring metrics. This paper presents a unified summary and analysis of the competition from technical, judging, and organizational perspectives. We study the use of telerobotics technologies and innovations pursued by the competing teams in their avatar systems, and correlate the use of these technologies with judges’ task performance and subjective survey ratings. It also summarizes perspectives from team leads, judges, and organizers about the competition’s execution and impact to inform the future development of telerobotics and telepresence.
... It has been known for a long time that self-body representation enhances presence. 13 Moreover, Steed et al. 14 found that having a virtual body that moves according to the participant's movements enhances the cognitive processes of participants compared to not having a mapped body that reflects their movements. Participants were better able to recall pairs of letters when they had an avatar and were free to move their hands than in the conditions without an avatar or where they were instructed to keep their hands still. ...
Article
Full-text available
VR United is a virtual reality application that we have developed to support multiple people simultaneously interacting in the same environment. Each person is represented with a virtual body that looks like themselves. Such immersive shared environments have existed and been the subject of research for the past 30 years. Here, we demonstrate how VR United meets criteria for successful interaction, where a journalist from the Financial Times in London interviewed a professor in New York for two hours. The virtual location of the interview was a restaurant, in line with the series of interviews published as “Lunch with the FT.” We show how the interview was successful, as a substitute for a physically present one. The article based on the interview was published in the Financial Times as normal for the series. We finally consider the future development of such systems, including some implications for immersive journalism.
... Previous research has shown that the appearance and type of avatars have an impact on cognition and presence [40]. Steed et al. [36] argued for the necessity of a self-avatar in VR, as it affects users' cognitive load. Expanding on this work, Pan and Steed [32] conducted further studies and found that the cognitive load of a user is influenced by the type of avatar. ...
Conference Paper
Full-text available
Avatars serve as users’ virtual identities and hold a significant role in shaping the user experience within the realm of Virtual Reality (VR). The appearance of individual avatars and the perceived self-congruence within the environment are likely to influence users’ perceived presence in VR. In this paper, we present a study that investigates four types of avatars in VR: anime, human, animal, and item. Participants were asked to choose an avatar before entering a virtual environment (classroom, gallery, café, street, and forest) populated with avatars of different types and to evaluate their perceived self-congruence within the environment and the perceived presence. Our study results showed no significant difference in presence when users use different avatars. However, there is a correlation between users’ perceived self-congruence and social presence. We discuss the findings and provide suggestions for the future use of avatars in VR.
... For instance, research has shown evidence of the importance of being embodied in the self-avatar within immersive virtual environments, understood as the ensemble of sensations that arise in conjunction with being inside, having, and controlling a body, especially in relation to Virtual Reality applications (Kilteni, 2012). SoE within immersive virtual environments is a highly powerful aspect, as being embodied in an avatar dramatically changes the user's experience in VR (Peck, 2013). Several experiments have demonstrated that SoE can increase user cognitive abilities (Steed, 2016) and sensorimotor abilities (McAnally, 2022), improve user immersion (Frohner, 2019) and haptic performance (Maselli et al., 2016; Gonzalez-Franco, 2019), and increase self-recognition and identification through enfacement (Gonzalez-Franco, 2020). At the same time, it is a strongly subjective perception, since every user is unique and can have significantly different experiences, responses and performances in the same VR setup (Peck, 2021). ...
Conference Paper
Full-text available
The purpose of this work is to demonstrate the influence of embodiment on the success of Visuo-haptic Learning, as it has not yet been investigated in the current literature. With this aim, we conducted an experimental campaign to compare the users’ Sense of Embodiment (SoE) and learning-success values obtained by experiencing the same simulated duty cycle within two different Visuo-haptic Learning environments. Interesting results have been found: the embodiment influenced the users’ completion time and mental workload, but it did not have a particular effect on the obtained learning level (intended as knowledge of the procedure). With this work, we aim to highlight the necessity of conducting wider and deeper studies about the influence of human factors and subjective perceptions on the success of Visuo-haptic Learning.
... Users' head movements are tracked by the HMD and, depending on the condition, wrist or finger movements are tracked using the HTC Vive controllers or a Leap Motion. By using these data, both users are visualized as an avatar, allowing them to see the virtual representation of their counterparts and themselves and thus improving cognitive abilities (Steed et al., 2016). An inverse kinematic technique is used for a realistic representation of the avatars' movements. ...
Article
Full-text available
Virtual reality offers exciting new opportunities for training. This inspires more and more training fields to move from the real world to virtual reality, but some modalities are lost in this transition. In the real world, participants can physically interact with the training material; virtual reality offers several interaction possibilities, but do these affect the training’s success, and if yes, how? To find out how interaction methods influence the learning outcome, we evaluate the following four methods based on ordnance disposal training for civilians: 1) Real-World, 2) Controller-VR, 3) Free-Hand-VR, and 4) Tangible-VR in a between-subjects experiment (n = 100). We show that the Free-Hand-VR method lacks haptic realism and has the worst training outcome. Training with haptic feedback, e.g., Controller-VR, Tangible-VR, and Real-World, leads to a better overall learning effect and matches the participants’ self-assessment. Overall, the results indicate that free-hand interaction is improved by the extension of a tracked tangible object, but the controller-based interaction is most suitable for VR training.
... Embodiment, which is related to the avatar, is an important factor that influences motivation, perception, cognitive activities, and creativity-enhancing effects (e.g., the Proteus effect) in VEs (Yee and Bailenson 2009; Steed et al. 2016; Flavián et al. 2021; Keenaghan et al. 2022). Embodiment is defined as a subjective feeling of being in a virtual body and having the property of that body (Kilteni et al. 2012). ...
Article
Full-text available
As an artificial space extended from the physical environment, the virtual environment (VE) provides more possibilities for humans to work and be entertained with fewer physical restrictions. Benefiting from anonymity, one of the important features of VEs, users are able to receive visual stimuli that might differ from the physical environment through digital representations presented in VEs. Avatars and contextual cues in VEs can be considered as digital representations of users and contexts. In this article, we analyzed 21 articles that examined the creativity-boosting effects of different digital user and contextual representations. We summarized the main effects induced by these two digital representations, notably the effect induced by the self-similar avatar, the Proteus effect, avatars with Social Identity Cues, the priming effect induced by contextual representation, and the embodied metaphorical effect. In addition, we examined the influence of immersion on creativity by comparing non-immersive and immersive VEs (i.e., desktop VE and headset VE, respectively). Last, we discussed the roles of embodiment and presence in creativity in VEs, which were overlooked in past research.
Chapter
Full-text available
This chapter explores the theological foundations and implications of digital pneumatology, a developing field that examines the presence and power of the Holy Spirit in digital spaces, particularly within the metaverse. The rapid digitalization of church practices, accelerated by the COVID-19 pandemic, has led to an increasing engagement with virtual religious experiences. While some argue that digital interactions lack the authenticity of traditional communal worship, others assert that the Holy Spirit is not bound by physical spaces and can work powerfully in digital environments. The chapter outlines the theological underpinnings of digital pneumatology, emphasizing the omnipresence, transcendence, and immanence of the Holy Spirit. It argues that just as the Spirit operates in physical spaces, He can also manifest in virtual realities, facilitating spiritual experiences, worship, and fellowship among believers. Through avatar-mediated interactions, believers can encounter the Spirit, experience transformation, and participate in meaningful religious practices. Furthermore, the chapter discusses the Holy Spirit’s role in regeneration, sanctification, impartation, and healing within digital spaces. Testimonies of spiritual encounters in online settings, including the metaverse, suggest that digital environments can serve as conduits for divine interaction. The presence of the Spirit in the metaverse fosters spiritual community and empowers believers to engage in mission and ministry beyond traditional boundaries. While acknowledging concerns about digital spirituality’s potential drawbacks—such as consumerism and detachment from physical fellowship—the chapter contends that digital platforms can enhance religious engagement rather than replace physical worship. The emergence of virtual churches and digital missions signals a paradigm shift in ecclesiology and pneumatology. Digital pneumatology thus offers a framework for understanding the Spirit’s activity in an increasingly digital world, urging scholars and church leaders to reimagine ministry in the metaverse for the next generation of believers.
Conference Paper
Full-text available
Virtual reality (VR) is found to present significant cognitive challenges due to its immersive nature and frequent sensory conflicts. This study systematically investigates the impact of sensory conflicts induced by VR remapping techniques on cognitive fatigue, and unveils their correlation. We utilized three remapping methods (haptic repositioning, head-turning redirection, and giant resizing) to create different types of sensory conflicts, and measured perceptual thresholds to induce various intensities of the conflicts. Through experiments involving cognitive tasks along with subjective and physiological measures, we found that all three remapping methods influenced the onset and severity of cognitive fatigue, with visual-vestibular conflict having the greatest impact. Interestingly, visual-experiential/memory conflict showed a mitigating effect on cognitive fatigue, emphasizing the role of novel sensory experiences. This study contributes to a deeper understanding of cognitive fatigue under sensory conflicts and provides insights for designing VR experiences that align better with human perceptual and cognitive capabilities.
Book
Full-text available
Visualization, Visual Analytics and Virtual Reality in Medicine: State-of-the-art Techniques and Applications describes important techniques and applications that show an understanding of actual user needs as well as technological possibilities. The book includes user research, for example, task and requirement analysis, visualization design and algorithmic ideas without going into the details of implementation. This reference will be suitable for researchers and students in visualization and visual analytics in medicine and healthcare, medical image analysis scientists and biomedical engineers in general. Visualization and visual analytics have become prevalent in public health and clinical medicine, medical flow visualization, multimodal medical visualization and virtual reality in medical education and rehabilitation. Relevant applications now include digital pathology, virtual anatomy and computer-assisted radiation treatment planning.
Chapter
Full-text available
Social interaction is one of the most popular use cases of virtual reality (VR). Virtual worlds accessed through VR headsets can immerse people in diverse places and present its users however they wish to be represented. The affordances of this technology allow people to connect with themselves, others, and their surroundings in unique ways. Research has shown that social norms found in the physical world transfer over to virtual worlds. People respond to virtual people in a manner similar to how they would treat people in the physical world. Although virtual worlds and the physical world share similarities, they have many differences. Virtual reality is not—and does not necessarily need to be—a veridical representation of the physical world. Virtual reality has the ability to transform everything, such as what people look like, how they behave, where they are, and how they see things. Cues related to people, such as their visual appearance and nonverbal behavior, or place, such as the surrounding environment and perspective, can be augmented, filtered, or suppressed. These transformations also lead to significant psychological and behavioral effects, affecting how people build trust, engage with others, or communicate nonverbally. Whereas some of these transformations may be unintentional, such as technological by-products, other transformations can be intentional. As a result, it is critical to understand how social interactions occur differently in these transformed environments.
Article
A prerequisite to improving the presence of a user in mixed reality (MR) is the ability to measure and quantify presence. Traditionally, subjective questionnaires have been used to assess the level of presence. However, recent studies have shown that presence is correlated with objective and systemic human performance measures such as reaction time. These studies analyze the correlation between presence and reaction time when technical factors such as object realism and plausibility of the object's behavior change. However, additional psychological and physiological human factors can also impact presence. It is unclear if presence can be mapped to and correlated with reaction time when human factors such as conditioning are involved. To answer this question, we conducted an exploratory study (N = 60) where the relationship between presence and reaction time was assessed under three different conditioning scenarios: control, positive, and negative. We demonstrated that human factors impact presence. We found that presence scores and reaction times are significantly correlated (correlation coefficient of −0.64), suggesting that the impact of human factors on reaction time correlates with its effect on presence. In demonstrating that, our study takes another important step toward using objective and systemic measures like reaction time as a presence measure.
Article
Full-text available
The sense of embodiment refers to the sensations of being inside, having, and controlling a body. In virtual reality, it is possible to substitute a person’s body with a virtual body, referred to as an avatar. Modulations of the sense of embodiment through modifications of this avatar have perceptual and behavioural consequences on users that can influence the way users interact with the virtual environment. Therefore, it is essential to define metrics that enable a reliable assessment of the sense of embodiment in virtual reality to better understand its dimensions, the way they interact, and their influence on the quality of interaction in the virtual environment. In this review, we first introduce the current knowledge on the sense of embodiment, its dimensions (senses of agency, body ownership, and self-location), and how they relate the ones with the others. Then, we dive into the different methods currently used to assess the sense of embodiment, ranging from questionnaires to neurophysiological measures. We provide a critical analysis of the existing metrics, discussing their advantages and drawbacks in the context of virtual reality. Notably, we argue that real-time measures of embodiment, which are also specific and do not require double tasking, are the most relevant in the context of virtual reality. Electroencephalography seems a good candidate for the future if its drawbacks (such as its sensitivity to movement and practicality) are improved. While the perfect metric has yet to be identified if it exists, this work provides clues on which metric to choose depending on the context, which should hopefully contribute to better assessing and understanding the sense of embodiment in virtual reality.
Conference Paper
Acquiring accessibility information about unfamiliar places in advance is essential for wheelchair users to make better decisions about physical visits. Today’s assessment approaches such as phone calls, photos/videos, or 360° virtual tours often fall short of providing the specific accessibility details needed for individual differences. For example, they may not reveal crucial information like whether the legroom underneath a table is spacious enough or if the spatial configuration of an appliance is convenient for wheelchair users. In response, we present Embodied Exploration, a Virtual Reality (VR) technique to deliver the experience of a physical visit while keeping the convenience of remote assessment. Embodied Exploration allows wheelchair users to explore high-fidelity digital replicas of physical environments with themselves embodied by avatars, leveraging the increasingly affordable VR headsets. With a preliminary exploratory study, we investigated the needs and iteratively refined our techniques. Through a real-world user study with six wheelchair users, we found Embodied Exploration is able to facilitate remote and accurate accessibility assessment. We also discuss design implications for embodiment, safety, and practicality.
Article
Full-text available
Through the rapid development of virtual reality (VR), South African Higher Educational Institutions (HEIs) have shown interest in the potential VR has in teaching and learning practices. HEIs are further urged by the South African government to use cutting edge educational technology (edtech) tools to promote student engagement and limit the high dropout rates noticeable in HEIs. The researcher explored the perceived impact VR can have on student engagement. A qualitative research methodology was adopted for this study and the research instruments included open-ended questionnaires, semi-structured interviews, and a true experiment. Thirty-six participants took part in the study. The results of the study highlight a 23 per cent higher pass rate and a 180 per cent higher engagement level in students using VR as opposed to students studying via online distance learning. Two themes emerged from the results, namely: (1) the use of VR in teaching and learning, and (2) the influence VR has on student engagement levels. The results of this study further highlight that VR learning yields higher student engagement levels and as a result, students achieve higher marks. The significance of the study lies in the assistance it can offer higher educational institutions in their decision-making process of adopting VR into their teaching and learning processes.
Chapter
The use of avatars, or digital representations of human-like figures, can facilitate embodiment in VR environments by providing a visual representation of the user within the virtual world that enhances their experience of the VR. In this study, we examined the effects of avatar appearance similarity on trusting human behavior toward non-human entities (agents). Our preliminary results revealed that avatars with higher similarity have the potential to elicit a relatively high level of trust. Keywords: Human-centered Design, Virtual Reality
Preprint
Full-text available
Recently, there has been growing interest in investigating the effects of self-representation on user experience and perception in virtual environments. However, few studies investigated the effects of levels of body representation (full-body, lower-body and viewpoint) on locomotion experience in terms of spatial awareness, self-presence and spatial presence during virtual locomotion. Understanding such effects is essential for building new virtual locomotion systems with better locomotion experience. In the present study, we first built a walking-in-place (WIP) virtual locomotion system that can represent users using avatars at three levels (full-body, lower-body and viewpoint) and is capable of rendering walking animations during in-place walking of a user. We then conducted a virtual locomotion experiment using three levels of representation to investigate the effects of body representation on spatial awareness, self-presence and spatial presence during virtual locomotion. Experimental results showed that the full-body representation provided better virtual locomotion experience in these three factors compared to that of the lower-body representation and the viewpoint representation. The lower-body representation also provided better experience than the viewpoint representation. These results suggest that self-representation of users in virtual environments using a full-body avatar is critical for providing better locomotion experience. Using full-body avatars for self-representation of users should be considered when building new virtual locomotion systems and applications.
Article
Full-text available
Building on both cognitive semantics and enactivist approaches to cognition, we explore the concept of enactive metaphor and its implications for learning. Enactive approaches to cognition involve the idea that online sensory-motor and affective processes shape the way the perceiver-thinker experiences the world and interacts with others. Specifically, we argue for an approach to learning through whole-body engagement in a way that employs enactive metaphors. We summarize recent empirical studies that show enactive metaphors and whole-body involvement in virtual and mixed reality environments support and improve learning.
Article
Full-text available
Which is my body and how do I distinguish it from the bodies of others, or from objects in the surrounding environment? The perception of our own body, and more particularly our sense of body ownership, is taken for granted. Nevertheless, experimental findings from body ownership illusions (BOIs) show that under specific multisensory conditions, we can experience artificial body parts or fake bodies as our own body parts or body, respectively. The aim of the present paper is to discuss how and why BOIs are induced. We review several experimental findings concerning the spatial, temporal, and semantic principles of crossmodal stimuli that have been applied to induce BOIs. On the basis of these principles, we discuss theoretical approaches concerning the underlying mechanism of BOIs. We propose a conceptualization based on Bayesian causal inference for addressing how our nervous system could infer whether an object belongs to our own body, using multisensory, sensorimotor, and semantic information, and we discuss how this can account for several experimental findings. Finally, we point to neural network models as an implementational framework within which the computational problem behind BOIs could be addressed in the future.
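The review above proposes Bayesian causal inference as the mechanism behind ownership judgments. The sketch below illustrates the generic computation for two spatial cues (e.g., seen and felt hand position): compare a common-cause model ("this is my body part") against an independent-causes model and report the posterior probability of the common cause. The noise parameters, the prior, and the grid integration are illustrative assumptions, not a model fitted in the cited work.

```python
# Minimal sketch of Bayesian causal inference over two spatial cues:
#   C = 1: both cues come from one source (own body part)
#   C = 2: the cues come from independent sources
# Returns P(C=1 | x_v, x_p) via Bayes' rule. All sigmas and the prior
# p_common are illustrative choices.
import numpy as np
from scipy.stats import norm

def p_common_cause(x_v, x_p, sigma_v=1.0, sigma_p=1.5,
                   sigma_s=10.0, p_common=0.5):
    # Likelihood under C=1: integrate over the unknown common source s
    # (simple Riemann sum over a grid).
    s = np.linspace(-50, 50, 2001)
    integrand = (norm.pdf(x_v, loc=s, scale=sigma_v) *
                 norm.pdf(x_p, loc=s, scale=sigma_p) *
                 norm.pdf(s, loc=0.0, scale=sigma_s))
    like_c1 = np.sum(integrand) * (s[1] - s[0])

    # Likelihood under C=2: each cue has its own independent source.
    like_c2 = (norm.pdf(x_v, loc=0.0, scale=np.hypot(sigma_v, sigma_s)) *
               norm.pdf(x_p, loc=0.0, scale=np.hypot(sigma_p, sigma_s)))

    post_c1 = like_c1 * p_common
    post_c2 = like_c2 * (1.0 - p_common)
    return post_c1 / (post_c1 + post_c2)

# Nearly co-located cues favour the common cause (ownership);
# widely separated cues favour independent causes.
print(p_common_cause(0.0, 0.5))   # high posterior for C=1
print(p_common_cause(0.0, 15.0))  # low posterior for C=1
```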
Article
Full-text available
We report an experiment where participants observed an attack on their virtual body as experienced in an immersive virtual reality (IVR) system. Participants sat by a table with their right hand resting upon it. In IVR, they saw a virtual table that was registered with the real one, and they had a virtual body that substituted their real body seen from a first person perspective. The virtual right hand was collocated with their real right hand. Event-related brain potentials were recorded in two conditions, one where the participant's virtual hand was attacked with a knife and a control condition where the knife only struck the virtual table. Significantly greater P450 potentials were obtained in the attack condition, confirming our expectations that participants had a strong illusion of the virtual hand being their own, which was also strongly supported by questionnaire responses. Higher levels of subjective virtual hand ownership correlated with larger P450 amplitudes. Mu-rhythm event-related desynchronization in the motor cortex and readiness potential (C3-C4) negativity were clearly observed when the virtual hand was threatened, as would be expected if the real hand were threatened and the participant tried to avoid harm. Our results support the idea that event-related potentials may provide a promising non-subjective measure of virtual embodiment. They also support previous experiments on pain observation and are placed into context of similar experiments and studies of body perception and body ownership within cognitive neuroscience.
Article
Full-text available
An illusory sensation of ownership over a surrogate limb or whole body can be induced through specific forms of multisensory stimulation, such as synchronous visuotactile tapping on the hidden real and visible rubber hand in the rubber hand illusion. Such methods have been used to induce ownership over a manikin and a virtual body that substitute the real body, as seen from first-person perspective, through a head-mounted display. However, the perceptual and behavioral consequences of such transformed body ownership have hardly been explored. In Exp. 1, immersive virtual reality was used to embody 30 adults as a 4-y-old child (condition C), and as an adult body scaled to the same height as the child (condition A), experienced from the first-person perspective, and with virtual and real body movements synchronized. The result was a strong body-ownership illusion equally for C and A. Moreover there was an overestimation of the sizes of objects compared with a nonembodied baseline, which was significantly greater for C compared with A. An implicit association test showed that C resulted in significantly faster reaction times for the classification of self with child-like compared with adult-like attributes. Exp. 2 with an additional 16 participants extinguished the ownership illusion by using visuomotor asynchrony, with all else equal. The size-estimation and implicit association test differences between C and A were also extinguished. We conclude that there are perceptual and probably behavioral correlates of body-ownership illusions that occur as a function of the type of body in which embodiment occurs.
Article
Full-text available
The Information Packaging Hypothesis (Kita, 2000) holds that gestures play a role in conceptualising information for speaking. According to this view, speakers will gesture more when describing difficult-to-conceptualise information than when describing easy-to-conceptualise information. In the present study, 24 participants described ambiguous dot patterns under two conditions. In the dots-plus-shapes condition, geometric shapes connected the dots, and participants described the patterns in terms of those shapes. In the dots-only condition, no shapes were present, and participants generated their own geometric conceptualisations and described the patterns. Participants gestured at a higher rate in the dots-only condition than in the dots-plus-shapes condition. The results support the Information Packaging Hypothesis and suggest that gestures occur when information is difficult to conceptualise.
Article
Full-text available
What does it feel like to own, to control, and to be inside a body? The multidimensional nature of this experience together with the continuous presence of one's biological body, render both theoretical and experimental approaches problematic. Nevertheless, exploitation of immersive virtual reality has allowed a reframing of this question to whether it is possible to experience the same sensations towards a virtual body inside an immersive virtual environment as toward the biological body, and if so, to what extent. The current paper addresses these issues by referring to the Sense of Embodiment (SoE). Due to the conceptual confusion around this sense, we provide a working definition which states that SoE consists of three subcomponents: the sense of self-location, the sense of agency, and the sense of body ownership. Under this proposed structure, measures and experimental manipulations reported in the literature are reviewed and related challenges are outlined. Finally, future experimental studies are proposed to overcome those challenges, toward deepening the concept of SoE and enhancing it in virtual applications.
Article
Full-text available
The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: (1) interacting with tools changes the way we think and perceive – tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; (2) we think with our bodies not just with our brains; (3) we know more by doing than by seeing – there are times when physically performing an activity is better than watching someone else perform the activity, even though our motor resonance system fires strongly during other person observation; (4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context aware, and telepresence systems.
Article
Full-text available
This article presents an interactive technique for moving through an immersive virtual environment (or “virtual reality”). The technique is suitable for applications where locomotion is restricted to ground level. The technique is derived from the idea that presence in virtual environments may be enhanced the stronger the match between proprioceptive information from human body movements and sensory feedback from the computer-generated displays. The technique is an attempt to simulate body movements associated with walking. The participant “walks in place” to move through the virtual environment across distances greater than the physical limitations imposed by the electromagnetic tracking devices. A neural network is used to analyze the stream of coordinates from the head-mounted display, to determine whether or not the participant is walking on the spot. Whenever it determines the walking behavior, the participant is moved through virtual space in the direction of his or her gaze. We discuss two experimental studies to assess the impact on presence of this method in comparison to the usual hand-pointing method of navigation in virtual reality. The studies suggest that subjective rating of presence is enhanced by the walking method provided that participants associate subjectively with the virtual body provided in the environment. An application of the technique to climbing steps and ladders is also presented.
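To make the walking-in-place detection concrete, here is a minimal sketch that substitutes a simple vertical-oscillation heuristic for the neural network described in the paper; the window size and amplitude threshold are illustrative assumptions, not values from the study.

```python
# Illustrative sketch only: the paper trains a neural network on the stream of
# head-tracker coordinates; here a simple amplitude heuristic on the vertical
# head position stands in for that classifier.
from collections import deque

class WalkInPlaceDetector:
    def __init__(self, window=30, min_amplitude=0.02):
        self.heights = deque(maxlen=window)   # recent head heights (metres), assumed window
        self.min_amplitude = min_amplitude    # assumed bobbing threshold (metres)

    def update(self, head_y):
        self.heights.append(head_y)
        if len(self.heights) < self.heights.maxlen:
            return False
        # Stepping in place shows up as vertical oscillation of the head.
        return (max(self.heights) - min(self.heights)) > self.min_amplitude

# Each frame: if walking is detected, translate the viewpoint along the gaze direction.
```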
Conference Paper
Full-text available
A number of investigators have reported that distance judgments in virtual environments (VEs) are systematically smaller than distance judgments made in comparably-sized real environments. Many variables that may contribute to this difference have been investigated but none of them fully explain the distance compression. In this paper we asked whether seeing a fully-articulated visual representation of oneself (avatar) within a virtual environment would lead to more accurate estimations of distance. We found that participants who explored near space without the visual avatar underestimated egocentric distance judgments compared to those who similarly explored near space while viewing a fully-articulated avatar. These results are discussed with respect to the perceptual and cognitive mechanisms that may be involved in the observed effects as well as the benefits of visual feedback in the form of an avatar for VE applications.
Article
Full-text available
This paper describes a new measure for presence in immersive virtual environments (VEs) that is based on data that can be unobtrusively obtained during the course of a VE experience. At different times during an experience, a participant will occasionally switch between interpreting the totality of sensory inputs as forming the VE or the real world. The number of transitions from virtual to real is counted, and, using some simplifying assumptions, a probabilistic Markov chain model can be constructed to model these transitions. This model can be used to estimate the equilibrium probability of being “present” in the VE. This technique was applied in the context of an experiment to assess the relationship between presence and body movement in an immersive VE. The movement was that required by subjects to reach out and touch successive pieces on a three-dimensional chess board. The experiment included twenty subjects, ten of whom had to reach out to touch the chess pieces (the active group) and ten of whom only had to click a handheld mouse button (the control group). The results revealed a significant positive association in the active group between body movement and presence. The results lend support to interaction paradigms that are based on maximizing the match between sensory data and proprioception.
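As a rough illustration of the transition-counting idea, the sketch below estimates the equilibrium probability of being "present" from a sampled sequence of reported states using a two-state Markov chain; the sampling scheme and the toy sequence are assumptions, not the authors' data.

```python
# Minimal sketch (not the authors' exact model): estimate the equilibrium
# probability of interpreting the VE as reality ('V') versus the real world
# ('R') from a regularly sampled sequence of reported states.

def equilibrium_presence(states):
    v_to_r = r_to_v = v_count = r_count = 0
    for current, nxt in zip(states, states[1:]):
        if current == 'V':
            v_count += 1
            v_to_r += (nxt == 'R')
        else:
            r_count += 1
            r_to_v += (nxt == 'V')
    p_vr = v_to_r / v_count if v_count else 0.0   # P(V -> R)
    p_rv = r_to_v / r_count if r_count else 0.0   # P(R -> V)
    # Stationary probability of state V for the two-state chain.
    return p_rv / (p_vr + p_rv) if (p_vr + p_rv) else 1.0

print(equilibrium_presence(list("VVVVRVVVRRVVVV")))  # about 0.77 for this toy sequence
```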
Article
Full-text available
Few HMD-based virtual environment systems display a rendering of the user's own body. Subjectively, this often leads to a sense of disembodiment in the virtual world. We explore the effect of being able to see one's own body in such systems on an objective measure of the accuracy of one form of space perception. Using an action-based response measure, we found that participants who explored near space while seeing a fully-articulated and tracked visual representation of themselves subsequently made more accurate judgments of absolute egocentric distance to locations ranging from 4 m to 6 m away from where they were standing than did participants who saw no avatar. A nonanimated avatar also improved distance judgments, but by a lesser amount. Participants who viewed either animated or static avatars positioned 3 m in front of their own position made subsequent distance judgments with similar accuracy to the participants who viewed the equivalent animated or static avatar positioned at their own location. We discuss the implications of these results on theories of embodied perception in virtual environments.
Article
Full-text available
(No abstract was extracted for this entry; only front matter was captured. Acknowledgments: the work was funded by the U.K. Science and Engineering Research Council (SERC) and the Department of Trade and Industry through grant CTA/2 of the London Parallel Applications Centre, with thanks to Anthony Steed; the Virtual Treadmill is the subject of a patent application in the UK and other countries.)
Conference Paper
Full-text available
Our physical bodies play a central role in shaping human experience in the world, understanding of the world, and interactions in the world. This paper draws on theories of embodiment — from psychology, sociology, and philosophy — synthesizing five themes we believe are particularly salient for interaction design: thinking through doing, performance, visibility, risk, and thick practice. We introduce aspects of human embodied engagement in the world with the goal of inspiring new interaction design approaches and evaluations that better integrate the physical and computational worlds. Author Keywords: Embodiment, bodies, embodied interaction, ubiquitous computing, phenomenology, interaction design
Article
Full-text available
When we talk to one another face-to-face, body gestures accompany our speech. Motion tracking technology enables us to include body gestures in avatar-mediated communication, by mapping one's movements onto one's own 3D avatar in real time, so the avatar is self-animated. We conducted two experiments to investigate (a) whether head-mounted display virtual reality is useful for researching the influence of body gestures in communication; and (b) whether body gestures are used to help in communicating the meaning of a word. Participants worked in pairs and played a communication game, where one person had to describe the meanings of words to the other. In experiment 1, participants used significantly more hand gestures and successfully described significantly more words when nonverbal communication was available to both participants (i.e. both describing and guessing avatars were self-animated, compared with both avatars in a static neutral pose). Participants 'passed' (gave up describing) significantly more words when they were talking to a static avatar (no nonverbal feedback available). In experiment 2, participants' performance was significantly worse when they were talking to an avatar with a prerecorded listening animation, compared with an avatar animated by their partners' real movements. In both experiments participants used significantly more hand gestures when they played the game in the real world. Taken together, the studies show how (a) virtual reality can be used to systematically study the influence of body gestures; (b) it is important that nonverbal communication is bidirectional (real nonverbal feedback in addition to nonverbal communication from the describing participant); and (c) there are differences in the amount of body gestures that participants use with and without the head-mounted display, and we discuss possible explanations for this and ideas for future investigation.
Article
Full-text available
Co-thought gestures are hand movements produced in silent, noncommunicative, problem-solving situations. In the study, we investigated whether and how such gestures enhance performance in spatial visualization tasks such as a mental rotation task and a paper folding task. We found that participants gestured more often when they had difficulties solving mental rotation problems (Experiment 1). The gesture-encouraged group solved more mental rotation problems correctly than did the gesture-allowed and gesture-prohibited groups (Experiment 2). Gestures produced by the gesture-encouraged group enhanced performance in the very trials in which they were produced (Experiments 2 & 3). Furthermore, gesture frequency decreased as the participants in the gesture-encouraged group solved more problems (Experiments 2 & 3). In addition, the advantage of the gesture-encouraged group persisted into subsequent spatial visualization problems in which gesturing was prohibited: another mental rotation block (Experiment 2) and a newly introduced paper folding task (Experiment 3). The results indicate that when people have difficulty in solving spatial visualization problems, they spontaneously produce gestures to help them, and gestures can indeed improve performance. As they solve more problems, the spatial computation supported by gestures becomes internalized, and the gesture frequency decreases. The benefit of gestures persists even in subsequent spatial visualization problems in which gesture is prohibited. Moreover, the beneficial effect of gesturing can be generalized to a different spatial visualization task when two tasks require similar spatial transformation processes. We concluded that gestures enhance performance on spatial visualization tasks by improving the internal computation of spatial transformations.
Article
Full-text available
Our body schema gives the subjective impression of being highly stable. However, a number of easily-evoked illusions illustrate its remarkable malleability. In the rubber-hand illusion, illusory ownership of a rubber-hand is evoked by synchronous visual and tactile stimulation on a visible rubber arm and on the hidden real arm. Ownership is concurrent with a proprioceptive illusion of displacement of the arm position towards the fake arm. We have previously shown that this illusion of ownership plus the proprioceptive displacement also occurs towards a virtual 3D projection of an arm when the appropriate synchronous visuotactile stimulation is provided. Our objective here was to explore whether these illusions (ownership and proprioceptive displacement) can be induced by only synchronous visuomotor stimulation, in the absence of tactile stimulation. To achieve this we used a data-glove that uses sensors transmitting the positions of fingers to a virtually projected hand in the synchronous but not in the asynchronous condition. The illusion of ownership was measured by means of questionnaires. Questions related to ownership gave significantly larger values for the synchronous than for the asynchronous condition. Proprioceptive displacement provided an objective measure of the illusion and had a median value of 3.5 cm difference between the synchronous and asynchronous conditions. In addition, the correlation between the feeling of ownership of the virtual arm and the size of the drift was significant. We conclude that synchrony between visual and proprioceptive information along with motor activity is able to induce an illusion of ownership over a virtual arm. This has implications regarding the brain mechanisms underlying body ownership as well as the use of virtual bodies in therapies and rehabilitation.
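A minimal sketch of how the synchronous and asynchronous visuomotor conditions could be produced in software, assuming the asynchronous condition simply replays the glove's finger poses with a fixed delay; the frame counts below are illustrative, not the study's parameters.

```python
# Sketch under assumption: a delay buffer between the data-glove and the
# rendered virtual hand yields synchrony (delay 0) or asynchrony (delay > 0).
from collections import deque

class HandPoseStream:
    def __init__(self, delay_frames=0):
        self.buffer = deque()
        self.delay = delay_frames   # 0 -> synchronous; e.g. 45 frames (~0.5 s at 90 Hz) -> asynchronous

    def push(self, finger_pose):
        self.buffer.append(finger_pose)
        if len(self.buffer) > self.delay + 1:
            self.buffer.popleft()

    def rendered_pose(self):
        # Pose actually driven onto the virtual hand this frame.
        return self.buffer[0] if self.buffer else None
```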
Article
Full-text available
A plausible assumption is that self-avatars increase the realism of immersive virtual environments (VEs), because self-avatars provide the user with a visual representation of his/her own body. Consequently, having a self-avatar might lead to more realistic human behavior in VEs. To test this hypothesis we compared human behavior in a VE, with and without providing knowledge about a self-avatar, with real human behavior in real space. This comparison was made for three tasks: a locomotion task (moving through the content of the VE), an object interaction task (interacting with the content of the VE), and a social interaction task (interacting with other social entities within the VE). Surprisingly, we did not find effects of self-avatar exposure on any of these tasks. However, participants' VE and real-world behavior differed significantly. These results challenge the claim that knowledge about the self-avatar substantially influences natural human behavior in immersive VEs.
Article
Full-text available
In this paper, I address the question as to why participants tend to respond realistically to situations and events portrayed within an immersive virtual reality system. The idea is put forward, based on the experience of a large number of experimental studies, that there are two orthogonal components that contribute to this realistic response. The first is 'being there', often called 'presence', the qualia of having a sensation of being in a real place. We call this place illusion (PI). Second, plausibility illusion (Psi) refers to the illusion that the scenario being depicted is actually occurring. In the case of both PI and Psi the participant knows for sure that they are not 'there' and that the events are not occurring. PI is constrained by the sensorimotor contingencies afforded by the virtual reality system. Psi is determined by the extent to which the system can produce events that directly relate to the participant and by the overall credibility of the depicted scenario in comparison with expectations. We argue that when both PI and Psi occur, participants will respond realistically to the virtual reality.
Article
Full-text available
Why is it that people cannot keep their hands still when they talk? One reason may be that gesturing actually lightens cognitive load while a person is thinking of what to say. We asked adults and children to remember a list of letters or words while explaining how they solved a math problem. Both groups remembered significantly more items when they gestured during their math explanations than when they did not gesture. Gesturing appeared to save the speakers' cognitive resources on the explanation task, permitting the speakers to allocate more resources to the memory task. It is widely accepted that gesturing reflects a speaker's cognitive state, but our observations suggest that, by reducing cognitive load, gesturing may also play a role in shaping that state.
Article
Full-text available
Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.
Article
Full-text available
A study by Slater, et al., [1995] indicated that naive subjects in an immersive virtual environment experience a higher subjective sense of presence when they locomote by walking-in-place (virtual walking) than when they push-button-fly (along the floor plane). We replicated their study, adding real walking as a third condition. Our study confirmed their findings. We also found that real walking is significantly better than both virtual walking and flying in ease (simplicity, straightforwardness, naturalness) as a mode of locomotion. The greatest difference in subjective presence was between flyers and both kinds of walkers. In addition, subjective presence was higher for real walkers than virtual walkers, but the difference was statistically significant only in some models. Follow-on studies show virtual walking can be substantially improved by detecting footfalls with a head accelerometer. As in the Slater study, subjective presence significantly correlated with subjects' degree of...
Article
A virtual reality study explored the potential for a virtual body (VB) to enhance a participant's spatial awareness of a virtual environment (VE) by providing an invariant, subtle point of reference for object positioning. The study used the ecological metric of perceived reachability as the manifestation of spatial awareness. Nine subjects entered a VE and performed a maximum virtual reach estimation task in which VB configuration (full-body, hand-only, no-body) and target height (low, medium, high) were manipulated. Estimations of reach were significantly more accurate for low target heights. This seemed most attributable to the influence of the more richly patterned visual background for that condition. A complex interaction between VB configuration and target height indicates that the specific VB used may impact observed performance. Subjective comments also indicate a perceived utility of a full-body virtual body as a reference point for spatial tasks. Results are discussed in regard to potential design implications and future research opportunities.
Conference Paper
Mediated experience increasingly involves some representation of ourselves, so-called avatars. Avatars are used to facilitate interaction with others in social media or are integrated as part of interfaces, used for interacting with 3D spatial environments and objects in games and simulations. These avatars vary in the degree to which they are realistic, representative of our sense of self or social status, or embodied, that is, connected to the user's body via sensorimotor interaction through the computer interface. We review some of the psychological effects of avatar identification and embodiment, including evidence of their effects on changes in behavior, arousal, learning, and self-construal. Furthermore, some avatar-based changes in perception, cognition, and behavior may carry over and extend into changes in the user's real-world perception and behavior.
Article
Immersive virtual reality allows people to inhabit avatar bodies that differ from their own, and this can produce significant psychological and physiological effects. The concept of homuncular flexibility (Lanier, 2006) proposes that users can learn to control bodies different from their own by changing the relationship between tracked and rendered motion. We examine the effects of remapping movements in the real world onto an avatar that moves in novel ways. In Experiment 1, participants moved their legs more than their arms in conditions where leg movements were more effective for the task. In Experiment 2, participants controlling 3-armed avatars learned to hit more targets than participants in 2-armed avatars. We discuss the implications of embodiment in novel bodies.
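A hedged sketch of the kind of motion remapping the homuncular-flexibility work describes: tracked joints drive avatar limbs through an arbitrary mapping and gain, so a rendered limb need not have a one-to-one physical counterpart. The joint names, gains, and the "third arm" rule below are illustrative assumptions.

```python
# Sketch: remap tracked body motion onto avatar limbs that move in novel ways.
import numpy as np

def remap_motion(tracked, mapping):
    """tracked: joint -> 3D value; mapping: avatar joint -> (source joint, gain)."""
    return {avatar_joint: gain * np.asarray(tracked[source])
            for avatar_joint, (source, gain) in mapping.items()}

tracked = {"left_wrist": [0.1, 0.0, 0.2], "right_wrist": [0.0, 0.3, 0.0]}
mapping = {
    "avatar_left_arm":  ("left_wrist", 1.0),
    "avatar_right_arm": ("right_wrist", 1.0),
    "avatar_third_arm": ("left_wrist", 0.5),   # hypothetical extra limb: scaled copy of one tracked joint
}
print(remap_motion(tracked, mapping))
```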
Book
When historian Charles Weiner found pages of Nobel Prize-winning physicist Richard Feynman's notes, he saw it as a "record" of Feynman's work. Feynman himself, however, insisted that the notes were not a record but the work itself. In Supersizing the Mind, Andy Clark argues that our thinking doesn't happen only in our heads but that "certain forms of human cognizing include inextricable tangles of feedback, feed-forward and feed-around loops: loops that promiscuously criss-cross the boundaries of brain, body and world." The pen and paper of Feynman's thought are just such feedback loops, physical machinery that shape the flow of thought and enlarge the boundaries of mind. Drawing upon recent work in psychology, linguistics, neuroscience, artificial intelligence, robotics, human-computer systems, and beyond, Supersizing the Mind offers both a tour of the emerging cognitive landscape and a sustained argument in favor of a conception of mind that is extended rather than "brain-bound." The importance of this new perspective is profound. If our minds themselves can include aspects of our social and physical environments, then the kinds of social and physical environments we create can reconfigure our minds and our capacity for thought and reason.
Conference Paper
We explore whether a gender-matched, calibrated self-avatar affects the perception of the affordance of stepping off of a ledge, or visual cliff, in an immersive virtual environment. Visual cliffs are used as demonstrations in many immersive virtual environments because they create compelling experiences. Understanding the role that self-avatars play in making such environments compelling is an important problem. We conducted an experiment to find the threshold at which subjects on a ledge in an immersive virtual environment would report that they could step gracefully off of the ledge without losing their balance, and compared the threshold heights obtained with and without a self-avatar. The results show that people unrealistically say they can step off a ledge that is approximately 50% of their eyeheight without a self-avatar, and realistically about 25% of their eyeheight with a self-avatar.
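The threshold procedure just described can be sketched as a simple up-down staircase, assuming the ledge is raised after "I could step off" responses and lowered after "I could not", with the result reported as a fraction of eyeheight; the step size, starting height, and stopping rule here are assumptions, not the paper's protocol.

```python
# Sketch of a staircase threshold estimate, expressed as a fraction of eyeheight.
def staircase_threshold(respond, eyeheight, start=0.2, step=0.05, reversals_needed=6):
    height, last, reversals, history = start, None, 0, []
    while reversals < reversals_needed:
        says_yes = respond(height)                 # True if the subject says they could step off
        direction = +1 if says_yes else -1
        if last is not None and direction != last:
            reversals += 1
            history.append(height)                 # record ledge height at each reversal
        height = max(0.0, height + direction * step)
        last = direction
    return sum(history) / len(history) / eyeheight

# Simulated subject whose true threshold is 0.5 * eyeheight:
eyeheight = 1.7
print(staircase_threshold(lambda h: h < 0.5 * eyeheight, eyeheight))  # about 0.49
```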
Article
Illusions have historically been of great use to psychology for what they can reveal about perceptual processes. We report here an illusion in which tactile sensations are referred to an alien limb. The effect reveals a three-way interaction between vision, touch and proprioception, and may supply evidence concerning the basis of bodily self-identification.
Conference Paper
The rubber hand illusion is a simple illusion where participants can be induced to report and behave as if a rubber hand is part of their body. The induction is usually done by an experimenter tapping both a rubber hand prop and the participant's real hand: the touch and visual feedback of the taps must be synchronous and aligned to some extent. The illusion is usually tested by several means including a physical threat to the rubber hand. The response to the threat can be measured by galvanic skin response (GSR): those that have the illusion show a marked rise in GSR. Based on our own and reported experiences with immersive virtual reality (IVR), we ask whether a similar illusion is induced naturally within IVR: does the participant report and behave as if the virtual arm is part of their body? We show that participants in an HMD-based IVR who see a virtual body can experience similar responses to threats as those in comparable rubber hand illusion experiments. We show that these responses can be negated by replacing the virtual body with an abstract cursor representing the hand, and that the responses are stable under some gradual forced distortion of tracker space so that proprioceptive and visual information are not matched.
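The "gradual forced distortion of tracker space" can be illustrated with a per-frame offset applied to the tracked hand before rendering, as in the sketch below; the drift rate, cap, and axis are assumptions rather than the paper's values.

```python
# Sketch: the rendered virtual hand slowly drifts away from the felt hand position.
def distorted_hand_position(tracked_pos, frame, drift_per_frame=0.0005, max_drift=0.15):
    """Shift the rendered hand along x by a slowly growing offset (metres)."""
    offset = min(frame * drift_per_frame, max_drift)
    x, y, z = tracked_pos
    return (x + offset, y, z)

print(distorted_hand_position((0.3, 1.1, 0.4), frame=600))  # 0.30 m of raw drift, capped at 0.15 m -> (0.45, 1.1, 0.4)
```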
Conference Paper
Previous research has shown that egocentric distance estimation suffers from compression in virtual environments when viewed through head mounted displays. Though many possible variables and factors have been investigated, the source of the compression is yet to be fully realized. Recent experiments have hinted in the direction of an unsatisfied feeling of presence being the cause. This paper investigates this presence hypothesis by exploring the benefit of providing self-embodiment to the user through the form of a virtual avatar, presenting an experiment comparing errors in egocentric distance perception through direct-blind walking between subjects with a virtual avatar and without. The result of this experiment finds a significant improvement with egocentric distance estimations for users equipped with a virtual avatar over those without.
Conference Paper
Humans have been shown to perceive and perform actions differently in immersive virtual environments (VEs) as compared to the real world. Immersive VEs often lack the presence of virtual characters; users are rarely presented with a representation of their own body and have little to no experience with other human avatars/characters. However, virtual characters and avatars are increasingly being used in immersive VEs. In a two-phase experiment, we investigated the impact of seeing an animated character or a self-avatar in a head-mounted display VE on task performance. In particular, we examined performance on three different behavioral tasks in the VE. In a learning phase, participants either saw a character animation or an animation of a cone. In the task performance phase, we varied whether participants saw a co-located animated self-avatar. Participants performed a distance estimation, an object interaction and a stepping stone locomotion task within the VE. We find no impact of a character animation or a self-avatar on distance estimates. We find that both the animation and the self-avatar influenced performance on the tasks that involved interaction with elements in the environment: the object interaction and stepping stone tasks. Overall, the participants performed the tasks faster and more accurately when they either had a self-avatar or saw a character animation. The results suggest that including character animations or self-avatars before or during task execution is beneficial to performance on some common interaction tasks within the VE. Finally, we see that in all cases (even without seeing a character or self-avatar animation) participants learned to perform the tasks more quickly and/or more accurately over time.
Article
The multimodal, 3D-graphical communication platforms known as virtual worlds have their historical roots in multi-user domains/dungeons (MUDs) and virtual reality (VR). Given the extensive research on these technologies and the novelty of virtual worlds as a topic of study in information systems (IS), it behooves us to learn from the concepts, theories and insights generated primarily by other disciplines that have focused on these technologies. Because neither MUDs nor VR have significant organizational application, which places them outside of the IS discipline's purview, very little of this literature has found its way into IS research thus far. This article reviews the extant literature on virtual environments and seeks to make its insights accessible to IS research on virtual worlds. In particular, it focuses on concepts, theories and insights regarding embodiment and presence, which are afforded by the avatar, a distinguishing technological artifact of virtual worlds.
Article
This paper reviews the concepts of immersion and presence in virtual environments (VEs). We propose that the degree of immersion can be objectively assessed as the characteristics of a technology, and has dimensions such as the extent to which a display system can deliver an inclusive, extensive, surrounding, and vivid illusion of a virtual environment to a participant. Other dimensions of immersion are concerned with the extent of body matching, and the extent to which there is a self-contained plot in which the participant can act and in which there is an autonomous response. Presence is a state of consciousness that may be concomitant with immersion, and is related to a sense of being in a place. Presence governs aspects of autonomic responses and higher-level behaviors of a participant in a VE. The paper considers single and multi-participant shared environments, and draws on the experience of Computer-Supported Cooperative Working (CSCW) research as a guide to understanding presence in shared environments. The paper finally outlines the aims of the FIVE Working Group, and the 1995 FIVE Conference in London, UK.
Article
Virtual Humans are becoming more and more popular and used in many applications such as the entertainment industry (in both film and games) and medical applications. This comprehensive book covers all areas of this growing industry including face and body motion, body modelling, hair simulation, expressive speech simulation and facial communication, interaction with 3D objects, rendering skin and clothes, and the standards for Virtual Humans. Written by a team of current and former researchers at MIRALab, University of Geneva or VRlab, EPFL, this book is the definitive guide to the area. Explains the concept of avatars and autonomous virtual actors and the main techniques to create and animate them (body and face). Presents the concepts of behavioural animation, crowd simulation, intercommunication between virtual humans, and interaction between real humans and autonomous virtual humans. Addresses the advanced topics of hair representation and cloth animation with applications in fashion design. Discusses the standards for Virtual Humans, such as MPEG-4 Face Animation and MPEG-4 Body Animation.
Article
Thesis (M.S.E.)--University of Washington, 1995. Includes bibliographical references (leaves [82]-87).
Article
Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.
Oculus. Toybox Demo for Oculus Touch. https://youtu.be/iFEMiyGMa58, 2015. Accessed 6 December 2015.
M. Usoh, K. Arthur, M. C. Whitton, R. Bastos, A. Steed, M. Slater, and F. P. Brooks, Jr. Walking > Walking-in-Place > Flying, in Virtual Environments. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '99, pages 359–364, New York, NY, USA, 1999. ACM Press/Addison-Wesley Publishing Co.