Published by Springer Nature · Online ISSN: 1434-9957 · Print ISSN: 1359-4338
117 reads in the past 30 days
Developing English language learners’ speaking skills through applying a situated learning approach in VR-enhanced learning experiences (November 2024) · 517 Reads · 3 Citations
116 reads in the past 30 days
Evaluating the effectiveness of virtual reality for safety-relevant training: a systematic review (August 2023) · 1,925 Reads · 53 Citations
78 reads in the past 30 days
Concert experiences in virtual reality environments (June 2023) · 1,772 Reads · 26 Citations
65 reads in the past 30 days
Evaluation of the novelty effect in immersive Virtual Reality learning experiences (January 2024) · 507 Reads · 52 Citations
58 reads in the past 30 days
Immersive virtual reality applied to the rehabilitation of patients with lower limb amputation: a small randomized controlled trial for feasibility study (May 2024) · 167 Reads · 4 Citations
Virtual Reality is a multidisciplinary journal that publishes original research on Virtual, Augmented, and Mixed Reality. Established in 1995, it covers real-time visualization, graphics, and applications, as well as the development and evaluation of systems, tools, techniques, and software that advance the field. The journal addresses a wide range of industries, including manufacturing, commerce, telecommunications, entertainment, gaming, and medicine. It features clear, well-written, and accessible articles that appeal to a multidisciplinary audience.
May 2025
Maria Izabel Rodrigues Severiano · Geslaine Janaina Bueno dos Santos · Flavio Gonçalves · [...] · Helio Afonso Ghizoni Teive
Hereditary Spastic Paraplegia (HSP) is a neurodegenerative condition characterized by weakness and spasticity in the lower limbs, requiring stabilization to improve essential movements such as walking. This study investigates the benefits of Vestibular Rehabilitation (VR) with virtual reality (VRi) in individuals with HSP. A randomized pilot clinical trial was conducted, registered on the ReBEC platform (trial RBR-3JMX67), with 16 adult patients of both sexes diagnosed with HSP, allocated into two groups: Balance (BG) and Strength (SG). Participants underwent anamnesis and otorhinolaryngological and labyrinthine evaluation, followed by assessment with a Lafayette dynamometer, the Activities-specific Balance Confidence (ABC) scale, and the Visual Analogue Scale (VAS) for pain at three time points: T0 (baseline), T1 (after 10 sessions), and T2 (after 20 sessions). VAS results showed no significant changes between T0 and T2, indicating no discomfort caused by rehabilitation. On the ABC scale, both groups showed improvement in balance perception. Dynamometer assessment revealed a significant increase in peak force (p = 0.006) and mean force in hip abduction (p = 0.001). For the force development rate (FDR), there was a trend towards significance in BG. In dorsiflexion, both groups showed significant improvements in FDR (BG: p = 0.011; SG: p = 0.004). VRi game scores improved significantly in both groups, supporting the effectiveness of the rehabilitation. These findings support the benefits of vestibular rehabilitation with virtual reality in individuals with HSP and highlight the need for further research to consolidate these results and fully explore the potential of this therapeutic approach.
May 2025 · 11 Reads
Chajuan Liu · Shuqi Meng · Weikang Zheng · Zhiyi Zhou
Immersive virtual reality (VR) holds immense potential, and its applications in education have attracted close attention from numerous researchers, sparking widespread and in-depth research interest. This study examines the impact of virtual reality in the classroom on student experience and concentration. In the experimental design, students used a virtual reality teaching system to replicate ceramic production in a porcelain factory in Jingdezhen. To compare the effects of VR interactivity with traditional video teaching, students were divided into three groups and tested in three separate studies: sense of experience and emotion (immersion, empathy, and sense of presence), cognition and concentration, and quiz scores for students with and without a relevant knowledge base. The findings revealed that the interactive Jingdezhen Ceramics VR education system group outperformed the no-interaction and traditional video groups in terms of sense of experience and emotion; the interactive VR education system group scored higher in course concentration and knowledge mastery; and students’ scores were positively correlated with their knowledge base. Furthermore, an analysis of gender differences across the indicators found that female students scored higher in empathy. These findings indicate that immersive and interactive VR education may provide students with novel experiences that have emotional, attentional, and cognitive benefits.
May 2025 · 8 Reads
This study used two randomized experiments (Study 1: N = 226, Study 2: N = 88) to examine the interplay between spatial presence and immersive tendencies while viewing virtual reality (VR) videos (360-degree live-action and 3D graphics), with and without head-mounted display (HMD) devices, and their influence on behavior intentions for particulate matter (PM) mitigation. The results showed that participants who watched these videos with HMDs experienced higher spatial presence, which significantly mediated their PM mitigation behavior intentions. This mediation effect was not observed in participants who watched videos without HMDs. Additionally, immersive tendency moderated the effect of viewing 3D-graphics VR videos with HMDs on PM mitigation behavior intentions. These findings highlight the potential of VR video strategies to enhance behavioral changes related to PM risks and environmental issues.
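For readers unpacking the mediation result above: in the standard single-mediator decomposition (an interpretive sketch; the abstract does not name the estimator used), the total effect of HMD viewing on behavior intention splits into a direct and an indirect path:

\[ c = c' + a \cdot b \]

where \(a\) is the effect of HMD viewing on spatial presence, \(b\) the effect of spatial presence on PM mitigation behavior intention, \(c'\) the direct effect, and the product \(a \cdot b\) the indirect (mediated) effect whose significance the study tested.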
May 2025 · 7 Reads
Despite the strong theoretical basis for driving neuroplasticity, there is a lack of consensus over the effectiveness of game-based therapies for improving upper limb (UL) function within six months post-stroke. This review therefore synthesized the certainty of evidence underpinning the immediate, short-term, and long-term effects of game-based rehabilitation on UL function within the first six months following stroke. Eight electronic and three grey literature databases were searched from their inception to 30th April 2024 for relevant articles. The Cochrane RoB 2 tool was used for risk-of-bias assessment and the GRADE approach for evidence synthesis for each outcome of interest. Forty-eight randomized controlled trials with a total of 2809 participants were included in the qualitative analysis, whereas data from only 41 studies were used in the meta-analyses. Significant improvements were found in UL function (Fugl-Meyer assessment-upper extremity), UL motor recovery (Brunnstrom motor recovery stage), independence in day-to-day activities (Barthel index), and cognitive function (Montreal cognitive assessment) in patients with acute and sub-acute stroke. A non-significant improvement was noted in the quantitative analysis for grip strength, hand dexterity (Box and block test), independence in day-to-day activities (Functional independence measure, Korean version of modified Barthel index), arm and hand motor ability (Action research arm test, Manual function test), and quality of life (European Quality of Life-5 Dimensions-3 Levels). The certainty of evidence was low or very low for these outcomes. Although a positive immediate, short-term, and long-term effect of game-based rehabilitation on improving UL function and an immediate effect on motor recovery and cognitive function were evident, there are concerns about the risk of bias in the included studies. Furthermore, heterogeneity of interventions, inconsistency, imprecision of findings, indirectness of outcomes, and low or very low quality of evidence preclude the recommendation of a specific game-based rehabilitation program over conventional therapy for improving UL function following acute and sub-acute stroke. Further robust clinical trials with homogeneous methods and outcome measures are warranted to substantiate the effects of game-based rehabilitation on UL function post-stroke. Systematic review registration number: PROSPERO CRD42020190100.
May 2025 · 35 Reads
This study employs virtual reality (VR) to simulate augmented reality (AR) smart glasses and assess the influence of multimodal alarm design on human evacuation behavior during emergencies. Multimodality is regarded as a beneficial enhancement for future alarm systems because auditory, visual, and olfactory features can increase the salience and effectiveness of alarms. To this end, we developed and integrated additional human voice, visual, and olfactory reminder modalities based on the ISO T3 alarm, resulting in three distinctive multimodal alarm solutions: (i) a solution featuring a human voice prompt module, (ii) a solution incorporating both a human voice and a visual prompt module, and (iii) a solution that combines human voice, visual, and olfactory prompt modules. The study created a virtual airport scenario, simulating an event where participants waited for their luggage while wearing AR smart glasses. In this simulation, after the alarm was activated, the non-player characters in the airport stayed put and did not evacuate. Ninety-six individuals participated in the study, and the findings indicated that the multimodal alarm design significantly affected participants' "decided" and "undecided" evacuation behaviors and evacuation times. The results suggest that integrating audio and visual modalities into ISO safety alarms could enhance emergency evacuation efficiency in complex environments, and that olfactory information cues have the potential to heighten individuals' sense of urgency when making evacuation decisions. This research confirms the importance of multimodality in improving evacuation effectiveness and offers insights for designing next-generation building safety alarms.
May 2025 · 35 Reads
Medical education is one of the domains currently being widely investigated by leveraging the capabilities offered by virtual reality (VR) systems. The appeal of such technology rests on its potential cost-effectiveness, portability, the safety of training in simulated environments, and the ability to enable training without the need for supervision. One approach that can be utilized during technology-mediated educational activities is gamification, i.e., the use of game design elements in non-game contexts. This approach can make learning fun, memorable, and more effective, as demonstrated by a substantial body of literature. However, whereas a number of studies have investigated the ability of gamification-based VR systems to enhance learning and training in various domains, the adoption of gamification approaches in VR medical training, and in particular surgical training, has been largely overlooked. To bridge this gap, we first co-designed, with a pool of urology surgeons, a gamification-based VR system for laser enucleation of the prostate. Subsequently, we conducted a user study with seventeen urology residents to assess its usability and user experience. Our results provide evidence that gamification in VR medical training systems is a valuable strategy for enhancing surgical trainees’ outcomes and motivation. However, our findings also revealed that the lack of realism of the physical aspects involved in real operations, such as force and tactile feedback and visual deformations of the simulated tissues, can drastically hamper the experience that surgeons desire from a VR simulator.
April 2025 · 9 Reads
Humans engage in a multitude of actions, some of which occur rarely yet are essential to capture in datasets. Synthetic generation techniques are particularly effective in these scenarios, enriching the data for such uncommon actions. To address this need, we introduce a novel framework developed within Unreal Engine 5, designed to generate human action video data in hyper-realistic virtual environments. Our framework mitigates the scarcity and limited diversity of existing datasets for infrequent actions or routine tasks by utilizing synthetic motion generation through text-guided generative motion models, Gaussian splatting 3D reconstruction, and MetaHuman avatars. The utility of the framework is demonstrated by producing a synthetic video dataset depicting various human actions in diverse settings. To validate the effectiveness of the generated data, we trained VideoMAE, a state-of-the-art action recognition model, on the extended UCF101 dataset, incorporating both synthetic and real fall data, and obtained F1-scores of 0.95 and 0.97 when evaluated on the URFall and MCF datasets, respectively. The quality of the generated RGB-D videos represents a significant advance in the field. Additionally, a graph is generated from the rendered scene, detecting objects and their relationships and thus adding valuable contextual information to the video data. This capability to generate data across a wide range of actions and environments positions our framework as a valuable tool for broader applications, including digital twin creation and dataset augmentation.
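For reference, the F1-scores reported above are the harmonic mean of precision and recall:

\[ \mathrm{F1} = \frac{2 \cdot \mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, \qquad \mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN} \]

so the scores of 0.95 and 0.97 indicate that the model trained on synthetic-plus-real fall data produces both few false alarms and few missed detections on URFall and MCF.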
April 2025 · 54 Reads
This research seeks to raise awareness about the challenges faced by people with visual impairments by immersing users in a virtual environment that simulates 18 different visual conditions. Through a series of tests, participants are tasked with performing simple activities while navigating the complexities of these impairments. The study, validated by 60 users, uses objective metrics like reaction time and accuracy to measure the impact of these conditions on task performance. An online pre- and post-test questionnaire also reveals a significant increase in empathy among participants. The results highlight the importance of direct experience in understanding the challenges of people with visual impairments and demonstrate the potential of such simulations to foster empathy and awareness. Ultimately, this application contributes to a broader understanding of visual impairments and underscores the need for universal design initiatives.
April 2025 · 35 Reads
This study assesses the integration of mixed reality (MR) technologies in medical first responder (MFR) training, focusing on identifying key factors influencing behavioral intention to use MR systems and practical implications for technology acceptance and enhanced realism through haptic feedback. Through a user-centered design approach, involving co-creation workshops, iterative development, and evaluations in pilot and field trials across six countries, this study evaluated technology acceptance, presence, user experience, and workload among MFRs. Both quantitative measures and qualitative feedback were collected to analyze the determinants of technology acceptance and user engagement. The MED1stMR training system, developed as a result, demonstrates that performance expectancy, effort expectancy, and social presence are significant predictors of behavioral intention to use MR training systems among MFRs. High technology acceptance and positive user experience were reported, with specific emphasis on the educational value of haptic feedback in skill training. Trainer feedback highlighted the importance of real-time performance metrics and openness to AI-driven training assistance for enhancing training outcomes. The study underscores the critical role of realistic patient interaction and the importance of aligning training challenges with users’ skills to create engaging MR training environments for MFRs. Identifying factors influencing behavioral intention offers valuable insights for the development of MR training systems, suggesting a focus on social presence and interactive capabilities to improve realism and educational value. The findings advocate for the integration of adaptive training features and further exploration of AI support in scenario optimization and performance enhancement.
April 2025 · 29 Reads
As we spend more time in virtual reality (VR) and the metaverse, it becomes increasingly important to understand how our experiences in the metaverse impact self-perception and behaviours in the physical world. This research thus examines how performing pro-environmental behaviours in VR influences environmental self-identity and pro-environmental interests and attitudes towards the physical world. Two experiments were conducted where participants engaged in one of two distinct VR tasks: planting trees (tree condition) or placing neutral objects (control condition) in a virtual environment. Study 1 (n = 91) showed that planting virtual trees significantly increased environmental self-identity compared to placing sheds in the control condition. Building on this, Study 2 (n = 97) found that heightened environmental self-identity following virtual tree planting mediated the relationship between condition and subsequent pro-environmental interests and attitudes towards physical nature. These findings demonstrate the link between performing pro-environmental behaviours in VR and pro-environmental engagement in the physical world. They underscore the influence of virtual behaviours on self-identity, and the potential of VR as a tool for environmental sustainability efforts.
April 2025 · 13 Reads
Concussion commonly occurs in sports and military venues and is difficult to diagnose and manage. A medical device is needed to perform an assessment for concussion and identify functional deficits. This report presents the developmental methodology, including user experience and human factors results, for a new virtual reality (VR)-based technology. Data collection occurred between 2020 and 2024, amassing 1008 participants between the ages of 18 and 40 years. The initial software was modified, informed by human factors and user experience data, including symptom reports and the need for examiner assistance. The second version required less examiner intervention to ensure participant understanding of the VR tests. Symptom provocation during the VR tests demonstrated that less than 5% of participants had a > 2-point increase in headache, nausea, and mental fogginess across the tests; 9.2% had a > 2-point increase in dizziness. This is in line with reports from similar clinical tests, demonstrating that the VR environment does not pose an additional risk. Our work demonstrates that clinical tests of functional neurology (i.e., oculomotor, vestibular, reaction) were translated into a virtual environment and that multidimensional sensor-based data could be collected from a sample of participants who did not experience unexpected symptom provocation during test completion. The approach described here, regarding the translation of real-world clinical tests into the VR environment towards the development of a medical device, may be useful to many other healthcare disciplines and fields. Pending results of the machine learning classification, this device may prove useful in decision support for concussion diagnostics and determination of recovery.
April 2025 · 14 Reads
Immersive technologies have seen growing popularity in recent years, with an increasing demand for advanced haptic interfaces to simulate realistic virtual experiences. As this technology has improved, haptic interfaces have been created to offer rich feedback for specific scenarios. However, their versatility is often limited. Modularity in haptics offers adaptability and customization that broadens the range of use cases a single device can support. This systematic review examines state-of-the-art modular haptic devices that offer unique advantages in terms of adaptability and customization. It reviews 35 papers on modular haptics published in scientific journals and conferences over the last decade. The modular systems are categorized by their method of modularity in positioning, construction, attachment, and end-effectors. The review discusses how each paper integrates modularity and provides an overview of its application in haptics. Finally, the review discusses the challenges of integrating modularity in haptic interfaces and, based on this, identifies potential future directions.
April 2025 · 15 Reads
Wildfires have become increasingly critical, affecting forests, air and water quality, and soil health, and contributing to climate change. Unmanned aerial vehicles (UAVs) have emerged as an effective solution for monitoring and managing wildfires due to their rapid deployment, low cost, and reduced environmental impact. This study presents a digital twin (DT) system for UAVs that enables users to interact with virtual simulations and real UAVs for wildfire prevention and detection through a smartphone or tablet. The proposed DT-based system leverages virtual reality and augmented reality (AR) and includes four key services: UAV flight data generator (UAV-FDG), visualization, control, and retrieval. These services also allow users to test and train UAV control on a virtual counterpart before moving to real scenarios, reducing risks and costs. The integration of mobile devices increases accessibility and flexibility, enabling real-time monitoring and control of UAVs for wildfire management and facilitating deployment in diverse environments. The main contributions of this research include the proposal, implementation, and validation of a DT-based mobile system for visualization, control, and retrieval using AR, autonomous flight data attribution and movement, and the deployment of DTs of UAVs in real environments. The assessment results showcase the successful implementation of the proposed system, with positive participant feedback supported by a positive System Usability Scale score of 79.75.
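As context for the reported score of 79.75: the System Usability Scale is computed from ten 1-5 Likert items using a fixed scoring rule. The following Python sketch shows that standard rule (an illustration of the published instrument, not the authors' code):

def sus_score(responses):
    """Standard SUS scoring for one respondent.

    responses: ten Likert ratings (1-5), ordered item 1..10.
    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The resulting 0-40 sum is scaled by 2.5 to the familiar 0-100 range.
    """
    assert len(responses) == 10
    contributions = [r - 1 if i % 2 == 0 else 5 - r
                     for i, r in enumerate(responses)]  # i == 0 is item 1
    return sum(contributions) * 2.5

def study_sus(all_responses):
    """Mean SUS over all respondents, e.g. a study-level score such as 79.75."""
    return sum(map(sus_score, all_responses)) / len(all_responses)

A score of 79.75 sits above the commonly cited SUS benchmark of 68, consistent with the positive participant feedback reported.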
April 2025 · 7 Reads
While augmented reality shows promise for supporting human–robot collaboration, creating such interactive systems still poses great challenges. Addressing this, we introduce ARTHUR, an open-source authoring tool for augmented reality-supported human–robot collaboration. We thereby propose to combine desktop interfaces and touchscreen devices for effective authoring with head-mounted displays for testing and in-situ refinements. ARTHUR supports 20 types of multi-modal feedback to convey robot, task, and system state, 10 actions that enable the user to control the robot and system, and 18 conditions for feedback customization and triggering of actions. By combining these elements, users can create interaction spaces, controls, and information visualizations in augmented reality for collaboration with robot arms. To demonstrate the general applicability of ARTHUR for human–robot collaboration scenarios, we replicate representative examples from prior work. Further, in an evaluation with five domain-savvy participants, we reflect on the usefulness of our hybrid user interface approach and the supported functionality, highlighting its potential and directions for future work.
April 2025 · 26 Reads
We investigated the effects of combining VR-guided meditation with out-of-body experiences, such as levitation, on embodiment and mindfulness practices. Participants engaged in a self-view (video feedback) meditation session followed by a levitation phase in a VR environment. We observed a significant increase in participants’ electrodermal activity (EDA) responses during the levitation phase. The results were influenced by several factors, including the duration of the meditation, the length of the levitation phase, and participants’ subjective experiences. Participants who reported higher scores on the physical levitation scale exhibited an open and curious attitude, accepting the experience without becoming overly absorbed. In contrast, individuals with the capacity to observe and distance themselves from their feelings, emotions, and experiences were less inclined to report a virtual levitation experience.
April 2025 · 7 Reads
Students have benefited greatly from immersive learning environments that use augmented reality. Some visual features of instructional material, such as color coding, have shown potential for improving learning performance, but their effect in Immersive Learning Environments is still not known in detail. The objective of this research is to determine how the use of color-coded material affects the performance and mental effort of participants in Immersive Learning Environments in augmented reality. An experimental study was therefore conducted with 29 students, in a case study on learning the anatomy relevant to an exploratory knee arthroscopy surgical procedure, using an Immersive Learning Environment in augmented reality. The results suggested no differences in participants’ performance in terms of time and percentage of correct answers. Participants who used non-coded material, however, reported a higher level of mental effort. These results provide valuable information for designing Immersive Learning Environments under conditions that foster learning.
April 2025 · 20 Reads
The burgeoning interest in Virtual Reality (VR) extends even to non-profit organizations, harnessing the immersive powers of VR for pro-social advocacy. VR User Experience (UX) offers unique affordances such as immersion, presence, and embodiment, engaging human senses more profoundly than conventional media channels. Empirical studies underscore VR's potential to heighten awareness, evoke empathy, and stimulate prosocial and parasocial thought-action tendencies in response to perspective-taking scenarios. In our systematic review, we examined 66 records from an initial screening of 7026 studies. Our research inquiries encompassed an exploration of the existing typology of prosocial outcomes studied in VR Perspective Taking (VRPT) literature, the identification of specific technology affordances acting as mediators, and studying how these distinct technological affordances interplay within the context of VRPT-mediated prosocial behaviors. Results indicate that VRPT empowers users to adopt diverse perspectives, from first-person immersion to third-person sympathetic engagement, with technology affordances playing a key role in mediating a positive UX. This review substantially advances our understanding of the intricate relationship between perspective-taking, technology affordances and prosocial user behaviors within the domain of VRPT research.
April 2025 · 25 Reads
This systematic review aims to evaluate the effectiveness of VR-enhanced intervention on anthropometric outcomes (weight and body mass index, BMI) and body image outcomes (body dissatisfaction, body avoidance, and body size perception) at post-intervention and follow-up assessment. We selected randomised controlled trials (RCTs) published from database inception to October 9, 2023, searching the following databases: PubMed, Cochrane Library, CINAHL, EMBASE, PsycINFO, Scopus, Web of Science, IEEE, ProQuest Dissertations and Theses, and ProQuest. A total of 16 RCTs were included, involving 1144 overweight and obese adults. Meta-analysis revealed that VR-enhanced intervention was comparable to conventional intervention in terms of weight (g = − 0.42 to − 0.10, p > 0.05) and BMI (g = − 0.59 to − 0.27, p > 0.05) reductions at post-intervention and follow-up assessments. Participants perceived their body dissatisfaction (g = − 0.46 to − 0.40, p > 0.05) and body avoidance (g = − 0.44, p > 0.05) to be similar between VR-enhanced interventions and conventional interventions at post-intervention and follow-up assessments. VR-enhanced interventions seem to be superior to conventional interventions for contour drawing rating (g = − 0.25, p = 0.04) at post-intervention assessment. However, the level of certainty was low for all outcomes. The VR-enhanced intervention can be considered a supplementary intervention for weight reduction and body image improvement among overweight and obese adults. However, further well-designed RCTs with large sample sizes are necessary.
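The effect sizes g above follow the common meta-analytic convention of Hedges’ g, the standardized mean difference with a small-sample correction (an interpretation of the notation; the abstract does not define it explicitly):

\[ g = J \cdot \frac{\bar{x}_1 - \bar{x}_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1) s_1^2 + (n_2 - 1) s_2^2}{n_1 + n_2 - 2}}, \qquad J \approx 1 - \frac{3}{4(n_1 + n_2 - 2) - 1} \]

where \(\bar{x}_i\), \(s_i\), and \(n_i\) are the group means, standard deviations, and sample sizes, and the correction factor \(J\) shrinks g slightly in small trials.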
April 2025 · 18 Reads
This article critically examines the evolution of digital embodiment within cyberspace and virtual reality (VR), analyzing how the narratives surrounding embodiment differ from the experiences of marginalized communities. Narratives of Cartesian disembodiment abounded in the discourse but overlooked that there was no true disembodiment in cyberspace, just anonymity coupled with an assumed identity that was often that of the white, male majority. Early proponents of VR saw it as an inherently embodied phenomenon, but this view was supplanted by a dualist narrative that dominated pop culture and academia. As cyberspace evolved from text-based to pictorial, it was acknowledged that bodies impacted online experiences, but many insisted that this should not matter. In today’s merging of cyberspace and VR, dominant players insist that the body should be celebrated, but this insistence is not coupled with measures to ensure that all bodies are. This article reveals how virtual experiences reflect societal biases, despite narratives to the contrary, and calls for a conscious and inclusive approach to technology design to foster truly equitable digital spaces.
April 2025 · 83 Reads
This article analyses the state of the art of virtual reality (VR) in higher education, provides an overview of the VR literature, and identifies the need to define key terms. It thus introduces three types of VR according to immersion level. Furthermore, the analysis reveals a lack of systematic literature reviews (SLRs) on learning using immersive virtual reality (IVR). This article presents the results of the first SLR of the scientific literature on IVR in higher education indexed in Web of Science. A total of 291 articles were collected, of which 50 were selected for full-text analysis. We report three findings. First, we categorise and cluster the literature by formal aspects and research design, and conclude that IVR research remains in its infancy; it lacks a sound and standardised research framework and design, as well as comparative studies that are repeated for validation. Second, we analyse and discuss the current results and outcomes of the scientific research, identifying a diversity of reported research and learning outcomes with contradictory results, which are frequently due to specific and isolated scenarios. Third, this work is the first systematic scoping review of all studies from Web of Science that focus on the use of IVR in higher education in general. We offer a unique overview of scientific articles on IVR in higher education without any limitation in time, scope, or topic. Notably, the majority of the 50 articles do not present evidence-based and validated results; in particular, discussions and results related to pedagogical aspects are lacking.
April 2025 · 41 Reads
In virtual environments (VEs), haptic interaction plays an important role in delivering both tactile and kinesthetic sensations, enabling users to perceive the physical properties of virtual objects. These sensory inputs have diverse applications in areas such as medical training, virtual reality (VR) gaming, and education. This analytical review provides a comprehensive overview of haptic technologies for fluid interaction developed over the past twenty years. A total of 59 studies meeting the inclusion criteria were identified and examined. The review thoroughly discusses relevant papers on haptic rendering methods as well as haptic devices designed for fluid interaction. In addition, an analytical point of view is presented from four key aspects: fluid simulation methods, haptic feedback modalities, evaluation approaches, and applications. Finally, this paper highlights current research gaps and outlines future directions to advance the development of reliable and accurate haptic techniques for interaction with fluids.
March 2025 · 24 Reads
Unprecedented investments are being made in mathematics tutoring interventions for K-12 students, but results from these interventions are not always promising. Traditional online or distance math tutoring can treat learning as disembodied, denying learners access to embodied resources like gestures, movements, and actions. Virtual Reality offers important opportunities for students joining from a distance to utilize virtual bodies in a fully immersive world, where they can also interact with dynamic mathematical objects. In the present study, we examined 20 middle school students engaging in a VR-based distance tutoring intervention over 7 weeks. We found promising preliminary indications of the effect of the intervention on students’ mathematical learning, compared to a control group that received VR-based tutoring in English-Language Arts. By systematically coding videos of students and tutors engaged in math tutoring, we identified affordances of the VR environment, including the use of gestures and whole-body movements related to students' mathematical reasoning, the presence of mathematical play in the environment, and unique interactions with dimensionality in the 3D space. We also found important constraints in our thematic coding of the interactions in the videos, including connection problems, issues with spatial orientation and virtual body positioning, and off-task or distracted students in the VR environment. We close with recommendations for future VR-based tutoring interventions.
March 2025 · 39 Reads
This paper explores a method for enhancing intrapersonal emotion regulation skills through self-talk in virtual reality. Difficulty regulating emotions is a pervasive aspect of psychopathology and can be improved through clinical therapeutic work. Here, we employed virtual reality and the technique of body ownership illusions to create a conversation between two virtual representations of the self, using an application known as ConVRSelf (Conversations with yourself in Virtual Reality). Forty-seven healthy participants (85% female, average age 22.9 ± 3.98) watched a short psycho-education video about intrapersonal emotion regulation skills, including awareness, acceptance, distraction, and self-soothing behaviors, based on the first steps of the Safety Plan Intervention (Stanley et al. in J Am Acad Child Adolesc Psychiatry 48:1005–1013, 2009). They were then randomly assigned to practice these strategies while discussing a current difficulty in their life utilizing a virtual reality scenario that enables self-talk while alternating between two avatars (experimental group) or the empty chair technique (control group). Results suggested that both groups showed a long-lasting improvement in intrapersonal emotion regulation skills and a reduction of self-reported symptoms of depression, anxiety, and stress, with a possible advantage for the ConVRSelf method. This suggests a new potential use of virtual reality for improving emotion regulation skills, which could be relevant for a spectrum of psychiatric disorders.
March 2025 · 13 Reads
Empirical study designs in HCI evolve in response to temporal realities and technological advancements. In this context, virtual reality (VR) shows potential for new empirical research designs, going beyond HCI's still predominantly lab-based research roots. Previous work has explored the use of VR for gathering more diverse and representative sample populations and for conducting empirical studies in resource-constrained environments. Yet it is unclear how VR empirical user study designs affect participants’ behavior and experience, potentially influencing study results compared to in-situ/in-lab studies. In this paper, we conducted a gesture elicitation study (GES) in a realistic physical smart room and its digital duplicate in VR. Sixty-six participants’ responses were collected using standardized questionnaires, along with a between-group gesture agreement analysis. Our comparison shows that the VR study produces a higher number of unique gesture proposals and best gestures similar to those of the in-person study for 95.4% of the referents, with minimal influence on the gesture proposals. We further discuss the usability, pragmatic and hedonic qualities, presence, task load, and implications of using VR for GESs, and highlight future directions for VR-based empirical study designs. We found that VR can produce reliable data and improve participant experience with the same task load, making it a viable medium for remote GESs and a substitute for conventional lab-based experiments.
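The abstract does not name the agreement measure used; a common choice in gesture elicitation work is the Vatavu-Wobbrock agreement rate, which counts the fraction of participant pairs proposing the same gesture for a referent. A minimal Python sketch of that formula, assuming it applies here (an illustration, not the authors' code):

from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent.

    proposals: list of gesture labels, one per participant.
    Returns the fraction of participant pairs that proposed
    the same gesture: sum_i |P_i|(|P_i| - 1) / (|P|(|P| - 1)).
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    agreeing_pairs = sum(c * (c - 1) for c in Counter(proposals).values())
    return agreeing_pairs / (n * (n - 1))

For example, agreement_rate(["swipe", "swipe", "tap"]) returns 0.33: of the six ordered participant pairs, two agree.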
March 2025 · 30 Reads
Proxemics theory describes the consistent social boundaries surrounding individuals (Hall in The Hidden Dimension, Doubleday, Garden City, 1966), yet little is known about the social boundaries surrounding pairs or groups of people. The current study explored interpersonal proxemics behavior in a virtual environment, focusing on the distances maintained towards individual pedestrians, pairs, and groups. Using virtual reality to simulate a city center, participants freely navigated it while their movements and gazes were captured. Importantly, the city was populated by pedestrians in different social configurations. Interactions were identified from eye movements, defined by gaze onsets towards a pedestrian’s head. Our results indicate that participants approached individuals with a median distance of 3.18 Unity units, aligned with the social-space boundary reported by Hall (1966). Distances kept from pairs and groups were similarly centered within the social space, revealing no significant difference in approaching behavior across social configurations. The consistency in approach distances suggests that personal and social spaces are not substantially altered, irrespective of the social context.
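As an illustration of the distance measure (a hypothetical sketch; the abstract does not publish the analysis pipeline), the approach distance at each gaze-onset event can be computed as the Euclidean distance between the participant and the gazed-at pedestrian, then aggregated by the median:

import numpy as np

def median_approach_distance(participant_pos, pedestrian_pos):
    """participant_pos, pedestrian_pos: (N, 3) arrays of positions in
    Unity units, sampled at the N gaze-onset events towards a
    pedestrian's head. Returns the median Euclidean distance."""
    diffs = np.asarray(participant_pos) - np.asarray(pedestrian_pos)
    distances = np.linalg.norm(diffs, axis=1)
    return float(np.median(distances))

Applied per social configuration (individuals, pairs, groups), this yields median values such as the 3.18 Unity units reported for individuals.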