Preprint

Exploring Viewing Modalities in Cinematic Virtual Reality: A Systematic Review and Meta-Analysis of Challenges in Evaluating User Experience


Abstract

Cinematic Virtual Reality (CVR) is a narrative-driven VR experience that uses head-mounted displays with a 360-degree field of view. Previous research has explored different viewing modalities to enhance viewers' CVR experience. This study conducted a systematic review and meta-analysis focusing on how different viewing modalities, including intervened rotation, avatar assistance, guidance cues, and perspective shifting, influence the CVR experience. We screened 3444 papers (published between 01/01/2013 and 17/06/2023) and selected 45 for the systematic review, 13 of which were also included in the meta-analysis. We conducted separate random-effects meta-analyses and applied Robust Variance Estimation to examine the relationship between CVR viewing modalities and user experience outcomes. Evidence from experiments was synthesized as standardized mean differences (SMDs) in user experience between the control condition ("swivel-chair" CVR) and the experimental conditions. To our surprise, we found inconsistencies in effect sizes across studies, even those using the same viewing modalities. Moreover, in these studies, terms such as "presence," "immersion," and "narrative engagement" were often used interchangeably. The irregular use of questionnaires, overreliance on self-developed questionnaires, and incomplete data reporting may have undermined the rigor of CVR experience evaluations. This study contributes to Human-Computer Interaction (HCI) research by identifying gaps in CVR research and emphasizing the need to standardize terminologies and methodologies to enhance the reliability and comparability of future CVR studies.
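For readers unfamiliar with the synthesis procedure the abstract describes, the sketch below illustrates how a standardized mean difference (Hedges' g) and a DerSimonian-Laird random-effects pooled estimate can be computed. This is a minimal illustration with made-up numbers, not the paper's actual analysis pipeline; published meta-analyses typically rely on dedicated packages (e.g., metafor or robumeta in R), which also implement Robust Variance Estimation for dependent effect sizes.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) with Hedges' small-sample correction."""
    # pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    g = j * d
    # large-sample approximation of the sampling variance of g
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with between-study tau^2."""
    w = [1 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance, truncated at zero
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2

# Illustrative example: one hypothetical study comparing an experimental
# viewing modality against the swivel-chair control on a presence score.
g1, v1 = hedges_g(3.5, 1.0, 20, 3.0, 1.0, 20)
pooled, se, tau2 = random_effects([g1, 0.3, 0.6], [v1, 0.05, 0.08])
```

The abstract's finding that effect sizes were inconsistent across studies with the same modality would surface here as a large tau^2 (between-study variance) relative to the within-study variances.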


References
Article
In recent cybersickness research, there has been a growing interest in predicting cybersickness using real-time physiological data such as heart rate, galvanic skin response, eye tracking, postural sway, and electroencephalogram. However, the impact of individual factors such as age and gender, which are pivotal in determining cybersickness susceptibility, remains unknown in predictive models. Our research seeks to address this gap, underscoring the necessity for a more personalized approach to cybersickness prediction to ensure a better, more inclusive virtual reality experience. We hypothesize that a personalized cybersickness prediction model would outperform non-personalized models in predicting cybersickness. Evaluating this, we explored four personalization techniques: 1) data grouping, 2) transfer learning, 3) early shaping, and 4) sample weighing using an open-source cybersickness dataset. Our empirical results indicate that personalized models significantly improve prediction accuracy. For instance, with early shaping, the Deep Temporal Convolutional Neural Network (DeepTCN) model achieved a 69.7% reduction in RMSE compared to its non-personalized version. Our study provides evidence of personalization techniques' benefits in improving cybersickness prediction. These findings have implications for developing personalized cybersickness prediction models tailored to individual differences, which can be used to develop personalized cybersickness reduction techniques in the future.
Article
In recent years, the creative media landscape has witnessed growing interest in the utilization of virtual reality (VR) as a novel visual narrative approach for both filmmakers and audiences. This trend is accompanied by an increase in studies aimed at scientifically examining the characteristics and principles of immersive visual storytelling. This paper intends to contribute to this growing field by offering a comprehensive review of current research developments in Cinematic Virtual Reality (CVR), which employs VR technology to produce immersive, cinematic experiences for audiences. While extant research has focused on the content generation techniques and human performance implicated in virtual environments, such investigations may not fully explain the medium adaptation differences or emotional dimensions of narrated immersive experiences. These aspects are especially crucial in the context of visual storytelling through VR film, 360-degree video production, or other narrated experiences. The proposed study systematically categorizes CVR-related research, revealing the field's current state by narrowing the focus to specific topics and themes within the CVR literature and highlighting key sub-domains of interest centered on viewer experience measurement techniques. The findings of this review are expected to establish formal categories for implementing visual CVR to achieve immersive visual storytelling and provide a comprehensive analysis of current viewer experience measurements.
Article
Cybersickness is one of the greatest barriers to the adoption of virtual reality. A growing body of research has focused on identifying the characteristics of cybersickness and finding ways to mitigate it through the utilization of data from VR content, physiological signals, and body movement, along with artificial intelligence techniques. In this work, we extend prior research on cybersickness prediction by considering the role of different data modalities. We propose a novel deep learning model named multimodal, attention-based cybersickness (MAC), which learns temporal sequences and characteristics of video flows, eye movement, head movement, and electrodermal activity. Based on data collected from 27 participants, we demonstrate the effectiveness of MAC, showing an F1-score of 0.87. Our experimental results further show not only the influences of gender and prior VR experience but also the effectiveness of the attention mechanism on model performance, emphasizing the importance of considering the characteristics of data types and users in cybersickness modeling.
Article
Body ownership illusions (BOIs) occur when participants experience that their actual body is replaced by a body shown in virtual reality (VR). Based on a systematic review of the cumulative evidence on BOIs from 111 research papers published from 2010 to 2021, this article summarizes the findings of empirical studies of BOIs. Following the PRISMA guidelines, the review points to diverse experimental practices for inducing and measuring body ownership. The two major components of embodiment measurement, body ownership and agency, are examined. The embodiment of virtual avatars generally leads to modest body ownership and slightly higher agency. We also find that BOI research lacks statistical power and standardization across tasks, measurement instruments, and analysis approaches. Furthermore, the reviewed studies showed a lack of clarity in fundamental terminology, constructs, and theoretical underpinnings. These issues restrict scientific advances on the major components of BOIs, and together impede scientific rigor and theory-building.
Article
Cybersickness is a drawback of virtual reality (VR), which also affects the cognitive and motor skills of users. The Simulator Sickness Questionnaire (SSQ) and its variant, the Virtual Reality Sickness Questionnaire (VRSQ), are two tools that measure cybersickness. However, both tools suffer from important limitations which raise concerns about their suitability. Two versions of the Cybersickness in VR Questionnaire (CSQ-VR), a paper-and-pencil and a 3D-VR version, were developed. The validation of the CSQ-VR and a comparison against the SSQ and the VRSQ were performed. Thirty-nine participants were exposed to three rides with linear and angular accelerations in VR. Assessments of cognitive and psychomotor skills were performed at baseline and after each ride. The validity of both versions of the CSQ-VR was confirmed. Notably, CSQ-VR demonstrated substantially better internal consistency than both SSQ and VRSQ. Additionally, CSQ-VR scores had significantly better psychometric properties in detecting a temporary decline in performance due to cybersickness. Pupil size was a significant predictor of cybersickness intensity. In conclusion, the CSQ-VR is a valid assessment of cybersickness with superior psychometric properties to SSQ and VRSQ. The CSQ-VR enables the assessment of cybersickness during VR exposure, and it benefits from examining pupil size, a biomarker of cybersickness.
Article
Cinematic virtual reality (CVR) offers filmmakers a wide range of possibilities to explore new techniques in movie scripting, shooting, and editing. Despite the many experiments performed so far with both live-action and computer-generated movies, only a few studies have focused on analyzing how the various techniques actually affect the viewers' experience. As in traditional cinema, a key step for CVR screenwriters and directors is to choose the perspective from which viewers will see the scene, the so-called point of view (POV). The aim of this paper is to understand to what extent watching an immersive movie from a specific POV impacts narrative engagement (NE), i.e., the viewers' sensation of being immersed in the movie environment and being connected with its characters and story. Two POVs typically used in CVR, the first-person perspective (1-PP) and the external perspective (EP), are investigated through a user study in which both objective and subjective metrics were collected. The user study was carried out using two live-action 360° short films with distinct scripts. The results suggest that the 1-PP experience can be more pleasant than the EP one in terms of overall NE and narrative presence, or even for all the NE dimensions if the potential of that POV is specifically exploited.
Article
Social VR enables people to interact over distance with others in real-time. It allows remote people, typically represented as avatars, to communicate and perform activities together in a shared virtual environment, extending the capabilities of traditional social platforms like Facebook and Netflix. This paper explores the benefits and drawbacks provided by a lightweight and low-cost Social VR platform (SocialVR), in which users are captured by several cameras and reconstructed in real-time. In particular, the paper contributes with (1) the design and evaluation of an experimental protocol for Social VR experiences; (2) the report of a production workflow for this new type of media experience; and (3) the results of experiments with both end-users (N = 15 pairs) and professionals (N = 22 companies) to evaluate the potential of the SocialVR platform. Results from the questionnaires and semi-structured interviews show that end-users responded positively to the experiences provided by the SocialVR platform, which enabled them to sense emotions and communicate effortlessly. End-users perceived the photo-realistic experience of SocialVR as similar to face-to-face scenarios and appreciated this new creative medium. From a commercial perspective, professionals confirmed the potential of this communication medium and encouraged further research into the adoption of the platform in the commercial landscape. Supplementary information: The online version contains supplementary material available at 10.1007/s10055-022-00651-5.
Article
Content creators have been trying to produce engaging and enjoyable Cinematic Virtual Reality (CVR) experiences using immersive media such as 360-degree videos. However, a complete and flexible framework, like the filmmaking grammar toolbox available to film directors, is missing for creators working on CVR, especially those working on CVR storytelling with viewer interactions. Researchers and creators widely acknowledge that a viewer-centered story design and a viewer's intention to interact are two intrinsic characteristics of CVR storytelling. In this paper, we stand on that common ground and propose Adaptive Playback Control (APC) as a set of guidelines to assist content creators in making design decisions about story structure and viewer interaction implementation during production. Instead of covering everything CVR encompasses, we constrain our focus to cultural-heritage-oriented content presented in a guided-tour style. We further choose two vital elements of interactive CVR, narrative progression (director vs. viewer control) and visibility of viewer interaction (implicit vs. explicit), as the main topics at this stage. We conducted a user study to evaluate four variants combining these two elements and measured levels of engagement, enjoyment, usability, and memory performance. One of our findings is that there were no differences in the objective results. Combining objective data with observations of the participants' behavior, we provide guidelines as a starting point for the application of the APC framework. Creators need to choose whether the viewer will have control over narrative progression and the visibility of interaction based on whether the purpose of a piece is to evoke emotional resonance or promote efficient transfer of knowledge. Also, creators need to consider the viewer's natural tendency to explore and provide extra incentives to invoke exploratory behaviors in viewers when adding interactive elements.
We recommend more viewer control for projects aiming at viewer’s participation and agency, but more director control for projects focusing on education and training. Explicit (vs. implicit) control will also yield higher levels of engagement and enjoyment if the viewer’s uncertainty of interaction consequences can be relieved.
Article
360-degree experiences such as cinematic virtual reality and 360-degree videos are becoming increasingly popular. In most examples, viewers can freely explore the content by changing their orientation. However, in some cases, this increased freedom may lead to viewers missing important events within such experiences. Thus, a recent research thrust has focused on studying mechanisms for guiding viewers' attention while maintaining their sense of presence and fostering a positive user experience. One approach is the utilization of diegetic mechanisms, characterized by an internal consistency with respect to the narrative and the environment, for attention guidance. While such mechanisms are highly attractive, their uses and potential implementations are still not well understood. Additionally, acknowledging the user in 360-degree experiences has been linked to a higher sense of presence and connection. However, less is known when acknowledging behaviors are carried out by attention guiding mechanisms. To close these gaps, we conducted a within-subjects user study with five conditions of no guide and virtual arrows, birds, dogs, and dogs that acknowledge the user and the environment. Through our mixed-methods analysis, we found that the diegetic virtual animals resulted in a more positive user experience, all of which were at least as effective as the non-diegetic arrow in guiding users towards target events. The acknowledging dog received the most positive responses from our participants in terms of preference and user experience and significantly improved their sense of presence compared to the non-diegetic arrow. Lastly, three themes emerged from a qualitative analysis of our participants' feedback, indicating the importance of the guide's blending in, its acknowledging behavior, and participants' positive associations as the main factors for our participants' preferences.
Conference Paper
Cybersickness prediction is one of the significant research challenges for real-time cybersickness reduction. Researchers have proposed different approaches for predicting cybersickness from bio-physiological data (e.g., heart rate, breathing rate, electroencephalogram). However, collecting bio-physiological data often requires external sensors, limiting locomotion and 3D-object manipulation during the virtual reality (VR) experience. Limited research has been done to predict cybersickness from the data readily available from the integrated sensors in head-mounted displays (HMDs) (e.g., head-tracking, eye-tracking, motion features), allowing free locomotion and 3D-object manipulation. This research proposes a novel deep fusion network to predict cybersickness severity from heterogeneous data readily available from the integrated HMD sensors. We extracted 1755 stereoscopic videos, eye-tracking, and head-tracking data along with the corresponding self-reported cybersickness severity collected from 30 participants during their VR gameplay. We applied several deep fusion approaches with the heterogeneous data collected from the participants. Our results suggest that cybersickness can be predicted with an accuracy of 87.77% and a root-mean-square error of 0.51 when using only eye-tracking and head-tracking data. We concluded that eye-tracking and head-tracking data are well suited for a standalone cybersickness prediction framework.
Article
Cinematic virtual reality offers 360-degree moving image experiences that engage a viewer's body, as its position defines the momentary perspective over the surrounding simulated space. While a 360-degree narrative space has been demonstrated to provide highly immersive experiences, it may also affect information intake and the recollection of narrative events. The present study hypothesizes that the immersive quality of cinematic VR induces a viewer's first-person perspective in observing a narrative, in contrast to a camera perspective. A first-person perspective is associated with increased emotional engagement, sensation of presence, and a more vivid and accurate recollection of information. To determine these effects, we measured viewing experiences, memory characteristics, and recollection accuracy of participants watching an animated movie either using a VR headset or a stationary screen. The comparison revealed that VR viewers experience a higher level of presence in the displayed environment than screen viewers and that their memories of the movie are more vivid, evoke stronger emotions, and are more likely to be recalled from a first-person perspective. Yet, VR participants can recall fewer details than screen viewers. Overall, these results show that while cinematic virtual reality viewing involves more immersive and intense experiences, the 360-degree composition can negatively impact comprehension and recollection.
Article
Cinematic Virtual Reality (CVR) is a form of immersive storytelling widely used to create engaging and enjoyable experiences. However, issues related to the Narrative Paradox and Fear of Missing Out (FOMO) can negatively affect the user experience. In this paper, we review the literature on designing CVR content with consideration of the viewer's role in the story, the target scenario, and the level of viewer interaction, all aimed at resolving these issues. Based on our explorations, we propose a "Continuum of Interactivity" to explore appropriate spaces for creating CVR experiences that achieve high levels of engagement and immersion. We also discuss two properties to consider when enabling interaction in CVR: the depth of impact and the visibility. We then propose the concept framework Adaptive Playback Control (APC), a machine-mediated narrative system with implicit user interaction and backstage authorial control. We focus on "swivel-chair" 360-degree video CVR with the aim of providing a framework for mediated CVR storytelling with interactivity. We target content creators who develop engaging CVR experiences for education, entertainment, and other applications without requiring professional knowledge of VR and immersive systems design.
Conference Paper
360° videos offer immersive viewing experiences. However, as users watch one part of the 360° view, they will necessarily miss out on events happening in other parts of the sphere. Consequently, fear of missing out (FOMO) is unavoidable. However, users can also experience the joy of missing out (JOMO). In a repeated-measures, mixed-methods design, we examined FOMO, JOMO, and sense of presence in two repeat viewings of a 360° film using a head-mounted display. We found that users experienced both FOMO and JOMO. FOMO was caused by the users' awareness of parallel events in the spherical view. FOMO did not compromise viewers' sense of presence, and it also decreased in the second viewing session, while JOMO remained constant. The findings suggest that FOMO and JOMO can be two integral qualities of an immersive video viewing experience and that FOMO may not be as negative a factor as previously thought.
Article
Background. Standardized questionnaires are well-known, reliable, and inexpensive instruments to evaluate user experience (UX). Although the structure, content, and application procedure of the three most recognized questionnaires (AttrakDiff, UEQ, and meCUE) are known, there is no systematic literature review (SLR) that classifies how these questionnaires have been used in primary studies reported academically. This SLR seeks to answer five research questions (RQs), starting with identifying the uses of each questionnaire over the years and by geographic region (RQ1) and the median number of participants per study (how many participants are considered enough when evaluating UX?) (RQ2). This work also aims to establish whether these questionnaires are combined with other evaluation instruments and with which complementary instruments they are used more frequently (RQ3). In addition, this review intends to determine how the three questionnaires have been applied in the fields of ubiquitous computing and ambient intelligence (RQ4) and also in studies that incorporate nontraditional interfaces, such as haptic, gesture, or speech interfaces, to name a few (RQ5). Methods. A systematic literature review was conducted starting from 946 studies retrieved from four digital databases. The main inclusion criteria were that the study describes a primary study reported academically, in which the standardized questionnaire is used as a UX evaluation instrument in its original and complete form. In the first phase, 189 studies were discarded by screening the title, abstract, and keyword list. In the second phase, 757 studies were full-text reviewed, and 209 were discarded due to the inclusion/exclusion criteria. The 548 resulting studies were analyzed in detail. Results. AttrakDiff is the questionnaire with the most uses since 2006, when the first studies appeared. However, since 2017, UEQ has far surpassed AttrakDiff in uses per year. The contribution of meCUE is still minimal.
Europe is the region with the most extended use, followed by Asia. Within Europe, Germany greatly exceeds the rest of the countries (RQ1). The median number of participants per study is 20, considering the aggregated data from the three questionnaires. However, this median rises to 30 participants in journal studies, while it stays at 20 in conference studies (RQ2). Almost 4 in 10 studies apply the questionnaire as the only evaluation instrument. The remaining studies used between one and five complementary instruments, among which the System Usability Scale (SUS) stands out (RQ3). About 1 in 4 of the studies analyzed belong to the ubiquitous computing and ambient intelligence fields, in which UEQ increases its percentage of uses compared to its general percentage, particularly in topics such as IoT and wearable interfaces. However, AttrakDiff remains the predominant questionnaire for studies on smart cities and homes and in-vehicle information systems (RQ4). Around 1 in 3 studies include nontraditional interfaces, with virtual reality and gesture interfaces being the most numerous. The percentages of UEQ and meCUE uses in these studies are higher than their respective global percentages, particularly in studies using virtual reality and eye-tracking interfaces. AttrakDiff maintains its overall percentage in studies with tangible and gesture interfaces and exceeds it in studies with nontraditional visual interfaces, such as displays in windshields or motorcycle helmets (RQ5).
Article
The VESPACE project aims to revive an evening of theatre at the Foire Saint-Germain in Paris in the 18th century, by recreating spaces, atmospheres and theatrical entertainment in virtual reality. The venues of this fair have disappeared without leaving any archaeological traces, so their digital reconstruction requires the use of many different sources, including the expertise of historians and historians of theatre and literature. In this article, we present how we have used video game creation tools to enable the use of virtual reality in three key stages of research in the human sciences, particularly in history and archaeology: preliminary research, scientific dissemination, and mediation with the general public. In particular, we detail the methodology used to design a three-dimensional (3D) model that is suitable for both research and virtual reality visualization, meets the standards of scientific work regarding precision and accuracy, and satisfies the requirements of real-time display. This model becomes an environment in which experts can be immersed within their fields of research and expertise, and thus extract knowledge reinforcing the model created (through comments, serendipity, and new perspectives) while enabling a multidisciplinary workflow. We also present our tool for annotating and consulting sources, relationships, and hypotheses in immersion, called PROUVÉ. This tool is designed to make the virtual reality experience go beyond a simple image and to convey scientific information and theories in the same way an article or a monograph does. Finally, this article offers preliminary feedback on the use of our solutions with three target audiences: the researchers from our team, the broader theatre expert community, and the general public. Highlights:
• Immersive Virtual Reality is used to enhance the digital reconstruction of an 18th-century theatre, by allowing experts to dive into their research topic.
• Virtual Reality (VR) can also be used to disseminate the digital model through the scientific community and beyond while giving access to all kinds of sources that were used to build it.
• A quick survey shows that VR is a powerful tool to share theories and interpretations related to archaeological or historical three-dimensional data.
Chapter
When watching omnidirectional movies with head-mounted displays, viewers can freely choose the direction of view, and thus the visible section of the movie. However, looking around all the time can be exhausting, and having content in the full 360° area can cause a fear of missing something. To make viewing more comfortable, we implemented new methods and conducted three experiments: (1) exploring methods to inspect the full omnidirectional area by moving the head, but not the whole body; (2) comparing head, body, and movie rotation; and (3) studying how reducing the 360° area influences the viewing experience. For (3), we compared user behavior when watching a full 360°, a 225°, and a 180° movie via HMD. The investigated techniques for inspecting the full 360° area from a fixed sitting position (experiments 1 and 2) perform well and could replace the often-used swivel chair. When the 360° area was reduced (experiment 3), 225° movies scored better than 180° movies.
Article
In virtual reality (VR), users can experience symptoms of motion sickness, which is referred to as VR sickness or cybersickness. The symptoms include but are not limited to eye fatigue, disorientation, and nausea, which can impair the VR experience of users. Though many studies have attempted to reduce the discomfort, they produced conflicting results with varying degrees of VR sickness. In particular, a visually improved VR does not necessarily result in decreased VR sickness. To understand these unexpected results, we surveyed the causes of VR sickness and measurement of symptoms. We reorganized the causes of the VR sickness into three major factors (hardware, content, and human factors) and investigated the sub-component of each factor. We then surveyed frequently used measures of VR sickness, both subjective and objective approaches. We also investigated emerging approaches for reducing VR sickness and proposed a multimodal fidelity hypothesis to give an insight into future studies.
Article
The use of head-mounted displays (HMDs) for virtual reality (VR) applications, including therapy, rehabilitation, and training, is increasing. Despite advancements in VR technologies, many users still experience sickness symptoms. VR sickness may be influenced by technological differences within HMDs such as resolution and refresh rate; however, VR content also plays a significant role. The primary objective of this systematic review and meta-analysis was to examine the literature on HMDs that reports Simulator Sickness Questionnaire (SSQ) scores to determine the impact of content. User factors associated with VR sickness were also examined. A systematic search was conducted according to PRISMA guidelines. Fifty-five articles met the inclusion criteria, representing 3,016 participants (mean age range 19.5–80; 41% female). Findings show that gaming content recorded the highest total SSQ mean, 34.26 (95% CI 29.57–38.95). VR sickness profiles were also influenced by visual stimulation, locomotion, and exposure times. Older samples (mean age ≥35 years) scored significantly lower total SSQ means than younger samples; however, these findings rest on a small evidence base, as a limited number of studies included older users. No sex differences were found. Across all types of content, the pooled total SSQ mean was relatively high, 28.00 (95% CI 24.66–31.35), compared with recommended SSQ cut-off scores. These findings are relevant for informing future research and the application of VR in different contexts.
Article
We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within the 360° environment than normal videos. There can be multiple interesting areas within a 360° frame at the same time. Due to the narrow field of view of virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system is aimed at generating multiple experiences based on interesting information in different regions of a 360° video and efficiently transferring these experiences to prospective users. The proposed system generates experiences using two approaches: (1) recording the user's experience while the user watches a panoramic video with a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in a 360° video selected by the user. For tracking an arbitrary interesting object, we developed a pipeline around an existing simple object tracker to adapt it for 360° videos. This tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of different experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system along with a detailed user study was performed to assess the system's application. Findings from the evaluation showed that a single piece of 360° multimedia content can generate multiple experiences that can be transferred among users.
Moreover, sharing these 360° experiences enabled viewers to watch multiple points of interest with less effort.
Article
Full-text available
The published literature has produced several definitions of the sense of presence in a simulated environment, as well as various methods for measuring it. This variety of conceptualizations makes it difficult for researchers to interpret, compare, and evaluate the presence ratings obtained in individual studies. Presence has been measured using questionnaires, physiological indices, behavioral feedback, and interviews. A systematic literature review was conducted to provide insight into the definitions and measurements of presence in studies from 2002 to 2019, with a focus on questionnaires and physiological measures. The review showed that scholars have introduced various definitions of presence that often originate from different theoretical standpoints, and that this has produced a multitude of questionnaires aiming to measure presence. At the same time, studies investigating the physiological correlates of the sense of presence have often shown ambiguous results or have not been replicated. Most scholars have preferred questionnaires, with Witmer and Singer's Presence Questionnaire being the most prevalent. Among the physiological measures, electroencephalography was the most frequently used. The conclusions of the present review aim to stimulate future structured efforts to standardize the use of the construct of presence, as well as to inspire replication of the findings reported in the published literature.
Article
Full-text available
Three-hundred-sixty-degree (360°) immersive video applications for Head Mounted Display (HMD) devices offer great potential for engaging, experiential media solutions, especially in Cultural Heritage education. This new kind of immersive media nevertheless poses design challenges, owing to the 2D form of the resources used for its construction, the lack of depth, the limited interaction, and the need to address the sense of presence. In addition, the use of Virtual Reality (VR) headsets often causes nausea or motion-sickness effects, imposing further constraints on motion design tasks. This paper introduces a methodological categorisation of tasks and techniques for the design of 360° immersive video applications. Following the design approach presented, a testbed application was created as an immersive interactive virtual tour of the historical centre of the city of Rethymno in Crete, Greece, which has undergone user trials. Based on the analysis of the results of this study, a set of design guidelines for the implementation of 360° immersive video virtual tours is proposed.
Conference Paper
Full-text available
This research examines the reflexive dimensions of cinematic virtual reality (CVR) storytelling. We created Anonymous, an interactive CVR piece that employs a reflexive storytelling method. This method is based on distancing effects and is used to elicit audience awareness and self-reflection about loneliness and death. To understand the audience’s experiences, we conducted in-depth interviews to study which design factors and elements prompted reflexive thoughts and feelings. Our findings highlight how the audience experience was impacted by four reflexive dimensions: abstract and minimal aesthetics, everyday materials and textures, the restriction of control, and multiple, disembodied points of view. We use our findings to discuss how these dimensions can inform the design of VR storytelling experiences that provoke self and social reflection.
Article
Full-text available
The merger of game-based approaches with Virtual Reality (VR) environments that can enhance learning and training methodologies has a very promising future, reinforced by the widespread market availability of affordable software and hardware tools for VR environments. Rather than passive observers, users engage in these learning environments as active participants, permitting the development of exploration-based learning paradigms. There are separate reviews of VR technologies and of serious games for educational and training purposes, each focused on a single knowledge area. This review, however, covers 135 proposals for serious games in immersive VR environments that combine VR and serious games and offer end-user validation. First, an analysis of the forum, nationality, and publication date of the articles is conducted. Then, the application domains, the target audience, the design of the game and its technological implementation, the performance evaluation procedure, and the results are analyzed. The aim is to identify the de facto standards of the proposed solutions and the differences between training and learning applications. Finally, the study lays the basis for future lines of research that will develop serious games in immersive VR environments, providing recommendations for improving these tools and applying them successfully to enhance both learning and training tasks.
Article
Background Through the combination of virtual reality (VR) technology with techniques from theater, filmmaking, and gaming, individuals from the Game Research and Immersive Design Laboratory (GRID Lab) at Ohio University have developed a promising approach to training soft skills such as communication, problem-solving, teamwork, and interpersonal skills. Objectives The purpose of this article is to provide an overview of VR and cinematic VR (cine-VR). This article serves as a preface to the VR research included in this special issue. Methods In this article, we define VR, review key terminology, present a case study, and offer future directions. Results Prior research with cine-VR has demonstrated its effectiveness in improving provider attitudes and cultural self-efficacy. While cine-VR may differ from other types of VR applications, we have been able to leverage its strengths to create training programs that are user-friendly and highly effective. Early projects on diabetes care and opioid use disorder were sufficiently successful that the team received additional funding to pursue series addressing elder abuse/neglect and intimate partner violence. The work has gone beyond health care and is currently being leveraged for law enforcement training as well. While this article explores Ohio University's approach to cine-VR training, details of the research, including efficacy, can be found in McCalla et al, Wardian et al, and Beverly et al. Conclusion When produced correctly, cine-VR has the potential to become a mainstay of training for soft-skill applications across a multitude of industries.
Article
Virtual reality (VR) is rapidly growing, with the potential to change the way we create and consume content. In VR, users integrate the multimodal sensory information they receive to create a unified perception of the virtual world. In this survey, we review the body of work addressing multimodality in VR and its role and benefits in user experience, together with applications that leverage multimodality across many disciplines. These works encompass several fields of research and demonstrate that multimodality plays a fundamental role in VR: enhancing the experience, improving overall performance, and yielding unprecedented abilities in skill and knowledge transfer.
Book
With reference to traditional film theory and frameworks drawn from fields such as screenwriting studies and anthropology, this book explores the challenges and opportunities that the 360-degree storytelling form offers both practitioners and viewers. It focuses on cinematic virtual reality (CVR), a format involving immersive, high-quality, live-action or computer-generated imagery (CGI) that can be viewed through head-mounted display (HMD) goggles or via online platforms such as YouTube. This format has surged in popularity in recent years with the release of affordable, high-quality omnidirectional (360-degree) cameras and consumer-grade HMDs. The book interrogates four key concepts for this emerging medium, immersion, presence, embodiment, and proximity, through an analysis of innovative case studies and with reference to practitioner interviews. In doing so, it highlights the specificity of the format and provides a critical account of practitioner approaches to the concept development, writing, and realisation of short narrative CVR works. The book concludes with an account of the author's practice-led research into the form, providing a valuable example of creative practice in the field of immersive media.
Article
In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the exact form of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multilevel and multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer potential benefits in terms of better capturing the types of data structures that occur in practice and, under some circumstances, improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the "metafor" and "clubSandwich" packages for R), illustrate the proposed approach in a meta-analysis of randomized trials on the effects of brief alcohol interventions for adolescents and young adults, and report findings from a simulation study evaluating the performance of the new methods.
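The random-effects pooling that underlies such meta-regression models is, in practice, fitted with the R packages the article names ("metafor" and "clubSandwich"). As a minimal illustration of the basic idea, the sketch below pools hypothetical standardized mean differences with the standard DerSimonian-Laird estimator of between-study variance; the effect sizes and variances are invented for illustration, and the sketch omits the dependence handling that RVE adds on top:

```python
import math

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of effect sizes.

    effects:   per-study standardized mean differences (SMDs)
    variances: per-study sampling variances of those SMDs
    Returns (pooled effect, 95% CI lower bound, 95% CI upper bound).
    """
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and DL estimate of tau^2
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    # Random-effects weights incorporate tau^2
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical SMDs and sampling variances from three studies
pooled, ci_lo, ci_hi = random_effects_pool([0.1, 0.5, 0.9], [0.01, 0.01, 0.01])
```

With equal variances, the pooled estimate here is simply the mean of the three SMDs, but the heterogeneity among them inflates tau² and hence widens the confidence interval relative to a fixed-effect analysis.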
Article
Virtual reality (VR) is a powerful medium for 360° storytelling, yet content creators are still developing cinematographic rules for effectively communicating stories in VR. Traditional cinematography has relied for over a century on well-established editing techniques, and one of its most recurrent resources is the cinematic cut, which allows content creators to transition seamlessly between scenes. One fundamental assumption of these techniques is that the content creator controls the camera; this assumption breaks in VR, where users are free to explore the full 360° around them. Recent works have studied the effectiveness of different cuts in 360° content, but the effect of directional sound cues while experiencing these cuts has been less explored. In this work, we provide the first systematic analysis of the influence of directional sound cues on users' behavior across 360° movie cuts, providing insights that can inform conventions for VR storytelling.
Article
Self-presentation in online digital social spaces has been a long-standing research interest in HCI and CSCW. As online social spaces evolve towards more embodied digital representations, it is important to understand how users construct and experience their selves and interact with others' selves in new and more complicated ways, as this may introduce new opportunities and unseen social consequences. Using the findings of an interview study (N=30), in this paper we report an in-depth empirical investigation of the presentation and perception of self in Social Virtual Reality (VR): 3D virtual spaces where multiple users can interact with one another through VR head-mounted displays and full-body-tracked avatars. This study contributes to the growing body of CSCW literature on social VR by offering empirical evidence of how social VR platforms afford new phenomena and approaches to novel identity practices, and by providing design implications to further support such practices. We also expand the existing CSCW research agenda on the increasing complexity of people's self-presentation in emerging sociotechnical systems.
Chapter
In this paper, we introduce the Time Expansion Coefficient α, defined as the ratio of actual time to VR image time. VR film designers adjust the time expansion coefficient according to the narrative type of a VR film to optimize users' immersive perception. We evaluated five narrative types: linear narrative with a fixed shot, linear narrative with a moving shot, circular narrative, multi-view narrative, and interactive narrative. The results show that circular narrative and multi-view narrative are most affected by the time expansion coefficient; interactive narrative and multi-view narrative are almost equally affected, while linear narrative is less affected. In addition, when the time expansion coefficient is greater than 1 (α > 1), immersion improves in all five narrative types considered.
Conference Paper
Cinematic Virtual Reality allows viewers to watch films without the limitation of screen edges, whilst controlling their own viewpoint. The loss of screen edges and of camera control means that filmmakers cannot precisely edit each shot using traditional techniques, as the viewer's direction and field of view are dynamic. As such, many established filmmaking methods must be reconsidered. In this paper, we present our initial exploration of the implementation of montage in Virtual Reality (VR) films, focusing on transition effects. A pilot study is presented that compared three transition effects: two popular existing effects (cut and fade) were applied, along with a third method (VR portal) designed and selected specifically to meet the requirements of montage in VR films. We present the preliminary results and our insights, concluding with future plans.
Conference Paper
With the advent of 360° film narratives, traditional storytelling tools and techniques are being reconsidered. VR cinema, as a narrative medium, gives users the liberty to choose where to look and to change their point of view constantly. This freedom to frame the visual content themselves creates challenges for storytellers in carefully guiding users so as to convey a narrative effectively. Researchers and filmmakers exploring VR cinema are therefore evaluating new storytelling methods to create effective user experiences. In this paper, we present, through empirical analysis, the significance of perceptual cues in VR cinema and their impact on guiding users' attention to different plot points in the narrative. The study examines experiential fidelity using "Dragonfly", a 360° film created following the existing guidelines for VR cinema. We posit that the insights derived will help better understand the evolving grammar of VR storytelling. We also present a set of additional guidelines for the effective planning of perceptual cues in VR cinema.
Article
Accessibility in immersive media is a relevant research topic that is still in its infancy. This article explores the appropriateness of two rendering modes (fixed-positioned and always-visible) and two guiding methods (arrows and auto-positioning) for subtitles in 360° video. All the considered conditions have been implemented and integrated into an end-to-end platform (from production to consumption) for validation and evaluation. A pilot study with end users was conducted with the goals of determining which options users prefer, which options result in a higher sense of presence, and gathering additional valuable feedback from the end users. The results reflect that, for the considered 360° content types, always-visible subtitles were preferred by end users and received better results on the presence questionnaire than fixed-positioned subtitles. Regarding guiding methods, participants preferred arrows over auto-positioning because arrows were considered more intuitive and easier to follow, and arrows also yielded better results on the presence questionnaire.