Creating and animating a realistic 3D human face has long been an important task in computer graphics. The ability to capture the 3D face of a human subject and quickly reanimate it would find many applications in games, training simulations, and interactive 3D graphics. In this paper, we propose a system to capture photorealistic 3D faces and gener...
... Additionally, recent advances in capturing individualized human bodies, either with depth cameras or with photogrammetry methods [1,16], have motivated a closer look at the effects of realism and personalized avatars [25,28,29,37]. Still, creating elaborate, individualized, ready-to-animate, high-quality virtual characters of users used to be a labor-intensive and time-consuming process, which only recently could be optimized enough to be applicable to prolonged and extensive embodiment studies [1,16]. ...
This article reports the impact of the degree of personalization and individualization of users' avatars, as well as the impact of the degree of immersion, on typical psychophysical factors in embodied Virtual Environments. We investigated if and how virtual body ownership (including agency), presence, and emotional response are influenced by the specific look of users' avatars, which varied between (1) a generic hand-modeled version, (2) a generic scanned version, and (3) an individualized scanned version. The latter two were created using a state-of-the-art photogrammetry method providing a fast 3D-scan and post-process workflow. Users encountered their avatars in a virtual mirror metaphor using two VR setups that provided a varying degree of immersion: (a) a large-screen surround projection (the L-shape part of a CAVE) and (b) a head-mounted display (HMD). We found several significant effects as well as a number of notable trends. First, personalized avatars significantly increase body ownership, presence, and dominance compared to their generic counterparts, even though the latter were generated by the same photogrammetry process and hence can be considered equal in terms of realism and graphical quality. Second, the degree of immersion significantly increases body ownership, agency, and the feeling of presence. These results substantiate the value of personalized avatars resembling users' real-world appearances, as well as the value of the deployed scanning process for generating avatars for VR setups where the effect strength might be substantial, e.g., in social Virtual Reality (VR) or in VR-based medical therapies relying on embodied interfaces. Additionally, our results strengthen the case for fully immersive setups, which today are accessible for a variety of applications thanks to widely available consumer HMDs.
... Their rig is based on three scale layers, ranging from the coarse geometry to static and transient fine details at the scale of folds and wrinkles. Casas et al. generate a user blendshape model with textures using an RGB-D camera, but consider only the front of the face; it is not clear how to model a full head that includes ears and hair. ...
We present a novel image-based representation for dynamic 3D avatars, which allows effective handling of various hairstyles and headwear, and can generate expressive facial animations with fine-scale details in real-time. We develop algorithms for creating an image-based avatar from a set of sparsely captured images of a user, using an off-the-shelf web camera at home. An optimization method is proposed to construct a topologically consistent morphable model that approximates the dynamic hair geometry in the captured images. We also design a real-time algorithm for synthesizing novel views of an image-based avatar, so that the avatar follows the facial motions of an arbitrary actor. Compelling results from our pipeline are demonstrated on a variety of cases.
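The morphable model mentioned above follows the standard blendshape idea: the animated face is the neutral geometry plus a weighted sum of per-expression offsets. The following is a minimal illustrative sketch, not the paper's actual formulation; the function name and array shapes (one offset field per expression in `deltas`) are assumptions.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Blendshape/morphable-model synthesis.

    neutral: (n, 3) neutral-pose vertex positions
    deltas:  (k, n, 3) per-expression vertex offsets from the neutral pose
    weights: (k,) blend weights, typically in [0, 1]
    Returns the (n, 3) blended vertex positions.
    """
    # Weighted sum over the expression axis, added to the neutral shape.
    return neutral + np.tensordot(weights, deltas, axes=1)
```

For example, with weights [0.5, 0.25] the result is the neutral shape displaced halfway toward the first expression and a quarter of the way toward the second.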
We present a method for performing real-time facial animation of a 3D avatar from binocular video. Existing facial animation methods fail to automatically capture precise and subtle facial motions for driving a photo-realistic 3D avatar "in-the-wild" (i.e., variability in illumination, camera noise). The novelty of our approach lies in a light-weight process for specializing a personalized face model to new environments that enables extremely accurate real-time face tracking anywhere. Our method uses a pre-trained high-fidelity personalized model of the face that we complement with a novel illumination model to account for variations due to lighting and other factors often encountered in-the-wild (e.g., facial hair growth, makeup, skin blemishes). Our approach comprises two steps. First, we solve for our illumination model's parameters by applying analysis-by-synthesis on a short video recording. Using the pairs of model parameters (rigid, non-rigid) and the original images, we learn a regression for real-time inference from the image space to the 3D shape and texture of the avatar. Second, given a new video, we fine-tune the real-time regression model with a few-shot learning strategy to adapt the regression model to the new environment. We demonstrate our system's ability to precisely capture subtle facial motions in unconstrained scenarios, in comparison to competing methods, on a diverse collection of identities, expressions, and real-world environments.
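The two-step structure described above (learn a regression from image space to avatar parameters, then adapt it with a few examples from a new environment) can be illustrated with a deliberately simplified linear stand-in: ridge regression for the pretrained mapping, and a proximal term that pulls the adapted weights toward the pretrained ones so a few shots do not erase the original model. This is a hypothetical sketch of the general idea, not the authors' model; all names and the regularization scheme are assumptions.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-2):
    """Least-squares regression from image features X (n, d) to
    avatar parameters Y (n, k), with L2 regularization lam."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def few_shot_adapt(W, X_new, Y_new, lam=1.0):
    """Adapt pretrained weights W (d, k) on a handful of pairs from the
    new environment; lam penalizes deviation from W, keeping the
    adapted regressor close to the pretrained mapping."""
    d = X_new.shape[1]
    # Closed form of: min_V ||X_new V - Y_new||^2 + lam ||V - W||^2
    A = X_new.T @ X_new + lam * np.eye(d)
    b = X_new.T @ Y_new + lam * W
    return np.linalg.solve(A, b)
```

In this toy setting, a model fitted in one environment and adapted on ten samples from a shifted environment tracks the new targets better than the unadapted model.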
This book introduces state-of-the-art research on virtual reality, simulation, and serious games for education; its chapters present the best papers from the 4th Asia-Europe Symposium on Simulation and Serious Games (4th AESSSG), held in Turku, Finland, in December 2018. The chapters present a multi-faceted view of different approaches to the challenges that surround the uptake of educational applications of virtual reality, simulations, and serious games in school practice. The different approaches highlight challenges and potential solutions and provide future directions for virtual reality, simulation, and serious games research, for the design of learning material, and for implementation in classrooms. The book is thus a useful resource for students and scholars interested in research in this field, for designers of learning material, and for practitioners who want to embrace virtual reality, simulation, and/or serious games in their education.
We present an efficient face reconstruction method and real-time facial expression animation for VR interaction driven by RGB-D videos. An RGB-D camera is first used to capture depth images, from which a coarse face model is rapidly reconstructed. The user's personalized avatar is generated using a pre-defined face model template and shape morphing techniques. We track the user's head motion and facial expression using an RGB camera. A set of facial features is located and labelled on the colour images, and the corresponding facial features are automatically labelled on the reconstructed face model. The user's virtual avatar is driven by this set of facial features using Laplacian deformation. We demonstrate that our algorithm can rapidly create a personalized face model from depth images and achieve real-time facial expression animation from live RGB videos. Our algorithm can be used in online learning environments that allow learners to interact with simulated and controlled virtual agents.
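The driving step in the abstract above, deforming the avatar mesh so that it follows a sparse set of tracked facial features, can be illustrated with the standard uniform-Laplacian formulation behind Laplacian mesh editing: preserve each vertex's differential coordinates in a least-squares sense while softly constraining the feature vertices to their tracked targets. This is an illustrative sketch under those assumptions, not the paper's implementation; the uniform edge weights and the constraint weight `w` are choices made here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def laplacian_deform(verts, faces, handle_ids, handle_pos, w=10.0):
    """Deform a triangle mesh so the handle vertices (e.g. tracked
    facial landmarks) move to handle_pos while the uniform Laplacian
    coordinates of all vertices are preserved in a least-squares sense.

    verts: (n, 3) vertex positions, faces: iterable of vertex triples,
    handle_ids: (m,) indices of constrained vertices,
    handle_pos: (m, 3) target positions, w: soft-constraint weight.
    """
    n = len(verts)
    # Build the adjacency matrix from triangle edges.
    rows, cols = [], []
    for a, b, c in faces:
        rows += [a, b, b, c, c, a]
        cols += [b, a, c, b, a, c]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n)).tocsr()
    A.data[:] = 1.0                    # collapse duplicate edges to weight 1
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = sp.diags(deg) - A              # uniform (graph) Laplacian
    delta = L @ verts                  # differential coordinates to preserve
    # Soft positional constraints on the handle vertices.
    m = len(handle_ids)
    C = sp.coo_matrix((np.full(m, w), (np.arange(m), handle_ids)), shape=(m, n))
    M = sp.vstack([L, C]).tocsr()
    out = np.empty_like(verts, dtype=float)
    for k in range(3):                 # solve one least-squares system per axis
        rhs = np.concatenate([delta[:, k], w * handle_pos[:, k]])
        out[:, k] = lsqr(M, rhs)[0]
    return out
```

Because the Laplacian annihilates constant translations, moving every handle by the same offset translates the whole mesh rigidly; moving only a few handles bends the surrounding surface smoothly toward them.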
Understanding unit operations is an essential part of a chemical engineering course. An important example is the continuous distillation column, whose operation is often seen as a black box in which the incoming feed undergoes a separation process to produce the desired products. Despite having learned the concepts of how such columns work, students may find it difficult to comprehend and visualize what is going on inside a distillation column and how to connect the various theories involved in its design and calculations. A virtual visualization tool such as augmented reality (AR) can help students better visualize the process, for example fluid flow profiles and the different components that make up a distillation column. Although the idea of incorporating AR into higher-education learning is not entirely new, this is the first initiative to implement virtual technology in a chemical engineering curriculum in Singapore, and it serves as a novel pedagogical approach to complement conventional pen-and-paper teaching. Besides enhancing students' learning experience, the AR application is expected to improve students' motivation and interest in the subject and to serve as a complementary tool for laboratory demonstrations, as it is safe and time-saving.
The use of virtual learning technologies has become of great interest in twenty-first-century science education research. Simulations and other virtual learning technologies have been shown to enhance conceptual understanding of abstract scientific concepts and to foster motivation and interest in science learning. This book chapter reports on research into South African pre-service teachers' use of simulations in physical sciences learning. Adopting an explanatory sequential mixed-methods approach, we investigated the experiences of fifty (n = 50) pre-service physical science teachers in the use of PhET simulations and explored the instructional scaffolding within the simulations that supported learner engagement. Further, we report on attitudinal changes of these pre-service teachers towards the subject before and after the virtual learning interventions. Findings suggest that learning by simulation in virtual spaces enhances attitudes towards the physical sciences, with post-test attitude scores being significantly higher than pre-test attitude scores. From the qualitative data sets, pre-service teachers asserted that their visualization of micro-scientific phenomena was enhanced. Evidence also suggested that the use of simulations promoted self-directed learning. Themes such as convenience of use, interest in science learning, and enablement of guided inquiry emerged from the research. However, pre-service teachers did express concern that simulations lacked authenticity, failing to replicate the hands-on laboratory experiences that develop the manipulative science process skills associated with practical work. Drawing from this research and other studies in this domain, we reflect on the role of virtual learning technologies like simulations in the learning process. We also provide instructional recommendations for science educators and designers of virtual learning material.