About the lab
The Virtual Worlds and Digital Games Group is concerned with technologies, systems, and applications of Mediated Reality, including but not limited to Virtual and Augmented Reality (VR/AR), Mixed Reality (MR), and Diminished Reality (DR), as well as individual manifestations of digital games (pervasive games, mobile MR games, Alternate Reality Games (ARGs), etc.).
Featured projects (1)
The goal of this project is to develop and evaluate tracking solutions for outdoor environments. These new approaches are supported by neural network algorithms. In addition, modern user interfaces will be evaluated for use in road maintenance services.
Featured research (5)
The visualization of virtual objects in the real environment is often done with a simplified representation: simple surfaces and no reference to the surrounding environment. The seamless fusion of the virtual and real environment is, however, an essential factor in many areas, and it is of particular importance when calculating lighting in mixed realities on mobile devices. Current approaches focus on approximations that allow the calculation of diffuse lighting, while the rendering of glossy reflection properties is often neglected. The aim of this book is to enable the visualization of mirror-like reflective surfaces in mixed reality. To achieve this goal, various approaches are explored that enable high-quality visualization of virtual objects in real time, with a focus on common hardware such as cameras, the sensors in mobile devices, and, in some cases, depth sensors. Complete ambient lighting can be estimated, which enables detailed reflections. The results provide a novel way to embed complex and simple geometric shapes with glossy surfaces in the real world, offering a higher level of detail in the reflections without using additional hardware.
About the author
Tobias Schwandt's professional and personal focus at the TU Ilmenau is the area of Mixed Reality (MR). Within his dissertation, he was particularly concerned with the illumination of virtual content in AR, its influence on the real environment, the reconstruction of the environment light, and the manipulation of real geometry by virtual content.
Over the last decades, the interior of cars has been constantly changing. A promising, yet largely unexplored, modality is the large stereoscopic 3D (S3D) dashboard. Replacing the traditional car dashboard with a large display and applying binocular depth cues, such a user interface (UI) could open novel possibilities for research and industry. In this book, the author introduces a development environment for such a user interface. With it, he performed several driving simulator experiments and shows that S3D can be used across the dashboard to support menu navigation and to highlight elements without impairing driving performance. The author demonstrates that S3D has the potential to promote safe driving when used in combination with virtual agents during conditionally automated driving. Further, he presents results indicating that S3D navigational cues improve take-over maneuvers in conditionally automated vehicles. Finally, investigating the domain of highly automated driving, he studied how users would interact with and manipulate S3D content on such dashboards and presents a user-defined gesture set.
About the author
Florian Weidner received the master's degree (M.Sc.) in media computer science from TU Dresden in 2015 and the PhD degree from the Ilmenau University of Technology in 2021.
Environment textures are used for the illumination of virtual objects within a virtual scene, and they are crucial for high-quality lighting and reflections. In an augmented reality context, the lighting is essential to seamlessly embed a virtual object within the real-world scene. To ensure this, the lighting of the environment has to be captured in accordance with the current light conditions. In this paper, we present a novel approach that stitches the current camera information onto a cube map. This cube map is enhanced in every single frame and is fed into a neural network to estimate the missing parts. Finally, the output of the neural network and the currently stitched information are fused to make even mirror-like reflections possible on mobile devices. We provide an image-stream stitching approach combined with a neural network to create plausible, high-quality environment textures that may be used for image-based lighting within mixed reality environments.
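The per-frame loop described in the abstract (stitch the camera frame into a cube map, estimate the still-missing regions, then fuse observation and estimate) can be sketched for a single cube-map face as follows. This is an illustrative sketch only: the array shapes, the function names, and the constant-color stand-in for the neural network prediction are assumptions, not the authors' implementation.

```python
import numpy as np

FACE = 8  # cube-map face resolution (small, for illustration)

def stitch(face_tex, observed, patch, patch_mask):
    """Stitch the current camera frame into one cube-map face.
    Texels covered by the frame overwrite the stored texture and
    are marked as observed."""
    face_tex[patch_mask] = patch[patch_mask]
    observed |= patch_mask
    return face_tex, observed

def fuse(face_tex, observed, predicted):
    """Fuse stitched data with a prediction for unobserved texels.
    In the paper a neural network estimates the missing parts; here
    a precomputed array stands in for that prediction."""
    return np.where(observed[..., None], face_tex, predicted)

# One face of the cube map (RGB) and its per-texel observation mask.
tex = np.zeros((FACE, FACE, 3))
seen = np.zeros((FACE, FACE), dtype=bool)

# A synthetic camera frame covering the upper half of the face.
patch = np.full((FACE, FACE, 3), 0.8)
mask = np.zeros((FACE, FACE), dtype=bool)
mask[: FACE // 2] = True

tex, seen = stitch(tex, seen, patch, mask)
pred = np.full((FACE, FACE, 3), 0.5)  # stand-in for the network output
env = fuse(tex, seen, pred)  # observed texels keep camera data
```

Repeating `stitch` over an incoming image stream gradually fills the cube map with real observations, so the learned estimate is only used where no camera data exists yet, which matches the fusion idea sketched in the abstract.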
In recent years, stereoscopic 3D (S3D) displays have shown promising results for user experience, navigation, and critical warnings when applied in cars. However, previous studies have only investigated these displays in non-interactive use cases; so far, interacting with stereoscopic 3D content in cars has not been studied. Hence, we investigated how people interact with large S3D dashboards in automated vehicles (SAE level 4). In a user-elicitation study (N=23), we asked participants to propose interaction techniques for 24 referents while sitting in a driving simulator. Based on video recordings and motion-tracking data of 1104 proposed interactions containing gestures and other input modalities, we grouped the gestures per task. Overall, we report a chance-corrected agreement rate of k = 0.232 and, by that, a medium agreement among participants. Based on the agreement rates, we defined two gesture sets: a basic and a holistic version. Our results show that participants intuitively interact with S3D dashboards and that they prefer mid-air gestures that either directly manipulate the virtual object or operate on a proxy object. We further compare our results with similar results in different settings and provide insights into the factors that have shaped our gesture set.
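The abstract reports a chance-corrected agreement rate without naming the exact coefficient. As an illustrative sketch of how such a measure is computed, the following uses Fleiss' kappa, a standard chance-corrected agreement statistic for multiple raters; the function name and the toy counts are assumptions, not data from the study.

```python
import numpy as np

def fleiss_kappa(counts):
    """Chance-corrected agreement for category counts per item.

    counts: (n_items, n_categories) array, where counts[i, j] is the
    number of participants who proposed gesture category j for
    referent i (same number of participants per referent assumed).
    Returns Fleiss' kappa in [-1, 1]."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]  # raters per item
    # Observed agreement: pairs of raters agreeing, per item.
    p_item = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar = p_item.mean()
    # Agreement expected by chance from the category marginals.
    p_cat = counts.sum(axis=0) / counts.sum()
    p_e = (p_cat ** 2).sum()
    return (p_bar - p_e) / (1 - p_e)

# Toy example: 3 referents, 5 participants, 2 gesture categories.
k = fleiss_kappa([[5, 0], [4, 1], [0, 5]])  # ≈ 0.72
```

A value near 0 means agreement no better than chance, while values around 0.2–0.4 (as reported in the paper) indicate a modest but real consensus beyond chance.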