Virtual Worlds and Digital Games


About the lab

The Virtual Worlds and Digital Games Group is concerned with technologies, systems, and applications of Mediated Reality (including but not limited to Virtual and Augmented Reality (VR/AR), Mixed Reality (MR), and Diminished Reality (DR)), as well as with individual manifestations of digital games (pervasive games, mobile MR games, Alternate Reality Games (ARGs), etc.).

Featured research (8)

Integrating taste into AR/VR applications has various promising use cases, from social eating to the treatment of disorders. Despite many successful AR/VR applications that alter the taste of beverages and food, the relationship between olfaction, gustation, and vision during the process of multisensory integration (MSI) has not yet been fully explored. Thus, we present the results of a study in which participants were confronted with congruent and incongruent visual and olfactory stimuli while eating a tasteless food product in VR. We were interested in (1) whether participants integrate bi-modal congruent stimuli and (2) whether vision guides MSI during congruent/incongruent conditions. Our results yield three main findings: First, and surprisingly, participants were not always able to detect congruent visual-olfactory stimuli when eating a portion of tasteless food. Second, when confronted with tri-modal incongruent cues, a majority of participants did not rely on any of the presented cues when forced to identify what they were eating; this includes vision, which has previously been shown to dominate MSI. Third, although research has shown that basic taste qualities like sweetness, saltiness, or sourness can be influenced by congruent cues, doing so with more complex flavors (e.g., zucchini or carrot) proved harder to achieve. We discuss our results in the context of multimodal integration and within the domain of multisensory AR/VR. Our results are a necessary building block for future human-food interaction in XR that relies on smell, taste, and vision, and are foundational for applied scenarios such as affective AR/VR.
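As a rough illustration of the factorial design described in this abstract, the sketch below crosses visual and olfactory cues over a tasteless base food; a trial counts as congruent when both cues signal the same flavor. This is a minimal Python sketch; the flavor names and field labels are assumptions for illustration, not the study's actual stimulus set.

    # Hypothetical sketch of a congruent/incongruent trial design; flavor
    # names and labels are illustrative assumptions, not the study's stimuli.
    import itertools
    import random

    FLAVORS = ["zucchini", "carrot"]  # complex flavors mentioned in the abstract

    def build_trials():
        """Cross visual and olfactory cues; a trial is congruent when both match."""
        trials = []
        for visual, smell in itertools.product(FLAVORS, repeat=2):
            trials.append({
                "visual": visual,
                "olfactory": smell,
                "food": "tasteless base",   # the edible product carries no flavor
                "congruent": visual == smell,
            })
        random.shuffle(trials)
        return trials

    if __name__ == "__main__":
        for trial in build_trials():
            print(trial)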
Augmented Reality (AR) and Virtual Reality (VR) are moving from the lab toward consumers, especially through social applications. These applications require visual representations of humans and intelligent entities. However, displaying and animating photo-realistic models comes at a high technical cost, while low-fidelity representations may evoke eeriness and degrade the overall experience. It is therefore important to carefully select what kind of avatar to display. This article investigates the effects of rendering style and visible body parts in AR and VR through a systematic literature review. We analyzed 72 papers that compare various avatar representations. Our analysis includes an outline of the research published between 2015 and 2022 on avatars and agents in AR and VR displayed using head-mounted displays, covering aspects like visible body parts (e.g., hands only, hands and head, full-body) and rendering style (e.g., abstract, cartoon, realistic); an overview of collected objective and subjective measures (e.g., task performance, presence, user experience, body ownership); and a classification of the tasks where avatars and agents were used into task domains (physical activity, hand interaction, communication, game-like scenarios, and education/training). We discuss and synthesize our results within the context of today's AR and VR ecosystem, provide guidelines for practitioners, and finally identify and present promising research opportunities to encourage future research on avatars and agents in AR/VR environments.
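To make the review's coding dimensions concrete, here is a minimal sketch of how a single paper could be classified along the axes named above (visible body parts, rendering style, task domain). The enum values mirror the categories from the abstract; the class and field names are assumptions for illustration, not the authors' actual coding scheme.

    # Illustrative classification record for one reviewed paper; names are
    # hypothetical, only the category values come from the abstract above.
    from dataclasses import dataclass
    from enum import Enum

    class BodyParts(Enum):
        HANDS_ONLY = "hands only"
        HANDS_AND_HEAD = "hands and head"
        FULL_BODY = "full-body"

    class RenderingStyle(Enum):
        ABSTRACT = "abstract"
        CARTOON = "cartoon"
        REALISTIC = "realistic"

    class TaskDomain(Enum):
        PHYSICAL_ACTIVITY = "physical activity"
        HAND_INTERACTION = "hand interaction"
        COMMUNICATION = "communication"
        GAME_LIKE = "game-like scenarios"
        EDUCATION_TRAINING = "education/training"

    @dataclass
    class ReviewedPaper:
        title: str
        year: int                  # the review covers 2015-2022
        body_parts: BodyParts
        style: RenderingStyle
        domain: TaskDomain
        measures: list[str]        # e.g. ["presence", "body ownership"]

    paper = ReviewedPaper("Example study", 2019, BodyParts.FULL_BODY,
                          RenderingStyle.CARTOON, TaskDomain.COMMUNICATION,
                          ["task performance", "presence"])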
Visualization of virtual objects in the real environment is often done with simplified representations: simple surfaces and no reference to the surrounding environment. The seamless fusion of the virtual and the real environment is, however, an essential factor in many areas, and is of particular importance when computing lighting in mixed realities on mobile devices. Current approaches focus on approximations that allow the computation of diffuse lighting, while the rendering of glossy reflection properties is often neglected. The aim of this book is to enable the visualization of mirror-like reflective surfaces in mixed reality. To achieve this goal, various approaches are explored that enable high-quality visualization of virtual objects in real time, focusing on common hardware such as cameras, the sensors in mobile devices, and, in part, depth sensors. Complete ambient lighting can be estimated, which enables detailed reflections. The results provide a novel way to embed complex and simple geometric shapes with glossy surfaces in the real world, offering a higher level of detail in the reflections without additional hardware.

About the author: Tobias Schwandt's professional and personal focus at the TU Ilmenau is the area of Mixed Reality (MR). In his dissertation, he was particularly concerned with the illumination of virtual content in AR, its influence on the real environment, the reconstruction of the environment light, and the manipulation of real geometry by virtual content.
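The core operation behind such mirror-like reflections can be sketched briefly: reflect the view vector at the surface normal and look up the incoming radiance in an estimated equirectangular environment map. The Python sketch below shows this under simple assumptions; function and variable names are illustrative and not taken from the book.

    # Minimal sketch: mirror reflection against an estimated light probe.
    # All names are hypothetical; only the overall idea follows the abstract.
    import numpy as np

    def reflect(view, normal):
        """Mirror reflection R = 2(N.V)N - V for unit vectors."""
        view = view / np.linalg.norm(view)
        normal = normal / np.linalg.norm(normal)
        return 2.0 * np.dot(normal, view) * normal - view

    def sample_equirect(env_map, direction):
        """Map a 3D direction to (u, v) in an equirectangular environment map."""
        x, y, z = direction
        u = (np.arctan2(x, -z) / (2.0 * np.pi)) + 0.5   # longitude -> [0, 1]
        v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi    # latitude  -> [0, 1]
        h, w, _ = env_map.shape
        return env_map[int(v * (h - 1)), int(u * (w - 1))]

    env = np.random.rand(256, 512, 3)   # stand-in for an estimated light probe
    color = sample_equirect(env, reflect(np.array([0.0, 0.0, 1.0]),
                                         np.array([0.0, 1.0, 0.0])))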
Over the last decades, the interior of cars has been changing constantly. A promising yet unexplored modality is the large stereoscopic 3D (S3D) dashboard. Replacing the traditional car dashboard with a large display and applying binocular depth cues, such a user interface (UI) could open up novel possibilities for research and industry. In this book, the author introduces a development environment for such a user interface. With it, he performed several driving-simulator experiments and shows that S3D can be used across the dashboard to support menu navigation and to highlight elements without impairing driving performance. The author demonstrates that S3D has the potential to promote safe driving when used in combination with virtual agents during conditional automated driving. Further, he presents results indicating that S3D navigational cues improve take-over maneuvers in conditionally automated vehicles. Finally, investigating the domain of highly automated driving, he studied how users would interact with and manipulate S3D content on such dashboards and presents a user-defined gesture set.

About the author: Florian Weidner received his master's degree (M.Sc.) in media computer science from TU Dresden in 2015 and his PhD from Ilmenau University of Technology in 2021.
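For readers unfamiliar with how binocular depth cues are produced on such a display, the sketch below offsets the virtual camera by half the interpupillary distance per eye and computes the resulting on-screen disparity for a point behind the screen plane. All numbers and names are illustrative assumptions, not values from the book.

    # Minimal sketch of stereoscopic rendering geometry; IPD and positions
    # are illustrative assumptions, not parameters from the book.
    import numpy as np

    IPD = 0.063  # average interpupillary distance in meters (assumption)

    def eye_positions(head_pos, right_dir):
        """Offset the head position along the camera's right axis per eye."""
        right_dir = right_dir / np.linalg.norm(right_dir)
        left = head_pos - 0.5 * IPD * right_dir
        right = head_pos + 0.5 * IPD * right_dir
        return left, right

    def screen_disparity(depth, screen_dist):
        """On-screen disparity of a point at `depth` seen through a screen at
        `screen_dist` (positive: uncrossed, point appears behind the screen)."""
        return IPD * (depth - screen_dist) / depth

    left, right = eye_positions(np.array([0.0, 1.2, 0.0]),
                                np.array([1.0, 0.0, 0.0]))
    print(screen_disparity(depth=2.0, screen_dist=0.8))  # ~0.038 m disparity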

Lab head

Wolfgang Broll

Members (7)

Tobias Schwandt
  • Technische Universität Ilmenau
Christian Kunert
  • Technische Universität Ilmenau
Elhassan Makled
  • Technische Universität Ilmenau
Kathrin Knutzen
  • Technische Universität Ilmenau
Luís Fernando De Souza Cardoso
  • Technische Universität Ilmenau
Gunjan Kumari
  • Technische Universität Ilmenau
Christoph Gerhardt
  • Technische Universität Ilmenau

Alumni (4)

Jan Herling
Philipp Lensing
Chao-Yang Fu
  • Technische Universität Ilmenau
Rajat Sharma
  • Technische Universität Ilmenau