About
30 Publications · 4,925 Reads
634 Citations
Publications (30)
Generative AI in Virtual Reality enables users to create detailed immersive worlds with a rich variety. However, current worldbuilding systems often lack an understanding of the fundamental aspects of human-AI co-creation, resulting in a disconnect between user intent and AI-generated content. This paper investigates the co-creative process between users...
Systematic reviews (SRs) are vital to gathering and structuring knowledge, yet descriptions of their procedures are often inadequate. In human-computer interaction (HCI), SRs are still uncommon but gaining momentum, which prompted us to explore how SRs are reported at CHI—the flagship HCI conference venue. To assess the reporting quality of CHI rev...
Tile-based locomotion (TBL) is a popular locomotion technique for computer, console, and board games. However, despite its simplicity and unconventional movement, the transfer of TBL to virtual reality (VR) as a game platform remains unexplored. To fill this gap, we introduce TBL for VR using two example techniques: a controller and a feet-base...
Digital eye strain (DES), caused by prolonged exposure to digital screens, stresses the visual system and negatively affects users’ well-being and productivity. While DES is well-studied in computer displays, its impact on users of virtual reality (VR) head-mounted displays (HMDs) is largely unexplored, even though some of their key properties (e.g...
Interacting with a group of people requires directing the attention of the whole group, which in turn requires feedback about the crowd’s attention. In face-to-face interactions, head and eye movements serve as indicators of crowd attention. However, when interacting online, such indicators are not available. To substitute this information, gaze visualizati...
Webcam-based eye tracking promises easy and quick data collection without the need for specific or additional eye-tracking hardware. This makes it especially attractive for educational research, in particular for modern formats such as MOOCs. However, in order to fulfill its promises, webcam-based eye tracking has to overcome several challenges, m...
With innovations in the field of gaze and eye tracking, a new concentration of research in the area of gaze-tracked systems and user interfaces has formed in the field of Extended Reality (XR). Eye trackers are being used to explore novel forms of spatial human–computer interaction, to understand human attention and behavior, and to test expectatio...
Pupil dilation may reveal the outcomes of binary decisions: pupils dilate more strongly for stimuli that the beholder deems targets than for distractors. Respective findings are based on average pupil dynamics, aggregated over multiple trials and participants, rather than on single trials. Further, the reported differences between targets...
Although our pupils slightly dilate when we look at an intended target, they do not when we look at irrelevant distractors. This finding suggests that it may be possible to decode the intention of an observer, understood as the outcome of implicit covert binary decisions, from the pupillary dynamics over time. However, few previous works have inves...
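As a rough illustration of the single-trial decoding idea described in the two abstracts above, the following sketch classifies a trial as a "target" fixation when its baseline-corrected mean pupil dilation exceeds a threshold learned from labeled trials. This is not the papers' method; the function names, the baseline window length, and the simple midpoint threshold are assumptions for illustration only.

```python
import numpy as np

def baseline_corrected_dilation(trace, baseline_samples=50):
    """Mean pupil dilation after subtracting the pre-stimulus baseline."""
    trace = np.asarray(trace, dtype=float)
    baseline = trace[:baseline_samples].mean()
    return trace[baseline_samples:].mean() - baseline

def fit_threshold(target_traces, distractor_traces, baseline_samples=50):
    """Midpoint between the mean dilation of labeled target and distractor trials."""
    t = np.mean([baseline_corrected_dilation(x, baseline_samples) for x in target_traces])
    d = np.mean([baseline_corrected_dilation(x, baseline_samples) for x in distractor_traces])
    return (t + d) / 2.0

def decode_single_trial(trace, threshold, baseline_samples=50):
    """Classify a single trial as a 'target' fixation if its dilation exceeds the threshold."""
    return baseline_corrected_dilation(trace, baseline_samples) > threshold
```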
Augmented and virtual reality (AR/VR) has entered the mass market and, with it, eye tracking will soon become a core technology for next-generation head-mounted displays (HMDs). In contrast to existing gaze interfaces, the 3D nature of AR and VR requires estimating a user's gaze in 3D. While first applications, such as foveated rendering, hint at the c...
Eye tracking is expected to become an integral part of future augmented reality (AR) head-mounted displays (HMDs) given that it can easily be integrated into existing hardware and provides a versatile interaction modality. To augment objects in the real world, AR HMDs require a three-dimensional understanding of the scene, which is currently solved...
In this demonstration we introduce VRSpinning, a seated locomotion approach based around stimulating the user's vestibular system using a rotational impulse to induce the perception of linear self-motion. Currently, most approaches for locomotion in VR use either concepts like teleportation for traveling longer distances or present a virtual motion...
Redirected walking (RDW) allows virtual reality (VR) users to walk infinitely while staying inside a finite physical space through subtle shifts (gains) of the scene that redirect them back inside the volume. All prior approaches measure the feasibility of RDW techniques based on whether the user perceives the manipulation, leading to rather small applicable g...
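As a minimal sketch of the gain mechanism mentioned in the abstract above (not the paper's implementation; the gain value and the per-frame update are assumptions), a rotation gain simply scales the user's physical head rotation before it is applied to the virtual camera, so the virtual turn is slightly larger or smaller than the real one:

```python
def apply_rotation_gain(physical_yaw_delta_deg, gain=1.1):
    """Scale the user's physical yaw change before applying it to the virtual camera.

    A gain above 1 makes the virtual scene turn slightly faster than the head,
    which can be used to steer the user back toward the center of the tracked space.
    """
    return physical_yaw_delta_deg * gain

# Example: a 10-degree physical head turn is rendered as an 11-degree virtual turn.
virtual_yaw_delta = apply_rotation_gain(10.0, gain=1.1)
```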
Current approaches for locomotion in virtual reality are either creating a visual-vestibular conflict, which is assumed to cause simulator sickness, or use metaphors such as teleportation to travel longer distances, lacking the perception of self motion. We propose VRSpinning, a seated locomotion approach based around stimulating the user's vestibu...
Mobile virtual reality (VR) head-mounted displays (HMDs) are steadily becoming part of people's everyday life. Most current interaction approaches rely either on additional hardware (e.g. Daydream Controller) or offer only a limited interaction concept (e.g. Google Cardboard). We explore a solution where a conventi...