Conference Paper

Eye Gesture in a Mixed Reality Environment

... Concerning XRS spaces, an additional criterion comes into play: the interaction technique should scale between HMDs and HHDs, i.e., the technique should remain intuitive to the users even when they switch devices. However, previous research has mostly focused individually on the various interaction modalities offered by XR technologies, including in-air [45][46][47][48][49][50][51][52][53][54], touch-based [55][56][57][58][59], tangible [60][61][62], head-, gaze-, or speech-based [45,55,58,[63][64][65], and multimodal [55,58,64] input techniques to select and manipulate virtual objects as well as to navigate in space. Many of these approaches require tracking parts of the users' bodies or external interaction devices. ...
... However, the respective sensors may have to be reset to maintain accuracy and correct drift errors [66]. Further approaches (e.g., [63][64][65]) used head-mounted eye trackers to provide input via fixations or eye movements. A detailed overview of approaches for eye tracking and head movement detection is provided by [67]. ...
... Nukarinen et al. [63] evaluated two different gaze-based selection techniques where the object to be selected was focused via gaze and the selection was confirmed by either pressing a button or by keeping the gaze fixed on the object for a certain period of time. While this approach is limited to a single type of gaze input, Hassoumi and Hurter [65] presented an approach for the gaze-based input of numerical codes (i.e., each gaze gesture was mapped to a specific digit). To do so, the virtual numeric keypad was augmented with small dots that continuously traced the shape of the available digits such that digits could be entered by keeping the eyes fixed on the respective moving dot. ...
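
To make the dwell-based confirmation concrete, the following is a minimal Python sketch, not the implementation evaluated in [63]: it assumes a hypothetical get_focused_object() callback that returns the identifier of the object currently hit by the gaze ray, and an assumed dwell threshold of 0.8 s.

import time

DWELL_THRESHOLD_S = 0.8  # assumed dwell time; the cited study does not prescribe this value

def dwell_select(get_focused_object, on_select, poll_interval_s=0.02):
    # Confirm a selection by keeping the gaze on the same object for DWELL_THRESHOLD_S seconds.
    # get_focused_object: hypothetical callable returning the id of the gazed-at object, or None.
    # on_select: callback invoked once the dwell time is reached.
    current, dwell_start = None, None
    while True:
        obj = get_focused_object()
        if obj != current:
            # Gaze moved to a different object (or away): restart the dwell timer.
            current, dwell_start = obj, time.monotonic()
        elif obj is not None and time.monotonic() - dwell_start >= DWELL_THRESHOLD_S:
            on_select(obj)  # dwell completed: trigger the selection once
            current, dwell_start = None, None
        time.sleep(poll_interval_s)

The button-press variant described in [63] would replace the timer check with a query of the controller or keyboard button state while an object is focused.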
Article
Full-text available
Extensive research has outlined the potential of augmented, mixed, and virtual reality applications. However, little attention has been paid to scalability enhancements fostering practical adoption. In this paper, we introduce the concept of scalable extended reality (XRS), i.e., spaces scaling between different displays and degrees of virtuality that can be entered by multiple, possibly distributed users. The development of such XRS spaces concerns several research fields. To provide bidirectional interaction and maintain consistency with the real environment, virtual reconstructions of physical scenes need to be segmented semantically and adapted dynamically. Moreover, scalable interaction techniques for selection, manipulation, and navigation as well as a world-stabilized rendering of 2D annotations in 3D space are needed to let users intuitively switch between handheld and head-mounted displays. Collaborative settings should further integrate access control and awareness cues indicating the collaborators’ locations and actions. While many of these topics were investigated by previous research, very few have considered their integration to enhance scalability. Addressing this gap, we review related previous research, list current barriers to the development of XRS spaces, and highlight dependencies between them.
... form, speed, phase) to trigger actions when the correlation between the user's eye movement and the path of the presented stimuli exceeds a certain threshold. These targets can be represented by virtual moving objects on a computer screen, (tracked) real-world objects [27], or moving objects in a virtual or augmented reality environment [7,9,16]. Studies have shown that this uncalibrated interaction approach works well for different audiences, from infants to the elderly, and for people with normal or corrected-to-normal vision [1,4,8,21]. ...
... There may also be several different moving objects visible at the same time, all of which need to be distinguishable so that different targets or actions can be selected. Different algorithms and approaches have been developed and evaluated, but most implementations use either the Euclidean distance or Pearson's product-moment correlation [4,7,9,16,21,27,29]. There are also alternative approaches, such as slope-based correlation, which are more robust in some situations but are used in only a minority of implementations [3]. ...
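
As a rough illustration of this correlation-based matching, the sketch below (in Python, under assumed parameters) scores each moving target by the Pearson product-moment correlation between its trajectory and a window of recent gaze samples, averaged over the x and y axes; the 0.8 threshold, the window layout, and the per-axis averaging are illustrative assumptions rather than details taken from the cited implementations.

import numpy as np

CORR_THRESHOLD = 0.8  # assumed activation threshold; values in the literature vary

def match_pursuit_target(gaze_xy, target_trajectories):
    # gaze_xy: array of shape (N, 2) holding the last N gaze samples (x, y).
    # target_trajectories: list of arrays of shape (N, 2), one per moving target,
    # sampled at the same timestamps as the gaze data.
    # Returns the index of the best-matching target, or None if no score
    # exceeds the threshold.
    best_idx, best_score = None, CORR_THRESHOLD
    for idx, traj in enumerate(target_trajectories):
        # Pearson correlation computed separately per axis, then averaged.
        r_x = np.corrcoef(gaze_xy[:, 0], traj[:, 0])[0, 1]
        r_y = np.corrcoef(gaze_xy[:, 1], traj[:, 1])[0, 1]
        score = (r_x + r_y) / 2.0
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

A slope-based correlation, as mentioned above, could replace the np.corrcoef calls in situations where Pearson's correlation is not robust, e.g., for near-constant trajectory segments.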
Conference Paper
Full-text available
In this position paper, we encourage the use of novel 3D gaze tracking possibilities in the field of gaze-based interaction. Smooth pursuit offers great benefits over other gaze interaction approaches, like the ability to work with uncalibrated eye trackers, but also has disadvantages, such as the visual clutter it produces in more complex user interfaces. We examine the basic concept of smooth pursuits, its hardware and algorithmic requirements, and how it can be applied to real-world problems. Then we evaluate how the recent change in availability of 3D eye tracking hardware can be used to approach the challenges of 2D smooth pursuit interaction. We take a look at different research opportunities, show concrete ideas, and discuss why they are relevant for future research.
... Hassoumi et al. [33] found that eye movements provided interaction with high accuracy in mixed reality environments. Eye-tracking technology was therefore introduced in MR as a new method to help people with ALS or other motor disorders interact with computing devices. ...
Article
Full-text available
Aim: Summarize the application of mixed reality technology in the field of art design and analyse its shortcomings and trends. Method: Using the Web of Science as the main source of information, explain the basic concept and development of MR and the state of the art in product design, display design, and interactive design, and discuss the pros and cons of mixed reality technology in human-computer interaction, hardware, and user experience. Result: Mixed reality technology is widely used in art and design, especially in product design, display design, and interactive design. The future trend of mixed reality technology is summarized by combining the current status of the market and industry. Conclusion: The application of mixed reality technology in the field of art design is already drawing more and more attention from industry, but it still lacks sufficient attention from the academic community. This paper systematically introduces the state-of-the-art applications and trends of MR in the field of art design, which helps readers gain insight into this fascinating research area.