Figure 1
The setup. The Dikablis gaze tracker with haptic actuators at the ends of the temple bars.

Source publication
Conference Paper
The best way to construct user interfaces for smart glasses is not yet known. We investigated the use of eye tracking in this context in two experiments. Eye and head movements were combined so that one can select an object to interact with by looking at it and then change a setting in that object by turning the head horizontally. We compared three...
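
As a rough illustration of the interaction described in the abstract, the sketch below combines gaze dwell for selecting an object with horizontal head rotation (yaw) for adjusting its value. The class name, dwell time, dead zone, and degrees-per-step mapping are illustrative assumptions, not parameters reported by the authors.

DWELL_TIME_S = 0.5        # assumed dwell threshold for gaze selection
YAW_DEADZONE_DEG = 5.0    # assumed neutral zone before adjustment starts
DEG_PER_STEP = 10.0       # assumed head yaw needed per value step

class GazeHeadAdjuster:
    """Hypothetical gaze-select / head-turn-adjust controller (sketch only)."""
    def __init__(self):
        self.selected = None      # id of the object being adjusted
        self.dwell_target = None  # object the gaze is currently dwelling on
        self.dwell_start = None
        self.reference_yaw = 0.0
        self.base_value = 0

    def update(self, t, gazed_object, head_yaw_deg, values):
        """t: time in seconds; gazed_object: id under the gaze point or None;
        values: dict mapping object ids to integer settings."""
        if self.selected is None:
            # Dwelling on the same object long enough selects it.
            if gazed_object != self.dwell_target:
                self.dwell_target = gazed_object
                self.dwell_start = t
            elif (gazed_object is not None
                  and t - self.dwell_start >= DWELL_TIME_S):
                self.selected = gazed_object
                self.reference_yaw = head_yaw_deg
                self.base_value = values[gazed_object]
        else:
            # Turning the head past the dead zone steps the selected value.
            delta = head_yaw_deg - self.reference_yaw
            if abs(delta) > YAW_DEADZONE_DEG:
                steps = int((abs(delta) - YAW_DEADZONE_DEG) // DEG_PER_STEP) + 1
                direction = 1 if delta > 0 else -1
                values[self.selected] = self.base_value + direction * steps
        return self.selected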

Contexts in source publication

Context 1
... beeping sound indicated that the forward-looking camera could not see the AR tag (see Figure 1). This situation could be corrected by turning the head back towards the monitor. ...
Context 2
... used a Dikablis binocular eye tracker from Ergoneers. The experiment was conducted while the participants were seated in front of a 19" display (4:3, 1280x1024; see Figure 1). The gaze tracker projected the gaze vector onto planes that were defined by AR tags in the environment. ...
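
Context 2 mentions projecting the gaze vector onto planes defined by AR tags. The snippet below is a generic ray-plane intersection in that spirit; the function and variable names and the example geometry are assumptions, not the Dikablis software's actual computation.

import numpy as np

def gaze_on_plane(eye_pos, gaze_dir, plane_point, plane_normal):
    """Return the intersection of the gaze ray with the plane, or None if
    the ray is (nearly) parallel to the plane or points away from it."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-6:
        return None
    t = np.dot(plane_normal, plane_point - eye_pos) / denom
    if t < 0:
        return None  # plane is behind the viewer
    return eye_pos + t * gaze_dir

# Example: a screen plane 60 cm in front of the eye, facing the viewer.
point = gaze_on_plane(np.array([0.0, 0.0, 0.0]),   # eye position
                      np.array([0.1, 0.0, 1.0]),   # gaze direction
                      np.array([0.0, 0.0, 0.6]),   # point on the plane
                      np.array([0.0, 0.0, -1.0]))  # plane normal
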
Context 3
... stimulation was given using Minebea Linear Vibration Motors (LVM8, Matsushita Electric Industrial Co., Japan). The haptic actuators were connected to a Gigaport HD USB sound card and attached to the ends of the Dikablis gaze tracker temple bars (see sub-image in Figure 1). Because the actuators were close to the ears, some participants could hear the sound produced by the actuators. ...
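
Since the actuators in Context 3 were driven through a USB sound card, a vibration pulse amounts to playing a short audio burst on one output channel. The sketch below assumes the Python sounddevice library and an arbitrary frequency and pulse length; none of these details come from the paper.

import numpy as np
import sounddevice as sd  # assumed library choice (pip install sounddevice)

SAMPLE_RATE = 44100
BURST_FREQ_HZ = 150      # assumed drive frequency for the vibration motor
BURST_LEN_S = 0.05       # assumed 50 ms haptic pulse
N_CHANNELS = 2           # left/right temple-bar actuators

def haptic_burst(channel):
    """Play a short sine burst on one output channel (0 = left, 1 = right)."""
    t = np.arange(int(SAMPLE_RATE * BURST_LEN_S)) / SAMPLE_RATE
    burst = 0.8 * np.sin(2 * np.pi * BURST_FREQ_HZ * t)
    frames = np.zeros((len(t), N_CHANNELS), dtype=np.float32)
    frames[:, channel] = burst
    sd.play(frames, SAMPLE_RATE, blocking=True)

haptic_burst(0)  # pulse the actuator on the left temple bar
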
Context 4
... software displayed a tag and two boxes on a light-grey screen (see Figure 1). The tag (128x128 pixels) served the gaze tracker only and had no function for the user. ...
Context 5
... The starting number was between 2 and 38, excluding 20 (the target number) and all odd numbers except the two nearest to the target number (19 and 21). Therefore, participants still had 20 starting numbers, but the number adjustment range was extended from 10 to 18. ...
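
The starting-number rule in Context 5 can be checked with a few lines of code (variable names are hypothetical):

target = 20
starts = [n for n in range(2, 39)
          if n != target and (n % 2 == 0 or n in (19, 21))]
assert len(starts) == 20                             # 20 possible starting numbers
assert max(abs(n - target) for n in starts) == 18    # adjustment range extended to 18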

Citations

... Discrete operations are also promising application areas for head gestures. HeadTurn [26,44] allows users to change numeric values by rotating their head left or right, HeadPager [33] enables users to turn pages by leaning their head toward the left or right area, and HeadNod [25] allows users to quickly answer yes or no in a dialogue by nodding or shaking their head. ...
... While this is not necessarily optimal, it depicts a fixed extreme point in the possible design space from which other prototypes may be derived. The design itself was derived from earlier prototypes mapping audio from or to sight [46,47,115] and from interaction techniques relying on head roll or yaw [30,53,108,91]. Sources are targeted with head rotation and a sphere-cast along the users' head orientation. Alteration is mapped to the head's tilt or roll, with a knob-like metaphor. ...
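
As a loose illustration of the knob-like metaphor mentioned in this excerpt, the sketch below maps head roll to a gain value within a clamped angle range; the angle and gain limits are assumptions for illustration only.

MAX_ROLL_DEG = 45.0   # assumed roll angle at which the "knob" hits its end stop

def roll_to_gain(roll_deg, min_gain=0.0, max_gain=1.0):
    """Map head roll in [-MAX_ROLL_DEG, +MAX_ROLL_DEG] to a gain value."""
    clamped = max(-MAX_ROLL_DEG, min(MAX_ROLL_DEG, roll_deg))
    normalized = (clamped + MAX_ROLL_DEG) / (2 * MAX_ROLL_DEG)  # 0..1
    return min_gain + normalized * (max_gain - min_gain)

# roll_to_gain(0.0) == 0.5 (knob centred), roll_to_gain(45.0) == 1.0
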
Conference Paper
Many people utilize audio equipment to escape from noises around them, leading to the desired isolation but also dangerously reduced awareness. Mediation of sounds through smarter headphones (e.g., hearables) could address this by providing nonuniform interaction with sounds while retaining a comfortable, yet informative soundscape. In a week-long event sampling study (n = 12), we found that users mostly desire muting or a distinct "quiet-but-audible" volume for sound sources. A follow-up study (n = 12) compared a reduced interaction granularity with a continuous one in VR. Usability and workload did not differ significantly for the two granularities, but a set of four states can be considered sufficient for most scenarios, namely: "muted", "quieter", "louder" and "unchanged", allowing for smoother interaction flows. We provide implications for the design of interactive auditory mediated reality systems enabling users to be safe, comfortable and less isolated from their surroundings, while regaining agency over their sense of hearing.
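
For illustration only, the four states reported in this abstract could be represented as a small per-source state set; the gain values below are assumptions and are not taken from the study.

from enum import Enum

class SourceState(Enum):
    MUTED = 0
    QUIETER = 1
    UNCHANGED = 2
    LOUDER = 3

ASSUMED_GAINS = {
    SourceState.MUTED: 0.0,
    SourceState.QUIETER: 0.3,   # a "quiet-but-audible" level (assumed value)
    SourceState.UNCHANGED: 1.0,
    SourceState.LOUDER: 1.5,
}
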
... Non-verbal behaviors that induce head motion (e.g., head turns and nodding) are important in human-human and human-robot communication [8,18,24,27]. Head motion is a proxy for eye movement and attention [23], since eye and head movements are highly coordinated [19] and eye gaze is usually centered in the egocentric view [3]. Finally, head motion can signal a person's internal states and be used to predict influential statements in group discussions [18], for example. ...
Conference Paper
The recent availability of lightweight, wearable cameras allows for collecting video data from a "first-person" perspective, capturing the visual world of the wearer in everyday interactive contexts. In this paper, we investigate how to exploit egocentric vision to infer multimodal behaviors from people wearing head-mounted cameras. More specifically, we estimate head (camera) motion from egocentric video, which can be further used to infer non-verbal behaviors such as head turns and nodding in multimodal interactions. We propose several approaches based on Convolutional Neural Networks (CNNs) that combine raw images and optical flow fields to learn to distinguish regions with optical flow caused by global ego-motion from those caused by other motion in a scene. Our results suggest that CNNs do not directly learn useful visual features with end-to-end training from raw images alone; instead, a better approach is to first extract optical flow explicitly and then train CNNs to integrate optical flow and visual information.
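
The abstract's conclusion, that optical flow should be extracted explicitly and then combined with visual information, suggests a two-stream model. The sketch below is a hedged PyTorch illustration of that general idea; the layer sizes, fusion by concatenation, and the 3-DoF output are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class TwoStreamEgoMotion(nn.Module):
    """One branch for the RGB frame, one for the optical-flow field,
    fused to regress head (camera) motion (sketch only)."""
    def __init__(self):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, kernel_size=5, stride=2, padding=2),
                nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 32, 1, 1)
                nn.Flatten(),
            )
        self.rgb_branch = branch(3)    # raw image: 3 channels
        self.flow_branch = branch(2)   # optical flow: dx, dy channels
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),          # e.g. yaw, pitch, roll rates (assumed)
        )

    def forward(self, rgb, flow):
        fused = torch.cat([self.rgb_branch(rgb), self.flow_branch(flow)], dim=1)
        return self.head(fused)

model = TwoStreamEgoMotion()
motion = model(torch.randn(1, 3, 128, 128), torch.randn(1, 2, 128, 128))
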
Conference Paper
Head-based interactions are very handy for virtual reality (VR) head-worn display (HWD) systems. A useful head-based interaction technique could help users to interact with VR environments in a hands-free manner (i.e., without the need for a hand-held device). Moreover, it can sometimes be seamlessly integrated with other input modalities to provide richer interaction possibilities. This paper explores the potential of a new approach that we call DepthMove to allow interactions that are based on head motions along the depth dimension. With DepthMove, a user can interact with a VR system proactively by moving the head perpendicular to the VR HWD, forward or backward. We use two user studies to investigate, model, and optimize DepthMove by taking into consideration user performance, subjective response, and social acceptability. The results allow us to determine the optimal and comfortable DepthMove range. We also distill recommendations that can be used to guide the design of interfaces that use DepthMove for efficient and accurate interaction in VR HWD systems. A third study is conducted to demonstrate the usefulness of DepthMove relative to other techniques in four application scenarios.
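
A minimal way to express the DepthMove idea is to project the head's displacement onto its forward (view) axis and trigger when it exceeds a threshold. The threshold and function names below are assumed placeholders; the paper determines comfortable ranges empirically.

import numpy as np

DEPTH_THRESHOLD_M = 0.10  # assumed 10 cm forward/backward excursion

def depth_move(rest_pos, current_pos, forward_dir):
    """Return 'forward', 'backward' or None based on head motion along
    the view axis, relative to a calibrated resting position."""
    forward_dir = forward_dir / np.linalg.norm(forward_dir)
    depth = np.dot(current_pos - rest_pos, forward_dir)
    if depth > DEPTH_THRESHOLD_M:
        return "forward"
    if depth < -DEPTH_THRESHOLD_M:
        return "backward"
    return None

# Example: head moved 12 cm along the view axis from rest -> "forward"
print(depth_move(np.zeros(3), np.array([0.0, 0.0, 0.12]), np.array([0.0, 0.0, 1.0])))
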
Conference Paper
In this systematic literature review, we study the role of user logging in virtual reality research. By categorizing literature according to data collection methods and identifying reasons for data collection, we aim to find out how popular user logging is in virtual reality research. In addition, we identify publications with detailed descriptions of logging solutions. Our results suggest that virtual reality logging solutions are relatively seldom described in detail, even though many studies gather data by body tracking. Most of the papers gather data to demonstrate something about a novel functionality or to compare different technologies without discussing logging details. The results can be used for scoping future virtual reality research.