Figure 2. The participant held the device in front of the tracker (left). An example list of names visible on the mobile's display (right).

Source publication
Conference Paper
Anticipating the emergence of gaze-tracking-capable mobile devices, we are investigating the use of gaze as an input modality in handheld mobile devices. We conducted a study combining gaze gestures with vibrotactile feedback. Gaze gestures were used as an input method in a mobile device and vibrotactile feedback as a new alternative way to give...

Contexts in source publication

Context 1
... participant's hand was also supported from the elbow to prevent fatigue during the experiment. See Figure 2 for the arrangement. ...
Context 2
... task was to find a specific name in a list, select the name, and make a simulated call. The participant saw a list of 18 names (part of which is shown in Figure 2). When the application was started, the list focus was on the topmost entry. ...

Similar publications

Article
We conducted a user study evaluating five selection techniques for augmented reality in optical see-through head-mounted displays (OST-HMDs). The techniques we studied aim at supporting mobile usage scenarios where the devices do not need external tracking tools or special environments, and therefore we selected techniques that rely solely on track...
Conference Paper
Haptic feedback can improve the usability of gaze gestures in mobile devices. However, the benefit is highly sensitive to the exact timing of the feedback. In practical systems the processing and transmission of signals takes some time, and the feedback may be delayed. We conducted an experiment to determine limits on the feedback delays. The resul...

Citations

... On the downside, increasing the number of gestures introduces some complexity and comes with problems, as complex gestures may be difficult to recall cognitively, and they may be challenging to initiate and perform physically [61]. Gaze gestures have found applications in gaming [36,37,61] and authentication [4,44,46,48,69], and also serve as a generic input method for mobile devices [41]. The difference between Gaze gestures and Pursuits is that Gaze gestures, in recent implementations, are performed from memory rather than by following a stimulus. ...
Conference Paper
Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12 and 32 targets via gaze while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.
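
For readers unfamiliar with Pursuits, the usual implementation correlates the gaze trajectory with each moving target's trajectory over a sliding window and selects the best-matching target. The sketch below is a minimal illustration of that idea, not code from the paper above; the threshold and the min-of-axes combination rule are assumptions.

    import numpy as np

    def pursuit_select(gaze, targets, threshold=0.8):
        """Pick the moving target whose trajectory best matches the gaze.

        gaze:    (N, 2) array of gaze samples over a sliding window
        targets: dict mapping target name -> (N, 2) array of positions
        Returns the matching name, or None if nothing correlates above
        the threshold. Assumes targets move on both axes (e.g. along
        circular trajectories), so the correlations are well defined.
        """
        best_name, best_corr = None, threshold
        for name, traj in targets.items():
            # Pearson correlation per axis; require both axes to agree.
            cx = np.corrcoef(gaze[:, 0], traj[:, 0])[0, 1]
            cy = np.corrcoef(gaze[:, 1], traj[:, 1])[0, 1]
            corr = min(cx, cy)
            if corr > best_corr:
                best_name, best_corr = name, corr
        return best_name

Because selection depends on matching motion rather than on spatial accuracy, this style of input scales to many targets without calibration, which is consistent with the finding above that Pursuits is fastest when there are many targets.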
... Since the interaction via gaze gestures is usually facilitated without a graphical user interface, the related studies focus more on vibrotactile feedback (Rantala et al., 2020). It was found that the implementation of vibrotactile feedback can reduce response time as well as improve the user's subjective evaluation (Kangas et al., 2014). Köpsel et al. (2016) compared visual, haptic, and auditory feedback modalities. ...
Article
We present an eye typing interface with one-point calibration that uses a two-stage design. The characters are clustered in groups of four. Users select a cluster by gazing at it in the first stage and then select the desired character by following its movement in the second stage. A user study was conducted to explore the impact of auditory and visual feedback on typing performance and user experience of this novel interface. Results show that participants can quickly learn how to use the system, and an average typing speed of 4.7 WPM can be reached without lengthy training. The subjective data of participants revealed that users preferred visual feedback over auditory feedback while using the interface. The user study indicates that this eye typing interface can be used for walk-up-and-use interactions, as it is easily understood and robust to eye-tracking inaccuracies. Potential areas of application, as well as possibilities for further improvements, are discussed.
... In 2018, Steil et al. [1] presented work on predicting users' gaze behaviour (overt visual attention) in the near future. Kangas et al. [2] described a study combining gaze gestures with vibrotactile feedback. In this study, gaze gestures were used as input for a mobile device and vibrotactile feedback as a new alternative way to confirm interaction events. ...
Article
In this paper, a smartphone-based learning monitoring system is presented. During pandemics, most parents are not used to dealing simultaneously with their home-office activities and the monitoring of their children's home-schooling activities. Therefore, a system allowing a parent, teacher or tutor to assign a task and its corresponding execution time to children could be helpful in this situation. In this work, a mobile application to assign academic tasks to a child, measure execution time, and monitor the child's attention is proposed. The children are the users of a mobile application, hosted on a smartphone or tablet device, that displays an assigned task and keeps track of the time the child takes to perform it. Time measurement is performed using face recognition, so it is possible to infer the attention of the child based on the presence or absence of a face. The app also measures the time that the application was in the foreground, as well as the time it was sent to the background, to estimate boredom. The parent or teacher assigns a task using a desktop application specifically designed for this purpose. At the end of the time set by the user, the application sends the parent or teacher statistics about the execution time of the task and the degree of attention of the child.
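
As a rough sketch of the presence-based attention timing idea (not the authors' implementation), the loop below accumulates attended time while a face is detected by the device camera, here using OpenCV's stock Haar cascade; the camera index, frame budget, and detector parameters are assumptions.

    import time
    import cv2

    # Illustrative only: accumulate "attention time" while a face is
    # visible to the camera, in the spirit of the system described
    # above. Detector choice and parameters are assumptions.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)            # assumed front-camera index

    attended, last = 0.0, time.monotonic()
    for _ in range(600):                 # sample a fixed frame budget
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
        now = time.monotonic()
        if len(faces) > 0:               # face present: child attending
            attended += now - last
        last = now

    cap.release()
    print(f"attended for {attended:.1f} s")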
... Although large individual differences were observed, the auditory feedback did modify the oculomotor behaviour and improved task performance. Kangas et al. [19] compared off-screen gaze interaction using gaze gestures (looking right then left to activate a command) with vibrotactile feedback and with no feedback. All 12 participants performed the gaze interaction faster with vibrotactile feedback and preferred it over no feedback. ...
Article
Purpose: Eye gaze interfaces have been used by people with severe physical impairment to interact with various assistive technologies. If used to control robots, it would be beneficial if individuals could gaze directly at targets in the physical environment rather than having to switch their gaze between a screen with representations of robot commands and the physical environment to see the response of their selection. By using a homogeneous transformation technique, eye gaze coordinates can be mapped between the reference coordinate frame of the eye tracker and the coordinate frame of objects in the physical environment. Feedback about where the eye tracker has determined the gaze is fixated is needed so users can select targets more accurately. Screen-based assistive technologies can use visual feedback, but in a physical environment, other forms of feedback need to be examined.

Materials and methods: In this study, an eye gaze system with different feedback conditions (i.e., visual, auditory, vibrotactile, and no feedback) was tested when participants received visual feedback on a display (on-screen) and when looking directly at the physical environment (off-screen). Target selection tasks in both screen conditions were performed by ten non-disabled adults, three non-disabled children, and two adults and one child with cerebral palsy.

Results: Tasks performed with gaze fixation feedback were accomplished faster and with higher success than tasks performed without feedback, and similar results were observed in both screen conditions. No significant difference was observed in performance across the feedback modalities, but participants had personal preferences.

Conclusion: The homogeneous transformation technique enabled the use of a stationary eye tracker to select target objects in the physical environment, and auditory and vibrotactile feedback enabled participants to select targets more accurately than without it.

Implications for Rehabilitation:
• Being able to select target objects in the physical environment by eye gaze could make it easier for children with disabilities to control assistive robots, because they would not have to change their focus between a computer screen with commands and the robot.
• Providing auditory or vibrotactile feedback when using an eye gaze system made it faster and easier to know whether a target was being gazed upon.
• Being able to select targets in the environment using eye gaze could also benefit other assistive technologies, such as destination selection for power wheelchairs.
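
The homogeneous-transformation mapping mentioned in the abstract can be illustrated with a short sketch. Assuming the physical targets lie roughly in a plane, a 3x3 homography H (estimated once during calibration; the matrix values below are placeholders) maps 2D tracker coordinates to workspace coordinates:

    import numpy as np

    # Placeholder homography from eye-tracker coordinates to planar
    # workspace coordinates; in practice H would be estimated from at
    # least four corresponding calibration points.
    H = np.array([[1.02, 0.01, -15.0],
                  [0.00, 0.98,   8.0],
                  [0.00, 0.00,   1.0]])

    def tracker_to_workspace(gaze_xy):
        """Map a 2D gaze point between reference frames using
        homogeneous coordinates."""
        p = np.append(gaze_xy, 1.0)      # lift to homogeneous form
        q = H @ p                        # apply the transformation
        return q[:2] / q[2]              # project back to 2D

    # A fixation reported by the tracker at pixel (320, 240):
    print(tracker_to_workspace(np.array([320.0, 240.0])))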
... Or, the gestures can be off-screen (Isokoski, 2000), which frees the screen for other purposes. Simple gestures that start from the screen and then go off-screen and back by crossing one of the display borders have been used for mode change during gaze-based gaming (Istance, Bates, Hyrskykari, & Vickers, 2008), and controlling a mobile phone (Kangas et al., 2014b) or a smart wrist watch (Akkil et al., 2015). Figure 4 illustrates an on-screen gesture implementation in a game. ...
... However, with short dwell times where the user moves the gaze away from the target very fast or when rapid gaze gestures are used, other feedback modalities may be useful. For example, if a person controls a mobile phone by off-screen gaze gestures, haptic feedback on the hand-held phone can inform the user of the successful action (Kangas et al., 2014b). Haptic feedback may also be preferred in situations where privacy is needed; haptic feedback can only be felt by the person wearing or holding the device. ...
Chapter
Gaze provides an attractive input channel for human-computer interaction because of its capability to convey the focus of interest. Gaze input allows people with severe disabilities to communicate with eyes alone. The advances in eye tracking technology and its reduced cost make it an increasingly interesting option to be added to the conventional modalities in everyday applications. For example, gaze-aware games can enhance the gaming experience by providing timely effects at the right location, knowing exactly where the player is focusing at each moment. However, using gaze both as a viewing organ as well as a control method poses some challenges. In this chapter, we will give an introduction to using gaze as an input method. We will show how to use gaze as an explicit control method and how to exploit it subtly in the background as an additional information channel. We will summarize research on the application of different types of eye movements in interaction and present research-based design guidelines for coping with typical challenges. We will also discuss the role of gaze in multimodal, pervasive and mobile interfaces and contemplate ideas for future developments.
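
The off-screen, border-crossing gestures mentioned in the contexts above (glance past a display edge and back to trigger a command, confirmed by a vibration) reduce to a small state machine. A minimal sketch, assuming a stream of horizontal gaze coordinates and a known display width; the callback is a hypothetical stand-in for the phone's vibrotactile actuator:

    def detect_right_left_gesture(gaze_x_stream, screen_width,
                                  on_gesture=lambda: print("vibrate")):
        """Fire a command when the gaze crosses the right display
        border and returns on-screen (a "right then left" gesture)."""
        off_screen = False
        for x in gaze_x_stream:
            if not off_screen and x > screen_width:
                off_screen = True        # gaze left across the border
            elif off_screen and x <= screen_width:
                off_screen = False       # gaze returned on-screen
                on_gesture()             # e.g. vibrotactile confirmation

    # Example: a gaze trace that wanders off the right edge and back.
    detect_right_left_gesture([300, 900, 1150, 1210, 1190, 640], 1080)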
... Since visual feedback is sub-optimal during eye tracking in various situations [20] and auditory feedback might not be suitable for noisy industrial conditions, vibrotactile feedback is used as a second communication channel, as it is generally perceived well in gaze-interaction scenarios [16]. For that, a Microsoft Band 2 is employed, which enables three different vibration modes. ...
Conference Paper
Due to the explicit and implicit facets of gaze-based interaction, eye tracking is a major area of interest within the field of cognitive industrial assistance systems. In this position paper, we describe a scenario which includes a wearable platform built around a mobile eye tracker, which can support and guide an industrial worker throughout the execution of a maintenance task. The potential benefits of such a solution are discussed and the key components are outlined.
... Implicit approaches, on the other hand, record and interpret the user's visual attention for estimating her information needs and adapting the interface (e.g., [6,25]). Over the last few years, a trend towards increasing pervasiveness of gaze-based interaction can be observed [9], enabling gaze-based interaction with public displays [51], mobile phones [19,28], smart watches [20], and display-free interaction with urban spaces [3]. ...
Article
Flying an aircraft is a mentally demanding task where pilots must process a vast amount of visual, auditory and vestibular information. They have to control the aircraft by pulling, pushing and turning different knobs and levers, while knowing that mistakes in doing so can have fatal outcomes. Therefore, attempts to improve and optimize these interactions should not increase pilots’ mental workload. By utilizing pilots’ visual attention, gaze-based interactions provide an unobtrusive solution to this. This research is the first to actively involve pilots in the exploration of gaze-based interactions in the cockpit. By distributing a survey among 20 active commercial aviation pilots working for an internationally operating airline, the paper investigates pilots’ perception and needs concerning gaze-based interactions. The results build the foundation for future research, because they not only reflect pilots’ attitudes towards this novel technology, but also provide an overview of situations in which pilots need gaze-based interactions.
... Some of the earliest eye typing systems required the user to glance at a few defined directions in a specific order to compose a character [Rosen and Durfee 1978]. Gaze gestures have also been used to control a computer [Porta and Turina 2008], play games [Istance et al. 2010], control mobile phones [Kangas et al. 2014] and smart watches [Akkil et al. 2015; Hansen et al. 2016]. ...
Conference Paper
In gesture-based user interfaces, the effort needed for learning the gestures is a persistent problem that hinders their adoption in products. However, people's natural gaze paths form shapes during viewing. For example, reading creates a recognizable pattern. These gaze patterns can be utilized in human-technology interaction. We experimented with the idea of inducing specific gaze patterns by static drawings. The drawings included visual hints to guide the gaze. By looking at the parts of the drawing, the user's gaze composed a gaze gesture that activated a command. We organized a proof-of-concept trial to see how intuitive the idea is. Most participants understood the idea without specific instructions already on the first round of trials. We argue that with careful design the form of objects and especially their decorative details can serve as a gaze-based user interface in smart homes and other environments of ubiquitous computing.
... Krejtz et al. [29] demonstrated that verbal audio descriptions can be used to guide a person's gaze to a target. Gaze guidance has also been combined with vibrotactile feedback [25], visual feedback in AR [45], and non-verbal auditory feedback [34]. In addition, subconscious approaches to gaze guidance have been suggested that use image modulations in the user's periphery [4,37]. ...
Conference Paper
Exploring a city panorama from a vantage point is a popular tourist activity. Typical audio guides that support this activity are limited by their lack of responsiveness to user behavior and by the difficulty of matching audio descriptions to the panorama. These limitations can inhibit the acquisition of information and negatively affect user experience. This paper proposes Gaze-Guided Narratives as a novel interaction concept that helps tourists find specific features in the panorama (gaze guidance) while adapting the audio content to what has been previously looked at (content adaptation). Results from a controlled study in a virtual environment (n=60) revealed that a system featuring both gaze guidance and content adaptation yielded better user experience and lower cognitive load, and led to better performance in a mapping task compared to a classic audio guide. A second study with tourists situated at a vantage point (n=16) further demonstrated the feasibility of this approach in the real world.