
Gaze Gestures and Haptic Feedback in Mobile Devices


Jari Kangas, Deepak Akkil, Jussi Rantala, Poika Isokoski,
Päivi Majaranta and Roope Raisamo
Tampere Unit for Computer-Human Interaction, School of Information Sciences
University of Tampere, Finland
{jari.kangas, deepak.akkil, jussi.e.rantala, poika.isokoski,
paivi.majaranta, roope.raisamo}
Abstract
Anticipating the emergence of gaze tracking capable mobile
devices, we are investigating the use of gaze as an input
modality in handheld mobile devices. We conducted a
study of combining gaze gestures with vibrotactile
feedback. Gaze gestures were used as an input method in a
mobile device and vibrotactile feedback as a new
alternative way to give confirmation of interaction events.
Our results show that vibrotactile feedback significantly
improved the use of gaze gestures. The tasks were
completed faster and rated easier and more comfortable
when vibrotactile feedback was provided.
Author Keywords
Gaze tracking; gaze interaction; haptic feedback.
ACM Classification Keywords
H.5.2. User interfaces: Input devices and strategies.
Introduction
Availability of low-cost miniature video cameras and low-power computing hardware is now making it possible to build affordable mobile devices such as smartphones or tablets with gaze tracking capability. Prototypes of affordable head-mounted gaze trackers are also being built (e.g. [10]), which can be used together with mobile devices. The conventional way of interacting by eye gaze is to fixate the gaze on an on-screen object for a predefined dwell time. However, this is not optimal in mobile contexts because objects on small displays tend to be small and the handheld device moves slightly [4].
In some cases the gaze accuracy problems can be corrected
by using other modalities, like touch input [14, 15]. Gaze
gestures provide an alternative method of gaze interaction.
They are known to be more tolerant to small calibration
shifts, display and tracker movement, and also suitable for
interaction with small display objects [2, 3, 4, 7]. The use of
gaze gestures has been studied, for example, in gaming [9],
text typing [16], control [12] and drawing [5].
Even though gaze gestures have been shown to be a
potential interaction method, one existing challenge is to
provide sufficient feedback of the interaction. Dybdal et al.
[4] found that users' cognitive load is higher in gesture-based interaction than in dwell-time-based interaction. They
proposed that the interface design should be improved, e.g.,
by adding audio feedback of gestures. In general,
appropriate real-time feedback of user’s actions is known to
reduce mental load and make the interaction easy and fast
[11]. In the context of gaze gestures, instantaneous
feedback would help in confirming that the input has been
recognized and in coping with errors such as incorrectly
recognized eye strokes. Early feedback given already during a gesture could also help the user complete the gesture correctly [12].
While the use of visual and audio feedback is possible when
using gaze gestures, they both have limitations. Visual
feedback on a mobile display can be difficult to perceive
during eye movements, and may be too late if given after
the gesture completion. Also, auditory feedback is not
suitable for private interaction or in noisy environments [1].
To date, little attention has been paid to studying the use of
haptic feedback with gaze input. This could possibly be due
to the fact that providing haptic stimulation to the user has
required separate feedback devices. However, in the context
of mobile interaction, the input and output capabilities are
combined in a single device. Most current mobile devices
are equipped with vibrotactile actuators. These actuators
have been shown to be useful, for example, in improving
both the performance and subjective experience of virtual
keyboard use on touchscreen devices [1, 6]. Combining
vibrotactile feedback with gaze gestures could allow for
novel types of interaction.
Motivated by this, our aim was to find out whether the use
of vibrotactile feedback is beneficial when using gaze
gestures to control a mobile device. In this paper, we
present a study on comparing different types of vibrotactile
feedback while performing list-based tasks using gaze
gestures. A list task was chosen because it is familiar to users, fast to perform, and requires a number of different simple commands.
CHI 2014, April 26 - May 01 2014, Toronto, ON, Canada
Copyright 2014 ACM 978-1-4503-2473-1/14/04…$15.00.
We selected a simplified setup to obtain baseline results on the utility of vibrotactile feedback. Task completion times and
subjective experiences were measured to investigate
whether the addition of vibrotactile feedback improves the
interaction. As far as we know, the combination of gaze
input and haptic output has not been explored previously.
Experiment
The experiment was designed to resemble a scenario where
a person operates a handheld mobile phone through gaze
gestures. The person is not able to use other input
modalities. For the test, we implemented an application
that simulated a contact list in a mobile phone. The main
objective was to study the effects of vibrotactile feedback
on efficiency and subjective ratings. The secondary
objective was to study the user’s overall perception of gaze
gestures as an input method for a mobile device.
Participants
We recruited 12 volunteer participants (2 females and 10
males), aged between 16 and 50 years from the University
community. All participants were familiar with mobile
technology, and ten were familiar with haptic feedback. Ten
participants were also familiar with gaze tracking
technology. Two of the participants had corrected vision.
Apparatus
We used a Tobii T60 gaze tracker and a Nokia Lumia 900 mobile device to simulate an eye-tracking capable mobile phone. Gaze data collected by the Tobii device was sent to a PC, where gaze gestures were recognized and transferred to the phone through a USB-based socket connection. The control logic for the application ran on the
mobile device based on asynchronous gaze gesture events
sent from the PC. The vibrotactile feedback was generated
by the phone’s built-in vibration motor.
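The asynchronous event link between the PC and the phone can be sketched as newline-delimited JSON messages over a socket. This is a hypothetical wire format for illustration; the paper does not specify the actual protocol used:

```python
import json

def encode_event(gesture, timestamp_ms):
    """Serialize one recognized gesture as a newline-delimited JSON message."""
    return (json.dumps({"gesture": gesture, "t": timestamp_ms}) + "\n").encode()

def decode_events(data):
    """Split received bytes into complete gesture-event dicts."""
    return [json.loads(line) for line in data.decode().splitlines() if line]
```

The phone side would read such messages from the socket and dispatch them to the list application's control logic.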
Gaze Gestures
We used two-stroke gaze gestures and utilized off-screen gazing [8] to make the input area larger than the display.
At the beginning of a gesture the user was always looking at the display of the phone. The first stroke of a gesture moved over the edge of the device, and the second stroke returned to the device. In order to differentiate between normal
eye movement and gaze gesture commands, we defined a
maximum time of 500 ms (long enough for a fixation based
on pilot testing) for the duration of the fixation outside of
the device. Gestures lasting longer were not recognized.
The following gestures were used:
Up: crossed the top edge of the device and moved the selection one position upwards in the list.
Down: crossed the bottom edge of the device and moved the selection one position downwards.
Select: crossed the right edge of the device and activated the presently selected name.
Cancel: crossed the left edge of the device and returned to the list.
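The two-stroke recognition described above can be sketched as a small state machine. This is a hypothetical reimplementation, not the authors' code; the display bounds are illustrative values:

```python
# Hypothetical display bounds in gaze-tracker coordinates.
WIDTH, HEIGHT = 480, 800
MAX_OFF_SCREEN_MS = 500   # longer off-screen fixations are not recognized

# Edge crossed by the first stroke -> command, as in the study.
EDGE_COMMANDS = {"top": "up", "bottom": "down", "right": "select", "left": "cancel"}

def exit_edge(x, y):
    """Return the display edge an off-screen gaze point lies beyond, or None."""
    if y < 0:
        return "top"
    if y > HEIGHT:
        return "bottom"
    if x < 0:
        return "left"
    if x > WIDTH:
        return "right"
    return None

class GestureRecognizer:
    def __init__(self):
        self.off_since = None   # time of the first off-screen sample
        self.edge = None        # edge crossed by the first stroke

    def feed(self, x, y, t_ms):
        """Process one gaze sample; return a command name or None."""
        edge = exit_edge(x, y)
        if edge is not None:
            if self.off_since is None:          # first stroke left the screen
                self.off_since, self.edge = t_ms, edge
            return None
        # Gaze is back on screen: the gesture completes only if the
        # off-screen fixation stayed within the 500 ms limit.
        command = None
        if self.off_since is not None and t_ms - self.off_since <= MAX_OFF_SCREEN_MS:
            command = EDGE_COMMANDS[self.edge]
        self.off_since = self.edge = None
        return command
```

Feeding the recognizer a sample stream that leaves over the top edge and returns within 500 ms would yield the Up command; a slower return yields nothing.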
Haptic Feedback
The system recognized the first stroke and the full gesture separately, making it possible to give feedback halfway through a gesture. The length of each vibrotactile pulse was set to 20 milliseconds, which we found in pilot testing to be long enough to be felt by all participants.
To study what type of feedback would be the most efficient,
we defined four haptic feedback conditions (see Figure 1):
No: no haptic feedback.
Out: haptic feedback given when a stroke from inside the device to outside the device was recognized.
Full: haptic feedback given when the second stroke, returning from outside the device to inside the device, was recognized.
Both: haptic feedback combining Out and Full.
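The mapping from condition and recognizer event to vibration pulses is simple enough to state as a table. This is a sketch under the assumption that the recognizer reports a first-stroke event and a gesture-complete event; the event names are hypothetical:

```python
PULSE_MS = 20  # pulse length the pilot tests found noticeable

# Recognizer events that trigger a vibration pulse under each condition.
FEEDBACK_EVENTS = {
    "No":   set(),
    "Out":  {"first_stroke"},                      # gaze crossed an edge going out
    "Full": {"gesture_complete"},                  # second stroke recognized
    "Both": {"first_stroke", "gesture_complete"},
}

def haptic_pulses(condition, events):
    """Return the vibration pulse lengths (ms) to play for a sequence of events."""
    triggers = FEEDBACK_EVENTS[condition]
    return [PULSE_MS for event in events if event in triggers]
```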
Procedure
At the beginning of the experiment each participant filled in a pre-experiment questionnaire. This was followed by an
introduction to the experiment and calibration of the gaze
tracker. Then the participant was instructed to hold the
mobile device in his/her hand so that the haptic feedback
could be felt. The back of the holding hand touched a small
piece of foam mounted on top of the eye tracker’s display.
This allowed small movements of the device, but reduced
the likelihood of the phone drifting away from the intended
position. The participant’s hand was also supported at the elbow to prevent fatigue during the experiment. See Figure
2 for the arrangement.
The task was to find a specific name in a list, select the
name and make a simulated call. The participant saw a list
of 18 names (part of which is shown in Figure 2). When the application was started, the list focus was on the topmost entry. The list was ordered alphabetically and long enough not to fit on the display. After a successful call the system would pause for five seconds before automatically returning to the list of names. During this pause, the participant was given the next name to find. The search for subsequent names started with the focus on the previously called name.

Figure 1. Haptic feedback conditions for the Up gesture.

Figure 2. The participant held the device in front of the tracker (left). An example list of names visible on the mobile’s display (right).
A block of four calls was completed in each feedback
condition. In addition each block contained a short practice
session where the participants could ensure that the system
worked and they could feel the haptic feedback. After the
four calls the participants rated the ease and comfort of using the particular feedback condition
by filling in rating scales ranging from 1 (very
uncomfortable / very difficult) to 7 (very comfortable / very
easy). All the participants used the same name list and
made calls to the same names to eliminate variability due to
the position of the names on the list. The names were
different in different feedback conditions. However, they
were chosen so that the same number of gestures was
required to complete each block. The order of the feedback
conditions was counterbalanced between participants using
a Latin square. The test design was within subjects.
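A Latin square order assignment can be generated cyclically, so that each condition appears once in every row (participant order) and every column (block position). This is a generic sketch; the paper does not state which particular square was used:

```python
def latin_square(conditions):
    """Build a cyclic Latin square: each condition appears exactly once in
    every row (presentation order) and every column (block position)."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

# Presentation orders for the four feedback conditions; with 12 participants,
# each of the four orders would be used by three participants.
ORDERS = latin_square(["No", "Out", "Full", "Both"])
```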
As we expected a noticeable learning effect, we
conducted two identical sessions in the experiment. That is,
all four feedback conditions were performed twice. The
first session was intended for training the participants. After
completing both sessions, a final questionnaire was used to
determine which of the four feedback conditions the
participants felt was the easiest / most comfortable to use.
Results
Task completion times
The median block completion times in seconds for all eight
blocks are shown in Table 1 in the order of presentation
(from T1 to T8) to show the effect of learning. The
feedback conditions varied in each block according to our
counterbalancing scheme. The block completion times were
longest in the first two blocks.
In order to eliminate the learning effect, only data from the
second session (T5-T8) was used in further analysis. The
block completion times for the four different feedback
conditions are shown in Figure 3. The largest difference
(17%) in the completion times was between the median
time for condition No and the median time for condition Out.
Our data distributions did not meet the normality
assumption of ANOVA. Because of this, we used a simple
permutation test to estimate the significance of the observed
pairwise completion time differences between the
conditions in the sample. Assuming a null hypothesis of no
difference we computed 10,000 resamples with random
reversals of the differences between the conditions. Then
we observed how often the median difference was more
extreme than the observed value. The results showed that
completion times for condition Out were significantly lower
than for conditions No and Full (p<=0.05 in both cases).
The differences between other feedback conditions were
not statistically significant.
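The resampling procedure described above can be sketched as a sign-flip permutation test on the paired differences. This is a hypothetical reimplementation for illustration, not the authors' analysis script:

```python
import random
from statistics import median

def paired_permutation_test(diffs, resamples=10000, seed=0):
    """Two-sided sign-flip permutation test on paired differences.

    Under the null hypothesis of no difference, the sign of each paired
    completion-time difference is arbitrary, so signs are flipped at
    random and we count how often the resampled median is at least as
    extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(median(diffs))
    extreme = sum(
        1
        for _ in range(resamples)
        if abs(median([d if rng.random() < 0.5 else -d for d in diffs])) >= observed
    )
    return extreme / resamples  # estimated p-value
```

Because only sign flips are used, the test makes no normality assumption about the completion-time differences.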
Gestures per Action
In most of the test cases the participants made unnecessary
gestures to complete the task. We defined a Gestures per
Action (GPA) measure as the ratio of the number of
performed gestures to the minimum number of gestures
needed. The value of GPA increases if the user makes wrong selections or overshoots the focus and needs further gestures to correct these errors.
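The measure is a direct ratio, transcribed here as a one-line helper:

```python
def gestures_per_action(performed, minimum):
    """Ratio of gestures actually made to the minimum needed.

    A value of 1.0 means an error-free block; wrong selections and
    overshoots push the ratio above 1.0.
    """
    if minimum <= 0:
        raise ValueError("minimum number of gestures must be positive")
    return performed / minimum
```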
Table 2 shows the average GPA for the different conditions.
The participants performed more gestures to complete the
task in condition No than in any other condition. The
biggest difference is between conditions No and Out, where
about 15% more gestures were used to complete the task in
condition No.
Subjective evaluations
The results of subjective evaluations gathered after each
condition showed that condition No was rated as
significantly (permutation test as before) more
uncomfortable and difficult to use compared to the other
conditions (Figure 4).
Also, in the post-experimental questionnaire, six out of
twelve participants felt that condition Both was overall the
most comfortable to use. In terms of easiness, eight
participants preferred condition Both. Notably, none of the
participants preferred condition No after the experiment.
Table 1. The median block completion times in seconds in different time slots of the experiment.

Table 2. Gestures per action for different feedback conditions.

Figure 3. The block completion times in seconds for different feedback conditions.
Ten participants said that their eyes were more tired after
the experiment. Eight participants would use gaze gestures
in a mobile device if that was available. When asked about
possible use situations, most listed special situations where
the hands were not available.
Discussion and Conclusions
Our results showed that the use of haptic feedback improved both the efficiency and the subjective experience of gaze-based interaction. Out of the three different haptic
feedback conditions, the most efficient ones in terms of task
completion times were Out and Both that provided feedback
when the gaze was outside of the display borders. This
indicates that the participants could make use of the
feedback while doing the gestures. The Gestures per Action
measure suggested that the addition of feedback helped
because it allowed carrying out the tasks with fewer errors.
Conditions Out and Both that contained the haptic feedback
when gaze was outside the device were faster than the other
conditions. Participants also subjectively preferred the
haptic feedback conditions over the non-haptic condition.
There was a significant learning effect during the first one
or two blocks, but after that block completion times leveled
off. However, overall the experiment was very short and
further learning might happen in extended use.
Overall we can conclude that haptic feedback on gaze
events can improve user performance and satisfaction on
handheld devices especially in cases where visual or audio
feedback is difficult to arrange. Our findings could be
utilized in any gaze-based mobile applications, for example,
for giving simple commands without touch input. Further
work investigating the combination of gaze input and haptic
output should be undertaken to understand the benefits
more widely. Gaze gestures combined with haptic feedback
provide an easy input method when hands are busy. Further
work is also needed to study the effect in combinations with
other input and output modalities.
Acknowledgments
We thank the members of TAUCHI who provided helpful
comments on different versions of this paper. This work
was funded by the Academy of Finland, projects Haptic
Gaze Interaction (decisions numbers 260026 and 260179)
and Mind Picture Image (decision 266285).
References
1. Brewster, S., Chohan, F., and Brown, L. Tactile
Feedback for Mobile Interactions. In CHI’07, 159-162,
ACM Press, 2007.
2. Drewes, H., De Luca, A., and Schmidt, A. Eye-Gaze
Interaction for Mobile Phones. In Mobility’07, 364-371,
ACM Press, New York, 2007.
3. Drewes, H., and Schmidt, A. Interacting with the
Computer using Gaze Gestures. In Proc. INTERACT
2007, 475-488. Springer, 2007.
4. Dybdal, M.L., San Agustin, J., and Hansen, J.P. Gaze
Input for Mobile Devices by Dwell and Gestures. In
ETRA’12, 225-228, ACM Press, New York, 2012.
5. Heikkilä, H., and Räihä, K.-J. Simple Gaze Gestures and
the Closure of the Eyes as an Interaction Technique. In
ETRA’12, 147-154, ACM Press, New York, 2012.
6. Hoggan, E., Brewster, S. A., and Johnston, J.
Investigating the Effectiveness of Tactile Feedback for
Mobile Touchscreens. In CHI’08, 1573-1582, ACM
Press, 2008.
7. Hyrskykari, A., Istance, H. and Vickers, S. Gaze
Gestures or Dwell-Based Interaction. In ETRA’12, 229-
232, ACM Press, New York, 2012.
8. Isokoski, P. Text Input Methods for Eye Trackers Using Off-Screen Targets. In ETRA’00, 15-21, ACM Press, 2000.
9. Istance, H., Hyrskykari, A., Immonen, L.,
Mansikkamaa, S., and Vickers, S. Designing Gaze
Gestures for Gaming: an Investigation of Performance.
In ETRA’10, 323-330, ACM Press, 2010.
10. Lukander, K., Jagadeesan, S., Chi, H., and Müller, K.
OMG! - A New Robust, Wearable and Affordable Open
Source Mobile Gaze Tracker. In MobileHCI’13, 408-
411, 2013.
11. Nielsen, J. Noncommand User Interfaces. Communications of the ACM, 36(4), 83-99, 1993.
12. Porta, M, and Turina, M. Eye-S: a Full-Screen Input
Modality for Pure Eye-based Communication. In
ETRA’08, 27-34, ACM Press, New York, 2008.
13. Rubine, D. Combining Gestures and Direct
Manipulation. In CHI’92, 659-660, ACM, 1992.
14. Stellmach, S., and Dachselt, R. Look & Touch: Gaze-Supported Target Acquisition. In CHI’12, 2981-2990, ACM, 2012.
15. Stellmach, S., and Dachselt, R. Still Looking:
Investigating Seamless Gaze-supported Selection,
Positioning, and Manipulation of Distant Targets. In
CHI’13, 285-294, ACM, 2013.
16. Wobbrock, J.O., Rubinstein, J., Sawyer, M.W., and
Duchowski, A.T. Longitudinal Evaluation of Discrete
Consecutive Gaze Gestures for Text Entry. In ETRA’08,
11-18, ACM Press, New York, 2008.
Figure 4. Boxplots of the answers (on a scale of 1 to 7) to the questions “how comfortable” (left) and “how easy to use” (right) for the given condition.
... Or, the gestures can be off-screen (Isokoski, 2000), which frees the screen for other purposes. Simple gestures that start from the screen and then go off-screen and back by crossing one of the display borders have been used for mode change during gaze-based gaming (Istance, Bates, Hyrskykari, & Vickers, 2008), and controlling a mobile phone (Kangas et al., 2014b) or a smart wrist watch (Akkil et al., 2015). Figure 4 illustrates an on-screen gesture implementation in a game. ...
... However, with short dwell times where the user moves the gaze away from the target very fast or when rapid gaze gestures are used, other feedback modalities may be useful. For example, if a person controls a mobile phone by off-screen gaze gestures, haptic feedback on the hand-held phone can inform the user of the successful action (Kangas et al., 2014b). Haptic feedback may also be preferred in situations where privacy is needed; haptic feedback can only be felt by the person wearing or holding the device. ...
Gaze provides an attractive input channel for human-computer interaction because of its capability to convey the focus of interest. Gaze input allows people with severe disabilities to communicate with eyes alone. The advances in eye tracking technology and its reduced cost make it an increasingly interesting option to be added to the conventional modalities in every day applications. For example, gaze-aware games can enhance the gaming experience by providing timely effects at the right location, knowing exactly where the player is focusing at each moment. However, using gaze both as a viewing organ as well as a control method poses some challenges. In this chapter, we will give an introduction to using gaze as an input method. We will show how to use gaze as an explicit control method and how to exploit it subtly in the background as an additional information channel. We will summarize research on the application of different types of eye movements in interaction and present research-based design guidelines for coping with typical challenges. We will also discuss the role of gaze in multimodal, pervasive and mobile interfaces and contemplate with ideas for future developments.
... On the downside, increasing the number of gestures introduces some complexity and comes with problems, as complex gestures may be difficult to recall cognitively, and they may be challenging to initiate and perform physically [61]. Gaze gestures found applications in gaming [36,37,61], authentication [4,44,46,48,69] and also as a generic input method for mobile devices [41]. The difference between Gaze gestures and Pursuits is that Gaze gestures, in recent implementations, are performed from memory rather than by following a stimulus. ...
Conference Paper
Full-text available
Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12 and 32 targets via gaze while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.
... Since the interaction via gaze gestures is usually facilitated without a graphical user interface, the related studies focus more on vibrotactile feedback (Rantala et al., 2020). It was found that the implementation of vibrotactile feedback can reduce response time as well as improve the user's subjective evaluation (Kangas et al., 2014). K€ opsel et al. (2016) compared visual, haptic, and auditory feedback modalities. ...
We present an eye typing interface with one-point calibration, which is a two-stage design. The characters are clustered in groups of four characters. Users select a cluster by gazing at it in the first stage and then select the desired character by following its movement in the second stage. A user study was conducted to explore the impact of auditory and visual feedback on typing performance and user experience of this novel interface. Results show that participants can quickly learn how to use the system, and an average typing speed of 4.7 WPM can be reached without lengthy training. The subjective data of participants revealed that users preferred visual feedback over auditory feedback while using the interface. The user study indicates that this eye typing interface can be used for walk-up-and-use interactions, as it is easily understood and robust to eye-tracking inaccuracies. Potential areas of application, as well as possibilities for further improvements, are discussed.
... In 2018 Steil et al. [1], presented a work related to the task of predicting users' gaze behaviour (overt visual attention) in the near future. Kangas et al. [2] described a study of combining gaze gestures with vibrotactile feedback. In this study, gaze gestures were used as input for a mobile device and vibrotactile feedback as a new alternative way to give confirmation of interaction events. ...
Full-text available
In this paper, a smartphone-based learning monitoring system is presented. During pandemics, most of the parents are not used to simultaneously deal with their home office activities and the monitoring of the home school activities of their children. Therefore, a system allowing a parent, teacher or tutor to assign a task and its corresponding execution time to children, could be helpful in this situation. In this work, a mobile application to assign academic tasks to a child, measure execution time, and monitor the child’s attention, is proposed. The children are the users of a mobile application, hosted on a smartphone or tablet device, that displays an assigned task and keeps track of the time consumed by the child to perform this task. Time measurement is performed using face recognition, so it is possible to infer the attention of the child based on the presence or absence of a face. The app also measures the time that the application was in the foreground, as well as the time that the application was sent to the background, to measure boredom. The parent or teacher assigns a task using a desktop application specifically designed for this purpose. At the end of the time set by the user, the application sends to the parent or teacher statistics about the execution time of the task and the degree of attention of the child.
... Although large individual differences were observed, the auditory feedback did modify the oculomotor behaviour and improved task performance. Kangas et al. [19] compared off-screen gaze interaction using gaze gestures (looking right then left to activate a command) with vibrotactile feedback and no feedback. All 12 participants performed the gaze interaction faster and preferred the vibrotactile feedback over no feedback. ...
Purpose: Eye gaze interfaces have been used by people with severe physical impairment to interact with various assistive technologies. If used to control robots, it would be beneficial if individuals could gaze directly at targets in the physical environment rather than have to switch their gaze between a screen with representations of robot commands and the physical environment to see the response of their selection. By using a homogeneous transformation technique, eye gaze coordinates can be mapped between the reference coordinate frame of eye tracker and the coordinate frame of objects in the physical environment. Feedback about where the eye tracker has determined the eye gaze is fixated is needed so users can select targets more accurately. Screen-based assistive technologies can use visual feedback, but in a physical environment, other forms of feedback need to be examined. Materials and methods: In this study, an eye gaze system with different feedback conditions (i.e., visual, auditory, vibrotactile, and no-feedback) was tested when participants received visual feedback on a display (on-screen) and when looking directly at the physical environment (off-screen). Target selection tasks in both screen conditions were performed by ten non-disabled adults, three non-disabled children, and two adults and one child with cerebral palsy. Results: Tasks performed with gaze fixation feedback modalities were accomplished faster and with higher success than tasks performed without feedback, and similar results were observed in both screen conditions. No significant difference was observed in performance across the feedback modalities, but participants had personal preferences. Conclusion: The homogeneous transformation technique enabled the use of a stationary eye tracker to select target objects in the physical environment, and auditory and vibrotactile feedback enabled participants to be more accurate selecting targets than without it. 
• Implications for Rehabilitation • Being able to select target objects in the physical environment by eye gaze could make it easier for children with disabilities to control assistive robots, because in this way they do not have to change their focus between a computer screen with commands and the robot. • Providing auditory or vibrotactile feedback when using an eye gaze system made it faster and easier to know if a target was being gazed upon. • Being able to select targets in the environment using eye gaze could be beneficial for other assistive technology, too, such as destination selection for power wheelchairs.
... Some of the earliest eye typing systems required the user to glance at a few defined directions in specific order to compose a character [Rosen and Durfee 1978]. Gaze gestures have also been used to control a computer [Porta and Turina 2008], play games [Istance et al. 2010], control mobile phones Kangas et al. 2014] and smart watches [Akkil et al. 2015;Hansen et al. 2016]. ...
Conference Paper
In gesture-based user interfaces, the effort needed for learning the gestures is a persistent problem that hinders their adoption in products. However, people's natural gaze paths form shapes during viewing. For example, reading creates a recognizable pattern. These gaze patterns can be utilized in human-technology interaction. We experimented with the idea of inducing specific gaze patterns by static drawings. The drawings included visual hints to guide the gaze. By looking at the parts of the drawing, the user's gaze composed a gaze gesture that activated a command. We organized a proof-of-concept trial to see how intuitive the idea is. Most participants understood the idea without specific instructions already on the first round of trials. We argue that with careful design the form of objects and especially their decorative details can serve as a gaze-based user interface in smart homes and other environments of ubiquitous computing.
... Since visual feedback is sub-optimal during eye tracking in various situations [20] and auditory feedback might not be suitable for noisy industrial conditions, vibro-tactile is used a second communication channel, as it is generally perceived well in gaze-interaction scenarios [16]. For that, a Microsoft band 2 is employed, which enables three different vibration modes. ...
Conference Paper
Due to the explicit and implicit facets of gaze-based interaction , eye tracking is a major area of interest within the field of cognitive industrial assistance systems. In this position paper, we describe a scenario which includes a wearable platform built around a mobile eye tracker, which can support and guide an industrial worker throughout the execution of a maintenance task. The potential benefits of such a solution are discussed and the key components are outlined.
The rapid proliferation of Massive Open Online Courses (MOOC) has resulted in many-fold increase in sharing the global classrooms through customized online platforms, where a student can participate in the classes through her personal devices, such as personal computers, smartphones, tablets, etc. However, in the absence of direct interactions with the students during the delivery of the lectures, it becomes difficult to judge their involvements in the classroom. In academics, the degree of student's attention can indicate whether a course is efficacious in terms of clarity and information. An automated feedback can hence be generated to enhance the utility of the course. The precision of discernment in the context of human attention is a subject of surveillance. However, visual patterns indicating the magnitude of concentration can be deciphered by analyzing the visual emphasis and the way an individual visually gesticulates, while contemplating the object of interest. In this paper, we develop a methodology called Gestsatten which captures the learner's attentiveness from his visual gesture patterns. In this approach, the learner's visual gestures are tracked along with the region of focus. We consider two aspects in this approach -- first, we do not transfer learner's video outside her device, so we apply in-device computing to protect her privacy; second, considering the fact that a majority of the learners use handheld devices like smartphones to observe the MOOC videos, we develop a lightweight approach for in-device computation. A three level estimation of learner's attention is performed based on these information. We have implemented and tested Gestatten over 48 participants from different age groups, and we observe that the proposed technique can capture the attention level of a learner with high accuracy (average absolute error rate is 8.68%), which meets her ability to learn a topic as measured through a set of cognitive tests.
The two cardinal problems recognized with gaze-based interaction techniques are: how to avoid unintentional commands, and how to overcome the limited accuracy of eye tracking. Gaze gestures are a relatively new technique for giving commands, which has the potential to overcome these problems. We present a study that compares gaze gestures with dwell selection as an interaction technique. The study involved 12 participants and was performed in the context of using an actual application. The participants gave commands to a 3D immersive game using gaze gestures and dwell icons. We found that gaze gestures are not only a feasible means of issuing commands in the course of game play, but they also exhibited performance that was at least as good as or better than dwell selections. The gesture condition produced less than half of the errors when compared with the dwell condition. The study shows that gestures provide a robust alternative to dwell-based interaction with the reliance on positional accuracy being substantially reduced.
While eye tracking has a high potential for fast selection tasks, it is often regarded as error-prone and unnatural, especially for gaze-only interaction. To improve on that, we propose gaze-supported interaction as a more natural and effective way combining a user's gaze with touch input from a handheld device. In particular, we contribute a set of novel and practical gaze-supported selection techniques for distant displays. Designed according to the principle gaze suggests, touch confirms they include an enhanced gaze-directed cursor, local zoom lenses and more elaborated techniques utilizing manual fine positioning of the cursor via touch. In a comprehensive user study with 24 participants, we investigated the potential of these techniques for different target sizes and distances. All novel techniques outperformed a simple gaze-directed cursor and showed individual advantages. In particular those techniques using touch for fine cursor adjustments (MAGIC touch) and for cycling through a list of possible close-to-gaze targets (MAGIC tab) demonstrated a high overall performance and usability.
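The "gaze suggests, touch confirms" principle described above can be sketched minimally in Python. The function names, sample format, and nearest-target heuristic below are our own illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of "gaze suggests, touch confirms": gaze continuously
# proposes a target; a touch event commits the selection.

def nearest_target(gaze_point, targets):
    """Return the target closest to the current gaze point (the 'suggestion')."""
    gx, gy = gaze_point
    return min(targets, key=lambda t: (t[0] - gx) ** 2 + (t[1] - gy) ** 2)

def select_on_touch(gaze_stream, touch_events, targets):
    """Pair each touch event with the gaze suggestion at that moment.

    gaze_stream: list of (timestamp_ms, (x, y)) gaze samples.
    touch_events: list of timestamps at which the user tapped.
    Returns the list of confirmed targets, one per touch.
    """
    selections = []
    for tap in touch_events:
        # use the most recent gaze sample at or before the tap
        recent = [g for ts, g in gaze_stream if ts <= tap]
        if recent:
            selections.append(nearest_target(recent[-1], targets))
    return selections
```

The key design point is that imprecise gaze only ever nominates a candidate; nothing is selected until the deliberate touch arrives, which avoids the Midas-touch problem of gaze-only input.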
This paper presents a study of finger-based text entry for mobile devices with touchscreens. Many devices are now coming to market that have no physical keyboards (the Apple iPhone being a very popular example). Touchscreen keyboards lack any tactile feedback and this may cause problems for entering text and phone numbers. We ran an experiment to compare devices with a physical keyboard, a standard touchscreen and a touchscreen with tactile feedback added. We tested this in both static and mobile environments. The results showed that the addition of tactile feedback to the touchscreen significantly improved finger-based text entry, bringing it close to the performance of a real physical keyboard. A second experiment showed that higher specification tactile actuators could improve performance even further. The results suggest that manufacturers should use tactile feedback in their touchscreen devices to regain some of the feeling lost when interacting on a touchscreen with a finger.
We present a study investigating the use of vibrotactile feedback for touch-screen keyboards on PDAs. Such keyboards are hard to use when mobile as keys are very small. We conducted a laboratory study comparing standard buttons to ones with tactile feedback added. Results showed that with tactile feedback users entered significantly more text, made fewer errors and corrected more of the errors they did make. We ran the study again with users seated on an underground train to see if the positive effects transferred to realistic use. There were fewer beneficial effects, with only the number of errors corrected significantly improved by the tactile feedback. However, we found strong subjective feedback in favour of the tactile display. The results suggest that tactile feedback has a key role to play in improving interactions with touch screens.
We present a novel, robust, affordable and wearable, mobile gaze tracker. The tracker takes a model-based approach to tracking gaze and maps the calculated gaze on to a scene video. The system is built from standard off-the-shelf components, and is the first to our knowledge using a 3D printed frame. The system will be published as open source, and the total cost of the components for building the system is 350€. The model-based tracking provides a solution robust to changing lighting conditions and frame slippage on the head of the user.
This paper investigates whether it is feasible to interact with the small screen of a smartphone using eye movements only. Two of the most common gaze-based selection strategies, dwell time selections and gaze gestures, are compared in a target selection experiment. Finger-strokes and accelerometer-based interaction, i.e. tilting, are also considered. In an experiment with 11 subjects we found gaze interaction to have a lower performance than touch interaction but comparable to the error rate and completion time of accelerometer (i.e. tilt) interaction. Gaze gestures had a lower error rate and were faster than dwell selections by gaze, especially for small targets, suggesting that this method may be the best option for hands-free gaze control of smartphones.
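The dwell-time strategy compared above can be sketched in a few lines of Python. The names, the 20 ms sampling interval, and the 500 ms threshold below are illustrative assumptions, not values from the study:

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float
    y: float
    radius: float

def dwell_select(samples, target, dwell_ms=500, sample_interval_ms=20):
    """Return the time (ms) at which a dwell selection fires, or None.

    samples: list of (x, y) gaze points taken at a fixed sampling interval.
    Selection fires once gaze has stayed inside the target for dwell_ms.
    """
    elapsed = 0
    for i, (x, y) in enumerate(samples):
        inside = (x - target.x) ** 2 + (y - target.y) ** 2 <= target.radius ** 2
        if inside:
            elapsed += sample_interval_ms
            if elapsed >= dwell_ms:
                return (i + 1) * sample_interval_ms
        else:
            elapsed = 0  # gaze left the target; restart the dwell timer
    return None
```

The timer reset on leaving the target is what makes dwell selection sensitive to tracking noise on small targets, which is one reason gesture-based input fares better there.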
We created a set of gaze gestures that utilize the following three elements: simple one-segment gestures, off-screen space, and the closure of the eyes. These gestures are to be used as the moving tool in a gaze-only controlled drawing application. We tested our gaze gestures with 24 participants and analyzed the gesture durations, the accuracy of the stops, and the gesture performance. We found that the difference in gesture durations between short and long gestures was so small that there is no need to choose between them. The stops made by closing both eyes were accurate, and the input method worked well for this purpose. With some adjustments and with the possibility for personal settings, the gesture performance and the accuracy of the stops can become even better.
We investigate how to seamlessly bridge the gap between users and distant displays for basic interaction tasks, such as object selection and manipulation. For this, we take advantage of very fast and implicit, yet imprecise gaze- and head-directed input in combination with ubiquitous smartphones for additional manual touch control. We have carefully elaborated two novel and consistent sets of gaze-supported interaction techniques based on touch-enhanced gaze pointers and local magnification lenses. These conflict-free sets allow for fluently selecting and positioning distant targets. Both sets were evaluated in a user study with 16 participants. Overall, users were fastest with a touch-enhanced gaze pointer for selecting and positioning an object after some training. While the positive user feedback for both sets suggests that our proposed gaze- and head-directed interaction techniques are suitable for a convenient and fluent selection and manipulation of distant targets, further improvements are necessary for more precise cursor control.
To date, several eye input methods have been developed, which, however, are usually designed for specific purposes (e.g. typing) and require dedicated graphical interfaces. In this paper we present Eye-S, a system that allows general input to be provided to the computer through a pure eye-based approach. Thanks to the "eye graffiti" communication style adopted, the technique can be used both for writing and for generating other kinds of commands. In Eye-S, letters and general eye gestures are created through sequences of fixations on nine areas of the screen, which we call hotspots. Being usually not visible, such sensitive regions do not interfere with other applications, which can therefore exploit all the available display space.
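The hotspot-sequence idea behind Eye-S can be illustrated with a small sketch. The 3x3 grid numbering, the gesture table, and the function names here are hypothetical placeholders, not the actual Eye-S alphabet:

```python
# Hypothetical sketch of Eye-S-style hotspot gesture recognition: fixations are
# quantized to a 3x3 grid of screen "hotspots"; a gesture is a sequence of
# hotspot indices looked up in a gesture table.

GESTURES = {
    (0, 4, 8): "stroke_diagonal",   # top-left -> centre -> bottom-right
    (2, 1, 0): "stroke_top_row",    # right-to-left along the top row
}

def hotspot(x, y, width, height):
    """Map a fixation point to one of nine hotspots, numbered 0..8 row-major."""
    col = min(int(3 * x / width), 2)
    row = min(int(3 * y / height), 2)
    return row * 3 + col

def recognize(fixations, width, height):
    """Quantize fixations to hotspots, drop repeats, and look up the gesture."""
    seq = []
    for x, y in fixations:
        h = hotspot(x, y, width, height)
        if not seq or seq[-1] != h:  # collapse consecutive fixations in one hotspot
            seq.append(h)
    return GESTURES.get(tuple(seq))
```

Because recognition depends only on which coarse region each fixation falls in, this style of input tolerates the limited positional accuracy of eye trackers far better than pixel-precise pointing.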