This is a pre-print of a contribution published in Advances in Intelligent Systems and
Computing book series (AISC, volume 1062) Choraś M., Choraś R. (Editors) published by
Springer, Cham. The final authenticated version is available online at:
https://doi.org/10.1007/978-3-030-31254-1_6
Cite this paper as:
Piotrowski P., Nowosielski A. (2020) Gaze-Based Interaction for VR Environments. In:
Choraś M., Choraś R. (eds) Image Processing and Communications. Advances in Intelligent
Systems and Computing, vol 1062, pp 41-48. Springer, Cham
Gaze-based interaction for VR environments
Patryk Piotrowski and Adam Nowosielski [0000-0001-7729-7867]
West Pomeranian University of Technology, Szczecin
Faculty of Computer Science and Information Technology
Żołnierska 52, 71-210 Szczecin, Poland
patryk.piotrowski19@gmail.com, anowosielski@wi.zut.edu.pl
Abstract. In this paper we propose a steering mechanism for a VR headset utilizing eye tracking. Based on the fovea region traced by the eye tracker built into the VR headset, a visible 3D ray is generated towards the focal point of sight. The user can freely look around the virtual scene and is able to interact with objects indicated by the eyes. The paper gives an overview of the proposed interaction system and addresses the effectiveness and precision issues of this interaction modality.
Keywords: gaze-based interaction · virtual reality · eye tracking · gaze-operated games.
1 Introduction
Virtual reality systems are computer-generated environments where a user experiences sensations perceived by the human senses. These systems are based primarily on providing video and audio signals, and offer the opportunity to interact directly with the created scene through touch or other forms of manipulation using the hands. Vision systems for virtual reality environments most frequently consist of head-mounted goggles equipped with two liquid crystal displays placed opposite the eyes in a way that enables stereoscopic vision. The image displayed in the helmet is rendered independently for the left and right eye, and then combined into a stereo pair. More and more solutions appear on the market; the most popular include the HTC Vive, Oculus Rift CV1, PlayStation VR (dedicated to the Sony PlayStation 4), and Google Cardboard (dedicated to Android mobile devices).
Virtual reality solutions are delivered with controllers whose aim is to increase the level of the user's immersion in the virtual environment. Interestingly, many novel interfaces offer hands-free control of electronic devices. Touchless interaction there is based on the recognition of user actions performed with the whole body [1] or with specific parts of the body (e.g. hands [2], head [3]). A completely new solution, not widely used or known, is control through the sight, i.e. gaze-based interaction. The operation of such systems is based on eye tracking, a technique of gathering real-time data concerning the gaze direction of the human eyes [4]. The technology is based on tracking and analysing the movement of the eyes using cameras or sensors that register the eye area [4]. The latest
solutions on the market introduce eye tracking capabilities to virtual reality
environments [5].
In the paper we propose a novel steering mechanism based on ray-casting for human-computer interfaces. Based on the fovea region traced by the eye tracker built into the VR headset, a visible 3D ray is generated towards the focal point of sight. Thanks to head movements the user can freely look around the virtual scene and is able to interact with objects indicated by the eyes.
The paper is structured as follows. In Sect. 2 the related works are addressed.
Then in Sect. 3 the concept of gaze-based interaction in virtual reality environ-
ment is proposed. An example application is presented in Sect. 4. The proposed
system is evaluated in Sect. 5. Final conclusions and a summary are provided in
Sect. 6.
2 Related works
Most eye tracking solutions have been utilized for the analysis of eye movements in the advertising industry, in cognitive research, and in the analysis of individual patterns of behaviour [6-8]. Eye tracking systems are recognized tools for the analysis of the layout and operating efficiency of human-computer interaction systems [9]. They are now also regarded as an input modality for people with disabilities [10]. For some users who suffer from certain motor impairments, gaze interaction may be the only option available. The typical way of interaction in such systems assumes fixations and dwell times. The user is expected to look at a specific element of the interface for a predefined period called the dwell time, after which the system assumes a selection (equivalent to a mouse click). Such a solution is used for navigating through a graphical user interface or for eye-typing using an on-screen keyboard. Some innovations to this technique have been proposed. In [11] a cascading dwell gaze typing technique dynamically adjusts the dwell times of keys on an on-screen keyboard. Depending on the probability of a key being selected next during typing, some keys are made easier to select by decreasing their dwell times, while others are made harder to choose by increasing them. A completely different approach was presented in [10], where a dwell-free eye-typing technique has been proposed: users are expected to look at or near the desired letters without stopping to dwell.
A new solution replacing the traditional technique of fixations and dwell times has been proposed in [12], where the authors introduced gaze gestures. In contrast to the classical way of interacting with the eyesight, gaze gestures use eye motion and are insensitive to accuracy problems and immune to calibration shift [12]. This is possible because the gaze is not used directly for pointing and only information about relative eye movements is required. Gaze gestures have also been reported to be successful in interacting with games [13].
Apart from using gaze to control computer systems, other interesting applications can be found in the scientific literature. To accelerate ray tracing, in [14] fewer primary rays are generated in the peripheral regions of vision, while for the fovea region traced by the eye tracker the sampling frequency is maximised [14].
The above examples show the multitude of applications of eye tracking systems. The novelty now lies in installing these systems in virtual reality headsets, which opens up new possibilities for applications.
3 Gaze-driven ray-cast interface concept
The concept of using an eye tracker for steering in the virtual reality environment assumes the use of the eye focus point to interact with objects of the virtual scene. An overview of the system built upon this concept is presented in Fig. 1, and the process of interaction consists of the following steps:
– mapping the direction of the user's eye focus onto screen coordinates,
– generating a primary ray (ray casting using a sphere) from the coordinates of the user's eye focus direction,
– intersection analysis with scene objects,
– indication of the object pointed at by the sight,
– handling the event associated with the object,
– rendering a stereo pair for the virtual reality goggles.
Fig. 1. Scheme of the VR system with gaze-based interaction.
The main idea, then, is to generate a ray which takes into account the position, rotation and direction of the eye's focus on the virtual scene. During the initial mapping, coordinates are taken for the left and right eyes independently, and the final value of the focal point is the result of their averaging. In the case of intersection with a scene object, the appropriate procedures are executed.
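To make the ray generation step more concrete, the listing below gives a minimal Unity (C#) sketch of this idea. It is illustrative only: the GetLeftGazeDirection and GetRightGazeDirection placeholders stand for gaze vectors delivered by the eye-tracking plugin and do not correspond to the actual Pupil Labs or Gaze Interaction Engine API, and a plain Physics.Raycast is used for the intersection test (the sphere-based cast mentioned in the step list could be realized analogously with Physics.SphereCast).

using UnityEngine;

// Illustrative sketch only: gaze input is mocked; in the real system the
// directions would come from the eye tracker mounted in the VR headset.
public class GazeRaySketch : MonoBehaviour
{
    public Camera vrCamera;          // camera representing the user's head pose
    public float maxDistance = 50f;  // how far the gaze ray is traced
    public LineRenderer rayVisual;   // optional: draws the visible 3D ray

    void Update()
    {
        // Placeholder gaze directions in camera (head) space; assumed input.
        Vector3 leftGaze  = GetLeftGazeDirection();
        Vector3 rightGaze = GetRightGazeDirection();

        // Average the two eye directions into a single focal direction.
        Vector3 gazeLocal = (leftGaze + rightGaze).normalized;

        // Transform into world space, taking head position and rotation into account.
        Vector3 origin    = vrCamera.transform.position;
        Vector3 direction = vrCamera.transform.TransformDirection(gazeLocal);

        // Primary ray towards the focal point of sight.
        Ray gazeRay = new Ray(origin, direction);

        // Intersection analysis with scene objects.
        if (Physics.Raycast(gazeRay, out RaycastHit hit, maxDistance))
        {
            // Indicate the object pointed at by the sight and handle its event.
            hit.collider.SendMessage("OnGazeHit", SendMessageOptions.DontRequireReceiver);
        }

        // Render the visible ray so the user can see where they are looking.
        if (rayVisual != null)
        {
            rayVisual.SetPosition(0, origin);
            rayVisual.SetPosition(1, origin + direction * maxDistance);
        }
    }

    // Mocked gaze input; replace with data from the eye-tracking plugin.
    Vector3 GetLeftGazeDirection()  { return Vector3.forward; }
    Vector3 GetRightGazeDirection() { return Vector3.forward; }
}

Averaging the per-eye directions corresponds to the initial mapping described above; everything after the raycast belongs to the object-indication and event-handling steps.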
Figure 2 presents a diagram of the interaction process with a scene object using the eye focus direction. Four states can be distinguished for the object: no
interaction, beginning, continuing (during), and ending the interaction. The start of the interaction is crucial since it might be triggered with the eyesight alone (after a predefined dwell time) or with the use of a hand-operated controller.
Fig. 2. Diagram showing the interaction process with a scene object using eye focus
direction.
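As a rough, hedged illustration of the four states shown in Fig. 2, the following sketch (again Unity C#, with illustrative callback names such as OnGazeBegin, OnGazeStay, OnGazeSelect and OnGazeEnd that are not the actual Gaze Interaction Engine interface) tracks the object currently hit by the gaze ray and triggers a dwell-based selection after a predefined fixation time; a hand-operated controller could fire the same selection event instead.

using UnityEngine;

// Sketch of the per-object interaction states: no interaction, beginning,
// continuing (during) and ending interaction. Callback names are illustrative.
public class GazeDwellSketch : MonoBehaviour
{
    public float dwellTime = 0.6f;       // seconds of fixation required to trigger

    private GameObject currentTarget;    // object currently pointed at by the gaze
    private float gazeTimer;             // how long the gaze has rested on it
    private bool triggered;              // whether the dwell selection already fired

    // Called each frame with the object hit by the gaze ray (or null).
    public void ReportGazeTarget(GameObject target)
    {
        if (target != currentTarget)
        {
            // Ending interaction with the previous object.
            if (currentTarget != null)
                currentTarget.SendMessage("OnGazeEnd", SendMessageOptions.DontRequireReceiver);

            // Beginning interaction with the new object (or no interaction at all).
            currentTarget = target;
            gazeTimer = 0f;
            triggered = false;
            if (currentTarget != null)
                currentTarget.SendMessage("OnGazeBegin", SendMessageOptions.DontRequireReceiver);
            return;
        }

        if (currentTarget == null)
            return; // no interaction

        // Continuing (during) interaction: accumulate dwell time.
        gazeTimer += Time.deltaTime;
        currentTarget.SendMessage("OnGazeStay", SendMessageOptions.DontRequireReceiver);

        // Dwell-based selection, equivalent to a click; alternatively a
        // hand-operated controller button could trigger the same event.
        if (!triggered && gazeTimer >= dwellTime)
        {
            triggered = true;
            currentTarget.SendMessage("OnGazeSelect", SendMessageOptions.DontRequireReceiver);
        }
    }
}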
4 Implementation and Application
Based on the concept presented in Sect. 3, a sight-operated interaction system named Gaze Interaction Engine (marked with a red border in Fig. 1) has been developed for the virtual reality environment. The solution takes the form of a Unity Asset module for the Unity environment. It is hereby made available to the public and can be accessed through the project web page [15]. The developed gaze-based interaction system is designed for the HTC Vive virtual reality hardware and an eye tracker from Pupil Labs [5]. In our research the eye tracker has been set to receive a 640 x 480 pixel infrared eye image at 120 frames per second.
Our interface can be employed to create computer games and multimedia applications. A good example of using the tool was presented during GryfJam, an event devoted to game creation, held in Szczecin (Poland) on the 17th and 18th of May 2019. One of the authors of the paper, Patryk Piotrowski, with the help of Michał Chwesiuk, developed a simple game for virtual reality glasses operated only with the gaze. The game, named KurzoCITY, belongs to the genre of arcade games. The player's goal is to collect as many grains as possible on a farm under the pressure of competition from virtual poultry (see Fig. 3 for a game preview).
The eye tracker used in the helmet analyzes the movement of the eyeballs. For each frame a ray is generated from the player's eyes to the focal point.
Fig. 3. The use of the Gaze Interaction Engine in the KurzoCITY game: screen view (top) and stereo pair for virtual reality goggles (bottom).
A look at a grain allows it to be collected. Over time, the level of difficulty of the game increases by adding new opponents and raising the number of grains to be collected by the player. The game ends when the poultry collect a total of three seeds.
5 Evaluations
The game described in the preceding section, as already mentioned, was developed during the GryfJam event. Using the developed game and the event's participants, tests of the effectiveness of the Gaze Interaction Engine were conducted. We observed high playability, which indicates that the proposed gaze-steering mechanism is successful. Nevertheless, some problems and imperfections of the eye tracking system were noticed. Among over 30 participants across all our experiments, we found 2 who were not able to pass the calibration process at all. The greatest setup difficulty was fitting the helmet and adjusting the distance between the lenses, which ensures correct detection of the pupil. With a mismatched arrangement, the position of the pupil cannot be determined correctly; examples are presented in Fig. 4. The top left sample, for comparison purposes, contains a correct case: the eye is in the center, corneal reflections are visible, the center of the pupil is annotated with a red dot, and the pupil border is outlined in red.
The second problem encountered was decalibration of the eye tracker during use. Facial expressions may cause slight shifts of the entire headset and, in effect, render the eye tracker readings erroneous.
Fig. 4. Calibration problems: the appearance of the eye seen by the eye tracking system
mounted in the virtual reality helmet.
The problems described above can be classified as hardware related. To evaluate the accuracy of eye-based interaction, an additional experiment was conducted. We prepared a grid of 26 separate interactable buttons (divided into three rows, occupying approximately half of the field of view vertically and the full field of view horizontally). The goal of each participant was to press the highlighted button by focusing the eyes on it, with a dwell time equal to 600 ms and a visual progress indicator provided. We measured the time of pressing randomly highlighted buttons and the accuracy of the process itself. There were 17 participants (volunteers from among the students and employees of our university) who performed between 2 and 6 sessions each. There were 70 sessions in total, and each session consisted of pressing 23.6 buttons on average. The results are presented in graphical form in Fig. 5.
The averaged time of pressing a random button equals 1.79 seconds. It includes the 600 ms dwell time required for the interaction to take place. Precision seems to be more problematic here. We registered an averaged (over all participants and sessions) error rate of 5.51%. The error was calculated as the ratio of presses of an improper button (most often an adjacent one) to the total number of presses. These results indicate that interfaces composed of many components arranged close to each other may be problematic to operate using current eye tracking solutions for virtual reality helmets. However, when the number of interactive elements in the scene decreases and the size of these elements increases, the interaction becomes quite convenient. The proposed game is a good proof here: with a relatively small dwell time (set to 200 ms, compared to 600 ms in the button-pressing experiment), a very high level of interaction among participants was observed.
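For clarity, the reported error rate is simply a restatement of the definition given above as the fraction of erroneous presses,

\[ E = \frac{N_{\text{wrong}}}{N_{\text{total}}} \times 100\% , \]

where N_wrong is the number of presses of an improper button and N_total is the total number of presses; the averaged value of 5.51% thus corresponds to roughly one erroneous press per eighteen attempts.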
Fig. 5. Evaluation results: performance (top) and accuracy (bottom) of 17 participants.
6 Conclusion
The proposed interaction system for virtual reality environments enables the effective implementation of multimedia applications and games operated using the eyesight. A visible 3D ray is generated towards the focal point of sight to assist the user in the interaction process in the presence of free head movements. Eye control is faster than, for example, additional hand-operated controllers: with a controller, the motor reaction requires a stimulus and a nerve impulse to follow the visual observation, and these stages are eliminated here. Eye trackers mounted in VR headsets can significantly help people with disabilities, offering unusual possibilities, and for a wide range of users they can open new opportunities for interaction in human-computer interfaces and games.
References
1. Giorio, C., Fascinari, M.: Kinect in Motion – Audio and Visual Tracking by Example.
Packt Publishing, Birmingham (2013)
2. Nowosielski, A.: Evaluation of Touchless Typing Techniques with Hand Movement.
In: Burduk, R., et al. (eds) Proceedings of the 9th International Conference on
Computer Recognition Systems CORES 2015. AISC, vol. 403, pp. 441–449. Springer,
Cham (2016)
3. Nowosielski, A.: 3-Steps Keyboard: Reduced Interaction Interface for Touchless Typ-
ing with Head Movements. In: Kurzynski M, Wozniak M, Burduk R (eds.) Pro-
ceedings of the 10th International Conference on Computer Recognition Systems
CORES 2017. AISC, vol. 578, pp. 229–237. Springer, Cham (2018)
4. Mantiuk, R., Kowalik, M., Nowosielski, A., Bazyluk, B.: Do-It-Yourself Eye Tracker:
Low-Cost Pupil-Based Eye Tracker for Computer Graphics Applications. LNCS, vol.
7131, pp. 115–125 (2012)
5. Pupil Labs GmbH: Eye tracking for Virtual and Augmented Reality, https://pupil-labs.com/vr-ar/. Last accessed 15 Jun 2019
6. Wedel, M., Pieters, R.: A Review of Eye-Tracking Research in Marketing. In: Mal-
hotra N.K. (ed.) Review of Marketing Research (Review of Marketing Research,
Volume 4), Emerald Group Publishing Limited, pp. 123–147 (2008)
7. Berkovsky, S., Taib, R., Koprinska, I., Wang, E., Zeng, Y., Li, J., Kleitman, S.:
Detecting Personality Traits Using Eye-Tracking Data. Proceedings of the 2019 CHI
Conference on Human Factors in Computing Systems. CHI ’19. pp. 221:1–221:12.
ACM New York, NY, USA (2019)
8. Jankowski, J., Ziemba, P., Wątróbski, J., Kazienko, P.: Towards the Tradeoff Be-
tween Online Marketing Resources Exploitation and the User Experience with the
Use of Eye Tracking. In: Nguyen NT, Trawiński B, Fujita H, Hong TP (eds) In-
telligent Information and Database Systems. ACIIDS 2016. LNCS, vol. 9621, pp.
330–343. Springer, Berlin, Heidelberg (2016)
9. Jacob, R.J.K., Karn, K.S.: Commentary on Section 4 - Eye Tracking in Human-
Computer Interaction and Usability Research: Ready to Deliver the Promises. In:
Hyönä J, Radach R, Deubel H (eds) The Mind's Eye, pp. 573–605. North-Holland
(2003)
10. Kristensson, P.O., Vertanen, K.: The potential of dwell-free eye-typing for fast
assistive gaze communication. In: Spencer SN (Ed.) Proceedings of the Symposium
on Eye Tracking Research and Applications (ETRA ’12), pp. 241–244. ACM, New
York, NY, USA (2012)
11. Mott, M.E., Williams, S., Wobbrock, J.O., Morris, M.R.: Improving Dwell-Based
Gaze Typing with Dynamic, Cascading Dwell Times. In: Proceedings of the 2017
CHI Conference on Human Factors in Computing Systems (CHI ’17), pp. 2558–2570.
ACM, New York, NY, USA (2017)
12. Drewes, H., Schmidt, A.: Interacting with the Computer Using Gaze Gestures. In:
Baranauskas C., Palanque P., Abascal J., Barbosa S.D.J. (eds) Human-Computer
Interaction – INTERACT 2007. INTERACT 2007. LNCS, vol. 4663, pp. 475–488.
Springer, Berlin, Heidelberg (2007)
13. Istance, H., Hyrskykari, A., Immonen, L., Mansikkamaa, S., Vickers, S.: Designing
gaze gestures for gaming: an investigation of performance. Proceedings of the 2010
Symposium on Eye-Tracking Research & Applications (ETRA ’10), pp. 323–330.
ACM New York, NY, USA (2010)
14. Siekawa, A., Chwesiuk, M., Mantiuk, R., Piórkowski, R.: Foveated Ray Tracing for
VR Headsets. MultiMedia Modeling. LNCS, vol. 11295, pp. 106–117 (2019)
15. Piotrowski, P., Nowosielski, A. (2019) Gaze Interaction Engine (project page),
https://github.com/patryk191129/GazeInteractionEngine