Conference Paper

Eye Movement Tracking as a New Promising Modality for Human Computer Interaction


Abstract

It is well known that eye movement tracking may reveal information about human intentions. Therefore, it might seem easy to use gaze pointing as a replacement for traditional human-computer interaction modalities such as the mouse or trackball, especially now that affordable eye trackers are increasingly available. However, it turns out that gaze-contingent interfaces are often experienced as difficult and tedious by users. There are multiple reasons for these difficulties. First of all, eye tracking requires prior calibration, which is unnatural for users. Secondly, gaze-contingent interfaces suffer from the so-called Midas Touch problem, because it is difficult to detect the moment when a user wants to click a button or any other object on the screen. Eye pointing is also not as precise and accurate as, e.g., mouse pointing. The paper presents problems associated with gaze-contingent interfaces and compares the usage of gaze, mouse and touchpad during a very simple shooting game.



... Eye-tracking is especially helpful for people with limited ability to move who still retain the ability to control eye movement, allowing them to communicate with the surrounding environment [12,3]. It is not only more convenient for people with disabilities but also sometimes a more effective form of communication [13]. Virtual keyboards can also be applied for hygienic purposes, removing the need to touch screens on commonly used devices. ...
... Controlling computers with eyes is often based on the analysis of fixations [24,13] described by (x, y) coordinates of the user's gaze as well as their start time and duration. It creates different possibilities for the implementation of eye-computer interaction. ...
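Fixation-based control of this kind typically starts from a fixation-detection step. Below is a minimal sketch of the common dispersion-based (I-DT) approach, yielding exactly the (x, y) coordinates, start time and duration that the excerpt mentions; the thresholds and the sample format are illustrative assumptions, not taken from the cited works.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-based (I-DT) fixation detection sketch.

    samples: chronologically ordered (t, x, y) gaze samples.
    Returns fixations as (cx, cy, start_time, duration) tuples.
    Threshold values are illustrative, not from the cited papers.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the point cloud stays compact.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys),
                              samples[i][0], duration))
            i = j + 1
        else:
            i += 1
    return fixations
```

A stream of samples clustered around one screen point thus collapses into a single fixation record that an eye-computer interface can act on.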
... Three parameters for such an approach should be addressed: dwell time, object size, and content placement. A common issue when creating eye-controlled applications is the so-called Midas Touch Problem [13,2] when users accidentally activate an action in a computer program because they unconsciously focus on its element. In such a situation, smaller elements or bigger gaps between them can be considered. ...
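The dwell-time mitigation of the Midas Touch problem described above can be sketched as a small state machine: an on-screen element activates only after the gaze has rested inside its bounds for a configurable dwell time, so incidental glances do not trigger actions. The class name, geometry and default dwell value are illustrative assumptions.

```python
class DwellButton:
    """Sketch of dwell-time activation: the button fires only after the
    gaze stays inside its bounds for `dwell` seconds, which mitigates
    the Midas Touch problem. All values are illustrative."""

    def __init__(self, x, y, w, h, dwell=0.6):
        self.rect = (x, y, w, h)
        self.dwell = dwell
        self._enter_time = None

    def update(self, gaze_x, gaze_y, now):
        """Feed one gaze sample; returns True when the button activates."""
        x, y, w, h = self.rect
        inside = x <= gaze_x <= x + w and y <= gaze_y <= y + h
        if not inside:
            self._enter_time = None        # gaze left: reset the timer
            return False
        if self._enter_time is None:
            self._enter_time = now         # gaze entered: start timing
        if now - self._enter_time >= self.dwell:
            self._enter_time = None        # fire once, then re-arm
            return True
        return False
```

Larger elements or bigger gaps, as suggested in the excerpt, simply enlarge `rect` or the spacing between such buttons.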
... It measures the effectiveness, efficiency and satisfaction for users completing specific tasks (ISO 9241-11, 1998). Relevant studies used accuracy and response time as the measures of interaction accuracy and efficiency, respectively (Kasprowski et al., 2016;Kubíček et al., 2017;Ober, 1997;Paulus & Remijn, 2021). Posttest questionnaires are commonly adopted to measure user satisfaction (Creed et al., 2020;Riegler et al., 2020;Schmidbauer-Wolf & Guder, 2019). ...
... Comparative experiments of gaze-based interaction and other interaction modalities were conducted in previous studies. Traditional mouse-keyboards (Kasprowski et al., 2016; Murata, 2006), touch (Kasprowski et al., 2016) or controller-based interactions (Hou & Chen, 2021; Luro & Sundstedt, 2019; Pai et al., 2019) are commonly used for comparison in usability studies of gaze interactions. In recent studies, other emerging interactive technologies have also been used as comparisons, such as head- and hand-based interactions (Hansen et al., 2018; Kytö et al., 2018; Pfeuffer et al., 2017). ...
... In [14], Kasprowski and Niezabitowski replaced mouse input with ET input in a shooting game context. They measured longer aiming times due to the lower accuracy of the ET method. ...
... A typical FPS scenario consists of a player who moves around the virtual world, searches for a target, aims and shoots at the target. Similar to [14] we have implemented a moving target, but in a 3D environment, not a 2D one. The authors of [5] use a 3D environment shown on a HMD, but static targets. ...
... VR modalities consistently had lower values in pointing times. For both DB and VR modalities, the times by ET were the highest and the standard deviation the largest which is similar to the results in [14]. ...
... By tracking a user's eye movements, a computer system can respond in real time to the user's visual attention [12]. Eye tracking has become a promising new human-computer interaction modality [13]. With improvements in eye tracker portability, affordability and tracking accuracy (e.g., Tobii Eye Tracker 5,~EUR 259), eye trackers can be easily mounted on or embedded in personal computers (e.g., Lenovo Legion 9000 K, China, https://shop.lenovo.com.cn/, ...
Article
Full-text available
Raster maps provide intuitive visualizations of remote sensing data representing various phenomena on the Earth’s surface. Reading raster maps with intricate information requires a high cognitive workload, especially when it is necessary to identify and compare values between multiple layers. In traditional methods, users need to repeatedly move their mouse and switch their visual focus between the map content and legend to interpret various grid value meanings. Such methods are ineffective and may lead to the loss of visual context for users. In this research, we aim to explore the potential benefits and drawbacks of gaze-adaptive interactions when interpreting raster maps. We focus on the usability of the use of low-cost eye trackers on gaze-based interactions. We designed two gaze-adaptive methods, gaze fixed and gaze dynamic adaptations, for identifying and comparing raster values between multilayers. In both methods, the grid content of different layers is adaptively adjusted depending on the user’s visual focus. We then conducted a user experiment by comparing such adaptation methods with a mouse dynamic adaptation method and a traditional method. Thirty-one participants (n = 31) were asked to complete a series of single-layer identification and multilayer comparison tasks. The results indicated that although gaze interaction with adaptive legends confused participants in single-layer identification, it improved multilayer comparison efficiency and effectiveness. The gaze-adaptive approach was well received by the participants overall, but was also perceived to be distracting and insensitive. By analyzing the participants’ eye movement data, we found that different methods exhibited significant differences in visual behaviors. The results are helpful for gaze-driven adaptation research in (geo)visualization in the future.
... If eye-tracking games benefit students' learning-related perceptions in game-based education, it can be expected that introducing other similar interactive methods into teaching in the future will also be helpful. Eye movements often take precedence over other communication methods, and the use of eye interaction methods can help users concentrate more quickly (Kasprowski, Harezlak, & Niezabitowski, 2016). Compared with traditional operation methods, eye-tracking games are considered more operable and interactive, providing a better sense of immersion and changing the game experience (Smith & Graham, 2006). ...
... Eye-tracking games no longer use the mouse or keyboard as the input device for human-computer interaction in the game (Wankhede, Chhabria, & Dharaskar, 2013) but use real-time detection of the user's facial image to control the game character and issue commands. There are various ways to track eye movement for control and interaction, such as the electrooculography (EOG) method, which uses electrodes around the eyes (Duchowski, 2017); the infrared oculography (IROG) method, which uses infrared emitters and sensors (Ober, Hajda, Loska, & Jamicki, 1997); or the video oculography (VOG) method, which uses infrared cameras or infrared light sources (Kasprowski et al., 2016). This article mainly uses Python, OpenCV and the dlib package (a third-party library) to detect eye blinking. ...
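Blink detection of the kind the excerpt describes is commonly built on the eye aspect ratio (EAR), computed from the six eye landmarks that dlib-style facial landmark models provide. The following is a hedged, self-contained sketch of that idea; the landmark coordinates and thresholds are illustrative, and the cited article's exact OpenCV/dlib pipeline is not reproduced.

```python
from math import dist

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) over six dlib-style eye landmarks p1..p6
    (p1/p4 at the corners, p2/p3 on top, p6/p5 on the bottom).
    The ratio collapses toward 0 when the eye closes."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks: EAR below `threshold` for at least `min_frames`
    consecutive frames counts as one blink. Values are illustrative."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

In a full application, the per-frame EAR series would come from landmarks detected by dlib on camera frames captured with OpenCV.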
Article
This paper explores the learning effect of introducing eye-tracking games into college game design courses and tests the impact of innovatively introducing cutting-edge interactive content on students' learning in higher education in the field of digital media. The results show that stronger perceived interactivity is linked to stronger perceived user control and a more positive flow experience. Furthermore, because eye-tracking games bring higher perceived interactivity than traditional games, the improvement in flow experience after teaching also yields a clear improvement in students' learning interest and continuous learning intention. It is worth noting that this effect appears more robust in the sophomore group, while it is transient in the junior group; the difference may be related to students' learning ability. Three main factors for the positive influence of eye-tracking games on teaching were extracted: 'Technology Sensitive', 'Operational' and 'Adventures by Difference'. Three main factors for the negative effects were also found: 'Unattractive Game Types', 'Self-caused Resistance' and 'Game Maturity Deficiency'. The study explored how to amplify the help of eye-tracking games in the teaching process and reduce possible drawbacks based on the extracted constructs. Therefore, as an innovative, highly interactive teaching case, the adoption of eye-tracking game learning is indeed helpful for courses in the digital media major.
... The changes in the intensity of the brainwaves of test subjects recorded while browsing different media content were analyzed in [22]. Apart from BCI systems, there are other methods of human-computer interaction, such as eye movement tracking [23]. These systems can be used in the analysis of programming technologies such as LINQ [24], allowing, e.g., the assessment of cognitive load or of the readability of source code and algorithm description tools [25]. ...
... Alpha waves (8-13 Hz): these predominate when the Central Nervous System is at rest, relaxed but awake and attentive. Beta waves (13-30 Hz): these are associated with external cognitive tasks and activities related to concentration, such as solving a mathematical problem. ...
Article
Full-text available
Epilepsy is a chronic disease with a significant social impact, given that the patients and their families often live conditioned by the possibility of an epileptic seizure and its possible consequences, such as accidents, injuries, or even sudden unexplained death. In this context, ambulatory monitoring allows the collection of biomedical data about the patients’ health, thus gaining more knowledge about the physiological state and daily activities of each patient in a more personalized manner. For this reason, this article proposes a novel monitoring system composed of different sensors capable of synchronously recording electrocardiogram (ECG), photoplethysmogram (PPG), and ear electroencephalogram (EEG) signals and storing them for further processing and analysis in a microSD card. This system can be used in a static and/or ambulatory way, providing information about the health state through features extracted from the ear EEG signal and the calculation of the heart rate variability (HRV) and pulse travel time (PTT). The different applied processing techniques to improve the quality of these signals are described in this work. A novel algorithm used to compute HRV and PTT robustly and accurately in ambulatory settings is also described. The developed device has also been validated and compared with other commercial systems obtaining similar results. In this way, based on the quality of the obtained signals and the low variability of the computed parameters, even in ambulatory conditions, the developed device can potentially serve as a support tool for clinical decision-taking stages.
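The abstract above mentions computing heart rate variability (HRV) from the recorded signals. As an illustration only (the authors' robust ambulatory algorithm is not described here), one common time-domain HRV parameter, RMSSD, can be computed from a series of RR intervals:

```python
from math import sqrt

def rmssd(rr_ms):
    """RMSSD, a common time-domain HRV parameter: the root mean square
    of successive differences between RR intervals (in milliseconds).
    A sketch of one standard HRV feature, not the paper's algorithm."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

In practice the RR series would first be extracted from the ECG (or PPG) signal by beat detection and cleaned of artifacts, which is where most of the robustness work in ambulatory settings lies.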
... Therefore, our eyes move continuously to track objects of interest or to register all interesting elements in a scene. Estimating the current gaze position (the point at which a person is looking) reveals what attracted the person's attention [11]. ...
... However, this method has some severe drawbacks [11]. The electrodes record potential change from all sources including eye movements, swallowing, muscle twitches and ...
... However, users have to spend a longer time calibrating their gaze. Furthermore, this conventional calibration offers a poor user experience [12] and may cause visual fatigue [13]. During gaze-to-screen calibration, users are normally not allowed to blink and have to keep their heads steady. ...
... During gaze-to-screen calibration, users are normally not allowed to blink and have to keep their heads steady. Additionally, the calibration may have to be repeated several times due to inaccurate calibration results [10], [13]. ...
... On the other hand, a bigger key size makes the typing task more straightforward yet limits the typing area. Finding solutions for the aforementioned problems has attracted researchers for many years [3,6,7]. During this period, several methods for text writing have been developed, differing mainly in ways of typing letters and key layouts. ...
Article
A novel approach for eye-typing using an on-screen keyboard was proposed in the paper. It was compared with the two applied in the previous studies. All utilized the dwell-time selection for writing letters, set to 1000 ms. This value was chosen due to the planned group of participants. Among 14 engaged people, there were four over the age of 50 and one aged 47. The text utilized in the research was prepared in two languages – Polish and English. The obtained results showed that the language used had no influence on typing efficiency. Additionally, they revealed the new keyboard layout was more convenient than one of the two chosen for comparison and similar to the second. Such conclusions were based on the experiment duration and the number of errors made. They were confirmed by the users’ opinions.
... This study focuses exclusively on the selection of map features (i.e., selecting a point, polyline or polygon) via gaze. Using gaze to select objects may be more efficient than using either mouse or touch because eye movements are initially faster [12]. However, gaze-based selection encounters at least two difficulties [13,14]: low spatial accuracy and the Midas touch problem (Figure 1). ...
Article
Full-text available
The modes of interaction (e.g., mouse and touch) between maps and users affect the effectiveness and efficiency of transmitting cartographic information. Recent advances in eye tracking technology have made eye trackers lighter, cheaper and more accurate, broadening the potential to interact with maps via gaze. In this study, we focused exclusively on using gaze to choose map features (i.e., points, polylines and polygons) via the select operation, a fundamental action preceding other operations in map interactions. We adopted an approach based on the dwell time and buffer size to address the low spatial accuracy and Midas touch problem in gaze-based interactions and to determine the most suitable dwell time and buffer size for the gaze-based selection of map features. We conducted an experiment in which 38 participants completed a series of map feature selection tasks via gaze. We compared the participants’ performance (efficiency and accuracy) between different combinations of dwell times (200 ms, 600 ms and 1000 ms) and buffer sizes (point: 1°, 1.5°, and 2°; polyline: 0.5°, 0.7° and 1°). The results confirmed that a larger buffer size raised efficiency but reduced accuracy, whereas a longer dwell time lowered efficiency but enhanced accuracy. Specifically, we found that a 600 ms dwell time was more efficient in selecting map features than 200 ms and 1000 ms but was less accurate than 1000 ms. However, 600 ms was considered to be more appropriate than 1000 ms because a longer dwell time has a higher risk of causing visual fatigue. Therefore, 600 ms supports a better balance between accuracy and efficiency. Additionally, we found that buffer sizes of 1.5° and 0.7° were more efficient and more accurate than other sizes for selecting points and polylines, respectively. Our results provide important empirical evidence for choosing the most appropriate dwell times and buffer sizes for gaze-based map interactions.
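The dwell-time-plus-buffer selection mechanism studied above can be sketched as follows. The function signature, the pixel-based buffer and the feature dictionary are illustrative assumptions (the study expresses buffer sizes in visual degrees, which would be converted to pixels for a given screen and viewing distance); the recommended 600 ms dwell is used as the example value.

```python
from math import hypot

def select_feature(gaze_trace, features, buffer_px, dwell_ms, sample_ms):
    """Sketch of dwell-plus-buffer selection: a point feature is chosen
    once the gaze stays within its circular buffer for `dwell_ms`.

    gaze_trace: (x, y) gaze samples recorded every `sample_ms`.
    features:   {name: (x, y)} point features on the map.
    """
    needed = dwell_ms // sample_ms            # consecutive samples required
    counts = {name: 0 for name in features}
    for gx, gy in gaze_trace:
        for name, (fx, fy) in features.items():
            if hypot(gx - fx, gy - fy) <= buffer_px:
                counts[name] += 1
                if counts[name] >= needed:
                    return name
            else:
                counts[name] = 0              # gaze left the buffer: reset
    return None
```

The study's trade-off is visible directly in the parameters: enlarging `buffer_px` makes selection faster but less accurate, while lengthening `dwell_ms` does the opposite.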
... However, Kasprowski et al. conclude that touchpad or mouse input is still superior in interaction accuracy and reaction speed [10] for workloads that necessitate quick and precise interaction. A more widespread purpose of eye tracking in human-computer interaction projects is statistical analysis, in which users and their eye movements are observed while they fulfill given tasks, e.g., using a web site. ...
Chapter
Full-text available
Acquisition and consumption of visual media such as digital images and videos is becoming one of the most important forms of modern communication. However, since the creation and sharing of images is increasing exponentially, images as a media form are being devalued: the quality of a single image matters less and less, while the frequency of shared content becomes the focus. In this work, an interactive system is presented that allows users to interact with volatile, diverting artwork based on their eye movement alone. The system uses real-time image-abstraction techniques to create an artwork unique to each situation. It supports multiple distinct interaction modes, which share common design principles, enabling users to experience game-like interactions focused on eye movement and the diverting image content itself. This approach hints at possible future research in the field of relaxation exercises and casual art consumption and creation.
... Typically, the calibration process is carried out by displaying a series of visual targets that the user must gaze at while the related measurements are made. Two main problems are caused by the calibration process: (i) users are not accustomed to looking at the same point for a long time, and (ii) the process may induce visual fatigue [28]. Moreover, users are not allowed to blink and have to keep their heads steady to achieve accurate calibration results [27]. ...
... Thus, it may be possible to conduct a higher accuracy evaluation of the physiological mental state if the eye-blink types are automatically classified by the system. Hence, that approach can contribute significantly to analyzing usability and the user experience of various systems [15][16][17]. ...
... Facial expressions [12], head pose [13] and ocular dynamics [14] are validated markers of interest level in a number of human-computer interaction (HCI) systems such as multimedia streaming services, e-learning and website content platforms. Development of indices from face images is advantageous for the following reasons: ...
Conference Paper
The paper proposes three metrics for assessing interest levels of users in human-computer interaction (HCI) applications using facial features. The indices, named Expression Index (EI), Ocular Index (OI), and the Head Pose Index (HPI), are based respectively on the three most significant aspects of HCI, viz., facial expression, eyelid dynamics, and head pose. The proposed indices have been computed on a sequence of face images. The region of face and eyes have been localized using Convolutional Neural Networks (CNN) and the indices validated through statistical analysis.
... If not, one could assume that a well-established input form, such as a mouse, would be sufficient to achieve a similar or even better result. By doing so, we want to contribute to the research body of comparative studies in the field of gaze vs. mouse (e.g., [49][50][51][52][53][54][55]). Figure caption: commonalities and differences between CrossG and GazeG. Left image: in CrossG, players interact via a mouse (look around, pointing) and a keyboard (WASD movement, crouching); the guidance (i.e., vignette effect) is driven by the current crosshair position, which is pinned to the center of the screen. Right image: in GazeG, players move in the same way as in CrossG (i.e., mouse and keyboard), but the guidance is decoupled from the screen's center and driven by the player's gaze position. Note: the gaze point is shown for demonstration purposes only; no visual feedback on the current gaze position was provided. ...
Article
Full-text available
This paper investigates the effects of gaze-based player guidance on the perceived game experience, performance, and challenge in a first-person exploration game. In contrast to existing research, the proposed approach takes the game context into account by providing players not only with guidance but also granting them an engaging game experience with a focus on exploration. This is achieved by incorporating gaze-sensitive areas that indicate the location of relevant game objects. A comparative study was carried out to validate our concept and to examine if a game supported with a gaze guidance feature triggers a more immersive game experience in comparison to a crosshair guidance version and a solution without any guidance support. In general, our study findings reveal a more positive impact of the gaze-based guidance approach on the experience and performance in comparison to the other two conditions. However, subjects had a similar impression concerning the game challenge in all conditions.
... The second method, electrooculography (EOG), uses electrodes placed around the eyes, which measure electric potential changes during eye movement. These potential changes make it possible to infer the gaze position and to measure relative eye movement [6], [7]. ...
Conference Paper
Full-text available
Object of Interest (OoI) segmentation in video sequences is, due to its temporal and spatial complexity, still a difficult task. Thus, it cannot be done automatically, and interactive segmentation methods are inconvenient and time-consuming. To overcome these drawbacks and perform OoI segmentation in real time, this paper introduces a new approach for gaze-based OoI segmentation. The user fixates on the OoI while watching a video sequence. The user's gaze provides relevant information on the location of the OoI, which is then used to segment it. In comparison to present segmentation methods, our approach notably reduces the user interactions required to attain pixel-accurate segmentations. The first evaluation of our gaze-based OoI segmentation method shows promising results, close to, and sometimes even exceeding, the quality of the polygon-based ground-truth segmentation. In conclusion, these findings encourage further exploration of gaze-based OoI segmentation.
... both elements of their use contribute to the time required to complete various tasks and claim additional cognitive resources. These limitations are not critical, however: in a number of experimental studies in which various types of human-computer interaction were examined, gaze was not able to outperform the mouse and other traditional manual input devices (e.g., [10][11][12][13]). To our knowledge, faster or more accurate selection with gaze compared to the mouse has never been demonstrated for non-salient targets. ...
... There are plenty of potential applications of eye tracking: medicine [1,2], psychology [3], sociology [4], education [5], usability assessment [6] or advertisement analysis. There is also a growing number of applications using eye gaze as a new input modality [7]. With access to high quality cameras and devices that are able to process sophisticated image retrieval tasks, anybody can build an eye tracker. ...
Article
Full-text available
Recently, eye tracking has become a popular technique that may be used for a variety of applications, ranging from medical, through psychological and user-experience analysis, to interactive games. Video-based oculography (VOG) is the most popular technique because it is non-intrusive, can be used in users' natural environment and is relatively cheap, as it uses only conventional cameras. There are already well-established methods for eye detection in a camera capture. However, to be usable in gaze position estimation, this information must be associated with an area in the observed scene, which requires evaluating several parameters. These parameters are typically estimated during a process called calibration. The main purpose of the software described in this paper is to establish a common platform that is easy to use and may be applied in different calibration scenarios. Apart from normal regression-based calibration, the ETCAL library also allows the use of more sophisticated methods, such as automatic parameter optimization or automatic detection of gaze targets. The library is easily extendable and may be accessed via a convenient Web/REST interface.
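The regression-based calibration that ETCAL generalizes can be illustrated with a minimal per-axis linear fit, mapping raw eye-feature coordinates to known on-screen target positions collected while the user looks at calibration points. This is only a sketch of the principle; ETCAL's actual models (polynomial terms, parameter optimization, its Web/REST API) are richer.

```python
def fit_axis(raw, screen):
    """Closed-form least-squares fit screen = a * raw + b for one axis."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(screen) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(raw, screen))
    var = sum((x - mx) ** 2 for x in raw)
    a = cov / var
    return a, my - a * mx

def calibrate(raw_points, screen_points):
    """Minimal regression-based calibration sketch: fit the x and y axes
    independently and return a raw -> screen mapping function."""
    ax, bx = fit_axis([p[0] for p in raw_points],
                      [p[0] for p in screen_points])
    ay, by = fit_axis([p[1] for p in raw_points],
                      [p[1] for p in screen_points])
    return lambda rx, ry: (ax * rx + bx, ay * ry + by)
```

After fitting on a handful of calibration targets, the returned mapping converts every subsequent raw gaze sample into an estimated on-screen gaze position.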
... Various selection methods have been developed and tested: dwell-time, which is most common [1,5,21], blinking and winking [16,22], voluntary pupil size manipulation [4], and external triggers [23]. However, none is as quick and simple as pressing a button [9]. The second problem is precision: a cursor stands perfectly still indefinitely when its control device remains untouched, while the human eye is in constant motion. ...
Chapter
Gaze as a gaming input modality poses interaction challenges, not the least of which is the well-known Midas Touch problem, when neutral visual scanning leads to unintentional action. This is one of the most difficult problems to overcome. We propose and test a novel method of addressing the Midas Touch problem by using Gaussian-based velocity attenuation based on the distance between gaze and the player position. Gameplay controlled by the new control method was rated highest and the most engaging among those tested, and was rated similarly to the mouse or keyboard in terms of joy of use and ease of navigation. We also showed empirically that this method facilitated visual scanning, without harming game performance compared to other gaze control methods and traditional input modalities. The novel method of game control constitutes a promising solution to problems associated with gaze-controlled human computer interaction.
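One reading of the Gaussian-based velocity attenuation described above can be sketched as follows: the avatar's speed is scaled by a Gaussian of the distance between the gaze point and the player position, so gaze resting near the player moves it at full speed while distant visual scanning barely moves it, mitigating Midas Touch. The function name and the sigma value are illustrative assumptions, not the chapter's actual parameters.

```python
from math import exp, hypot

def attenuated_velocity(v_max, gaze, player, sigma=150.0):
    """Scale movement speed by a Gaussian of the gaze-to-player
    distance. `sigma` (in pixels) controls how quickly speed decays
    as the gaze moves away from the player; value is illustrative."""
    d = hypot(gaze[0] - player[0], gaze[1] - player[1])
    return v_max * exp(-(d * d) / (2.0 * sigma * sigma))
```

With this shaping, neutral scanning of far-away scenery produces near-zero velocity, while deliberate gazing close to the avatar steers it normally.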
Conference Paper
Abstract: Eye tracking has grown from an idea into a technology now investigated experimentally in HCI, where eye movements are used to determine gaze direction and user attention at any given instant. The human-computer interface (HCI) has become a significant area of research and development for people with disabilities. A portable, remote, eye-movement-controlled human-computer interface can be used in many applications (for example, communication aids and smart home applications) by people with motor paralysis who cannot speak. As many approaches have been proposed, this paper provides an extensive survey of the techniques developed over the years. Its objective is to survey eye-movement-based human-computer interaction papers that appeared in the literature in recent years and were not discussed in previous surveys, to sort them into major approaches, and to discuss their advantages and drawbacks. Open issues are examined, and potential directions for research in eye-movement-based HCI systems are proposed to give the reader a perspective on topics that merit consideration.
Article
EyeCompass is a novel free-eye drawing system enabling high-fidelity and efficient free-eye drawing through unimodal gaze control, addressing the bottlenecks of gaze-control drawing. EyeCompass helps people to draw using only their eyes, which is of value to people with motor disabilities. Currently, there is no effective gaze-control drawing application due to multiple challenges including involuntary eye movements, conflicts between visuomotor transformation and ocular observation, gaze trajectory control, and inherent eye-tracking errors. EyeCompass addresses this using two initial gaze-control drawing mechanisms: brush damping dynamics and the gaze-oriented method. The user experiments compare the existing gaze-control drawing method and EyeCompass, showing significant improvements in the drawing performance of the mechanisms concerned. The field study conducted with motor-disabled people produced various creative graphics and indicates good usability of the system. Our studies indicate that EyeCompass is a high-fidelity, accurate, feasible free-eye drawing method for creating artistic works via unimodal gaze control.
Chapter
Eye tracking systems are crucial means by which people with motor disabilities can interact with computers. Previous research in this field has identified various accessibility issues affecting eye tracking technologies and applications. However, there is limited research into first-hand user experiences among individuals with motor disabilities. This study aims to examine the actual challenges with eye tracking systems and gaze interaction faced by people with motor disabilities. A survey was conducted among people with motor disabilities who used eye trackers for computer interaction. It reveals the current issues from their first-hand experiences in three areas: the eye tracking program, gaze interaction, and accessible applications. A knowledge graph arising from the survey delineates the connections among the eye tracking usability issues. The survey's results also indicate practical strategies for future improvements in eye trackers. Keywords: Eye tracking, Gaze interaction, Challenges, User experiences
Article
The paper focuses on utilizing eye movements for controlling computer programs. Four different approaches were considered, including gazing at an element, blinking, and two solutions for eye gestures. The methods were implemented in a simple photo viewer. However, the developed functionalities can be utilized in other applications (switching e-book pages or on-screen keyboard elements). Various sizes for gazed components were verified, as well as the time required for performing particular actions, the number of unwanted element choices, and needed gesture repetitions. The methods were also assessed by the participants of the experiments. Based on the obtained results, it may be reasoned that components of size 200px are convenient for the application control. The users’ opinions and the experiments’ outcomes revealed that the gaze-based method and gestures based on joining points are satisfactory solutions.
Article
Eye movement information provides an alternative way for patients with muscular and neurological disorders to communicate with other people or devices. Electrooculography (EOG) is an effective eye movement recording method that has been widely applied to design human–computer interaction systems. In this paper, we propose a simplified Chinese eye-writing system based on EOG (EsCew) that identifies the basic strokes of Chinese characters. Specifically, we first use a bandpass digital filter to preprocess the raw EOG signals and suppress noise interference. Then, we determine the effective eye movement segments corresponding to different strokes by detecting blink signals with a sliding-window technique. On this basis, we establish basic stroke templates based on the handwriting characteristics of Chinese characters. To reduce the computational complexity, the DTW algorithm is adopted to classify the EOG segments. Finally, we match the stroke sequence with the encoded Chinese characters to obtain the final recognition results. In a lab environment, recognition experiments were performed on the 10 most representative Chinese characters. The average accuracies for the basic strokes and the Chinese characters are 93.998% and 94.52%, respectively. The experimental results validate the feasibility of the proposed EsCew system.
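The template-matching step described in this abstract can be sketched as follows. This is a minimal illustration of DTW-based stroke classification with hypothetical one-dimensional stroke templates; it does not reproduce the paper's EOG preprocessing, blink detection, or actual templates.

```python
# Minimal sketch of DTW-based stroke classification. The stroke templates
# below are illustrative stand-ins, not the EsCew system's actual templates.

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping steps.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify_stroke(segment, templates):
    """Return the label of the template with the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(segment, templates[label]))
```

A noisy segment such as `[0, 0.9, 1.1, 0]` would then be matched against each template and labelled by nearest DTW distance.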
Book
Full-text available
The Institute of Automatic Control was founded on October 1, 1977, as a result of the fusion of several groups at the Faculty of Automatic Control at the Silesian University of Technology. Currently the Institute of Automatic Control is one of the three institutes constituting the Faculty of Automatic Control, Electronics and Computer Science. The Institute's members are involved in teaching more than 1000 students from several study specialisations. The general research directions of the Institute mainly concern automatic control and robotics, metrology, modelling, analysis of signals and systems, as well as biotechnology and bio-cybernetics, taking into account both theoretical and practical aspects.
Conference Paper
Nowadays, Advanced Driver Assistance Systems (ADAS) support drivers of vehicles in emergency situations connected with vehicular traffic. They help to save people’s lives and minimise losses in accidents. ADAS use information provided by a variety of sensors that are responsible for tracking the vehicle’s surroundings. Unfortunately, the range of the sensors is limited to several dozen metres, and even less in the case of obstacles. This shortens the time for a reaction, and therefore there may not be enough time to avoid an accident. To overcome this drawback, vehicles have to share the information that is available in ADAS. The authors investigated different vehicle-to-vehicle communication possibilities. Based on an analysis of the state of the art, the authors present an original concept focused on applying the OPC UA (IEC 62541) communication protocol for services that correspond to the Internet of Vehicles concept.
Article
Full-text available
Gaze, as a sole input modality, must support complex navigation and selection tasks. Gaze interaction combines specific eye movements and graphic display objects (GDOs). This paper suggests a unifying taxonomy of gaze interaction principles. The taxonomy deals with three types of eye movements: fixations, saccades and smooth pursuits, and three types of GDOs: static, dynamic, or absent. This taxonomy is qualified through related research and is the first main contribution of this paper. The second part of the paper offers an experimental exploration of single stroke gaze gestures (SSGG). The main findings suggest (1) that different lengths of SSGG can be used for interaction, (2) that GDOs are not necessary for successful completion, and (3) that SSGG are comparable to dwell time selection.
Conference Paper
Full-text available
Eye movement related artifacts are the most significant source of noise in EEG signals; thus, a special approach to reducing their influence is required. However, most currently used methods of detecting and filtering eye movement related artifacts either require an additional recording of the noise signal or are not suitable for real-time applications such as Brain-Computer Interfaces. This paper shows that it is possible to detect and filter those artifacts in real time, without the need for an additional recording of the noise signal.
Conference Paper
Full-text available
Eye movement may be regarded as a new promising modality for human computer interfaces. With the growing popularity of cheap and easy to use eye trackers, gaze data may become a popular way to enter information and to control computer interfaces. However, a properly working gaze-contingent interface requires intelligent methods for processing the data obtained from an eye tracker. They should reflect users' intentions regardless of the quality of the signal obtained from the eye tracker. The paper presents the results of an experiment in which algorithms processing eye movement data while a 4-digit PIN was entered with the eyes were checked for both calibrated and non-calibrated users.
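A basic building block of such gaze-data processing is turning raw samples into fixations. The sketch below shows a standard dispersion-based detector (I-DT-style); the window and dispersion thresholds are illustrative assumptions, not values from the paper.

```python
# Sketch of dispersion-based fixation detection: a fixation is a run of
# gaze samples whose bounding-box dispersion stays under a threshold.
# max_disp (pixels) and min_samples are illustrative assumptions.

def _dispersion(window):
    """Sum of horizontal and vertical extents of a sample window."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_disp=30, min_samples=5):
    """samples: list of (x, y). Returns (start_index, length) pairs."""
    fixations, i, n = [], 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if _dispersion(samples[i:j]) <= max_disp:
            # Grow the window while the gaze stays compact.
            while j < n and _dispersion(samples[i:j + 1]) <= max_disp:
                j += 1
            fixations.append((i, j - i))
            i = j
        else:
            i += 1          # no fixation starts here; slide the window
    return fixations
```

Two stable gaze clusters in the input yield two fixations; a continuously moving gaze yields none.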
Conference Paper
Full-text available
PINs are one of the most popular methods of performing simple and fast user authentication. PIN stands for Personal Identification Number, which may consist of any number of digits or even letters. Nevertheless, the 4-digit PIN is the most common and is used, for instance, in ATMs or cellular phones. The main advantage of the PIN is that it is easy to remember and fast to enter. There are, however, some drawbacks. One of them – addressed in this paper – is the possibility of stealing the PIN by a technique called 'shoulder surfing'. To avoid such problems, a novel method of PIN entry was proposed. Instead of using a numerical keyboard, the PIN may be entered by eye gazes, which is a hands-free, easy and robust technique.
Conference Paper
Full-text available
Eye gaze tracking provides a natural and fast method of interacting with computers. Many click alternatives have been proposed so far, each with their own merits and drawbacks. We focus on the most natural selection method, i.e. the dwell, with which a user can select an on-screen object by just gazing at it for a pre-defined dwell time. We have looked at three design parameters of the dwell click alternative, namely dwell time, button size and placement of content. Two experiments, with similar user interfaces, were designed and conducted with 21 and 15 participants, respectively. Different combinations of dwell times and button sizes were tested in each experiment for each participant. One experiment had content placed on the buttons to be gazed at, while the other had content placed above the buttons. One important finding is that moving the content outside the clickable areas avoids accidental clicking, i.e. the Midas Touch problem. In such a design, a combination of big buttons and short dwell times are most suited for maximizing accuracy and ease of use, due to a phenomenon identified as the 'gaze-hold' problem.
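The dwell mechanism studied above can be sketched in a few lines. This is a minimal illustration, assuming a rectangular button and timestamped gaze samples; the dwell time and geometry are hypothetical, not the experiments' actual parameters.

```python
# Sketch of dwell-based selection: a click fires only after the gaze has
# stayed inside a button's rectangle for dwell_ms milliseconds. Leaving the
# rectangle resets the timer, which is what makes short dwell times prone
# to the Midas Touch problem. Coordinates are screen pixels (y grows down).

def dwell_select(samples, button, dwell_ms):
    """samples: list of (t_ms, x, y); button: (x, y, w, h).
    Returns the timestamp at which the dwell click fires, or None."""
    bx, by, bw, bh = button
    enter_t = None
    for t, x, y in samples:
        inside = bx <= x < bx + bw and by <= y < by + bh
        if inside:
            if enter_t is None:
                enter_t = t                 # gaze entered the button
            elif t - enter_t >= dwell_ms:
                return t                    # dwell threshold reached: click
        else:
            enter_t = None                  # leaving the button resets the timer
    return None
```

With samples every 10 ms fixed inside the button, a 500 ms dwell fires at t = 500; gaze that never enters the rectangle never clicks.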
Article
Full-text available
It is still unknown whether the very application of gaze for interaction has effects on cognitive strategies users employ and how these effects materialize. We conducted a between-subject experiment in which thirty-six participants interacted with a computerized problem-solving game using one of three interaction modalities: dwell-time, gaze-augmented interaction, and the conventional mouse. We observed how using each of the modalities affected performance, problem solving strategies, and user experience. Users with gaze-augmented interaction outperformed the other groups on several problem-solving measures, committed fewer errors, were more immersed, and had a better user experience. The results give insights to the cognitive processes during interaction using gaze and have implications on the design of eye-tracking interfaces.
Article
Full-text available
Knowledge-based authentication (e.g. passwords) has long been associated with a vulnerability to shoulder surfing; being stolen by attackers overlooking the interaction. In order to combat such threats, steps can be taken to either alter the form of the challenge made to the user, or make use of an interaction technique that is resistant to information leakage. We consider the latter, and empirically evaluate the
Conference Paper
Full-text available
This paper introduces the concept of using gaze as a sole modality for fully controlling player characters of fast-paced action computer games. A user experiment is devised to collect gaze and gameplay data from subjects playing a version of the popular Super Mario Bros platform game. The initial analysis shows that there is a rather limited grid around Mario where the efficient player focuses her attention the most while playing the game. The useful grid, as we name it, projects the amount of meaningful visual information a designer should use towards creating successful player character controllers with the use of artificial intelligence for a platform game like Super Mario. Information about the eyes' position on the screen and the state of the game is utilized as input to an artificial neural network, which is trained to approximate which keyboard action is to be performed at each game step. Results yield a prediction accuracy of over 83% on unseen data samples and show promise towards the development of eye-controlled fast-paced platform games. The derived neural network players are intended to be used as assistive technology tools for the digital entertainment of people with motor disabilities.
Conference Paper
Full-text available
Personal identification numbers (PINs) are one of the most common means of electronic authentication these days and are used in a wide variety of applications, especially in ATMs (cash machines). Criminals use no small number of tricks to spy on these numbers and gain access to the owners' valuables. Simply looking over the victims' shoulders to get possession of their PINs is a common one. This effortless but effective trick is known as shoulder surfing. Thus, a less observable PIN entry method is desirable. In this work, we evaluate three different eye gaze interaction methods for PIN entry, all resistant to these common attacks and thus providing enhanced security. Besides the classical eye input methods we also investigate a new approach of gaze gestures and compare it to the well-known classical gaze interactions. The evaluation considers both security and usability aspects. Finally we discuss possible enhancements for gaze gestures towards pattern-based identification instead of number sequences.
Conference Paper
Full-text available
This paper investigates novel ways to direct computers by eye gaze. Instead of using fixations and dwell times, this work focuses on eye motion, in particular gaze gestures. Gaze gestures are insensitive to accuracy problems and immune against calibration shift. A user study indicates that users are able to perform complex gaze gestures intentionally and investigates which gestures occur unintentionally during normal interaction with the computer. Further experiments show how gaze gestures can be integrated into working with standard desktop applications and controlling media devices.
Conference Paper
Full-text available
We present a study that explores the use of a commercially available eye tracker as a control device for video games. We examine its use across multiple gaming genres and present games that utilize the eye tracker in a variety of ways. First, we describe a first-person shooter that uses the eyes to control orientation. Second, we study the use of eye movements for more natural interaction with characters in a role playing game. And lastly, we examine the use of eye tracking as a means to control a modified version of the classic action/arcade game Missile Command. Our results indicate that the use of an eye tracker can increase the immersion of a video game and can significantly alter the gameplay experience.
Conference Paper
Full-text available
Eye typing provides a means of communication for severely handicapped people, even those who are only capable of moving their eyes. This paper considers the features, functionality and methods used in the eye typing systems developed in the last twenty years. Primarily concerned with text production, the paper also addresses other communication related issues, among them customization and voice output.
Conference Paper
Full-text available
To enable people with motor impairments to use gaze control to play online games and take part in virtual communities, new interaction techniques are needed that overcome the limitations of dwell clicking on icons in the games interface. We have investigated gaze gestures as a means of achieving this. We report the results of an experiment with 24 participants that examined performance differences between different gestures. We were able to predict the effect on performance of the numbers of legs in the gesture and the primary direction of eye movement in a gesture. We also report the outcomes of user trials in which 12 experienced gamers used the gaze gesture interface to play World of Warcraft. All participants were able to move around and engage other characters in fighting episodes successfully. Gestures were good for issuing specific commands such as spell casting, and less good for continuous control of movement compared with other gaze interaction techniques we have developed.
Conference Paper
Robot manipulator teaching is a time-consuming procedure in which a qualified operator programs the execution path. In this paper we introduce and discuss an improvement of the traditional teaching method through the application of a hand gesture recognition system. The paper presents the most common robot programming and hand gesture recognition issues and discusses the possibility of joining these two research fields.
Conference Paper
Eye gaze is a compelling interaction modality but requires user calibration before interaction can commence. State of the art procedures require the user to fixate on a succession of calibration markers, a task that is often experienced as difficult and tedious. We present pursuit calibration, a novel approach that, unlike existing methods, is able to detect the user's attention to a calibration target. This is achieved by using moving targets, and correlation of eye movement and target trajectory, implicitly exploiting smooth pursuit eye movement. Data for calibration is then only sampled when the user is attending to the target. Because of its ability to detect user attention, pursuit calibration can be performed implicitly, which enables more flexible designs of the calibration task. We demonstrate this in application examples and user studies, and show that pursuit calibration is tolerant to interruption, can blend naturally with applications and is able to calibrate users without their awareness.
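The attention test at the heart of pursuit calibration can be sketched as a trajectory correlation: the user is assumed to be following the moving target when the eye trace correlates strongly with the target path. The sketch below uses a Pearson correlation on one axis; the threshold is an illustrative assumption, not a value from the paper.

```python
# Sketch of the pursuit-calibration attention test: sample only when the
# eye trajectory correlates with the moving target's trajectory. Pearson
# correlation is offset-invariant, so an uncalibrated (shifted) eye trace
# still correlates with the target it follows. The 0.9 threshold is an
# illustrative assumption.

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def following_target(eye_x, target_x, threshold=0.9):
    """True when the horizontal eye trace tracks the target's path."""
    return pearson(eye_x, target_x) > threshold
```

An eye trace that shadows the target with a constant calibration offset still passes the test, while an unrelated or opposing movement does not; in practice the same check would be applied per axis over a sliding window.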
Article
Holmqvist, K., Nyström, N., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (Eds.) (2011). Eye tracking: a comprehensive guide to methods and measures, Oxford, UK: Oxford University Press.
Article
The OBER 2 is an infrared-light eye movement measuring system that works with IBM PC compatible computers. As one of the safest systems for measuring eye movement, it uses a very short infrared flash time (80 microseconds per measurement point). The system has an advanced analog-digital controller, which includes background suppression and prediction mechanisms that eliminate slow changes and fluctuations of external illumination at frequencies up to 100 Hz, with an effectiveness better than 40 dB. Setting the active measurement axis and the sampling rate (25–4000 Hz) from the PC, and starting and stopping the measurement, make it possible to control the measurement environment in real time. By proper control of the gain it is possible to obtain high temporal and positional resolution of 0.5 minute of arc even for large eye movement amplitudes (±20 degrees of visual angle). The whole communication system can also be driven directly by eye movement in real time. The possibility of automatically selecting the most essential elements of eye movement – those individual for each person and those that occur for everyone in particular situations of life, independently of personal features – is a key to practical application. Hence one of the conducted research topics is personal identification based on personal features. Another is a research project on falling-asleep detection, which can be applied to warn drivers before they fall asleep while driving. This measuring system, together with a proper expert system, can also be used to detect dyslexia and other disabilities of the optic system.
Book
Despite the availability of cheap, fast, accurate and usable eye trackers, there is still little information available on how to develop, implement and use these systems. This second edition of Andrew Duchowski's successful guide to these systems contains significant additional material on the topic and fills this gap in the market with this accessible and comprehensive introduction. Opening with useful background information, including an introduction to the human visual system and key issues in visual perception and eye movement, the second part surveys eye-tracking devices and provides a detailed introduction to the technical requirements necessary for installing a system and developing an application program. The book focuses on video-based, corneal-reflection eye trackers - the most widely available and affordable type of system, before closing with a look at a number of interesting and challenging applications in human factors, collaborative systems, virtual reality, marketing and advertising.