Article

An Eye-Gaze Tracking and Human Computer Interface System for People with ALS and Other Locked-in Diseases


Abstract

Eye tracking is one of the most natural ways for people with amyotrophic lateral sclerosis and other locked-in and paralysis diseases to communicate. The majority of existing eye-tracking computer input systems use cameras to capture images of the user's eyes to track pupil movements. Most camera-based systems are expensive and not user-friendly. This paper proposes an eye-tracking system called EyeLive that uses discrete infrared sensors and emitters as the input device. The eye-tracking and calibration algorithms classify eye positions into six categories, namely looking up, looking down, looking left, looking right, looking straight ahead (i.e. middle direction), and eyes closed. A graphical user interface optimized for EyeLive is also developed. It divides the screen into a nine-cell grid and uses a hierarchical selection approach for text input. EyeLive's hardware, eye-tracking algorithm, calibration, and user interface are compared to those of existing eye-tracking systems. The advantages of the EyeLive system, such as low cost, user friendliness, and eye strain reduction, are discussed. The performance of the system in classifying eye positions is experimentally tested with eight healthy individuals. The results show a 5.6% error rate in classifying eye positions in five directions and a 0% error rate in classifying closed eyes. Additional experiments show that the average typing speed is 1.95 words/min for a novice user and 2.91 words/min for an experienced user. The tradeoffs of lower typing speed against these advantages, compared to other systems, are explained.
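The hierarchical selection scheme in the abstract lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of the idea: the alphabet is split across the five detectable gaze directions, and each classified gaze narrows the candidate set until one character remains. The grouping rule, the names, and the role of closed eyes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of EyeLive-style hierarchical text entry. Letter groups
# are laid out on the grid; each detected gaze direction picks one group,
# narrowing the candidates until a single character remains.

DIRECTIONS = ["up", "down", "left", "right", "middle"]

def split_into_groups(chars, n=5):
    """Split a candidate string into up to n roughly equal groups."""
    size = -(-len(chars) // n)  # ceiling division
    return [chars[i:i + size] for i in range(0, len(chars), size)]

def select_character(read_gaze, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ_"):
    """Narrow the alphabet to one character via successive gaze selections.

    read_gaze: callable returning one of DIRECTIONS or "closed"
    (e.g. the debounced output of the six-way infrared classifier).
    """
    candidates = alphabet
    while len(candidates) > 1:
        groups = split_into_groups(candidates)
        gaze = read_gaze()
        if gaze == "closed":      # assumption: closed eyes act as cancel/undo
            candidates = alphabet
            continue
        index = DIRECTIONS.index(gaze)
        if index < len(groups):   # ignore directions with no group on screen
            candidates = groups[index]
    return candidates

# Example: selecting "H" takes two or three gazes, depending on how the
# groups land on the grid.
```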


... But differently abled and elderly users, who have little or no control over their limbs, need specialised interaction means. For example, eye-controlled systems are designed for patients suffering from Amyotrophic Lateral Sclerosis (ALS) and other motor neuron diseases in which eye movements and eye blinks remain intact [11,12]. This paper presents a review of object acquisition and selection techniques used in specialized HCI systems and the related issues. ...
... The term object acquisition refers to the process of moving the cursor/focus onto the object of interest in an HCI system by using suitable input means. The objects can take forms such as icons [13], buttons [11], hyperlinks [14], pictures [15], or dialogue boxes [16]. Object acquisition, or bringing the cursor/focus over objects in a human-computer interface, is generally performed using gaze tracking [17,18], tongue movement [19], facial feature tracking [4], scanning methods [20], menus [21], and hybrid approaches [22]. ...
... This approach also provides the feature of word prediction/completion to increase typing speed. Therefore, to enhance eye-typing speed, hierarchical menus [11] and word prediction [30] techniques are applied in interaction systems. The EOG-based system is more efficient than the P300-based BCI system in terms of speed, applicability, accuracy, and cost. ...
... Invasive BCIs range from 5.4-69 bits/min (Brunner, Ritaccio, Emrich, Bischof, & Schalk, 2011; Guenther & Brumberg, 2011; Hill et al., 2006; Simeral, Kim, Black, Donoghue, & Hochberg, 2011), whereas non-invasive BCIs can range from 1.8-24 bits/min (Nijboer et al., 2008; Sellers, Krusienski, McFarland, Vaughan, & Wolpaw, 2006; Wolpaw et al., 2002). Eye tracking can produce ITRs in the range of 60-222 bits/min (Frey et al., 1990; Higginbotham et al., 2007; Liu et al., 2012; Majaranta et al., 2006; MacKenzie, Aula, & Räihä, 2006), and ITRs produced with head tracking devices range from 78-174 bits/min (Epstein, Missimer, & Betke, 2014; Williams & Kirsch, 2008). Mechanical switches can produce ITRs in the range of 96-198 bits/min. ...
... Mechanical switches can produce ITRs in the range of 96-198 bits/min. While the ITRs produced by the sEMG cursor are well within these ranges (111 bits/min), future directions include incorporating predictive methods, which have been shown to increase ITRs by as much as 100% (Liu et al., 2012). ...
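Since the snippets above compare systems by information transfer rate, it may help to show how ITR in bits/min is commonly computed. The Wolpaw formula below is the standard choice for selection-based interfaces, though the cited studies may each compute ITR somewhat differently; the function name and example numbers are illustrative.

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Information transfer rate in bits/min (Wolpaw et al. formulation).

    n_targets: number of selectable targets N
    accuracy: probability P of a correct selection (0 < P <= 1)
    selections_per_min: selection rate
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0 < p < 1:  # the correction term is undefined at P = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Example: 26 targets at 90% accuracy and 20 selections/min
# gives roughly 75 bits/min.
print(round(wolpaw_itr(26, 0.9, 20), 1))
```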
Article
Full-text available
Many individuals with minimal movement capabilities use AAC to communicate. These individuals require both an interface with which to construct a message (e.g., a grid of letters) and an input modality with which to select targets. This study evaluated the interaction of two such systems: (a) an input modality using surface electromyography (sEMG) of spared facial musculature, and (b) an onscreen interface from which users select phonemic targets. These systems were evaluated in two experiments: (a) participants without motor impairments used the systems during a series of eight training sessions, and (b) one individual who uses AAC used the systems for two sessions. Both the phonemic interface and the electromyographic cursor show promise for future AAC applications.
... ITR ranges in bits/min by input modality:
Eye-tracking (+ predictive methods): 60-222 [14][15][16][17]
Mechanical switch (+ predictive methods): 96-198 [15]
Head tracking and orientation devices: 78 [9]
sEMG systems (continuous muscle control): 5.4-51 [9][10][11][12]
Invasive BCIs: 5.4-69 [18,19]
Non-invasive BCIs: 1.8-24 [19][20][21][22]
sEMG systems do not require any particular lighting, and a user can be in any position as long as they can clearly see the computer screen. The systems described in this paper use visual feedback and therefore also require some intact vision, but alternate systems could be adapted for users with visual impairments [23]. ...
... Other changes could maximize the ITRs from these systems, including training protocols and predictive direction and text models. Users in both groups had higher ITRs in the final trials as compared to the entire session (Fig. 2), and other studies show that training increased their users' ITRs by nearly 50% [16]. Additionally, adding a language prediction model could improve the final ITRs by as much as 100% [24,25]. ...
Article
Full-text available
Over 50% of the 273,000 individuals with spinal cord injuries in the US have cervical injuries and are therefore unable to operate a keyboard and mouse with their hands. In this experiment, we compared two systems using surface electromyography (sEMG) recorded from facial muscles to control an onscreen keyboard. Both systems used five sEMG sensors to capture muscle activity during five distinct facial gestures that then mapped to five cursor commands: move left, move right, move up, move down, and click. One system used a discrete movement and feedback algorithm, in which the user would make one quick facial gesture, causing a corresponding discrete movement to an adjacent button. The other system was continuously updated and allowed the user to move smoothly in any direction (360°). Information transfer rates (ITRs) in bits per minute were high for both systems. Users of the continuous system showed significantly higher ITRs (average of 68.5; p < 0.02) compared to users of the discrete system (average of 54.3 bits/min).
... [12][13][14] The two main communication systems in use clinically for this population are eye-tracking and mechanical switch systems. Eye-trackers that include predictive methods have achieved typing speeds of 12 to 35 letters/minute [7,[22][23][24], and mechanical switch systems with prediction have achieved typing speeds of 28 letters/minute [27]. With training, users of the sEMG systems in this study showed ITRs in the upper range of ITRs achieved with eye-tracking and mechanical switch systems that include prediction. ...
... Future study, especially in disordered populations, should include additional training sessions in order to fully benchmark the potential performance with these systems. Furthermore, addition of predictive methods such as those employed by Frey [22] and Koester and Levine [30] could reduce the level of effort required by users and could additionally improve ITRs by as much as 100% [23]. ...
Article
Full-text available
Individuals with high spinal cord injuries are unable to operate a keyboard and mouse with their hands. In this experiment, we compared two systems using surface electromyography (sEMG) recorded from facial muscles to control an onscreen keyboard to type five-letter words. Both systems used five sEMG sensors to capture muscle activity during five distinct facial gestures that were mapped to five cursor commands: move left, move right, move up, move down, and "click". One system used a discrete movement and feedback algorithm in which the user produced one quick facial gesture, causing a corresponding discrete movement to an adjacent letter. The other system was continuously updated and allowed the user to control the cursor's velocity by relative activation between different sEMG channels. Participants were trained on one system for four sessions on consecutive days, followed by one crossover session on the untrained system. Information transfer rates (ITRs) were high for both systems compared to other potential input modalities, both initially and with training (Session 1: 62.1 bits/min, Session 4: 105.1 bits/min). Users of the continuous system showed significantly higher ITRs than the discrete users. Future development will focus on improvements to both systems, which may offer differential advantages for users with various motor impairments.
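The contrast between the two control schemes described above can be sketched in a few lines. The hypothetical Python fragment below shows the essential difference: a discrete gesture maps to a one-cell move, while continuous control converts relative channel activations into a velocity. Gesture classification from sEMG features is assumed to happen upstream; all names and gains are illustrative.

```python
STEP = 1  # grid cells per discrete gesture

def discrete_update(pos, gesture):
    """One quick gesture -> one discrete move to an adjacent button."""
    dx = {"left": -STEP, "right": STEP}.get(gesture, 0)
    dy = {"up": -STEP, "down": STEP}.get(gesture, 0)
    return pos[0] + dx, pos[1] + dy

def continuous_update(pos, activations, gain=5.0, dt=0.02):
    """Relative activation between sEMG channels sets a 360-degree velocity."""
    vx = gain * (activations["right"] - activations["left"])
    vy = gain * (activations["down"] - activations["up"])
    return pos[0] + vx * dt, pos[1] + vy * dt

pos = (0, 0)
print(discrete_update(pos, "right"))  # (1, 0)
print(continuous_update(pos, {"up": 0.1, "down": 0.9,
                              "left": 0.5, "right": 0.5}))  # (0.0, 0.08)
```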
... HCI-based interfaces can improve effective communication for the disabled in day-to-day life. Amyotrophic lateral sclerosis is a neurodegenerative disease that leads to physical and vocal disability [42]. A graphics-based UI helps these patients with communicative interactions without the use of physical-contact devices. ...
... Eye-tracking [39] can help researchers understand visual and display-based information processing and the factors that impact system interfaces [41]. Eye-tracking and calibration algorithms classify eye positions into six categories: looking up, down, left, right, straight ahead, and eyes closed [42]. Eye movements help in tracking visual stimuli. There are many categories of eye movements; the basic categorization includes saccades, fixations, and scanpaths [43]. A fixation is the relatively stationary moment when the eyes are taking in information (encoding); it ranges from 66 to 416 milliseconds and lasts 218 milliseconds on average. ...
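To make the fixation figures quoted above concrete, here is a minimal dispersion-threshold (I-DT-style) fixation detector, a common way such fixations are extracted from raw gaze samples. The thresholds, sampling rate, and simplified window handling are assumptions, and the trailing fixation at the end of the data is ignored in this sketch.

```python
def detect_fixations(samples, hz=60, max_dispersion=30.0, min_ms=66):
    """samples: list of (x, y) gaze points in pixels, sampled at `hz`.

    Returns a list of (onset_s, duration_s) fixations whose dispersion
    stays under max_dispersion pixels for at least min_ms milliseconds.
    """
    min_len = int(min_ms * hz / 1000)
    fixations, start = [], 0
    for end in range(len(samples)):
        window = samples[start:end + 1]
        xs, ys = zip(*window)
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion > max_dispersion:      # window broke apart: close it
            if end - start >= min_len:
                fixations.append((start / hz, (end - start) / hz))
            start = end
    return fixations

# Ten near-identical samples followed by a jump: one short fixation.
print(detect_fixations([(100, 100)] * 10 + [(400, 400)]))
```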
Article
Full-text available
Human Computer Interaction (HCI) has received increased attention over the past decade, driving User Interface (UI) designs for various applications. Real-time eye-tracking and gaze detection can provide a means of user input for HCI. Eye-tracking and gaze data can also be exploited to track user emotions as a means of non-verbal communication. Many HCI gaze-input systems have been developed as user interfaces for the physically and vocally challenged. This work focuses on quality issues in eye-tracking and gaze detection, with a review of earlier approaches across applications in medicine and education. The quality issues explored are: use of intrusive or non-intrusive devices; information-source inputs such as the pupil, iris, corneal reflections, glint, sclera, and 3D eyeball models; camera and user calibration; head or pose invariance; image-noise reduction such as glint noise, iris noise, and illumination variation with natural light sources; use of infrared illumination; and degree of accuracy. It concludes with the necessity of selecting an appropriate information source as input to improve the quality of real-time eye-tracking and gaze-detection methods, so as to enhance user interaction in HCI systems in two ways: to increase learning efficiency in education, and as a way toward natural communication with machines in medicine.
... This prototype offers the possibility to phone, to visit websites, to read e-books, or to watch TV. An infrared sensor/emitter-based eye tracking prototype was developed by Liu et al., which represents a low-cost alternative to the usual expensive video-based systems [6]. With this eye-tracking principle, only up/down/left/right gaze movements and holding the gaze in the center can be detected, with the eyelids used to trigger an event. ...
Article
Full-text available
In this article, we present a prototype of a virtual presence system combined with an eye-tracking-based communication interface and an indoor navigation component to support patients with locked-in syndrome. The common locked-in syndrome is a state of paralysis of all four limbs in which the patient retains full consciousness. Furthermore, the vocal tract and the respiratory system are also paralyzed. Thus, virtually the only possibility to communicate consists in the utilization of eye movements for system control. Our prototype allows the patient to control movements of the virtual presence system by eye gestures while observing a live view of the scene, displayed on a screen via an on-board camera. The system comprises an object classification module to provide the patient with different interaction and communication options depending on the object he or she has chosen via an eye gesture. In addition, our system has an indoor navigation component, which can be used to prevent the patient from navigating the virtual presence system into critical areas and to allow for an autonomous return to the base station using the shortest path. The proposed prototype may open up new possibilities for locked-in syndrome patients to regain a little more mobility and interaction capability within their familiar environment.
... Much research has focused on estimating human gaze, owing to its importance in areas such as human-computer interaction [1][2][3], bio-medical engineering [4], and autonomous driving [5,6]. Gaze tracking techniques have evolved over time, though most of the early methods required expensive and invasive hardware. ...
Preprint
Gaze tracking is an important technology in many domains. Techniques such as Convolutional Neural Networks (CNNs) have enabled gaze tracking methods that rely only on commodity hardware, such as the camera on a personal computer. It has been shown that the full-face region can provide better gaze estimation performance than an eye image alone. However, a problem with using the full-face image is the heavy computation due to the larger image size. This study tackles this problem through compression of the input full-face image, removing redundant information using a novel learnable pooling module. The module can be trained end-to-end by backpropagation to learn the size of the grid in the pooling filter. The learnable pooling module keeps the resolution of valuable regions high and vice versa. The proposed method preserved gaze estimation accuracy at a certain level even when the image was reduced to a smaller size.
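As a loose illustration of end-to-end trainable pooling, here is a toy PyTorch layer in which only a scalar mix between average and max pooling is learned by backpropagation. This is much simpler than the paper's module, which learns the pooling grid itself; the sketch is included only to show how a pooling parameter can be trained end-to-end, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedPool2d(nn.Module):
    """Toy trainable pooling: not the paper's module, just an analogue."""
    def __init__(self, kernel_size):
        super().__init__()
        self.kernel_size = kernel_size
        self.alpha = nn.Parameter(torch.zeros(1))  # learned by backprop

    def forward(self, x):
        a = torch.sigmoid(self.alpha)              # keep the mix in [0, 1]
        avg = F.avg_pool2d(x, self.kernel_size)
        mx = F.max_pool2d(x, self.kernel_size)
        return a * avg + (1 - a) * mx

pool = MixedPool2d(kernel_size=4)
face = torch.randn(1, 3, 224, 224)                 # dummy full-face image
print(pool(face).shape)                            # torch.Size([1, 3, 56, 56])
```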
... Most camera-based systems are expensive and not user-friendly. This paper proposes an eye-tracking system called EyeLive that uses discrete infrared sensors and emitters as the input device [13]. Patients with motor neuron disease and most terminal patients cannot use their hands or arms, and so they need another person for all their needs; they can control only their eyes. ...
Conference Paper
Full-text available
The field of human computer interaction (HCI) involves the creation of interactive computing systems for humans to enhance the quality of life of people, especially those with disabilities, all over the world. This study proposes a multimodal system to give disabled people who cannot use their hands the opportunity to carry out all daily work with a personal computer (PC). In this study, we aim to create an interaction between the user and a machine performed by the user's voice and eye movements. Turkish speech recognition was performed using mel-frequency cepstral coefficient (MFCC) extraction, hidden Markov models (HMM), and artificial neural networks (ANN). As a joint part of the software, an efficient eye-tracking system using a Tobii 4C eye tracker was developed, featuring eye-blink detection for controlling an interface that provides an alternate way of communication. This multimodal system was developed by the authors using the Java language and a Matlab library, and it produced promising results for Turkish training words. To increase the system's performance, the use of natural language processing methods is planned as future work. Keywords - Human Computer Interaction, Speech Recognition, Eye-Gaze Tracking, PC control system, disabled people.
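As a hedged illustration of the MFCC front end named in the abstract (the paper itself used Java and a Matlab library), the sketch below extracts MFCC frames with librosa; the filename, sample rate, and coefficient count are assumptions. The resulting frame matrix is what an HMM or ANN recognizer would be trained on.

```python
import librosa

def command_features(wav_path, n_mfcc=13):
    """Return an MFCC feature matrix of shape (frames, n_mfcc)."""
    y, sr = librosa.load(wav_path, sr=16000)          # mono, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T

# features = command_features("turkish_command.wav")  # hypothetical file
# Each row is one analysis frame; an HMM/ANN classifier maps sequences of
# such frames to the recognized Turkish command word.
```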
... Eye tracking is a technology that permits tracking of the user's eye position in order to analyze his or her gaze movement. Liu et al. [4] developed an eye-gaze tracking interface, and [5] describes a wheelchair controlled by eye position. As the disease progresses, eye movements become more and more unreliable, and hence Brain-Computer Interfaces (BCIs) become ever more crucial in translating the user's intention into acts such as commands for external devices like communicators or robots. ...
Article
Full-text available
This paper illustrates a new architecture for human-humanoid interaction based on an EEG Brain-Computer Interface (EEG-BCI) for patients affected by locked-in syndrome caused by Amyotrophic Lateral Sclerosis (ALS). The proposed architecture is able to recognise users' mental state according to the biofeedback factor Bf, based on users' Attention, Intention and Focus, which is used to elicit a robot to perform customised behaviours. Experiments were conducted with a population of 8 subjects: 4 ALS patients in a near locked-in state with normal ocular movement and 4 healthy control subjects matched for age, education and computer expertise. The results showed that three ALS patients completed the task with 96.67% success and the healthy controls with 100% success; the fourth ALS patient was excluded from the results for his low general attention during the task. The analysis of the Bf factor highlights that ALS subjects showed a stronger Bf (81.20%) than healthy controls (76.77%). Finally, a post-hoc analysis shows how robotic feedback helps in maintaining focus on the expected task. These preliminary data suggest that ALS patients could successfully control a humanoid robot through a BCI architecture, potentially enabling them to conduct some everyday tasks and extend their presence in the environment.
... The first description and recording of an electrical signal from the motor cortex was made by Dr. Walter, who asked a volunteer undergoing open brain surgery to press a button while the electrical activity of the cortex was recorded [5]. Other kinds of computer-human interfaces have been produced, such as using human skin as a touch interface [6], controlling a wheelchair using the tongue [7], and using an eye-tracking system as a human-computer interface [8]. Vernon and Joshi used a muscle above the ear to control a television [9]. ...
Article
Full-text available
Brain Computer Interface refers to the group of processes in which EEG signals generated in the human brain are interpreted and transferred to control an external device. In this paper, a novel method is adopted to control the rotation of a servo motor via EEG signals extracted from the human brain cortex. These signals pass through a processing procedure consisting mainly of noise filtration and signal normalization. The processed signal is fed to an Arduino, which in turn controls the servo motor rotation.
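The processing chain mentioned in the abstract (noise filtration, then normalization) could look like the SciPy sketch below. The band edges are typical EEG choices rather than values from the paper, and the final mapping to a 0-180 degree servo angle is an assumption added for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=250, low=1.0, high=40.0):
    """Band-pass filter an EEG trace, then normalize it into [0, 1]."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg)                    # zero-phase filtering
    span = filtered.max() - filtered.min()
    return (filtered - filtered.min()) / span if span else filtered

signal = np.random.default_rng(0).normal(size=1000)   # stand-in EEG trace
angle = int(preprocess(signal)[-1] * 180)              # assumed servo mapping
print(angle)                                           # 0-180 degrees
```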
... These have been pioneered both inside academia and out for groups of individuals with diseases like amyotrophic lateral sclerosis (ALS), a peripheral motor-neuron disease paralyzing the body while leaving eye movements intact. These studies most often involve the movement of a cursor, wheelchair, or accessories via the point of gaze, while defining a variety of mouse-click paradigms, including blinks and others, as well as the use of gaze location to control computer graphical menus, zooming of windows, or context-sensitive presentation of information (Jacob, 1990, 1991, 1993a; Jakob, 1998; Zhai et al., 1999; Tanriverdi and Jacob, 2000; Ashmore et al., 2005; Laqua et al., 2007; Liu et al., 2012; Sundstedt, 2012; Hild et al., 2013; Wankhede et al., 2013). These paradigms have been extended to improve upon human control of robots (Carlson and Demiris, 2009), as well as humans controlling swarms (Couture-Beil et al., 2010; Monajjemi et al., 2013). ...
Article
Full-text available
OBJECTIVE: We aimed to elucidate how our domain-general cuing algorithm improved multitasking performance and changed behavioral strategies in human operators. BACKGROUND: Though many gaze-control systems have been designed, previous real-time gaze-aware assistance systems were not both successful and domain general. It is largely unknown what constitutes optimal search efficiency using the eyes, or ideal control using the mouse. It is unclear what the best coordinating strategies are between these two modalities. Our previously developed closed-loop multitasking aid drastically improved multitasking performance, though the behavioral mechanisms through which it acted were unknown. METHODS: We performed in-depth analyses and generated novel eye tracking and mouse movement measures, to explore the complex effects of our helpful system on gaze and motor behavior. RESULTS: Our overlay cuing algorithm improved control efficiency and reduced well-known biases in search patterns. This system also reduced micromanaging behavior, with humans rationally relying more on imperfect automation in experimental assistance cue conditions. We showed that mouse and gaze were more independently specialized in the helpful cuing condition than in control conditions. Specifically, with our aid, the gaze performed more global movement, and the mouse performed more local clustered movement. Further, the gaze shifted towards search over processing with the helpful cuing system. We also illustrated a relationship between the mouse and the gaze, such that in these studies, the hand was quicker than the eye. CONCLUSIONS: Overall, results suggested that our cuing system improved performance and reduced short term working memory load on humans by delegating it to the computer in real time. Further, it reduced the number of required repeated decisions by an estimate of about one per second. It also enabled the gaze to specialize for improved visual search behavior, and the mouse to specialize for improved control.
... This technology includes systems such as sign language, symbol or picture boards, and electronic devices with synthesized speech [1]. Recently, video-oculography (VOG) control interfaces have played an increasingly important role in augmentative communication for disabled people and have been growing in popularity for over 20 years [2][3][4]. Video-oculography is known as a non-invasive technology and a reliable procedure for estimating autonomic nerve activity through measurement of the pupil's response to a light stimulus [5]. ...
Article
This paper presents a system of assistive technology based on video-oculography (VOG) control interfaces, named "Ophapasai". The system is designed specifically to help disabled people communicate with the people around them. The primary method of this system consists of selecting pictogram buttons within a circular menu augment on the screen, using an inexpensive video-oculography device. The results indicate that Ophapasai supported all of the targeted communication functions (100%). Furthermore, we conducted an evaluation of pointing performance with the video-oculography device with three participants with amyotrophic lateral sclerosis (ALS). The evaluation used throughput, a standard measure of user performance for computer pointing devices, in a multi-direction point-and-select task; participants achieved an overall mean throughput of 2.02 bits/s.
... This prototype offers the possibility to phone, to visit websites, to read e-books, or to watch TV. An infrared sensor/emitter-based eye tracking prototype was developed by Liu et al. [5], which represents a low-cost alternative to the usual expensive video-based systems. With this eye-tracking principle, only up/down/left/right gaze movements and holding the gaze in the center can be detected, with the eyelids used to trigger an event. ...
Conference Paper
Full-text available
The emergent technology of virtual presence systems opens up new possibilities for locked-in syndrome patients to regain mobility and interaction with their familiar environment. The classic locked-in syndrome is a state of paralysis of all four limbs while retaining full consciousness. Likewise, there is a paralysis of the vocal tract and respiration. Thus, the central problem consists in controlling a system by the eyes alone. In this paper, we present a prototype of a communication interface for patients with locked-in syndrome. The system allows the localization and identification of objects in a view of the local environment with the help of an eye tracking device. The selected objects can be used to express needs or to interact with the environment in a direct way (e.g., to switch the room lights on or off). The long-term goal of the system is to give locked-in syndrome patients greater flexibility and a new degree of freedom.
... Nowadays there exists a myriad of assistive mechanisms which mimic the functions of these input devices for individuals with severe disabilities. These include inductive tongue computer interfaces [1], eye-controlled devices [2], head-controlled devices [3,4], infrared-controlled devices [5,6], voice-controlled devices [7], and physiological-signal assistive devices (such as Electrooculogram (EOG) switches [8][9][10], Electromyography (EMG) switches [11][12][13], and Electroencephalogram (EEG) [14][15][16][17][18][19][20]). However, many assistive input devices provide a mode of switch scanning in which a master unit sequentially checks the status of slave units for the input of text characters, and this tends to be rather slow. ...
Article
Full-text available
This study developed an assistive system for people with severe physical disabilities, named the "code-maker translator assistive input device", which utilizes a contest fuzzy recognition algorithm and Morse-code encoding to provide keyboard and mouse functions for users to access a standard personal computer, smartphone, or tablet PC. This assistive input device has seven features: small size, easy installation, modular design, simple maintenance, functionality, very flexible input-interface selection, and scalability of system functions when the device is combined with computer application software or apps. Users with severe physical disabilities can use this device to operate the various functions of computers, smartphones, and tablet PCs, such as sending e-mail, browsing the Internet, playing games, and controlling home appliances. A patient with a brain artery malformation participated in this study. The analysis showed that the subject became familiar with operating the long/short tones of Morse code within one month. In the future, we hope this system can help more people in need.
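The Morse-code entry idea can be illustrated briefly. In the hypothetical sketch below, a switch produces long/short tone durations that are grouped into letters; the device's fuzzy recognition of tone length is replaced here by a fixed threshold, and the code table is truncated for brevity.

```python
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J"}
# ... remaining letters/digits omitted for brevity

def classify_tone(duration_ms, threshold_ms=300):
    """Durations below the threshold count as a dot, above as a dash."""
    return "." if duration_ms < threshold_ms else "-"

def decode_letter(tone_durations_ms):
    """Turn one letter's worth of tone durations into a character."""
    symbol = "".join(classify_tone(d) for d in tone_durations_ms)
    return MORSE.get(symbol, "?")

print(decode_letter([120, 150, 140, 130]))  # four short tones -> "H"
```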
... Gaze-based systems to control computer displays, wheelchairs, or other robots have been extensively developed. These methods often aim to move cursors, wheelchairs, accessories, graphical menus, and window zoom, or to display context-sensitive presentations of information, while also including systems for mouse clicks (Jacob, 1990, 1991, 1993a; Jakob, 1998; Zhai et al., 1999; Tanriverdi and Jacob, 2000; Ashmore et al., 2005; Laqua et al., 2007; Liu et al., 2012; Sundstedt, 2012; Hild et al., 2013; Wankhede et al., 2013). Such systems have also been employed for robot and swarm robot control (Carlson and Demiris, 2009; Couture-Beil et al., 2010; Monajjemi et al., 2013). ...
Article
Full-text available
OBJECTIVE: We developed an extensively general closed-loop system to improve human interaction in various multitasking scenarios with semi-autonomous agents, processes, and robots. BACKGROUND: Much technology is converging toward semi-independent processes with intermittent human supervision distributed over multiple computerized agents. Human operators multitask notoriously poorly, in part due to cognitive load and limited working memory. To multitask optimally, users must remember task order, e.g., the most neglected task, since longer times not monitoring an element indicate a greater probability of need for user input. The secondary task of monitoring attention history over multiple spatial tasks requires similar cognitive resources as the primary tasks themselves. Humans cannot reliably make more than ~2 decisions/s. METHODS: Participants managed a range of 4-10 semi-autonomous agents performing rescue tasks. To optimize monitoring and controlling multiple agents, we created an automated short-term memory aid, providing visual cues from users' gaze history. Cues indicated when and where to look next, and were derived from an inverse of eye fixation recency. RESULTS: Contingent eye tracking algorithms drastically improved operator performance, increasing multitasking capacity. The gaze aid reduced biases, and reduced cognitive load, measured by smaller pupil dilation. CONCLUSIONS: Our eye aid likely helped by delegating short-term memory to the computer, and by reducing decision-making load. Past studies used eye position for gaze-aware control and interactive updating of displays in application-specific scenarios, but ours is the first to successfully implement domain-general algorithms. Procedures should generalize well to: process control, factory operations, robot control, surveillance, aviation, air traffic control, driving, military, mobile search and rescue, and many tasks where probability of utility is predicted by duration since last attention to a task.
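The core of the aid described above, cues derived from an inverse of eye-fixation recency, reduces to a small piece of bookkeeping. The sketch below is a hedged illustration: the class name, the use of wall-clock time, and the tie-breaking behavior are assumptions, not the study's implementation.

```python
import time

class RecencyCue:
    """Track per-task fixation times and cue the most neglected task."""
    def __init__(self, task_ids):
        now = time.monotonic()
        self.last_fixated = {t: now for t in task_ids}

    def on_fixation(self, task_id):
        """Call when the eye tracker reports a fixation on a task's region."""
        self.last_fixated[task_id] = time.monotonic()

    def next_cue(self):
        """Return the task with the oldest fixation (least recently viewed)."""
        return min(self.last_fixated, key=self.last_fixated.get)

cue = RecencyCue(["agent1", "agent2", "agent3"])
cue.on_fixation("agent2")
print(cue.next_cue())  # "agent1" (tied with agent3; agent2 was just viewed)
```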
... First, as noted above, end-users of this system will require training in order to maximize their ITRs. Other studies show that training can increase ITRs by almost fifty percent [9]. Training on this system would include motor learning to use the sEMG mouse, becoming more familiar with the placement of the phonemes within the interface, and learning to translate their intended words into the set of 38 phonemes available in this interface (not required for participants of this study). ...
Article
Full-text available
Individuals with very high spinal cord injuries (e.g. C1-C3) may be ventilator-dependent and therefore unable to support speech breathing. However, their facial musculature is intact, given that these muscles are innervated by cranial nerves. We developed a system using surface electromyography (sEMG) recorded from facial muscles to control a phonemic interface and voice synthesizer and tested the system in healthy individuals. Users were able to use five facial gestures to control an onscreen cursor and the phonemic interface. Users had mean information transfer rates (ITRs) of 59.5 bits/min when calculating ITRs using the number of phonemes selected. To compare with orthographic systems, ITRs were also calculated using the equivalent number of letters required to spell the selected word. With this calculation, users had a mean ITR of 70.1. Results are promising for further development and testing in individuals with high spinal cord injuries.
... Eye gaze tracking (EGT) systems keep track of eye movements to estimate the focus of attention of an individual (Kumar & Sharma, 2016; Liu et al., 2012). Eye gaze can effectively estimate the location where a user is looking, help evaluate graphical user interfaces, and enable interactive communication on social media and learning platforms (Jiang et al., 2019; Mou & Shin, 2018; Wang, Hung, et al., 2018). ...
... Harrison et al. [11], for example, demonstrated the possibility of using human skin as a touch-sensitive interface. Nam et al. [26] invented a wheelchair that is controlled with tongue movements, while the work of Liu et al. [19] is one of several examples of eye tracking for interaction with computers; in turn, Vernon and Joshi [33] used a muscle above the ear, whose function was lost over the course of human evolution, as a television remote control. It is possible to go further and use a part of the central axis of the physical body in all forms of human interaction: the brain. ...
Conference Paper
In a Brain-Computer Interface (BCI), the user is able to interact with a computer only via the biological signals of their brain, without the need to use muscles. This is a research field on the rise, but still relatively immature. However, it is important to think now about the different aspects of the Human-Computer Interaction (HCI) field related to BCIs, which should be part of interactive systems in the near future. As contributions of this work, we highlight a survey of the state of the art of BCIs and the identification of HCI challenges in this subject matter.
... Furthermore, using a combination of different attributes generated from eye movements may become the focus of future research. More information regarding the applications of eye tracking in healthcare, along with some future directions, can be found in Liu et al. (2018) and Harezlak and Kasprowski (2018). ...
Article
Eye tracking is the process of measuring where one is looking (the point of gaze) or the motion of an eye relative to the head. Researchers have developed different algorithms and techniques to automatically track gaze position and direction, which are helpful in different applications. Research on eye tracking is increasing owing to its ability to facilitate many different tasks, particularly for the elderly or users with special needs. This study aims to explore and review eye tracking concepts, methods, and techniques, further elaborating on efficient and effective modern approaches such as machine learning (ML), the Internet of Things (IoT), and cloud computing. These approaches have been in use for more than two decades and are heavily used in the development of recent eye tracking applications. The results of this study indicate that ML and IoT are important aspects of evolving eye tracking applications owing to their ability to learn from existing data, make better decisions, be flexible, and eliminate the need to manually re-calibrate the tracker during the eye tracking process. They also show that these eye tracking techniques have more accurate detection results compared with traditional event-detection methods. In addition, various motives and factors in the use of specific eye tracking techniques or applications are explored and recommended. Finally, some future directions related to the use of eye tracking in several developed applications are described.
... For example, Nam et al. [1] presented a paper on how a wheelchair can be controlled by movements of the tongue. Liu et al. [2] proposed a human-computer interaction system that makes use of eye-tracking techniques. BCI lets users communicate with a computer through their brain signals alone. ...
Article
Full-text available
Brain Computer Interfacing (BCI) allows users to control certain devices or objects using the human brain. With the availability of various sensors, it is possible to obtain, from the brain itself, the data that control physical activities. BCI users can access a system capable of communicating efficiently in a special manner that works based on brain signals. BCI gives physically challenged people a better platform to perform certain significant functions in their daily routine. The use of brain imaging technologies in BCI helps to enhance signal quality during communication between machines and human beings. The paper describes some of the research carried out on BCI and its advancements toward better communication between computers and the human brain using emerging technologies. This paper also covers the applications, challenges, and future directions of BCI.
... For instance, if the client needs to type the letter "H", he/she first looks to the left and selects the left matrix by either closing the eyes or holding the gaze for 1 second. Although the EyeLive system is slower than a camera-based system, it costs much less, reduces eye strain, is more user-friendly, and offers better mobility than other available devices [9]. ...
... Initially, EGTs were used in neurology, ophthalmology, and psychology to study oculomotor patterns with respect to different cognitive states of an individual [1,2]. Recently, EGTs have been used to decide the appropriate marketing mix, advertisement design, human-computer interaction, virtual reality, and disease diagnosis, and to study human behavior [3,4,5,6,7]. Tracking the gaze of an eye is used to study the influence of visual stimuli on consumer behavior and attentional processes [3,8,9]. ...
Article
Full-text available
Eye gaze tracking has been used to study the influence of visual stimuli on consumer behavior and attentional processes. Eye gaze tracking techniques have made substantial contributions to advertisement design, human computer interaction, virtual reality, and disease diagnosis. Eye gaze estimation is considered critical for the prediction of human attention, and hence indispensable for better understanding human activities. In this paper, Latent Semantic Analysis is used to develop an information model for identifying emerging research trends within eye gaze estimation techniques. An exhaustive collection of 423 titles and abstracts of research papers published during 2005-2018 was used. Five major research areas and ten research trends were classified based upon this study.
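The LSA pipeline described above is commonly realized as TF-IDF followed by truncated SVD; the scikit-learn sketch below shows that standard recipe on a stand-in corpus. The three toy abstracts and the two-component latent space are illustrative, not the paper's data or settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "eye gaze estimation with infrared corneal reflections",
    "consumer attention to advertisement design via gaze tracking",
    "gaze based human computer interaction for disabled users",
]  # hypothetical stand-ins for the 423 titles/abstracts

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0)  # 2 latent "areas"
topic_coords = lsa.fit_transform(tfidf)             # docs in topic space
print(topic_coords.shape)                           # (n_docs, n_components)
```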
... Eye gaze tracking (EGT) is the process of measuring eye activity. It is primarily done to estimate the focus of attention of an individual (Kumar and Sharma 2016; Liu et al. 2010). Eye gaze analysis may also help to understand human behavior, attention, and other cognitive activities (Hornof et al. 2003; Pantic et al. 2007). ...
Article
Full-text available
Eye gaze tracking plays an important role in various fields, including human computer interaction, virtual and augmented reality, and the identification of effective marketing solutions. This paper addresses the real-time eye gaze estimation problem using the low-resolution ordinary camera available in almost every desktop environment, as opposed to gaze tracking technologies requiring costly equipment and infrared light sources. In this research, a camera-based non-invasive technique has been proposed for tracking and recording gaze points. Further, the proposed framework was used to analyze the gaze behavior of users on advertisements displayed on a social media website. Eye gaze fixation data of 32 participants were recorded, and gaze patterns were plotted using heat maps. In addition, a gaze-driven interface was designed for virtual interaction tasks to assess the performance and usability of our proposed framework.
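A gaze heat map like the ones described above is typically built by accumulating fixation points into a grid and smoothing. The sketch below, with an assumed screen size and smoothing width, shows one common way to do it.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_heatmap(fixations, width=1920, height=1080, sigma=30):
    """fixations: iterable of (x, y) screen coordinates in pixels."""
    grid = np.zeros((height, width))
    for x, y in fixations:
        if 0 <= x < width and 0 <= y < height:
            grid[int(y), int(x)] += 1          # accumulate fixation counts
    return gaussian_filter(grid, sigma=sigma)  # smooth into a heat map

heat = gaze_heatmap([(960, 540), (970, 560), (300, 200)])
print(heat.shape, heat.max())  # (1080, 1920) and the hottest cell's value
```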
... They can regain their ability to communicate by using different eye-gaze systems, such as eye-typing systems that allow patients to type text using eye gaze, as well as applying eye gaze to move a wheelchair [6]. This work can offer help to individuals suffering from cerebral palsy, a neurological disorder that appears in infancy and permanently affects body movement and muscle coordination; multiple sclerosis, a disease that targets the nervous system and affects the spinal cord and the brain [7]; and amyotrophic lateral sclerosis (ALS), a disease in which the nerve cells that control the muscles' movements slowly die, so the muscles weaken over time and begin to shrink [8][9]. In this paper, Section II discusses work related to previous eye tracking techniques, Section III presents the methodology used, Section IV examines the system's implementation, and finally Section V shares the results obtained from the system. ...
Conference Paper
Full-text available
One method for handicapped people who can neither talk nor use their hands to communicate with the external world is using their eye gaze. Most of the commercial eye trackers used for typing are expensive and not user-friendly. The aim of this paper is to present a low-cost eye-tracking system that can be used easily by consumers. The basic idea of this project is as follows: an eye image is acquired from the web camera, and then the pupil is detected using image processing techniques. Based on the estimated gaze direction, specific commands are used to control the Graphical User Interface (GUI), which displays a screen that contains letters and special function buttons.
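The pipeline just described (webcam frame, pupil detection by image processing, then GUI commands) might look roughly like the OpenCV sketch below. The thresholding-plus-largest-contour approach and all parameter values are assumptions; the authors' exact image-processing steps may differ.

```python
import cv2

def find_pupil(eye_gray, thresh=40):
    """Estimate the pupil centroid in a grayscale eye-region image."""
    _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)   # darkest large blob
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)                        # default webcam
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(find_pupil(gray))                      # (x, y) or None
cap.release()
```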
... This experiment proved that such a system could be a practical solution for people with severe paralysis [37]. In their research paper, S. S. Liu et al. compare camera-based and infrared sensor/emitter eye tracking systems, showing how the latter has advantages in cost, user interaction, and eye-strain reduction [38]. In [39], how electronic consumers perceive images online was investigated. ...
Article
Full-text available
The paper aims to study the applicability and limitations of a solution resulting from a design process for an intelligent system supporting people with special needs who are not physically able to control a wheelchair using classical systems. The intelligent system uses information from smart sensors and offers a control system that replaces the use of a joystick. The necessary movements of the chair in the environment can be determined by an intelligent vision system analyzing the direction of the patient's gaze and point of view, as well as the actions of the head. In this approach, an important task is to detect the destination target in the 3D workspace. This solution has been evaluated, outdoors and indoors, under different lighting conditions. Because people with special needs sometimes also have specific problems with their optical system (e.g., strabismus, nystagmus), the system was tested on different subjects, some of them wearing eyeglasses. During the design process of the intelligent system, all tests involving human subjects were performed in accordance with specific rules of medical security and ethics. In this sense, the process was supervised by a company specialized in health activities that involve people with special needs. The main results and findings are as follows: validation of the proposed solution for all indoor lighting conditions; a methodology to create personal profiles, used to improve the HMI efficiency and to adapt it to each subject's needs; and a primary evaluation and validation of the use of personal profiles in real-life indoor conditions. The conclusion is that the proposed solution can be used by persons who are not physically able to control a wheelchair using classical systems, including those with minor vision deficiencies or major vision impairment affecting one of the eyes.
... Tracking as a metaphor can also be applied broadly to much of modern academic science; for example, amateur astronomers are tracking comets and other celestial objects (Ishiguro et al. 2014; Opitom et al. 2019). The "eye tracking" method is contributing to a wide array of scientific exploration, from neuroscience to medical science (e.g., Duchowski 2007; Liu et al. 2018), and single-quantum-dot tracking is a powerful way to understand the dynamics of cellular organization (e.g., Dahan et al. 2003). In paleoecology, experts track the number of species over millions of years (e.g. ...
Article
Full-text available
In response to recent discussion about terminology, we propose “tracking science” as a term that is more inclusive than citizen science. Our suggestion is set against a post-colonial political background and large-scale migrations, in which “citizen” is becoming an increasingly contentious term. As a diverse group of authors from several continents, our priority is to deliberate a term that is all-inclusive, so that it could be adopted by everyone who participates in science or contributes to scientific knowledge, regardless of socio-cultural background. For example, current citizen science terms used for Indigenous knowledge imply that such practitioners belong to a sub-group that is other, and therefore marginalized. Our definition for “tracking science” does not exclude Indigenous peoples and their knowledge contributions and may provide a space for those who currently participate in citizen science, but want to contribute, explore, and/or operate beyond its confinements. Our suggestion is not that of an immediate or complete replacement of terminology, but that the notion of tracking science can be used to complement the practice and discussion of citizen science where it is contextually appropriate or needed. This may provide a breathing space, not only to explore alternative terms, but also to engage in robust, inclusive discussion on what it means to do science or create scientific knowledge. In our view, tracking science serves as a metaphor that applies broadly to the scientific community—from modern theoretical physics to ancient Indigenous knowledge.
Article
The accurate localization of the iris center is difficult since the outer boundary of the iris is often occluded significantly by the eyelids. To solve this problem, an infrared light source that is not coaxial with the camera is used to produce a dark-pupil image for pupil center estimation. First, the 3D position of the center of corneal curvature, which is used as the translational movement information of the eyeball, is computed using two cameras and the coordinates of two corneal reflections on the cameras' imaging planes. Then, the relative displacement of the pupil center from the projection of the corneal curvature center on the 2D image is extracted, describing the rotational movement of the eyeball. Finally, the feature vector is mapped to the coordinates of the gaze point on the screen using an artificial neural network. For the eye-region detection problem, two wide-view webcams are used, and an adaptive boosting + active appearance model algorithm is adopted to limit the region of interest to a small area. The results of our experiment show that the average root-mean-square error is 0.62 degrees in the horizontal direction and 1.05 degrees in the vertical direction, which demonstrates the effectiveness of our solution for eye gaze tracking.
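The final mapping stage described above, a neural network from the eye-feature vector to screen coordinates, can be sketched with an off-the-shelf regressor. In the illustration below, random arrays stand in for calibration data gathered while the user fixates known screen points; the feature layout and network size are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Assumed per-sample feature: pupil displacement (dx, dy) plus the 3D
# cornea-curvature-center estimate (cx, cy, cz).
features = rng.normal(size=(200, 5))
targets = rng.uniform(0, 1, size=(200, 2))   # normalized (x, y) gaze points

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(features, targets)                 # calibration step
print(model.predict(features[:1]))           # estimated gaze point
```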
Article
Full-text available
Objectives: This systematic review aimed to comprehensively collect and summarise the current body of knowledge regarding the cost-of-illness of amyotrophic lateral sclerosis, to identify cost-driving factors of the disease, and to consider the development of costs over the course of disease. Further, the review sought to assess the methodological quality of the selected studies. Methods: A systematic review was performed using the databases MEDLINE, Embase, Cochrane Library, and PsycINFO. Studies examining the economic burden of amyotrophic lateral sclerosis on a patient or national level, written in English or German and published from the year 2001 onwards, were included. Additional searches were conducted. Study characteristics and results were extracted and compared. Results: In summary, 20 studies were included in this review. Most studies investigated costs per patient, amounting to total costs between €9,741 and €114,605. Six studies confirmed a rise in costs with disease progression, peaking close to the death of a patient. National costs for amyotrophic lateral sclerosis varied between €149 million and €1,329 million. Conclusion: Most of these studies suggest the economic burden of amyotrophic lateral sclerosis to be considerable. However, further research is needed to establish a cost-effective health policy in consideration of disease severities.
Conference Paper
Robotic control with gaze classification has been an area of interest for quite some time. In this paper, a novel solution implementing a sensing system with intelligent visualization is presented. Such a system can have a variety of applications but can be of exceptional importance for patients suffering from spinal muscular atrophy (SMA), a disorder that gradually ends in a paralyzed state. The presented solution incorporates an optical gaze system within a virtual reality (VR) headset to control a robotic avatar remotely based on gaze position. The robot is called an avatar due to its ability to maneuver and provide audio and video streams to the user wearing the VR headset, thus acting as an extension of the users themselves. Hence, people with permanent disability due to degenerative diseases such as SMA can benefit from this design by utilizing the avatar to see things and places where they cannot go themselves. The psychological impetus can assist tremendously in their treatment. The fuzzy algorithm presented here enables a very fast image classification method for such near-real-time applications. Consequently, the processing and decision making of the system takes about a second with an accuracy of over 95%.
Conference Paper
This paper proposes and discusses a solution for an intelligent visualization/sensing system for people who suffer from spinal muscular atrophy (SMA), a disorder that gradually leaves the person paralyzed. The solution includes an electromyography (EMG) system, a gaze system, and a data fusion algorithm. Moreover, a robot is used that acts as an avatar on one side, and Virtual Reality (VR) technology is used on the patient's side. The evaluation and results of the developed system are shown at the end of the paper. The processing and decision making of the system takes 2.1 seconds. In addition, the accuracy of the EMG and gaze systems is almost 100%.
Conference Paper
Full-text available
Detecting human eye movements and using them for communication is very common for people with amyotrophic lateral sclerosis (ALS) and other locked-in and paralysis diseases. The majority of existing systems are very expensive, and nearly all of them use special devices and cameras, mostly infrared based, that are not commonly available. For Persian patients there is no special user interface at all. The system introduced in this paper, called EyeType, uses just a normal webcam and a calibration-free algorithm to identify eye gestures and, through a specific user interface, enables patients to type what they want. To make typing easier, an algorithm scores the most frequently used words and, after collecting the required information, suggests the most probable words to the user as he/she types the initial letters.
Conference Paper
We present a new application (“Sakura”) that enables people with physical impairments to produce creative visual design work using a multimodal gaze approach. The system integrates multiple features tailored for gaze interaction including the selection of design artefacts via a novel grid approach, control methods for manipulating canvas objects, creative typography, a new color selection approach, and a customizable guide technique facilitating the alignment of design elements. A user evaluation (N=24) found that non-disabled users were able to utilize the application to complete common design activities and that they rated the system positively in terms of usability. A follow-up study with physically impaired participants (N=6) demonstrated they were able to control the system when working towards a website design, rating the application as having a good level of usability. Our research highlights new directions in making creative activities more accessible for people with physical impairments.
Conference Paper
Full-text available
Glaucoma is one of the main causes of irreversible blindness in developing countries, including India. Early detection of glaucoma is important to prevent the vision loss it causes. Here, we propose how eye-tracker metrics analysis may help to detect glaucomatous changes in high-risk groups. This paper presents the use of eye-tracker data for the early detection of glaucoma. Continuous eye movements are part of our visual perception. Whenever we look at any scene, saccadic eye movements are generated. A saccade is a change in gaze position by a rapid sweep, followed by fixations where the eye is stable. A scan path is the sequence of saccades and fixations. This eye-movement data can be collected as part of a person's daily activities and analyzed for the early detection of glaucoma.
Chapter
Full-text available
Eye tracking is one of the advanced techniques that enable people with amyotrophic lateral sclerosis and other locked-in diseases to communicate with others and make their lives easier. The software application is an assistive communication tool designed for ALS patients, especially people whose communication is limited to eye movements alone. Those patients have difficulty communicating their basic needs. The design of this system's Graphical User Interface is made in such a way that it can be used by physically impaired people to convey their basic needs through pictures using their eye movements. The prototype is user-friendly, reliable, and performs simple tasks so that a paralyzed or physically impaired person can access it easily. The application was tested and the results were evaluated accordingly.
Article
The Internet of Things (IoT) attempts to help people access Internet-connected devices, applications, and services anytime and anywhere. However, providing an efficient and intuitive method of interaction between people and IoT devices is still an open challenge. In this paper, we propose a novel interaction system called Watch & Do, where users can control an IoT device by gazing at it and making simple gestures. The proposed system mainly consists of: 1) an object detection module; 2) a gaze estimation module; 3) a hand gesture recognition module; and 4) an IoT controller module. The target device is identified by various deep learning-based gaze estimation and object detection techniques. Afterwards, hand gesture recognition is applied to generate an IoT device control command, which is transmitted to the IoT platform. The experimental results and case studies demonstrate the feasibility of the proposed system and suggest future research directions.
Conference Paper
Full-text available
EyeSeeCam is a novel head-mounted camera that is continuously oriented to the user's point of regard by the eye-movement signals of a mobile video-based eye tracking device. We have devised a new eye tracking algorithm for EyeSeeCam which has low computational complexity and provides sufficient robustness in the detection of the pupil centre. Accurate determination of the location of the centre of the pupil and processing speed are the most crucial requirements in such a real-time video-based eye-tracking system. However, occlusion of the pupil by artifacts such as eyelids, eyelashes, glints, and shadows in the image of the eye, and changes in the illumination conditions, pose significant problems in the determination of the pupil centre. Apart from robustness and accuracy, real-time eye-tracking applications demand low computational complexity as well. In our algorithm, the Fast Radial Symmetry Detector is used to give a rough estimate of the location of the pupil. An edge operator is used to produce the edge image. Unwanted artifacts are deleted in a series of logical steps. Then, Delaunay Triangulation is used to extract the pupil boundary from the edge image, based on the fact that the pupil is a convex hull. A luminance contrast filter is used to obtain an ellipse fit at the subpixel level. The ellipse-fitting function is based on a non-iterative least-squares minimization approach. The pupil boundary was detected accurately in 96% of cases, including those in which the pupil was occluded by more than half its size. The proposed algorithm is also robust against drastic changes in the environment, i.e., eye tracking in a closed room versus eye tracking in sunlight.
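The last step described above, a non-iterative least-squares ellipse fit to the extracted boundary points, can be illustrated with OpenCV's fitEllipse, which stands in here for the paper's own fitting function. The synthetic noisy boundary points below are purely illustrative.

```python
import numpy as np
import cv2

# Synthetic pupil-boundary points: a noisy ellipse centred at (120, 90).
t = np.linspace(0, 2 * np.pi, 40)
pts = np.stack([120 + 30 * np.cos(t), 90 + 20 * np.sin(t)], axis=1)
pts = pts + np.random.default_rng(0).normal(0, 0.5, pts.shape)
pts = pts.astype(np.float32).reshape(-1, 1, 2)

# Non-iterative least-squares ellipse fit to the boundary points.
(cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
print(round(cx), round(cy))   # approximately (120, 90): the pupil centre
```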
Conference Paper
Full-text available
Various interfaces for gaze control, each shaped by the particular requirements of controlling a machine by gaze, have already been developed. One problem, especially for novice users, is that these interfaces all look different and require different steps to use. As a means to unify interfaces for gaze control, pie menus are suggested. Such pEYEs allow for universal input in various applications, usable by novices and experts alike. We present two examples of pEYE interfaces: an eye-typing application and a desktop navigation tool. Observations in user studies indicate effective and efficient performance and wide acceptance.
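One reason pie menus suit gaze control is that hit testing depends only on the angular position of gaze around the pie centre, not on the exact fixation distance. A minimal sketch of that angle-to-slice mapping follows; the slice count and coordinates are assumed for illustration.

```python
# Pie-menu hit testing for gaze: the slice is chosen purely from the
# angle of the gaze point relative to the pie centre, making the menu
# forgiving of gaze jitter along the radius.
import math

def slice_index(gaze, centre, n_slices=6):
    dx, dy = gaze[0] - centre[0], gaze[1] - centre[1]
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_slices))

print(slice_index(gaze=(120, 105), centre=(100, 100)))  # slice 0
```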
Article
Full-text available
Methodology for the evaluation of potential optical radiation hazards has been developed in response to the increasing use of high-radiance optical sources, such as lasers, compact arc lamps, tungsten-halogen lamps, and electronic flash lamps. Recent biological investigations of injury from ultraviolet radiation and studies of chorioretinal injury from high-radiance sources permit a realistic ocular hazard evaluation. Safe exposure criteria that may be readily applied to practical situations have been developed from the available biological data and from experience with occupational hazards. Hazard evaluation techniques, hazard data, and general control measures are provided for a number of commonly encountered ultraviolet, visible, and infrared sources.
Article
In the past twenty years, gaze control has become a reliable alternative input method, and not only for handicapped users. The selection of objects, however, which is of the highest importance and frequency in computer control, requires explicit control not inherent in eye movements. Objects have therefore usually been selected via prolonged fixations (dwell times), which for many years seemed to be the only reliable selection method. In this paper, we review the pros and cons of classical selection methods and of novel metaphors based on pies and gestures, focusing on the effectiveness and efficiency of selections. In order to discover the real potential of current suggestions for selection, a basic empirical comparison is recommended.
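For reference, dwell-time selection reduces to a simple accumulator: the selection fires once gaze has remained on a target for a threshold duration. A minimal sketch, with the dwell time, sampling interval, and hit test chosen arbitrarily for illustration:

```python
# Dwell-time selection: fire once gaze stays on the target
# continuously for dwell_ms milliseconds.
def dwell_select(gaze_samples, target_hit, dwell_ms=600, sample_ms=20):
    """gaze_samples: iterable of gaze points; target_hit: point -> bool."""
    dwelled = 0
    for point in gaze_samples:
        dwelled = dwelled + sample_ms if target_hit(point) else 0
        if dwelled >= dwell_ms:
            return True  # selection fires
    return False

on_button = lambda p: 0 <= p[0] <= 50 and 0 <= p[1] <= 50
samples = [(10, 10)] * 40  # 40 samples * 20 ms = 800 ms on target
print(dwell_select(samples, on_button))  # True
```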
Conference Paper
Fitts' law, a one-dimensional model of human movement, is commonly applied to two-dimensional target acquisition tasks on interactive computing systems. For rectangular targets, such as words, it is demonstrated that the model can break down and yield unrealistically low (even negative!) ratings for a task's index of difficulty (ID). The Shannon formulation is shown to partially correct this problem, since it keeps ID ≥ 0 bits. In addition, two alternative interpretations of “target width” are introduced that accommodate the two-dimensional nature of tasks. Results of an experiment are presented that show a significant improvement in the model's performance using the suggested changes.
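A quick numerical check of the breakdown described above: with Fitts' original ID = log2(2A/W), a target wider than twice the movement amplitude yields a negative ID, whereas the Shannon formulation ID = log2(A/W + 1) stays non-negative. The amplitude and width values below are arbitrary.

```python
# Original vs. Shannon formulation of the index of difficulty.
from math import log2

A, W = 10.0, 40.0        # short movement to a very wide target (e.g. a word)
print(log2(2 * A / W))   # -1.0 bits  (unrealistically negative)
print(log2(A / W + 1))   # ~0.32 bits (always >= 0)
```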
Conference Paper
The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement. Originally developed as a means to allow individuals with disabilities to communicate, ERICA was then expanded to provide methods for experimenters to analyze eye movements. This paper describes an application called Gaze Tracker™ that facilitates the analysis of a test subject's eye movements and pupil response to visual stimuli, such as still images or dynamic software applications that the test subject interacts with (for example, Internet Explorer).
Conference Paper
Eye typing could provide motor-disabled people with a reliable method of communication, provided that the text entry speed of current interfaces can be increased to allow fluent communication. There are two reasons for the relatively slow text entry: dwell-time selection requires waiting a certain time, and single-character entry limits the maximum entry speed. We adopted a typing interface based on hierarchical pie menus, pEYEwrite [Urbina and Huckauf 2007], and included bigram text entry with a single pie iteration, introducing three different bigram building strategies. Moreover, we combined dwell-time selection with selection by borders, providing an alternative selection method and extra functionality. In a longitudinal study, we compared participants' performance during character-by-character text entry with bigram entry and with text entry using bigrams derived by word prediction. The data showed large advantages of the new entry methods over single-character text entry in both speed and accuracy. Participants preferred selecting by borders, which allowed them faster selections than the dwell-time method.
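The paper's three bigram building strategies are not reproduced here, but the general idea can be sketched: after the first character, the pie offers the most frequent two-letter continuations, so one pie iteration enters two characters. The frequency table below is a made-up stand-in for a real corpus.

```python
# Toy sketch of one possible bigram building strategy: rank the
# bigrams starting with the chosen character by corpus frequency and
# place the top candidates on the next pie.
BIGRAM_FREQ = {"th": 100, "ti": 40, "to": 55, "he": 90, "ha": 50}

def bigram_slices(first_char, n_slices=6):
    candidates = [b for b in BIGRAM_FREQ if b.startswith(first_char)]
    return sorted(candidates, key=BIGRAM_FREQ.get, reverse=True)[:n_slices]

print(bigram_slices("t"))  # ['th', 'to', 'ti']
```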
Conference Paper
In this study, a new eye tracking system, namely Eye Touch, is introduced. Eye Touch is based on an eyeglasses-like apparatus on which IrDA-sensitive sensors and IrDA light sources are mounted. Using inexpensive sensors and light sources instead of a camera leads to lower system cost and reduced computational requirements. A prototype of the proposed system is developed and tested to demonstrate its capabilities. Based on the test results, Eye Touch proves to be a promising human-computer interface system.
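The paper does not spell out its classifier, but a plausible sketch of how a few IR emitter/detector pairs map to a coarse gaze direction is a nearest-centroid match against per-direction calibration readings. The sensor count and calibration values below are invented for illustration.

```python
# Nearest-centroid classification of coarse gaze direction from a
# handful of IR sensor readings; calibration values are invented.
def classify_direction(reading, calibration):
    """reading: tuple of IR sensor values; calibration: direction -> tuple."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(calibration, key=lambda d: dist(reading, calibration[d]))

calibration = {
    "left":   (0.9, 0.2, 0.5, 0.5),
    "right":  (0.2, 0.9, 0.5, 0.5),
    "up":     (0.5, 0.5, 0.9, 0.2),
    "down":   (0.5, 0.5, 0.2, 0.9),
    "middle": (0.5, 0.5, 0.5, 0.5),
}
print(classify_direction((0.85, 0.25, 0.5, 0.5), calibration))  # left
```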
Conference Paper
The Eye-gaze Response Interface Computer Aid (ERICA) is a computer system developed at the University of Virginia that tracks eye movement. To allow true integration into the Windows environment, an effective methodology for performing the full range of mouse actions and for typing with the eye needed to be constructed. With the methods described in this paper, individuals can reliably perform all actions of the mouse and the keyboard with their eye.
Conference Paper
Pie menus offer several features that are advantageous especially for gaze control. Although the optimal number of slices per pie and of depth layers has already been established for manual control, these values may differ for gaze control due to differences in spatial accuracy and cognitive processing. We therefore investigated the layout limits of hierarchical pie menus in gaze control. Our user study indicates that providing six slices across multiple depth layers guarantees fast and accurate selections. Moreover, we compared two different methods of selecting a slice: novices performed well with both, but selecting via selection borders produced better performance for experts than the standard dwell-time selection.
Article
Reports three experiments testing the hypothesis that the average duration of responses is directly proportional to the minimum average amount of information per response. The results show that the rate of performance is approximately constant over a wide range of movement amplitudes and tolerance limits. This supports the thesis that "the performance capacity of the human motor system plus its associated visual and proprioceptive feedback mechanisms, when measured in information units, is relatively constant over a considerable range of task conditions."
Article
Saccadic eye movements, fixation, and smooth pursuit were recorded in 17 subjects with amyotrophic lateral sclerosis (ALS) and 11 age-matched controls using a magnetic scleral search coil. Reflexive, remembered and antisaccades, and smooth pursuit at four target velocities were studied. Subjects with ALS showed significantly elevated error rates (distractibility) and latency in the antisaccade and remembered saccade paradigms but no abnormality of reflexive saccades. The frequency of small saccades that intruded on steady fixation (square-wave jerks) was also increased in ALS subjects. Peak velocity gain of smooth pursuit and performance on the Wisconsin Card Sort Test did not differ significantly between the two groups. These findings are consistent with prefrontal dysfunction in ALS and provide an independent source of support for the thesis that the pathology of this condition invades frontal cortex.
Article
The effects of response amplitude and terminal accuracy on 2-choice reaction time (RT) and on movement time (MT) were studied. Both the required amplitude (A) of a movement and the width (W) of the target that the subject was required to hit had a large and systematic effect on MT, whereas they had a relatively small effect on RT. Defining an index of movement difficulty as ID = log2(2A/W), the correlation between ID and MT was found to be above .99 over the ID range from 2.6 to 7.6 bits per response. Thus the times for discrete movements follow the same type of law as was found earlier to hold for serial responses. The relative independence of RT and MT is interpreted as pointing to the serial and independent nature of perceptual and motor processes.
Conference Paper
Head-fixed camera systems are widely known, but since they are aligned by the head only and not by the eyes, they cannot always look at what the user is looking at, and the image quality is poor if no effort is made to stabilize the camera. The prototype of a new eye-movement-controlled head-mounted camera system was developed that is continuously aligned with the orientation of gaze. In doing so, the biological gaze stabilization reflexes are used to keep the video camera stable on target. The system is mobile, head-mounted, and battery-driven. Applications such as documenting surgery through the eyes of a surgeon or documentary movies for sports and other activities are conceivable. A demonstrator of the device has already been tested during an operation that lasted 2.5 hours; during this surgery, the system was able to document the manual activities in great detail. The resulting movies can also be used for teaching purposes.
Conference Paper
A first proof of concept was developed for a head-mounted video camera system that is continuously aligned with the user's orientation of gaze. Eye movements are tracked by video-oculography and used as signals to drive servo motors that rotate the camera. Thus, the sensorimotor output of a biological system for the control of eye movements, evolved over millions of years, is used to move an artificial eye. Eye, head, and surround motions are detected by the vestibular, visual, and somatosensory systems, and all the capabilities of this multi-sensory processing are used to drive a technical camera system. A camera guided in this way mimics the natural exploration of a visual scene and acquires video sequences from the perspective of a mobile user, while the vestibulo-ocular reflex naturally stabilizes the camera's "gaze-in-space" during head movements and locomotion. Various applications in health care, industry, and research are conceivable.
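At its core, such a system maps measured eye-in-head angles to camera servo angles. The following sketch shows a proportional mapping with range clamping; the gain, servo limits, and signal source are placeholder assumptions, not the authors' control design.

```python
# Proportional mapping from measured eye angles to camera servo
# angles, clamped to an assumed servo range.
def camera_command(eye_az_deg, eye_el_deg, gain=1.0, limit=30.0):
    """Map eye-in-head azimuth/elevation to camera servo angles."""
    clamp = lambda v: max(-limit, min(limit, v))
    return clamp(gain * eye_az_deg), clamp(gain * eye_el_deg)

print(camera_command(12.0, -5.0))  # (12.0, -5.0)
print(camera_command(45.0, 0.0))   # (30.0, 0.0) -- clamped to servo range
```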
Conference Paper
This paper describes the usability of an eye-gazing input system with respect to the configuration of the selected object. In this study, we analyzed the effect of button configuration, including button size, moving distance, and moving direction, on pointing time with an eye-gazing input system. The input properties of the eye-gazing input system are compared with those of a standard mouse, and the differences are discussed.
J. Elvesjö, M. Skogö and G. Elvers, "Method and installation for detecting and following an eye and the gaze direction thereof," U.S. Patent No. 7,572,008, 2009.
J. Bouvin and P. Runemo, "Eye tracker with visual feedback," U.S. Patent App. 2009/0125849, 2009.
J. Elvesjö, A. Olsson and J. Sahlen, "Generation of graphical feedback in a computer system," U.S. Patent App. 2009/0315827, 2009.
K. Yee, "Linear array eye tracker," U.S. Patent No. 6,283,954, 2001.
W. J. Katz, "High-speed eye tracking device and method," U.S. Patent No. 5,270,748, 1993.
D. Grover, "Progress on an eye tracking system using multiple near-infrared emitter/detector pairs with special application to efficient eye-gaze communication," Proc. 2nd COGAIN, pp. 21-25, 2006.
M. H. Urbina and A. Huckauf, "Selecting with gaze controlled pie menus," Proc. 5th COGAIN, pp. 25-29, 2009.
M. H. Urbina and A. Huckauf, "Dwell-time free eye typing approaches," Proc. 3rd COGAIN, pp. 65-70, 2007.