Article · Literature Review (full-text PDF available)

A psychotechnological review on eye-tracking systems: Towards user experience

Abstract

The aim of the present work is to offer a critical review of the international literature on eye-tracking technologies, focusing on the features that characterize them as 'psychotechnologies'. A critical literature review was conducted through the main psychology, engineering, and computer science databases, following specific inclusion and exclusion criteria. A total of 46 matches from 1998 to 2010 were selected for content analysis. The results are divided into four broad thematic areas. We found that, although attention to end-users is growing, most of the studies reviewed in this work are far from adopting holistic human-computer interaction models that include both the individual differences and the needs of users. The user is often treated merely as an object for measuring the functioning of the technological system, not as a genuine alter ego in the intrasystemic interaction. To fully benefit from the communicative functions of gaze, research on eye-tracking must emphasize user experience. Eye-tracking systems will become an effective assistive technology for integration, adaptation, and neutralization of environmental barriers only when a holistic model is applied to both the design process and the assessment of the functional components of the interaction.
... González-Rodríguez et al. [42] demonstrated that facial expression recognition is a suitable tool to measure customer satisfaction in the tourism industry and can complement or replace self-completion questionnaires. Several studies investigated eye-activity measurement [43] and found that such measurements can provide feedback on the user's internal processes, cognitive processing [44], and the perceptual characteristics of the test object by measuring gaze fixation, pointing, and duration [45], [46]. ...
Article
Full-text available
Industry 4.0 focuses on the digitalization of production processes and technological innovation. The concept of Industry 5.0 puts human-centricity, sustainability, and resilience at the heart of research and development and innovation (R&D&I) processes to allow industry to serve humanity with a long-term vision that considers planetary boundaries. Replacing the technology-driven approach with a fundamentally human-centric approach requires a deep understanding of the working environment and of workers interacting with machines, in order to optimize worker well-being, working conditions, and job outcomes. Analyzing the User eXperience (UX) of computer work in industrial environments is vital. However, user perceptions are usually hidden and a challenge to detect. Therefore, measuring and monitoring perceptions, emotional reactions, subjective elements, preferences, and attitudes in the relationships between usability, work performance, and workload is crucial. This study presents the conclusions of a literature review on user experience studies, focusing on UX testing methods and the disciplines linked to industrial diversification. Based on the literature analysis, it identifies UX testing methods and creates its own grouping to analyze them. It also examines the disciplinary context of user experience testing.
... Researchers proposed different kinds of technologies and algorithms to perform an automatic analysis for the position and direction of human eyes. The analysis of eye-movement has been widely used in different research areas, such as visual systems [1][2][3][4], neuroscience and psychology [5], healthcare and psycholinguistics [6], user experience and interaction [7], consumer research, professional performance, marketing [8], clinical research, economy, education [9], software engineering and product design, virtual reality, transportation and so on [10][11][12]. ...
Article
Full-text available
Eye-movement analysis has gained significant attention from the eye tracking research community, particularly for real-time applications. The deepest understanding of human behavior depends on accurately collecting visual information from the relevant scene locations through eye-movement analysis. Researchers may analyze a participant's eye movements while the participant is engaged in different tasks; such analysis offers insight into the participant's cognitive processes and a better understanding of human behavior. Because of the dynamics of the environment, analyzing and predicting eye movements is extremely challenging. Therefore, it is very important to develop a method that can be exploited to accurately and objectively record, analyze, and predict human visual behavior. In this research, an eye-movement analysis and prediction system is developed by integrating deep learning with linear quadratic estimation. To achieve higher classification accuracy, the Convolutional Neural Network (CNN) algorithm is adopted to analyze the instantaneous eye-movement signal and predict forthcoming eye-movement positions. Before the CNN algorithm is applied, the data collected by the camera are recorded and preprocessed, with the camera mounted on an Arduino board connected to a computer. Two algorithms are proposed to preprocess the collected data: face tracking and eye detection. They are used to determine the head pose and extract the eye regions. Then, the CNN algorithm is used to classify the eye movements, and a Kalman filter provides a more useful interaction between the proposed system and the user. Experiments covering a number of scenarios were conducted to verify the efficiency of the proposed system.
Since the core components of the proposed system are the classification and prediction of eye movements, the classification and prediction algorithms were each evaluated. For eye-movement classification, the method proposed in this thesis outperforms a Recurrent Neural Network (RNN) model in terms of classification and behavioral-classification scores. For eye-movement prediction, the evaluation results show that it predicts three types of eye movements effectively and efficiently. Thanks to its modular architecture, the system can classify and predict three types of eye movements, and its many sub-components and modules can be accessed and utilized alone or together, which is more convenient for users.
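The thesis abstract above couples a CNN classifier with a Kalman filter (linear quadratic estimation) for forecasting forthcoming gaze positions. Its actual implementation is not reproduced here; the following is a minimal sketch of a constant-velocity Kalman filter for a single gaze axis, where the class name, noise parameters, and the simplified diagonal process noise are all illustrative assumptions:

```python
# Minimal constant-velocity Kalman filter for one gaze axis (illustrative sketch).
# State: [position, velocity]; process noise is simplified to a diagonal term.

class GazeKalman1D:
    def __init__(self, q=1e-3, r=1.0):
        self.x = [0.0, 0.0]                 # state estimate [pos, vel]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
        self.q = q                          # process-noise strength (assumed)
        self.r = r                          # measurement-noise variance (assumed)

    def step(self, z, dt=1.0):
        # Predict: x = F x with F = [[1, dt], [0, 1]]
        px = self.x[0] + dt * self.x[1]
        pv = self.x[1]
        P = self.P
        # P = F P F^T + Q (Q simplified to diag(q, q))
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with measurement z, observation H = [1, 0]
        s = p00 + self.r                    # innovation variance
        k0, k1 = p00 / s, p10 / s           # Kalman gain
        y = z - px                          # innovation
        self.x = [px + k0 * y, pv + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]

    def predict_ahead(self, dt=1.0):
        # Forecast the next gaze position without a new measurement.
        return self.x[0] + dt * self.x[1]
```

Fed a smooth-pursuit-like sequence of positions, the filter's one-step-ahead forecast converges to the continuation of the trajectory, which is what lets the system anticipate the user's gaze rather than merely follow it.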
... Eye-tracking devices have been used as complementary evaluation tools in usability and user experience studies since the 1970s (Mele and Federici 2012). However, their widespread utilization has been traditionally difficult due to challenges in data analysis and some limitations of the technology itself. ...
Article
Full-text available
Eye-tracking techniques are widely used to analyze user behavior. While eye-trackers collect valuable quantitative data, the results are often described in a qualitative manner due to the lack of a model that interprets the gaze trajectories generated by routine tasks, such as reading or comparing two products. The aim of this work is to propose a new quantitative way to analyze gaze trajectories (scanpaths) using machine learning. We conducted a within-subjects study (N = 30) testing six different tasks that simulated specific user behaviors in web sites (attentional, comparing two images, reading in different contexts, and free surfing). We evaluated the scanpath results with three different classifiers (long short-term memory recurrent neural network—LSTM, random forest, and multilayer perceptron neural network—MLP) to discriminate between tasks. The results revealed that it is possible to classify and distinguish between the 6 different web behaviors proposed in this study based on the user’s scanpath. The classifier that achieved the best results was the LSTM, with a 95.7% accuracy. To the best of our knowledge, this is the first study to provide insight about MLP and LSTM classifiers to discriminate between tasks. In the discussion, we propose practical implications of the study results.
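The scanpath study above discriminates tasks with LSTM, random-forest, and MLP classifiers. As a dependency-free stand-in for those models, the sketch below extracts common summary features from a scanpath and applies a nearest-centroid rule; the feature set, data layout, and function names are illustrative assumptions, not the paper's method:

```python
import math

def scanpath_features(scanpath):
    """Summary features from a scanpath given as (x, y, duration_ms) fixations:
    fixation count, mean fixation duration, mean saccade amplitude."""
    n = len(scanpath)
    mean_dur = sum(f[2] for f in scanpath) / n
    amps = [math.hypot(b[0] - a[0], b[1] - a[1])
            for a, b in zip(scanpath, scanpath[1:])]
    mean_amp = sum(amps) / len(amps) if amps else 0.0
    return (float(n), mean_dur, mean_amp)

def nearest_centroid(train, query):
    """train: {task label: [feature vectors]}; classify query by the
    closest per-label centroid (a toy stand-in for the paper's classifiers)."""
    def centroid(vecs):
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]
    best, best_d = None, float("inf")
    for label, vecs in train.items():
        d = math.dist(centroid(vecs), query)
        if d < best_d:
            best, best_d = label, d
    return best
```

A reading-like scanpath (many short fixations, regular saccades) separates cleanly from a dwelling, attentional one (few long fixations) even with these three crude features; the paper's sequence models exploit far richer temporal structure.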
... The minimal physical movement required for eye-tracking makes it a reliable method of communication, especially for people with severe disabilities (Karlsson et al. 2018). A large body of research has demonstrated the effectiveness, usefulness, and usability of this technology (Mele and Federici 2012). For people who, due to nystagmus or other conditions, cannot control their eye movements, BCI with direct brain access may be an alternative access method (Allison et al. 2007). ...
Article
Full-text available
Communication is a vital skill in human life. People with different types of disabilities may have Complex Communication Needs and may need a wearable device to help them communicate. Augmentative and Alternative Communication (AAC) is a term that refers to methods of facilitating or replacing people's communication abilities. Brain–computer interfaces (BCI) and Eye-Gaze Technology (EGT) are two widely used access technologies in AAC devices. However, only a few studies have investigated the use of these technologies in a Virtual Reality (VR) or Augmented Reality (AR) environment. VR and AR are both modern technologies that provide immersive environments. In addition, the Mixed Reality (MR) environment combines virtual reality with real life and may offer extra benefits such as better immersion, better interaction, and more information. This paper proposed an MR-based wearable AAC device and compared the usability and acceptability of its Eye-Gaze (EG) and BCI interaction options. Eight neurotypical participants and two participants with cerebral palsy took part. The results showed high usability (accuracy = 93.30%, information transfer rate = 8.55 selections per minute) and acceptability (QUEST 2.0 = 4.30, NASA-TLX = 2.14) in the EG session. In contrast, the usability of the BCI system in the current design was questionable. This novel interaction method based on electroencephalogram signals is not yet sufficiently exploited, and more research is suggested.
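The usability figures reported above (93.30% accuracy at 8.55 selections per minute) can be converted into bits per minute with Wolpaw's information transfer rate formula, a standard summary measure in BCI and AAC evaluation. The target count N in the sketch below is an illustrative assumption, since the abstract does not state how many selectable items the interface offered:

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min.
    Bits per selection: log2 N + P log2 P + (1-P) log2((1-P)/(N-1))."""
    p = accuracy
    bits = math.log2(n_targets)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_targets - 1))
    # At p == 1 the penalty terms vanish and bits = log2 N.
    return bits * selections_per_min

# Hypothetical example: an 8-target layout at the abstract's reported rates.
itr = wolpaw_itr(8, 0.933, 8.55)
```

Note that the formula assumes equiprobable targets and uniform error distribution, so it is a rough upper bound rather than an exact throughput for a real keyboard layout.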
... It also offers an opportunity for people with disabilities to maintain their interaction with the digital world (Hemmingsson & Borgestig, 2020;Sesin et al., 2008). Furthermore, eye-tracking technology can effectively be deployed in assessing individuals with disabilities (Mele & Federici, 2012) or validating the assessment tools (Miller & O'Donnell, 2013). To sum up, eye-tracking technology seems to be an indispensable part of contemporary special education. ...
Article
Full-text available
The present study aimed to present how eye-tracking is employed in special education. While presenting the use of eye-tracking technology, the research considered several eye-tracking-related aspects, such as its potential to facilitate the education of children with special needs, the investigation of disabilities through eye-tracking technology, and the relationship between learning outcomes and eye-tracking measurements in studies including children with special needs. Within specified criteria, this systematic review was carried out using well-known electronic databases such as Web of Science, Scopus, Science Direct, Education Source, Education Resources Information Center (ERIC), and EBSCOhost. Accordingly, the 48 papers reviewed demonstrated an increasing trend in the use of eye-tracking technology in special education research. The age range of the target groups recruited in previous research was often between 6 and 11 years, and reading and word learning were the most studied learning outcomes. Moreover, dyslexia and autism spectrum disorder (ASD) were commonly investigated through children's eye movements, particularly fixation duration and fixation count. This study also revealed that eye-tracking technology is helpful in exploring reading, word learning, mathematics skills, social communication, language development, motor learning, eyesight improvement, and route learning in special education. Furthermore, it is suitable and effective to utilize eye-tracking technology as assistive technology (AT) to improve the learning environments of children with special needs.
Article
Full-text available
Objective: To identify research involving tests with social media users conducted through Eye Tracking tools, and to survey the methods employed in such research, thereby indicating the possibility of applying them to research in Information Science. Methodology: Exploratory bibliographic-review research with a quali-quantitative approach, drawing on the BRAPCI, SciELO, and Web of Science databases. Results: 137 articles with the potential to satisfy the research goals were found. Of these, only 3 remained after the selection filters were applied, allowing a detailed analysis and explanation of their methods so as to enable their possible replication in future publications. Conclusions: It was concluded that the methods observed can be replicated in Information Science research and hold great value for the field, since they expose cognitive processes concerning the relationship between information and the informational subject; that the methods found in most of the studies analyzed do not use the social networks themselves (opting for simulation); and, finally, that there is concern about the scarcity of Brazilian publications on the topic.
Article
Full-text available
People with disabilities have new and advanced methods to communicate through virtual-keyboard applications and other communication tools. In this paper, we utilized a novel deep reinforcement learning-based technique for determining the location of the accessible options in a gaze-controlled tree-based menu selection system. A virtual English keyboard has been incorporated into the layout of the new user interface, which also includes improved gaze-based requests for text modification. The two methods used to manage the system are: 1) eye tracking for typing on the virtual keyboard and 2) eye tracking with a soft-switch device utilizing deep reinforcement learning. Simulation results show that the DRL-based algorithm outperforms other baseline techniques in terms of total loss and accuracy.
Article
Full-text available
Digital technologies in, on, and around bicycles and cyclists, collectively called Smart Cycling Technologies (SCTs), are gaining ground, and it is important to evaluate their impact on subjective cycling experiences. Future evaluations can inform the design of SCTs, which in turn can help to realize the abundant benefits of cycling. Wearable body sensors and advanced driver assistance systems are increasingly studied in other domains; however, evaluation methods integrating such sensors and systems in the field of cycling research have been under-reviewed and under-conceptualized. This paper therefore presents a systematic literature review and a conceptual framework to support the use of body sensors in evaluating the impact of SCTs on perceptions, emotions, feelings, affect, and more, during outdoor bicycle rides. The literature review (n = 40) showed that research on this specific use of body sensors is scarce. Moreover, existing research designs are typically not tailored to determine the impact of SCTs on cycling experience at large scale. Most studies had small sample sizes and explored a limited set of sensors in chest belts and wristbands for evaluating stress response. The evaluation framework helps to overcome these limitations by synthesizing crucial factors and methods for future evaluations in four categories: (1) experiences with SCTs, (2) experience measurements, (3) causal analysis, and (4) confounding variables. The framework also identifies which types of sensors fit well with which types of experiences and SCTs. The seven directions for future research include, for example, experiences of psychological flow, sensors in e-textiles, and cycling with biofeedback. Future interactions between cyclists and SCTs will likely resemble a collaboration between humans and artificial intelligence. Altogether, this paper helps to understand whether future support systems for cyclists truly make cycling safer and more attractive.
Article
Proper calibration of human reliance on AI is fundamental to achieving complementary performance in AI-assisted human decision-making. Most previous works focused on assessing user reliance, and more broadly trust, retrospectively, through user perceptions and task-based measures. In this work, we explore the relationship between eye gaze and reliance under varying task difficulties and AI performance levels in a spatial reasoning task. Our results show a strong positive correlation between percent gaze duration on the AI suggestion and user AI task agreement, as well as user perceived reliance. Moreover, user agency is preserved particularly when the task is easy and when AI performance is low or inconsistent. Our results also reveal nuanced differences between reliance and trust. We discuss the potential of using eye gaze to gauge human reliance on AI in real-time, enabling adaptive AI assistance for optimal human-AI team performance.
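The correlation reported above rests on two simple quantities: the share of fixation time spent on the AI suggestion's area of interest (AOI), and its Pearson correlation with agreement or perceived reliance. A minimal sketch with rectangular AOIs; the data layout and function names are illustrative assumptions:

```python
def pct_dwell_on_aoi(fixations, aoi):
    """Share of total fixation time falling inside a rectangular AOI.
    fixations: (x, y, duration_ms) tuples; aoi: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = aoi
    total = sum(d for _, _, d in fixations)
    inside = sum(d for x, y, d in fixations
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / total if total else 0.0

def pearson_r(xs, ys):
    """Pearson correlation between per-participant dwell shares and
    a reliance measure (e.g., rate of agreement with the AI)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Computed per trial, the dwell share becomes a real-time signal, which is what makes the adaptive-assistance idea in the abstract plausible: the system can react to where attention is going before the decision is made.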
Article
Full-text available
An experiment is reported that investigated the application of eye movement analysis in the evaluation of webpage usability. Participants completed two tasks on each of four website homepages. Eye movements and performance data (Response Scores and Task Completion Times) were recorded. Analyses of performance data provided reliable evidence for a variety of Page and Task effects, including a Page by Task interaction. Four eye movement measures (Average Fixation Duration, Number of Fixations, Spatial Density of Fixations, and Total Fixation Duration) were also analysed statistically, and were found to be sensitive to similar patterns of difference between Pages and Tasks that were evident in the performance data, including the Page by Task interaction. However, this interaction failed to emerge as a significant effect (although the main effects of Page and Task did). We discuss possible reasons for the nonsignificance of the interaction, and propose that for eye movement analysis to be maximally useful in interface-evaluation studies, the method needs to be refined to accommodate the temporal and dynamic aspects of interface use, such as the stage of task processing that is being engaged in.
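The four eye-movement measures named in this abstract can be computed directly from a list of fixations. The sketch below assumes (x, y, duration_ms) fixation tuples and uses one common definition of spatial density, the share of grid cells containing at least one fixation; the screen size and grid resolution are illustrative assumptions:

```python
def eye_movement_measures(fixations, screen=(1280, 1024), grid=8):
    """Compute the abstract's four measures from (x, y, duration_ms) fixations:
    Number of Fixations, Total Fixation Duration, Average Fixation Duration,
    and Spatial Density (share of grid cells with at least one fixation)."""
    n = len(fixations)
    total = sum(d for _, _, d in fixations)
    cw, ch = screen[0] / grid, screen[1] / grid   # grid cell width/height
    cells = {(int(x // cw), int(y // ch)) for x, y, _ in fixations}
    return {
        "number_of_fixations": n,
        "total_fixation_duration": total,
        "average_fixation_duration": total / n if n else 0.0,
        "spatial_density": len(cells) / (grid * grid),
    }
```

Low spatial density with long average fixations typically indicates focused, efficient search, while high density with many short fixations suggests inefficient scanning, which is why these measures track the Page and Task effects found in the performance data.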
Chapter
Full-text available
This chapter provides a practical guide for either the software usability engineer who considers the benefits of eye tracking or the eye tracking specialist who considers software usability evaluation as an application. Usability evaluation is defined rather loosely by industry as any of several applied techniques where users interact with a product, system, or service and some behavioral data are collected. Usability goals are often stipulated as criteria, and an attempt is made to use test participants similar to the target-market users. The chapter discusses methodological issues first in usability evaluation and then in the eye-tracking realm. An integrated knowledge of both of these areas is beneficial for the experimenter who conducts eye tracking as part of a usability evaluation. Within each of these areas, major issues are presented in the chapter by a rhetorical questioning style. By presenting the usability evaluation, the practical use of an eye-tracking methodology is placed into a proper and realistic perspective.
Article
This technical report was developed by the National Joint Committee for the Communication Needs of Persons With Severe Disabilities. National Joint Committee member organizations and their respective representatives who prepared this statement include the American Association of Mental Retardation, Mary Ann Romski; the American Occupational Therapy Association, Jane Rourk; the American Speech-Language-Hearing Association, Beth Mineo Mollica, Rose Sevcik, Diane Paul-Brown (ex officio), and Alex F. Johnson (monitoring vice president); the Council for Exceptional Children, Division for Communicative Disabilities and Deafness, Lee McLean (chair); RESNA, Kevin Caves; The Association for Persons with Severe Handicaps, Pat Mirenda and Martha Snell; and the United States Society for Augmentative Alternative Communication, David Yoder.
Article
This experiment involved the question of where human observers look in a picture. The results indicated that observers fixate earlier, more often, and with longer durations on objects that have a low probability of appearing in a scene (e.g., an octopus in a farm scene) than on objects that have a high probability of appearing (e.g., a tractor in a farm scene). These findings (a) imply a role of cognitive factors in peripheral visual processing and (b) suggest a possible relationship between the nature of information initially acquired from a picture and subsequent recognition memory for that picture.
Article
Eye tracking is a technique whereby an individual’s eye movements are measured so that the researcher knows both where a person is looking at any given time and the sequence in which the person’s eyes are shifting from one location to another. Tracking people’s eye movements can help HCI researchers to understand visual and display-based information processing and the factors that may impact the usability of system interfaces. In this way, eye-movement recordings can provide an objective source of interface-evaluation data that can inform the design of improved interfaces. Eye movements also can be captured and used as control signals to enable people to interact with interfaces directly without the need for mouse or keyboard input, which can be a major advantage for certain populations of users, such as disabled individuals. We begin this article with an overview of eye-tracking technology and progress toward a detailed discussion of the use of eye tracking in HCI and usability research. A key element of this discussion is to provide a practical guide to inform researchers of the various eye-movement measures that can be taken and the way in which these metrics can address questions about system usability. We conclude by considering the future prospects for eye-tracking research in HCI and usability testing.
Chapter
This chapter reveals that for understanding the prospects and problems of text entry by gaze, it is instrumental to know how eye-tracking devices work and to understand their limitations. Text entry by gaze is intended for users with disabilities. There are also other gaze controlled applications intended for the same user group. In one sense, text entry by eye gaze is quite similar to any screen-based text entry technique, such as the on-screen keyboards used with tablet PCs. The interface is more or less the same, only the interaction technique for pointing and selecting changes. Instead of a stylus or other pointing device, eye gaze is used. The most common way to use gaze for text entry is direct pointing by looking at the desired letter. A typical setup has an on-screen keyboard with a static layout, an eye tracking device that tracks the user's gaze, and a computer that analyzes the user's gaze behavior.
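Dwell time is the usual substitute for a click in the direct-pointing setup this chapter describes: a key is selected once gaze has rested on it for a threshold duration. A minimal simulation of that selection loop; the function names, data layout, and the 500 ms threshold are illustrative assumptions:

```python
def dwell_select(samples, key_at, dwell_ms=500):
    """Simulate dwell-time selection for gaze typing.
    samples: (timestamp_ms, x, y) gaze samples in time order;
    key_at(x, y) -> key label under the gaze point, or None.
    A key is 'typed' once gaze has rested on it for dwell_ms."""
    typed = []
    current, start = None, None
    for t, x, y in samples:
        key = key_at(x, y)
        if key != current:
            current, start = key, t          # gaze moved: restart the timer
        elif key is not None and t - start >= dwell_ms:
            typed.append(key)                # dwell threshold reached: select
            current, start = None, None      # require re-entry before repeating
    return typed
```

The threshold embodies the Midas-touch trade-off the chapter alludes to: too short and every glance types a letter, too long and entry becomes fatiguing, which is why real gaze keyboards make the dwell time user-adjustable.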