Article

The Eye Tracking Methods in User Interfaces Assessment


Abstract

Acquiring basic information about the user’s needs is one of the most important problems a user interface designer has to face. It influences the selection of design patterns that match the user’s requirements. Frequently, many possible solutions can be found and an appropriate choice has to be made. The results of previously conducted research on human-computer interaction have shown that collecting and analysing eye movement data may also be useful in user interface assessment. The aim of the preliminary studies presented in this paper was to analyse to what extent eye tracking methods and eye movement metrics can support the process of user interface assessment and how this process can be automated.


... As a consequence, substantially more institutions have obtained a new tool for dealing with their quotidian tasks. For example, eye-movement technology may be used for things such as ascertaining the usability of various interface types [1,2], recognizing visual patterns when reading [3,4] or searching for interesting information, and differentiating between experts and novices [5,6]. Usage of eye-tracking solutions also plays an important role in medicine [7], psychology [8], and cognitive studies [9]. ...
Article
Full-text available
Analysis of eye movement has attracted a lot of attention recently in terms of exploring areas of people’s interest, cognitive ability, and skills. The basis for eye movement usage in these applications is the detection of its main components—namely, fixations and saccades, which facilitate understanding of the spatiotemporal processing of a visual scene. In the presented research, a novel approach for the detection of eye movement events is proposed, based on the concept of approximate entropy. By using the multiresolution time-domain scheme, a structure entitled the Multilevel Entropy Map was developed for this purpose. The dataset was collected during an experiment utilizing the “jumping point” paradigm. Eye positions were registered with a 1000 Hz sampling rate. For event detection, the kNN classifier was applied. The best classification efficiency in recognizing the saccadic period ranged from 83% to 94%, depending on the sample size used. These promising outcomes suggest that the proposed solution may be used as a potential method for describing eye movement dynamics.
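The paper above does not include an implementation; as a rough illustration of the underlying measure (plain approximate entropy over a window of eye positions, not the Multilevel Entropy Map itself), a minimal Python sketch follows. The function name, window length and parameter values are assumptions for illustration only.

```python
import numpy as np

def approximate_entropy(u, m=2, r=None):
    """Approximate entropy of a 1-D signal (illustrative parameters).

    u : 1-D array of samples (e.g. horizontal eye position)
    m : embedding dimension
    r : tolerance; defaults to 0.2 * std(u), a common choice
    """
    u = np.asarray(u, dtype=float)
    n = len(u)
    if r is None:
        r = 0.2 * np.std(u)

    def phi(m):
        # build the (n - m + 1) overlapping m-sample vectors
        x = np.array([u[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of vectors
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # fraction of vectors within tolerance r of each vector
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# example: entropy of a 100 ms window (100 samples at 1000 Hz)
window = np.random.randn(100).cumsum()      # stand-in for gaze positions
print(approximate_entropy(window))
```

Presumably, in the multiresolution scheme described above such a value would be computed at several window sizes and the resulting values used as features for the kNN classifier.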
... There are plenty of potential applications of eye tracking: medicine [1,2], psychology [3], sociology [4], education [5], usability assessment [6] or advertisement analysis. There is also a growing number of applications using eye gaze as a new input modality [7]. ...
Article
Full-text available
Recently, eye tracking has become a popular technique that may be used for a variety of applications, from medical ones through psychological studies and user experience analysis to interactive games. Video-based oculography (VOG) is the most popular technique because it is non-intrusive, can be used in users’ natural environment and is relatively cheap, as it uses only classic cameras. There are already well-established methods for detecting the eye in a camera capture. However, to be usable in gaze position estimation, this information must be associated with an area in the observed scene, which requires evaluating several parameters. These parameters are typically estimated during a process called calibration. The main purpose of the software described in this paper is to establish a common platform that is easy to use and may be applied in different calibration scenarios. Apart from normal regression-based calibration, the ETCAL library also allows the use of more sophisticated methods such as automatic parameter optimization or automatic detection of gaze targets. The library is easily extendable and may be accessed through a convenient Web/REST interface.
... Obviously, eye movement studies are useful not only in medicine as they provide methods facilitating diagnosis and treatment for people with a range of diseases [4] [5], but also in discovering eye movement behavior patterns of physicians with various skill levels, for instance when looking at X-Ray images [6]. Eye tracking research may also be found in other areas such as psychology, sociology, education, usability assessment and advertising [7] [8]. ...
Article
Full-text available
Eye tracking is an increasingly popular technique that may be used for a variety of applications including user experience analysis, usability engineering and interactive games. Moreover, this technique may be useful inter alia in education, psychological tests and medical diagnosis. Video-based oculography (VOG) is the most commonly utilized technique because it is non-intrusive, can be applied in users' natural environments and is relatively cheap as it uses only basic cameras. There are already well-established methods for eye detection using still images registered by a camera. However, to be usable in the estimation of gaze position, eye image features must be associated with the place where an observer is looking; this typically requires the evaluation of several parameters during a process called calibration. The parameters significantly influence the quality of the analysis of subsequently collected data, thus they should be adjusted to the eye tracker used, the user and the environment. The purpose of this study is to present various calibration techniques and introduce an open, extendable and ready-to-use software application that implements these techniques. This software implements several algorithms already described in the literature and introduces some novel techniques. It provides the opportunity to compare different methods including plug-in self-developed filters and optimization algorithms; it also allows for analysis of results.
Article
The paper focuses on utilizing eye movements for controlling computer programs. Four different approaches were considered, including gazing at an element, blinking, and two solutions based on eye gestures. The methods were implemented in a simple photo viewer; however, the developed functionalities can be utilized in other applications (e.g. switching e-book pages or on-screen keyboard elements). Various sizes of the gazed components were verified, as well as the time required for performing particular actions, the number of unwanted element choices, and the number of gesture repetitions needed. The methods were also assessed by the participants of the experiments. Based on the obtained results, it may be concluded that components of size 200 px are convenient for application control. The users’ opinions and the experiments’ outcomes revealed that the gaze-based method and gestures based on joining points are satisfactory solutions.
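The abstract above does not come with code; the dwell-based variant of gaze control it mentions (selecting an element by gazing at it) can be hedged in a short sketch. The class name, dwell time and component layout below are illustrative assumptions, not the authors' implementation.

```python
import time

class DwellSelector:
    """Triggers an action when gaze stays inside a component's
    bounding box for `dwell_time` seconds (values are illustrative)."""

    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time
        self._target = None
        self._enter_time = None

    def update(self, gaze_x, gaze_y, components):
        """components: list of (name, x, y, w, h) rectangles.
        Returns the selected component name or None."""
        hit = None
        for name, x, y, w, h in components:
            if x <= gaze_x <= x + w and y <= gaze_y <= y + h:
                hit = name
                break
        now = time.monotonic()
        if hit != self._target:
            # gaze moved to a different component (or left all of them)
            self._target, self._enter_time = hit, now
            return None
        if hit is not None and now - self._enter_time >= self.dwell_time:
            self._enter_time = now          # avoid immediate re-trigger
            return hit
        return None

# usage: buttons of 200 x 200 px, the size suggested by the study's findings
buttons = [("next", 800, 400, 200, 200), ("prev", 100, 400, 200, 200)]
selector = DwellSelector(dwell_time=1.0)
# selected = selector.update(gx, gy, buttons)  # call once per gaze sample
```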
Article
During eye-tracking experiments, a large amount of data is collected. Usually, it is stored in text files or in file formats meant for scientific data. Because these formats provide limited mechanisms for more comprehensive data analysis, various database models were considered as an alternative in this study. They included a relational database defined in MS SQL Server and two commonly used NoSQL solutions – MongoDB and Cassandra. The data structures defined for each of them and their performance were compared based on a chosen set of operations. A special multi-database platform was developed, enabling access to all database models without knowledge of their data-manipulation mechanisms. The studies revealed that Cassandra is the best database for storing eye-tracking data when adjusted to the data-retrieving operations. MongoDB exhibited similar performance for various query types, yet with worse execution times. The worst results were obtained for the relational database: designed by assumption to avoid redundancy, it stores data in many tables, and many join operations are needed while reading data, which results in lower query efficiency.
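As a purely hypothetical illustration of a document-oriented layout for raw eye-tracking samples (the database, collection and field names below are assumptions, not the schema used in the study), a pymongo sketch might look like this:

```python
from pymongo import MongoClient

# hypothetical connection and schema; not the study's actual database layout
client = MongoClient("mongodb://localhost:27017")
samples = client["eye_tracking"]["samples"]

# index matching the typical retrieval pattern: one recording, in time order
samples.create_index([("experiment", 1), ("user", 1), ("t_ms", 1)])

# one document per registered sample: recording identifiers plus
# timestamp and gaze coordinates
batch = [
    {"experiment": "exp01", "user": "u01", "t_ms": 0, "x": 512.3, "y": 384.1},
    {"experiment": "exp01", "user": "u01", "t_ms": 1, "x": 512.8, "y": 383.9},
]
samples.insert_many(batch)

# read back all samples of the recording in chronological order
cursor = samples.find({"experiment": "exp01", "user": "u01"}).sort("t_ms", 1)
```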
Chapter
With the spread of technologies based on human-computer interaction in the field of education, such technologies have ceased to be merely everyday problem-solving tools and have begun to offer learners enriched learning environment designs that appeal to multiple senses. The development and education of individuals with ASD (autism spectrum disorder) is one of the areas in which technology-based applications have become widespread. In recent years, the effectiveness of these technologies has also become one of the most frequently investigated topics in the education of individuals with ASD, and the rapid growth of technology use in this field broadens the scope of possible studies. Technology-supported applications may be referred to as assistive technology, special education technology or rehabilitation technology. Assistive technology applications are used not only in the learning of individuals with ASD but also in monitoring their development. In the literature, assistive technology applications are grouped as (a) low-level, (b) medium-level and (c) high-level technology applications (Dell, Newton & Petroff, 2017). Following research findings showing that individuals with ASD have strong visual-perceptual skills and show intense interest in technological devices, researchers began to examine the effect of technology-based applications on teaching social interaction skills, the area in which these individuals experience the greatest difficulty (Hetzroni & Tannous, 2004). In particular, the number of studies examining the effects of applications that integrate low- or medium-level technologies with high-level technology on the learning of individuals with ASD has grown rapidly in recent years. In addition, applications based on artificial intelligence architectures, such as robotics, virtual reality and augmented reality, are among the promising approaches for monitoring the development of individuals with ASD through interaction with intelligent characters (e.g. robots or virtual characters) and for creating effective learning environments. In the coming years, these applications are expected to take a more comprehensive place in the digital transformation of special education practice. Meeting this expectation will depend on research findings that address research problems testing hypotheses which take the heterogeneous nature of ASD into account. The aim of this chapter is to introduce the explainable artificial intelligence applications used in developing robots and virtual intelligent characters, which are important interaction partners in advanced assistive technology applications, and to discuss their practical implications.
Conference Paper
This paper presents a proof of concept of a distributed control architecture dedicated to an Autonomous Mobile Platform (AMP). The greatest advantage of the proposed solution is its scalability, which enables the application of advanced control algorithms based on multiple sensors supplying information to multiple processing units. The key is a highly effective on-board communication system. The proposed method is based on Serial Peripheral Interface (SPI) communication joining the STM32 and Raspberry Pi processing units, which share the tasks of processing the signals necessary for AMP control. The proposed architecture and test results for on-board communication performance are presented.
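A minimal sketch of what the Raspberry Pi side of such an SPI link could look like, using the Python spidev package; the command byte, frame layout and bus speed below are assumptions, not the protocol described in the paper.

```python
import spidev

# open SPI bus 0, chip-select 0 on the Raspberry Pi
spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 1_000_000
spi.mode = 0

def exchange(command, payload):
    """Send a command byte plus payload to the microcontroller and read
    the reply of the same length (full-duplex SPI transfer)."""
    frame = [command] + list(payload)
    return spi.xfer2(frame)

# hypothetical example: request four bytes of sensor data (command 0x10)
reply = exchange(0x10, [0x00, 0x00, 0x00, 0x00])
print(reply)
spi.close()
```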
Conference Paper
Full-text available
With growing access to cheap low-end eye trackers using simple web cameras, there is also a growing demand for easy and fast usage of these devices by untrained and unsupervised end users. For such users, the necessity to calibrate the eye tracker prior to its first usage is often perceived as obtrusive and inconvenient. At the same time, perfect accuracy is not necessary for many commercial applications. Therefore, the idea of implicit calibration attracts more and more attention. Algorithms for implicit calibration are able to calibrate the device without any active collaboration with users. In particular, real-time implicit calibration, which is able to calibrate a device on-the-fly while a person uses an eye tracker, seems to be a reasonable solution to the aforementioned problems. The paper presents examples of implicit calibration algorithms (including their real-time versions) based on the idea of probable fixation targets (PFT). The algorithms were tested during a free-viewing experiment and compared with the state-of-the-art PFT-based algorithm and with explicit calibration results.
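The core PFT idea, pairing each incoming fixation with the most probable on-screen target and refitting the mapping from the accumulated pairs, can be hedged in a short sketch. The affine mapping, the naive nearest-target assignment and all names below are simplifications for illustration, not the authors' algorithm.

```python
import numpy as np

def fit_affine(raw, screen):
    """Least-squares affine mapping from raw eye positions to screen points."""
    A = np.column_stack([raw, np.ones(len(raw))])
    coef, *_ = np.linalg.lstsq(A, screen, rcond=None)      # shape (3, 2)
    return lambda p: np.append(p, 1.0) @ coef

def implicit_calibration(raw_fixations, targets, initial_model):
    """raw_fixations: (n, 2) uncalibrated fixation positions in order of
    occurrence; targets: (m, 2) screen coordinates of probable fixation
    targets (buttons, headlines, ...); initial_model: rough starting map."""
    model = initial_model
    paired_raw, paired_screen = [], []
    for raw in raw_fixations:
        mapped = model(raw)                                  # current estimate
        # naive assignment: nearest probable target to the mapped point
        target = targets[np.argmin(np.linalg.norm(targets - mapped, axis=1))]
        paired_raw.append(raw)
        paired_screen.append(target)
        if len(paired_raw) >= 3:                             # enough for a fit
            model = fit_affine(np.array(paired_raw), np.array(paired_screen))
    return model

# usage sketch: start from an identity-like guess and refine on-the-fly
# model = implicit_calibration(fixations, screen_targets, lambda p: p)
```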
Conference Paper
The eye movement analysis undertaken in much research is conducted to better understand the biology of the brain and the functioning of the oculomotor system. The studies presented in this paper treated the eye movement signal as the output of a nonlinear dynamic system and concentrated on determining whether chaotic behaviour exists. The nature of the system was examined during a fixation, one of the key components of the eye movement signal, taking its vertical velocity into account. The results were compared with those obtained for the horizontal direction. This comparison showed that both variables provide a similar representation of the underlying dynamics. In both cases, the analysis revealed the chaotic nature of eye movement for the first 200 ms just after a stimulus position change. Subsequently, the signal characteristic tended to become convergent; however, in some cases, depending on the part of the fixation duration, chaotic behaviour was still observable.
Article
Eye movement is one of the key biological signals whose further analysis may reveal substantial information enabling greater understanding of the biology of the brain and its mechanisms. Several methods for processing such signals have been developed; however, new solutions are continuously being sought. This paper presents an analysis of one of the main eye movement components – fixation – using nonlinear time series methods. This analysis, aimed at determining the existence of chaotic behaviour exhibited by many biological systems, was based on an experiment utilising a ’jumping point’ stimulus. 29 stimuli were used, each presented for 3 s at a different screen position. 24 subjects participated in the experiment, which consisted of two sessions conducted with a two-month interval; thus the experimental dataset included 48 recordings. The first derivative of the horizontal eye movement coordinates registered during a fixation served as the time series used for reconstruction of the eye movement dynamics. The analysis was performed by means of the Largest Lyapunov Exponent, whose values were studied over various time scopes of a fixation duration. A positive averaged value of this exponent, indicating chaotic behaviour, was observed for the first 200 points in the case of all studied time series. In the remaining analysed scopes, negative average exponent values were obtained; however, for a number of users the eye movement signal behaviour changed from convergent to chaotic and conversely.
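For illustration, a simplified Rosenstein-style estimate of the Largest Lyapunov Exponent from a velocity time series is sketched below; the embedding dimension, delay and other parameters are placeholders, not the values used in the study.

```python
import numpy as np

def largest_lyapunov(x, m=3, tau=2, min_sep=10, max_k=40):
    """Rosenstein-style estimate of the largest Lyapunov exponent
    (per sample) of a 1-D time series; a simplified illustration,
    not the authors' exact procedure or parameter choice."""
    x = np.asarray(x, dtype=float)
    n_vec = len(x) - (m - 1) * tau
    # delay embedding: one row per reconstructed phase-space point
    emb = np.array([x[i:i + (m - 1) * tau + 1:tau] for i in range(n_vec)])

    # pairwise distances; exclude temporally close neighbours
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    for i in range(n_vec):
        lo, hi = max(0, i - min_sep), min(n_vec, i + min_sep + 1)
        dist[i, lo:hi] = np.inf
    neighbour = np.argmin(dist, axis=1)

    # average log divergence of neighbouring trajectories over k steps
    divergence = []
    for k in range(1, max_k):
        pairs = [(i, neighbour[i]) for i in range(n_vec)
                 if i + k < n_vec and neighbour[i] + k < n_vec]
        d = [np.linalg.norm(emb[i + k] - emb[j + k]) for i, j in pairs]
        d = [v for v in d if v > 0]
        if d:
            divergence.append(np.mean(np.log(d)))

    # slope of the mean log divergence vs. k approximates the exponent
    ks = np.arange(1, 1 + len(divergence))
    slope, _ = np.polyfit(ks, divergence, 1)
    return slope

# example: velocity (first derivative) of horizontal fixation positions
positions = np.cumsum(np.random.randn(1000)) * 0.01   # stand-in signal
velocity = np.diff(positions)
print(largest_lyapunov(velocity))
```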
Conference Paper
The topic of this paper concerns a way of controlling a robotic manipulator using hand gestures. The previously developed hand landmark detection and localization algorithm is now adapted to work with a robotic arm manipulator. The proposed algorithm is used for hand gesture recognition and continuous feature point tracking. The point coordinates and their detected paths are transformed into a set of orientation and location values in order to achieve proper motion of the manipulator. The article presents the implementation of the hand landmark tracking algorithm and of the robot arm manipulator control algorithm. The main emphasis is put on the critical problems encountered during the implementation of these two algorithms and on the translation of the detection coordinate system into the robot task space.
Article
Full-text available
An experiment is reported that investigated the application of eye movement analysis in the evaluation of webpage usability. Participants completed two tasks on each of four website homepages. Eye movements and performance data (Response Scores and Task Completion Times) were recorded. Analyses of performance data provided reliable evidence for a variety of Page and Task effects, including a Page by Task interaction. Four eye movement measures (Average Fixation Duration, Number of Fixations, Spatial Density of Fixations, and Total Fixation Duration) were also analysed statistically, and were found to be sensitive to similar patterns of difference between Pages and Tasks that were evident in the performance data, including the Page by Task interaction. However, this interaction failed to emerge as a significant effect (although the main effects of Page and Task did). We discuss possible reasons for the nonsignificance of the interaction, and propose that for eye movement analysis to be maximally useful in interface-evaluation studies, the method needs to be refined to accommodate the temporal and dynamic aspects of interface use, such as the stage of task processing that is being engaged in.
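A small sketch of how the four eye movement measures named above might be computed from a list of detected fixations; the input format, the grid-based notion of spatial density and the assumed page size are illustrative choices, not the original analysis.

```python
import numpy as np

def fixation_measures(fixations, page_size=(1024, 768), cell=50):
    """fixations: list of (x, y, duration_ms) tuples for one task on one page.
    Returns the four measures discussed above; the spatial density definition
    (fraction of occupied grid cells) is one simple possibility, not
    necessarily the one used in the original study."""
    xs = np.array([f[0] for f in fixations], dtype=float)
    ys = np.array([f[1] for f in fixations], dtype=float)
    durations = np.array([f[2] for f in fixations], dtype=float)

    # cells of a grid laid over the page that contain at least one fixation
    cells = {(int(x // cell), int(y // cell)) for x, y in zip(xs, ys)}
    total_cells = (page_size[0] // cell) * (page_size[1] // cell)

    return {
        "number_of_fixations": len(fixations),
        "average_fixation_duration_ms": durations.mean(),
        "total_fixation_duration_ms": durations.sum(),
        "spatial_density": len(cells) / total_cells,
    }

example = [(100, 200, 250), (400, 220, 180), (410, 230, 300)]
print(fixation_measures(example))
```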
Article
Full-text available
Eye movement is a newly emerging modality in human-computer interfaces. With better access to devices able to measure eye movements (so-called eye trackers), it becomes accessible even in ordinary environments. However, the first problem that must be faced when working with eye movements is correct mapping from the output of the eye tracker to a gaze point – the place on the screen where the user is looking. That is why work must always start with calibration of the device. The paper describes the calibration process and analyses the possible steps and ways to simplify it.
Conference Paper
Full-text available
Eye movement data may be used for various purposes. In most cases, it is utilized to estimate a gaze point – the place where a person is looking. Most devices registering eye movements, called eye trackers, return information about the relative position of the eye, without information about the gaze point. To obtain this information, it is necessary to build a function that maps the output from an eye tracker to the horizontal and vertical coordinates of a gaze point. Usually, eye movement is recorded while a user tracks a group of stimuli – a set of points displayed on a screen. The paper analyzes possible scenarios of such stimulus presentation and discusses the influence of five different regression functions and two different head-mounted eye trackers on the results.
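One of the simplest candidate regression functions in such a comparison is a second-order polynomial mapping from raw eye-tracker output to screen coordinates; the sketch below (plain numpy least squares) illustrates this general approach and is not necessarily one of the five functions evaluated in the paper.

```python
import numpy as np

def design_matrix(ex, ey):
    """Second-order polynomial terms of the raw eye-tracker output."""
    return np.column_stack([
        np.ones_like(ex), ex, ey, ex * ey, ex ** 2, ey ** 2,
    ])

def fit_calibration(eye_xy, screen_xy):
    """eye_xy: (n, 2) raw eye positions recorded while the user looked at
    n calibration points; screen_xy: (n, 2) known point coordinates."""
    A = design_matrix(eye_xy[:, 0], eye_xy[:, 1])
    # independent least-squares fits for horizontal and vertical coordinates
    coef_x, *_ = np.linalg.lstsq(A, screen_xy[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, screen_xy[:, 1], rcond=None)
    return coef_x, coef_y

def estimate_gaze(eye_xy, coef_x, coef_y):
    """Map new raw samples to estimated on-screen gaze points."""
    A = design_matrix(eye_xy[:, 0], eye_xy[:, 1])
    return np.column_stack([A @ coef_x, A @ coef_y])

# usage with, e.g., a 9-point calibration grid (values illustrative):
# coef_x, coef_y = fit_calibration(recorded_eye, shown_points)
# gaze = estimate_gaze(new_eye_samples, coef_x, coef_y)
```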
Article
Full-text available
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (Eds.) (2011). Eye tracking: a comprehensive guide to methods and measures. Oxford, UK: Oxford University Press.
Chapter
Full-text available
Eye-movement tracking is a method that is increasingly being employed to study usability issues in HCI contexts. The objectives of the present chapter are threefold. First, we introduce the reader to the basics of eye-movement technology, and also present key aspects of practical guidance to those who might be interested in using eye tracking in HCI research, whether in usability-evaluation studies, or for capturing people's eye movements as an input mechanism to drive system interaction. Second, we examine various ways in which eye movements can be systematically measured to examine interface usability. We illustrate the advantages of a range of different eye-movement metrics with reference to state-of-the-art usability research. Third, we discuss the various opportunities for eye-movement studies in future HCI research, and detail some of the challenges that need to be overcome to enable effective application of the technique in studying the complexities of advanced interactive-system use.
Conference Paper
Full-text available
This paper introduces the concept of a search geometry to describe the eye behaviour of users searching with different tasks across multiple sites. To validate the concept, we present results from an eye tracking study of four common tasks on three different Web portals. The findings show a consistent search geometry that describes eye behaviour across the different sites and tasks. The geometry illustrates that a small set of page regions account for a large proportion of eye movements. The results are briefly discussed in relation to theories of information foraging and information scent.
Conference Paper
Full-text available
The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.
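As an example of one class in that taxonomy, a velocity-threshold identification (I-VT) routine can be sketched in a few lines; the threshold and sampling rate are placeholders, and positions are assumed to be in degrees of visual angle.

```python
import numpy as np

def ivt(x, y, sampling_rate=1000, velocity_threshold=100.0):
    """Velocity-threshold identification (I-VT): samples whose
    point-to-point velocity stays below the threshold are grouped
    into fixations; the remaining samples are treated as saccades."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dt = 1.0 / sampling_rate
    velocity = np.hypot(np.diff(x), np.diff(y)) / dt      # deg/s
    is_fixation = np.concatenate([[True], velocity < velocity_threshold])

    fixations, start = [], None
    for i, fix in enumerate(is_fixation):
        if fix and start is None:
            start = i
        elif not fix and start is not None:
            fixations.append((start, i - 1,
                              x[start:i].mean(), y[start:i].mean()))
            start = None
    if start is not None:
        fixations.append((start, len(x) - 1,
                          x[start:].mean(), y[start:].mean()))
    # each entry: (first sample, last sample, centroid x, centroid y)
    return fixations
```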
Conference Paper
Full-text available
An eye tracking study was conducted to evaluate specific design features for a prototype web portal application. This software serves independent web content through separate, rectangular, user-modifiable portlets on a web page. Each of seven participants navigated across multiple web pages while conducting six specific tasks, such as removing a link from a portlet. Specific experimental questions included (1) whether eye tracking-derived parameters were related to page sequence or user actions preceding page visits, (2) whether users were biased to traveling vertically or horizontally while viewing a web page, and (3) whether specific sub-features of portlets were visited in any particular order. Participants required 2-15 screens, and from 7-360+ seconds to complete each task. Based on analysis of screen sequences, there was little evidence that search became more directed as screen sequence increased. Navigation among portlets, when at least two columns exist, was biased towards horizontal search (across columns) as opposed to vertical search (within column). Within a portlet, the header bar was not reliably visited prior to the portlet's body, evidence that header bars are not reliably used for navigation cues. Initial design recommendations emphasized the need to place critical portlets on the left and top of the web portal area, and that related portlets do not need to appear in the same column. Further experimental replications are recommended to generalize these results to other applications.
Conference Paper
Full-text available
In this paper we evaluate several of the most popular algorithms for segmenting fixations from saccades by testing these algorithms on the scanning patterns of toddlers. We show that by changing the parameters of these algorithms we change the reported fixation durations in a systematic fashion. However, we also show how choices in analysis can lead to very different interpretations of the same eye-tracking data. Methods for reconciling the disparate results of different algorithms, as well as suggestions for the use of fixation identification algorithms in analysis, are presented.
Article
Full-text available
It is hypothesized that the number, position, size, and duration of fixations are functions of the metric used for dispersion in a dispersion-based fixation detection algorithm, as well as of the threshold value. The sensitivity of the I-DT algorithm to the various independent variables was determined through the analysis of gaze data from chess players during a memory recall experiment. A procedure was followed in which scan paths were generated at distinct intervals in a range of threshold values for each of five different metrics of dispersion. The percentage of points of regard (PORs) used, the number of fixations returned, the spatial dispersion of PORs within fixations, and the difference between the scan paths were used as indicators to determine an optimum threshold value. It was found that a fixation radius of 1 degree provides a threshold that will ensure replicable results in terms of the number and position of fixations while utilizing about 90% of the gaze data captured.
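A compact sketch of the dispersion-threshold algorithm (I-DT) using the max-min dispersion metric discussed above; the threshold and minimum-duration values are placeholders rather than the settings examined in the paper.

```python
import numpy as np

def idt(x, y, dispersion_threshold=1.0, min_duration_samples=100):
    """Dispersion-threshold identification (I-DT). A window is grown while
    its dispersion (max-min in x plus max-min in y) stays below the
    threshold; windows meeting the minimum duration become fixations.
    With a 1000 Hz recording, 100 samples correspond to roughly 100 ms."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)

    def dispersion(s, e):
        return (x[s:e].max() - x[s:e].min()) + (y[s:e].max() - y[s:e].min())

    fixations, start = [], 0
    while start + min_duration_samples <= n:
        end = start + min_duration_samples
        if dispersion(start, end) <= dispersion_threshold:
            # extend the window while dispersion stays under the threshold
            while end < n and dispersion(start, end + 1) <= dispersion_threshold:
                end += 1
            fixations.append((start, end - 1,
                              x[start:end].mean(), y[start:end].mean()))
            start = end
        else:
            start += 1
    # each entry: (first sample, last sample, centroid x, centroid y)
    return fixations
```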
Conference Paper
Full-text available
Efficient programs are characterized by several parameters, including the user interface design (UID). From the end-user's point of view, the user interface is the representative of the program. Therefore, friendlier software with limited capabilities is often perceived as more usable than comprehensive software; in other words, the UI has a great impact on which software is chosen. There has been a great amount of work on UID guidelines. In this paper, we introduce the fundamental guidelines that a designer should consider in order to increase usability. We consider new aspects including user access, language selection and other technical options in forms. The subjects are studied independently of the application and are applicable to all kinds of environments, such as web-based, desktop and embedded software. Following these guidelines results in software which is friendlier, easier to understand and use, more reusable and less tedious.
Article
Full-text available
Modern window-based user interface systems generate user interface events as natural products of their normal operation. Because such events can be automatically captured and because they indicate user behavior with respect to an application's user interface, they have long been regarded as a potentially fruitful source of information regarding application usage and usability. However, because user interface events are typically voluminous and rich in detail, automated support is generally required to extract information at a level of abstraction that is useful to investigators interested in analyzing application usage or evaluating usability. This survey examines computer-aided techniques used by HCI practitioners and researchers to extract usability-related information from user interface events. A framework is presented to help HCI practitioners and researchers categorize and compare the approaches that have been, or might fruitfully be, applied to this problem. Because many of the techniques in the research literature have not been evaluated in practice, this survey provides a conceptual evaluation to help identify some of the relative merits and drawbacks of the various classes of approaches. Ideas for future research in this area are also presented. This survey addresses the following questions: How might user interface events be used in evaluating usability? How are user interface events related to other forms of usability data? What are the key challenges faced by investigators wishing to exploit this data? What approaches have been brought to bear on this problem and how do they compare to one another? What are some of the important open research questions in this area?
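As a toy illustration of the kind of abstraction such tools perform (collapsing a raw stream of window-system events into per-widget usage statistics), consider the following sketch; the event format and field names are hypothetical.

```python
from collections import defaultdict

# hypothetical raw event stream: (timestamp_ms, widget_id, event_type)
events = [
    (0,    "search_box",    "focus"),
    (1200, "search_box",    "key_press"),
    (2500, "search_button", "click"),
    (4000, "result_link_3", "click"),
    (9000, "search_box",    "focus"),
]

# abstraction step: aggregate low-level events into per-widget statistics
clicks = defaultdict(int)
first_use = {}
for t, widget, kind in events:
    if kind == "click":
        clicks[widget] += 1
    first_use.setdefault(widget, t)

print(dict(clicks))      # how often each widget was activated
print(first_use)         # when each widget was first touched in the session
```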
Article
How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguishes several problems with the measures, including whether they actually measure usability, whether they cover usability broadly, how they are reasoned about, and whether they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires used; to study correlations between usability measures as a means for validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research.
Conference Paper
Web search services are among the most heavily used applications on the World Wide Web. Perhaps because search is used in such a huge variety of tasks and contexts, the user interface must strike a careful balance to meet all user needs. We describe a study that used eye tracking methodologies to explore the effects of changes in the presentation of search results. We found that adding information to the contextual snippet significantly improved performance for informational tasks but degraded performance for navigational tasks. We discuss possible reasons for this difference and the design implications for better presentation of search results.
Article
Usability evaluation is an increasingly important part of the user interface design process. However, usability evaluation can be expensive in terms of time and human resources, and automation is therefore a promising way to augment existing approaches. This article presents an extensive survey of usability evaluation methods, organized according to a new taxonomy that emphasizes the role of automation. The survey analyzes existing techniques, identifies which aspects of usability evaluation automation are likely to be of use in future research, and suggests new ways to expand existing approaches to better support usability evaluation.
Ehmke, C., Wilson, S.: Identifying Web Usability Problems from Eyetracking Data. Paper presented at the British HCI Conference 2007, 3–7 September 2007, University of Lancaster.
Kasprowski, P., Harężlak, K., Stasch, M.: Guidelines for the Eye Tracker Calibration Using Points of Regard. In: Information Technologies in Biomedicine, Vol. 4, Advances in Intelligent Systems and Computing, Vol. 284, pp. 225–236. Springer (2014).
Bahadur, S., Sagar, B.K.: User interface design with visualization techniques. Int. J. Res. Eng. Appl. Sci. 2(6) (2012).