Figure 1 - uploaded by Deepak Akkil
Source publication
Consistent measuring and reporting of gaze data quality is important in research that involves eye trackers. We have developed TraQuMe: a generic system to evaluate the gaze data quality. The quality measurement is fast and the interpretation of the results is aided by graphical output. Numeric data is saved for reporting of aggregate metrics for t...
Context in source publication
Similar publications
Human gaze tracking has gathered much attention due to its capability to detect intuitive attention. Appearance-based methods can work with a single camera in ordinary conditions to track human gaze. An effective way to generate eye appearances is proposed using the Kinect. The head pose information is obtained from the Kinect after a series of cal...
We present a novel calibration method that enables tracking motion of user's eye and gaze by using a single webcam. Human-Computer Interaction (HCI) is significantly influenced by the communication tool used by a human to control the computer. Substituting some functions of the keyboard or the mouse with human gaze driven control opens a new depth...
In this paper a review is presented of the research on eye gaze estimation techniques and applications, that has progressed in diverse ways over the past two decades. Several generic eye gaze use-cases are identified: desktop, TV, headmounted, automotive and handheld devices. Analysis of the literature leads to the identification of several platfor...
Infantile nystagmus (IN) describes a regular, repetitive movement of the eyes. A characteristic feature of each cycle of the IN eye movement waveform is a period in which the eyes are moving at minimal velocity. This so-called “foveation” period has long been considered the basis for the best vision in individuals with IN. In recent years, the tech...
In order to adaptively calibrate the work parameters in the infrared-TV based eye gaze tracking Human-Robot Interaction (HRI) system, a kind of gaze direction sensing model has been provided for detecting the eye gaze identified parameters. We paid more attention to situations where the user's head was in a different position to the interaction int...
Citations
... Examples of procedures, formulas, (pseudo)code or links to software for estimating some measures of data quality and effects thereof may be found in e.g. Crossland and Rubin (2002), Akkil et al. (2014), Dalrymple et al. (2018), Hessels et al. (2017), Orquin and Holmqvist (2018), Kangas et al. (2020), Niehorster et al. (2020a). ...
... However, participant-controlled calibration does not appear to be the standard in most eye-tracking software today. Akkil et al. (2014) reported for the Tobii T60 that calibrating with 9 points results in better accuracy than using 5 or 2 points, with a difference of about 0.2° between the 9-point and the 2-point calibrations. ...
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section 6).
... more interest on measuring and reporting gaze tracker quality (see, for example, Holmqvist et al. [5] and Nyström et al. [7]), but that has not led to much actual measurement in practice. Tools have also been developed to make collecting quality measurements easy (for example, Akkil et al. [1] and Blignaut and Beelders [3]). Lohr et al. [6] used an eye tracking HMD based on the HTC Vive and SMI's eye tracker to study the tracker signal quality. ...
... Akkil et al. [1] listed several reasons to collect eye tracking quality measures (e.g. to compare between calibration methods or to exclude certain participants). In the present work the main goal was to understand the expected gaze accuracy to make informed decisions on gaze-enabled user interfaces, and to study gaze accuracy in different directions in the headset's field-of-view. ...
... We calculated the two measures, accuracy and precision, using the equations by Akkil et al. [1]. The accuracy was calculated as ...
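The excerpt above leaves the equations themselves implicit. As a rough sketch only (not the exact TraQuMe equations, which are in the cited paper): accuracy is typically computed as the mean offset between gaze samples and the known target, and precision as the RMS of successive sample-to-sample distances.

```python
import math

def accuracy_deg(gaze, target):
    # Mean Euclidean offset between gaze samples and the target position.
    # `gaze` is a list of (x, y) samples, `target` an (x, y) point; both are
    # assumed to already be expressed in degrees of visual angle.
    return sum(math.dist(g, target) for g in gaze) / len(gaze)

def precision_rms_deg(gaze):
    # RMS of the distances between successive samples (inter-sample precision).
    d2 = [math.dist(a, b) ** 2 for a, b in zip(gaze, gaze[1:])]
    return math.sqrt(sum(d2) / len(d2))
```

These two functions operate on an already-converted angular signal; pixel-to-degree conversion depends on viewing distance and screen geometry and is deliberately left out.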
... Many researchers report the manufacturer-determined accuracy and precision when publishing eye tracking data [Akkil et al. 2014;Blignaut and Beelders 2012;Dalrymple et al. 2018;Holmqvist et al. 2012], but these metrics are typically calculated under ideal conditions [Tobii Technology 2011]. If manufacturer metrics are used instead of actual measures, the data will be impaired and this can invalidate experimental results and conclusions [Dalrymple et al. 2018;Holmqvist et al. 2011;Nyström et al. 2013]. ...
... Several open-source tools have been developed for remote eye trackers to make accuracy and precision measurements more reliable. These tools are used for validation of data quality [Akkil et al. 2014], assessment of data quality under non-ideal conditions [Clemotte et al. 2014] or with more difficult populations [Dalrymple et al. 2018], and extend the usability of eye tracking to other software, such as MATLAB [Gibaldi et al. 2017]. Tools have also been developed for wearable eye trackers to measure accuracy and precision [MacInnes 2018;Pfeiffer and Latoschik 2008] and to assess the effects of non-ideal stimulus presentation on these metrics [Kowalik 2011]. ...
... Data quality is vital to the comparability and standardization of experimental results when using eye tracking [Akkil et al. 2014;Blignaut and Beelders 2012;Ehinger et al. 2019;Holmqvist et al. 2012;Nyström et al. 2013]. Accuracy and precision measured on-site are almost always worse than what is expected based on manufacturer specifications [Akkil et al. 2014;Clemotte et al. 2014;Feit et al. 2017]. ...
As virtual reality (VR) garners more attention for eye tracking research, knowledge of accuracy and precision of head-mounted display (HMD) based eye trackers becomes increasingly necessary. It is tempting to rely on manufacturer-provided information about the accuracy and precision of an eye tracker. However, unless data is collected under ideal conditions, these values seldom align with on-site metrics. Therefore, best practices dictate that accuracy and precision should be measured and reported for each study. To address this issue, we provide a novel open-source suite for rigorously measuring accuracy and precision for use with a variety of HMD-based eye trackers. This tool is customizable without having to alter the source code, but changes to the code allow for further alteration. The outputs are available in real time and easy to interpret, making eye tracking with VR more approachable for all users.
... For example, Blignaut and Wium [6] ignored the first 1000 ms of data after presenting a target, then used the next 500 ms. Akkil et al. [2] had two parts to their study: a system-controlled routine and a participant-controlled routine. For the system-controlled routine, Akkil et al. ignored the first 500 ms of data after presenting a target and used the next 500 ms of data. ...
... However, some methods do not require calibration targets at all [51]. USCs that do include targets determine when the subject is fixated on a target using one of three approaches [14]: algorithm-controlled [2], operator-controlled [39], or participant-controlled [7,23,39]. ...
We evaluated the data quality of SMI's tethered eye-tracking head-mounted display based on the HTC Vive (ET-HMD) during a random saccade task. We measured spatial accuracy, spatial precision, temporal precision, linearity, and crosstalk. We proposed the use of a non-parametric spatial precision measure based on the median absolute deviation (MAD). Our linearity analysis considered both the slope and adjusted R-squared of a best-fitting line. We were the first to test for a quadratic component to crosstalk. We prepended a calibration task to the random saccade task and evaluated 2 methods to employ this user-supplied calibration. For this, we used a unique binning approach to choose samples to be included in the recalibration analyses. We compared our quality measures between the ET-HMD and our EyeLink 1000 (SR-Research, Ottawa, Ontario, CA). We found that the ET-HMD had significantly better spatial accuracy and linearity fit than our EyeLink, but both devices had similar spatial precision and linearity slope. We also found that, while the EyeLink had no significant crosstalk, the ET-HMD generally exhibited quadratic crosstalk. Fourier analysis revealed that the binocular signal was a low-pass filtered version of the monocular signal. Such filtering resulted in the binocular signal being useless for the study of high-frequency components such as saccade dynamics.
... However, the issue of eye tracking quality quickly becomes a more complex problem to solve than simply improving the hardware. Akkil, Isokoski, et al. developed a system in 2014 named "TraQuMe" [10] which measured the accuracy of eye tracking hardware. The system expects the hardware to have already been calibrated by whatever method the manufacturer suggests [10]. Next, it displays a series of nine points for an individual to look at for a discrete amount of time [10]. The system assumes that the individual is following the directions of the software correctly and is looking directly at each point on the screen as it appears [10]. ...
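The nine-point display described above can be sketched as a 3×3 grid in screen coordinates. The margin value below is an illustrative assumption, not TraQuMe's actual layout or timing:

```python
def nine_point_grid(width, height, margin=0.1):
    # 3x3 grid of evaluation points; `margin` is the fractional inset of the
    # outer points from the screen edges (an illustrative choice).
    coords = [margin, 0.5, 1.0 - margin]
    return [(round(x * width), round(y * height)) for y in coords for x in coords]
```

For a 1920×1080 display this yields nine points with the fifth at the screen centre; a measurement tool would show each point in turn and record the gaze samples while it is visible.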
Literature review into contemporary research into virtual reality keyboards, and methods for optimizing text entry.
... Once the task was completed successfully, both the instructor and worker were asked to fill in a short questionnaire with 5 different questions using the 7-point Likert scale to evaluate the perceived quality of the collaboration. Soon after the completion of the gaze conditions, the gaze data quality was measured using a 9-point quality evaluation process using TraQuMe [Akkil et al., 2014]. TraQuMe shows predefined fixation points on screen, and measures the accuracy and precision of tracking. ...
An emerging use of mobile video telephony is to enable joint activities and collaboration on physical tasks. We conducted a controlled user study to understand if seeing the gaze of a remote instructor is beneficial for mobile video collaboration and if it is valuable that the instructor is aware of the sharing of the gaze. We compared three gaze sharing configurations, (a) GazeVisible, where the instructor is aware and can view her own gaze point that is being shared, (b) GazeInvisible, where the instructor is aware of the shared gaze but cannot view her own gaze point, and (c) GazeUnaware, where the instructor is unaware of the gaze sharing, with a baseline of a shared mouse pointer. Our results suggest that naturally occurring gaze may not be as useful as explicitly produced eye movements. Further, instructors prefer using the mouse rather than gaze for remote gesturing, while the workers also find value in the transferred gaze information.
... The software development kits were the open-source H3DAPI [40] for haptics and Tobii SDK for eye tracking. We also utilized TraQuMe [41], a tool to measure gaze data quality. A keyboard was used to select and record the answer for each task and move to the next task, and headphones were utilized to block out noise. ...
... Before each of the gaze-based conditions, the eye tracker was calibrated using nine-point onscreen calibration. The quality of eye tracking was measured using a nine-point TraQuMe evaluation [41]. We defined an objective criterion for recalibration. ...
... We further used TraQuMe (Akkil et al., 2014) to evaluate the effect of gaze-tracking accuracy on objective measures of collaboration such as task completion times and verbal effort required to complete the collaboration. Accuracy of a gaze-tracking system is defined as the closeness of the measured gaze point to the point that the tracked eye is looking at, and is measured as the average distance between a known stimulus position and the gaze point returned by the tracker. ...
... Before each of the gaze conditions, the gaze tracker was calibrated using the standard 5-point calibration procedure and, soon after the completion of the task, the gaze data quality was measured using a 5-point (centre and four corners) quality evaluation process using TraQuMe (Akkil et al., 2014). TraQuMe shows predefined fixation points on-screen, similar to a gaze tracker calibration procedure and, based on the gaze data, evaluates the accuracy and precision of tracking. ...
... We analysed the relationship between the accuracy of gaze tracking, measured using TraQuMe (Akkil et al., 2014), and the overall task completion times. We computed the average accuracy of tracking for each participant, across the five gaze data validation points, for the two gaze conditions (with and without distraction for the expert). ...
Remote collaboration on physical tasks is an emerging use of video telephony. Recent work suggests that conveying gaze information measured using an eye tracker between collaboration partners could be beneficial in this context. However, studies that compare gaze to other pointing mechanisms, such as a mouse-controlled pointer, in video-based collaboration have not been available. We conducted a controlled user study to compare the two remote gesturing mechanisms (mouse, gaze) to video only (none) in a situation where a remote expert saw video of the desktop of a worker onto which his/her mouse or gaze pointer was projected. We also investigated the effect of distraction of the remote expert on the collaborative process and whether the effect depends on the pointing device. Our results suggest that mouse and gaze pointers lead to faster task performance and improved perception of the collaboration, in comparison to having no pointer at all. The mouse outperformed the gaze when the task required conveying procedural instructions. In addition, using gaze for remote gesturing required increased verbal effort for communicating both referential and procedural messages.
... The distance from a participant's face to the monitor was about 65 cm during eye typing; therefore, one degree of visual angle corresponded to 1 cm on the screen surface (40 pixels), as in Akkil et al. [2014]. ... The head-controlled interface used in this study was evaluated previously in real-time interaction scenarios [Gizatdinova et al. 2012b; Ilves et al. 2014]. The head pointer control was implemented based on continuous face tracking from a video stream, using two tracking methods [Gizatdinova et al. 2012a]. ...
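The 1° ≈ 1 cm ≈ 40 px figure quoted above is a convenient rounding; the exact geometry at a 65 cm viewing distance gives slightly more pixels per degree. A sketch, assuming the ~40 px/cm screen density implied above:

```python
import math

def px_per_degree(distance_cm, px_per_cm):
    # Pixels spanned by one degree of visual angle at the screen centre:
    # the on-screen size of 1 degree is 2 * distance * tan(0.5 deg).
    size_cm = 2.0 * distance_cm * math.tan(math.radians(0.5))
    return size_cm * px_per_cm
```

At 65 cm and 40 px/cm this comes out to roughly 45 px per degree, which the excerpt rounds down to about 40 px (1 cm); the approximation is common because it simplifies reporting accuracy in centimetres.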
With the proliferation of small-screen computing devices, there has been a continuous trend in reducing the size of interface elements. In virtual keyboards, this allows for more characters in a layout and additional function widgets. However, vision-based interfaces (VBIs) have only been investigated with large (e.g., full-screen) keyboards. To understand how key size reduction affects the accuracy and speed performance of text entry VBIs, we evaluated gaze-controlled VBI (g-VBI) and head-controlled VBI (h-VBI) with unconventionally small (0.4°, 0.6°, 0.8° and 1°) keys. Novices (N = 26) yielded significantly more accurate and fast text production with h-VBI than with g-VBI, while the performance of experts (N = 12) for both VBIs was nearly equal when a 0.8--1° key size was used. We discuss advantages and limitations of the VBIs for typing with ultra-small keyboards and emphasize relevant factors for designing such systems.
... The testing and reporting of data quality obtained from eye trackers, rather than relying on data published by manufacturers, has been advocated recently, particularly as the cost of eye tracking systems falls and the situations in which they are used increase. Standardized procedures for doing this have been proposed (Holmqvist et al. 2012; Akkil et al. 2014; Niehorster et al. 2017; Feit et al. 2017; Špakov et al. 2017). These share similar features and use accuracy and precision as the main quality metrics. ...
... The center of the target object in the Mission game and the Ball game was taken as its reference point. The distance from the center of the closest fixation to the reference point was used as the measure of accuracy, and the standard deviation of the gaze points within the whole verification fixation was used as the measure of precision, in accordance with the TraQuMe formulae [Akkil et al. 2014]. ...
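Following the description above, a minimal sketch of the two per-fixation measures. The radial combination of the x and y standard deviations is one common convention and an assumption here; the exact TraQuMe formulae are in the cited paper:

```python
import math
import statistics

def fixation_accuracy(points, ref):
    # Distance from the fixation centroid to the reference point.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return math.dist((cx, cy), ref)

def fixation_precision_sd(points):
    # Standard deviation of the gaze points within the fixation, combining
    # the per-axis population SDs radially: sqrt(sd_x^2 + sd_y^2).
    sd_x = statistics.pstdev(x for x, _ in points)
    sd_y = statistics.pstdev(y for _, y in points)
    return math.hypot(sd_x, sd_y)
```

Applied to each verification fixation in turn, these yield one accuracy and one precision value per target, which can then be averaged across targets as done in the study above.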
To use eye trackers in a school classroom, children need to be able to calibrate their own tracker unsupervised and on repeated occasions. A game designed specifically around the need to maintain gaze in fixed locations was used to collect calibration and verification data. The data quality obtained was compared with a standard calibration procedure and another game, in two studies carried out in three elementary schools. One studied the effect on data quality over repeated occasions, and the other studied the effect of age on data quality. The first showed that accuracy obtained from unsupervised calibration by children was twice as good after six occasions with the game requiring the fixed gaze location as with the standard calibration, and as good as standard calibration by a group of supervised adults. In the second study, age was found to have no effect on performance in the groups of children studied.