Screenshot of the calibration interface with the example gaze-contingent automated calibration activated (see text). Shown is the state during the third stage of this procedure, where the currently active calibration is based on data collected for four previous calibration points. Further shown are the status message generated by the calibration controller class (left of the image) and the arc-shaped AOI delineation (red-outlined areas) used by the example controller to determine which calibration target the participant is currently looking at.

Source publication
Article
Full-text available
Accurate eye tracking is crucial for gaze-dependent research, but calibrating eye trackers in subjects who cannot follow instructions, such as human infants and nonhuman primates, presents a challenge. Traditional calibration methods rely on verbal instructions, which are ineffective for these populations. To address this, researchers often use att...

Contexts in source publication

Context 1
... function takes an instance of a calibration controller class as an optional input argument. Such a class enables arbitrary user logic to run during the calibration, to react to calibration events (such as data being collected or a calibration being computed), to provide status text to be shown on the calibration interface screen (see left side of Fig. 4), and to draw on the participant and operator screens. Specifically, the user should implement the following four method functions in a calibration controller class that will be called by the calibration ...
Context 2
... for a calibration or validation point, completion of calibration computation, or the user pressing the "auto" button in the interface (Fig. 4) indicating that the automated procedure should start or stop. 3. getStatusText(): This function should return the status text to be shown to the operator on the calibration interface, if any. 4. draw(): Any drawing to the participant or operator screens should be performed from this function. One may, for instance, wish to play a video ...
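To make the shape of such a controller concrete, here is a minimal Python sketch of a class exposing the four callbacks described above. Only getStatusText() and draw() are named in these excerpts; the remaining method names, the event representation, and the arguments are illustrative assumptions rather than the toolbox's actual interface.

```python
class ExampleCalibrationController:
    """Illustrative skeleton of a calibration controller class.

    Not the toolbox's actual interface; only getStatusText() and draw()
    are named in the source excerpts, the rest is assumed for illustration.
    """

    def __init__(self):
        self._status = 'waiting for participant to fixate a target'
        self._collected_points = set()

    # Assumed name: called with incoming gaze samples so the controller
    # can maintain its own (possibly filtered) representation of gaze.
    def update_gaze(self, gaze_sample):
        pass

    # Assumed name: called when calibration events occur, such as data
    # having been collected for a point, a calibration having been
    # computed, or the operator pressing the "auto" button.
    def on_calibration_event(self, event):
        if event['type'] == 'point_data_collected':
            self._collected_points.add(event['point_id'])
        elif event['type'] == 'calibration_computed':
            self._status = 'calibration computed, entering next stage'
        elif event['type'] == 'auto_toggled':
            self._status = 'automated procedure started/stopped by operator'

    # Named in the source: return the status text to show the operator
    # on the calibration interface.
    def getStatusText(self):
        return self._status

    # Named in the source: perform any drawing on the participant or
    # operator screens (e.g., an attention-retaining video, AOI outlines).
    def draw(self, participant_screen, operator_screen):
        pass
```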
Context 3
... getStatusText(): This function should return the status text to be shown to the operator on the calibration interface, if any. 4. draw(): Any drawing to the participant or operator screens should be performed from this function. One may, for instance, wish to play a video to the participant to retain attention when no calibration or validation targets are being shown, or annotate the operator screen with an area of interest (AOI) delineation that is used by the controller to determine which calibration target the participant is currently looking at (cf. Figure 4). One can also draw the current internal representation of participant gaze used by the controller, since this may be different from the real-time gaze display that can be toggled in the calibration interface if, for instance, incoming gaze data is filtered by a temporal averaging procedure. ...
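The remark about filtered incoming gaze data can be illustrated with a simple temporal-averaging filter. The sketch below is a generic moving average over the last N samples, assumed here for illustration; it is not the toolbox's implementation.

```python
from collections import deque

class GazeAverager:
    """Moving average over the most recent gaze samples (illustrative)."""

    def __init__(self, window_length=10):
        self._samples = deque(maxlen=window_length)

    def add(self, x, y):
        """Store a new gaze sample (screen coordinates)."""
        self._samples.append((x, y))

    def current(self):
        """Return the averaged gaze position, or None if no samples yet."""
        if not self._samples:
            return None
        xs, ys = zip(*self._samples)
        return sum(xs) / len(xs), sum(ys) / len(ys)
```

A controller could feed every incoming sample to add() and use current() both for its AOI checks and when drawing its internal gaze representation on the operator screen, which is why that representation may differ from the raw real-time gaze display.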
Context 4
... a calibration controller class is registered with the interface, the "auto" button appears in the row of buttons at the bottom of the calibration and validation interface (Fig. ...
Context 5
... (see Fig. 4) is used to determine to which, if any, calibration target the participant is looking closely enough for a calibration data collection to be started for that point. Once calibration data is collected for all three calibration targets, a new calibration is computed, and a third stage with a further three points is entered following the ...
Context 6
... any, calibration target the participant is looking closely enough for a calibration data collection to be started for that point. Once calibration data is collected for all three calibration targets, a new calibration is computed, and a third stage with a further three points is entered following the same procedure. This third stage is shown in Fig. 4. It should be noted that this controller programming interface can also be used for much simpler purposes than to implement a completely automated calibration and validation procedure. For instance, the class methods could be used simply to monitor whether the participant gazes at the screen or within a specific area on the screen, ...
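The staged logic described in these excerpts can be summarized in a short sketch: check whether gaze falls inside the AOI of an uncollected target, request data collection for that target, and compute a new calibration once all targets of the current stage have been collected. The AOI test below uses circles for simplicity (the example controller uses arc-shaped AOIs), and the two callback parameters are hypothetical stand-ins for whatever the interface provides.

```python
import math

def hit_aoi(gaze, aois):
    """Return the id of the AOI containing the gaze position, or None.
    AOIs are modeled here as circles: {id: (center_x, center_y, radius)}."""
    gx, gy = gaze
    for aoi_id, (cx, cy, radius) in aois.items():
        if math.hypot(gx - cx, gy - cy) <= radius:
            return aoi_id
    return None

def step_stage(gaze, aois, collected, request_point_collection, compute_calibration):
    """One iteration of a (hypothetical) stage of the automated procedure.
    Returns True when the stage is complete and a calibration was computed."""
    target = hit_aoi(gaze, aois)
    if target is not None and target not in collected:
        # Participant is looking closely enough at an uncollected target:
        # ask the interface to collect calibration data for that point.
        request_point_collection(target)
        collected.add(target)
    if collected == set(aois):
        # Data collected for all targets in this stage: compute a new
        # calibration; the caller can then advance to the next stage.
        compute_calibration()
        return True
    return False
```

The same machinery can serve the simpler monitoring use mentioned above, for instance calling hit_aoi() only to record whether the participant is gazing within a specific area of the screen.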

Citations

... Many tools exist for locally interfacing with eye trackers (e.g., Cornelissen et al., 2002; Dalmaijer et al., 2014; De Tommaso & Wykowska, 2019; Niehorster & Nyström, 2020; Niehorster et al., 2020a, 2024b; see Niehorster et al., 2025, for an overview). However, until now, there has not been an easy-to-use toolbox for streaming gaze data over the network, allowing straightforward implementation of networked eye-tracking experiments. ...
Article
Full-text available
Studying the behavior of multiple participants using networked eye-tracking setups is of increasing interest to researchers. However, to conduct such studies, researchers have had to create complicated ad hoc solutions for streaming gaze over a local network. Here we present TittaLSL, a toolbox that enables creating networked multi-participant experiments using Tobii eye trackers with minimal programming effort. An evaluation using 600-Hz gaze streams sent between 15 different eye-tracking stations revealed that the end-to-end latency, including the eye tracker’s gaze estimation processes, achieved by TittaLSL was 3.05 ms. This was only 0.10 ms longer than when gaze samples were received from a locally connected eye tracker. We think that these latencies are low enough that TittaLSL is suitable for the majority of networked eye-tracking experiments, even when the gaze needs to be shown in real time.
Article
Full-text available
There is an abundance of commercial and open-source eye trackers available for researchers interested in gaze and eye movements. Which aspects should be considered when choosing an eye tracker? The paper describes what distinguishes different types of eye trackers, their suitability for different types of research questions, and highlights questions researchers should ask themselves to make an informed choice.
Article
Full-text available
Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one’s study.