Figure 2 (Scientific Reports)
A view from the head-mounted eye tracker's scene camera in the test track experiment (top) and the simulator experiment (bottom), showing the participant's view of the constant-radius (circular) path. The red dot indicates the point of regard. The eye tracker calibrates eye images to the head frame of reference; as the participant rolls her head, the world horizontal axis tilts. To estimate horizontal visual flow (i.e., the horizon of the plane of translation), the screen horizontal axis in the simulator and the car's horizontal axis on the test track were defined from the optical markers visible on the windscreen, dashboard, and screen image. (Text and arrows were not visible during the experiment.)


Source publication
Article
Full-text available
It is well-established how visual stimuli and self-motion in laboratory conditions reliably elicit retinal-image-stabilizing compensatory eye movements (CEM). Their organization and roles in natural-task gaze strategies are much less understood: are CEM applied in active sampling of visual information in human locomotion in the wild? If so, how? And...

Context in source publication

Context 1
... variable. Observer rate of rotation was obtained from vehicle telemetry and the simulation software. Gaze direction was measured using a wearable head-mounted eye tracker, and projected to the locomotor frame of reference, i.e. vehicle (test track) or screen (simulator) coordinates, using optical markers in the forward-looking camera image (Fig. 2; for detailed methods see the Methods ...
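The projection step described in this snippet (mapping gaze from the head-mounted scene camera into vehicle or screen coordinates via optical markers) can be illustrated with a planar homography. The following is a minimal sketch, not the authors' pipeline; the marker coordinates, function name, and the use of OpenCV are assumptions for illustration:

```python
import numpy as np
import cv2  # OpenCV: pip install opencv-python

# Hypothetical pixel positions of four optical markers in the
# scene-camera image (head frame), e.g. markers on the windscreen.
markers_camera = np.array([[212, 388], [1068, 402], [1040, 655], [240, 640]],
                          dtype=np.float32)

# The same markers' coordinates in the locomotor frame
# (vehicle/screen coordinates); values are made up for the sketch.
markers_locomotor = np.array([[0.0, 0.0], [1.2, 0.0], [1.2, 0.4], [0.0, 0.4]],
                             dtype=np.float32)

# Homography from camera pixels to locomotor-frame coordinates.
H, _ = cv2.findHomography(markers_camera, markers_locomotor)

def project_gaze(gaze_px):
    """Map a gaze point (scene-camera pixels) into the locomotor frame."""
    pt = np.array([[gaze_px]], dtype=np.float32)  # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(project_gaze((640, 500)))  # point of regard in vehicle/screen units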

Similar publications

Article
Full-text available
This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We...
Article
Full-text available
Previous work shows that observers can use information from optic flow to perceive the direction of self-motion (i.e. heading) and that perceived heading exhibits a bias towards the center of the display (center bias). More recent work shows that the brain is sensitive to serial correlations and the perception of current stimuli can be affected by...
Article
Full-text available
Induced motion is the illusory motion of a target away from the direction of motion of the unattended background. If it is a result of assigning background motion to self-motion and judging target motion relative to the scene as suggested by the flow parsing hypothesis then the effect must be mediated in higher levels of the visual motion pathway w...
Article
Full-text available
It is a well-established finding that more informative optic flow (e.g., faster, denser, or presented over a larger portion of the visual field) yields decreased variability in heading judgements. Current models of heading perception further predict faster processing under such circumstances, which has, however, not been supported empirically so fa...
Article
Full-text available
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visua...

Citations

... This representative gaze data from two subjects captures the general tendency of participants to keep gaze elevation (green line) roughly constant throughout the turn while the azimuthal gaze angle (orange line) is shifted in the direction of the turn. The sawtooth pattern is consistent with the optokinetic nystagmus that would arise if gaze were periodically tracking features as they rotate through the visual field, similar to the behavior previously reported by [46][47][48]. ...
Article
Full-text available
There is a critical need to understand how aging visual systems contribute to age-related increases in vehicle accidents. We investigated the potential contribution of age-related detriments in steering based on optic flow, a source of information known to play a role in navigation control. Seventeen younger adults (mean age: 21.1 years) and thirteen older adults (mean age: 57.3 years) performed a virtual reality steering task. The virtual environment depicted movement at 19 m/s along a winding road. Participants were tasked with maintaining a central lane position while experiencing eight repetitions of each combination of optic flow density (low, medium, high), turn radius (35, 55, 75 m), and turn direction (left, right), presented in random order. All participants cut corners, but did so less on turns with rotational flow from distant landmarks and without proximal optic flow. We found no evidence of an interaction between age and optic flow density, although older adults cut corners more on all turns. An exploratory gaze analysis revealed no age-related differences in gaze behavior. The lack of age-related differences in steering or gaze behavior as a function of optic flow implies that processing of naturalistic optic flow stimuli when steering may be preserved with age.
... Eye movement can be classified into various oculomotor event types such as fixation, saccade, smooth pursuit, and post-saccadic oscillations (PSO). These events represent distinct patterns of eye movement that play a critical role in achieving retinal image stabilization [12]. During fixation, the gaze remains relatively steady, focusing on a specific location shown by the red and pink circles in the perception and attention image of Fig. 1 respectively. ...
Article
Eye movement analysis is critical to studying human brain phenomena such as perception, cognition, and behavior. However, under uncontrolled real-world settings, the recorded gaze coordinates (commonly used to track eye movements) are typically noisy and make it difficult to track change in the state of each phenomenon precisely, primarily because the expected change is usually a slower transient process. This paper proposes an approach, Improved Naive Segmented Linear Regression (INSLR), which approximates the gaze coordinates with a piecewise linear function (PLF) referred to as a hypothesis. INSLR improves the existing NSLR approach by employing a hypotheses clustering algorithm, which redefines the final hypothesis estimation in two steps: (1) at each time-stamp, measure the likelihood of each hypothesis in the candidate list of hypotheses using the least-squares fit score and its distance from the k-means of the hypotheses in the list; (2) filter hypotheses based on a pre-defined threshold. We demonstrate the significance of the INSLR method in addressing the challenges of uncontrolled real-world settings, such as gaze denoising and minimizing gaze prediction errors from cost-effective devices like webcams. Experiment results show INSLR consistently outperforms the baseline NSLR in denoising noisy signals from three eye movement datasets and minimizes the error in gaze prediction from a low-precision device for 71.1% of samples. Furthermore, this improvement in denoising quality is validated by the improved accuracy of the oculomotor event classifier NSLR-HMM and enhanced sensitivity in detecting variations in attention induced by a distractor during an online lecture.
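As a rough illustration of the two-step hypothesis filtering described in this abstract, here is a minimal sketch; the function name, the simple (slope, intercept) hypothesis encoding, the equal weighting of the two score terms, and the use of a single centroid in place of k-means are all assumptions, not the published INSLR implementation:

```python
import numpy as np

def filter_hypotheses(hypotheses, ts, xs, threshold=1.0):
    """Score candidate piecewise-linear hypotheses and keep plausible ones.
    Each hypothesis is (slope, intercept), predicting gaze x(t) = slope*t + b.
    Step (1): score = least-squares fit error over recent samples plus
    distance from the centroid of the candidate set (a stand-in for the
    k-means term in the paper; k = 1 here for brevity).
    Step (2): keep hypotheses whose score is below a pre-defined threshold."""
    params = np.asarray(hypotheses, dtype=float)            # (n, 2)
    ts, xs = np.asarray(ts, float), np.asarray(xs, float)   # (m,), (m,)
    preds = params[:, 0:1] * ts[None, :] + params[:, 1:2]   # (n, m)
    fit_err = np.mean((preds - xs[None, :]) ** 2, axis=1)
    centroid = params.mean(axis=0)
    dist = np.linalg.norm(params - centroid, axis=1)
    score = fit_err + dist
    return [h for h, s in zip(hypotheses, score) if s < threshold]

# Example: two consistent hypotheses survive, the outlier is filtered out.
ts = np.linspace(0.0, 0.4, 5)
xs = 2.0 * ts + 0.1                    # noise-free gaze trace, for brevity
print(filter_hypotheses([(2.0, 0.1), (2.1, 0.05), (1.0, 1.0)], ts, xs))
```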
... The aim points constantly move along with the vehicle: the relative distance from the aim point to the vehicle is kept constant. However, more recent literature suggests that, alternatively, drivers use target waypoints along the predicted future path to steer during routine driving (Lappi et al., 2020;Tuhkanen et al., 2019). In contrast to aim points, target waypoints have fixed locations (thus, they do not move with the vehicle), and drivers shift their gaze to a new waypoint at regular intervals. ...
... More data, preferably from controlled studies, are required to analyze the drivers' use of visual risk metrics in more detail, potentially including other lane departure risk metrics (or combinations) that were not studied here. For example, as mentioned, it has been suggested that drivers may use optic flow information and target waypoints to guide steering (Lappi et al., 2020; Mole et al., 2016, 2019; Okafuji et al., 2018; Tuhkanen et al., 2019; Wilkie & Wann, 2003). Performing future analyses with a larger number of (high-quality) data points available, other risk metrics may turn out to be important, and thus, the models suggested in this paper may need to be revisited. ...
... In addition, recent studies utilizing an eye tracker have revealed how these visuospatial information processes are represented in gaze behavior [11]. For instance, drivers regularly look far ahead to make a visuospatial reference point, which is called a future path, for planning a trajectory in the traveling direction [12][13][14]. When approaching a curve, they make a reference point called a tangent point on the inside of the curve 1-2 s before turning the steering wheel [15][16][17][18]. ...
Article
Full-text available
The aim of this study was to assess the characteristics of visual search behavior in elderly drivers in reverse parking. Fourteen healthy elderly and fourteen expert drivers performed a perpendicular parking task. The parking process was divided into three consecutive phases (Forward, Reverse, and Straighten the wheel) and the visual search behavior was monitored using an eye tracker (Tobii Pro Glasses 2). In addition, driving-related tests and quality of life were evaluated in elderly drivers. As a result, elderly drivers had a shorter time of gaze at the vertex of the parking space both in direct vision and reflected in the driver-side mirror during the Forward and the Reverse phases. In contrast, they had increased gaze time in the passenger-side mirror in the Straighten the wheel phase. Multiple regression analysis revealed that quality of life could be predicted by the total gaze time in the Straighten the wheel phase (β = −0.45), driving attitude (β = 0.62), and driving performance (β = 0.58); the adjusted R2 value was 0.87. These observations could improve our understanding of the characteristics of visual search behavior in parking performance and how this behavior is related to quality of life in elderly drivers.
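The standardized coefficients (β) and adjusted R² reported above correspond to an ordinary multiple regression on z-scored variables. A minimal sketch of that kind of analysis, using simulated stand-in data (the values and variable names are illustrative assumptions, not the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 14  # fourteen elderly drivers in the study

# Simulated predictors: total gaze time (Straighten the wheel phase),
# driving attitude, driving performance.
X = rng.normal(size=(n, 3))
qol = X @ np.array([-0.45, 0.62, 0.58]) + rng.normal(scale=0.3, size=n)

# z-score everything so the fitted coefficients are standardized betas.
zX = (X - X.mean(0)) / X.std(0)
zy = (qol - qol.mean()) / qol.std()

model = sm.OLS(zy, sm.add_constant(zX)).fit()
print(model.params[1:])    # standardized betas, near the generating weights
print(model.rsquared_adj)  # adjusted R^2
```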
... Studies of human steering have demonstrated tight coupling between eye movements and steering behaviors (Land & Lee, 1994; Wilkie & Wann, 2003b; Kountouriotis et al., 2013; Mole, Kountouriotis, Billington, & Wilkie, 2016; Lappi et al., 2020; Tuhkanen, Pekkanen, Wilkie, & Lappi, 2021). Typical gaze behavior during steering is a repeating pattern of eye movements, comprising smooth pursuit tracking of a point on the ground (for approximately 0.5 s) followed by a saccade to a new point at a time headway 1-3 s ahead (Lappi, Pekkanen, & Itkonen, 2013; Lehtonen, Lappi, Kotkanen, & Summala, 2013; Tuhkanen et al., 2019). ...
... Previous experiments have shown that gaze reliably anticipates steering by "picking up" steering points in the direction of locomotion (waypoints with approximately a 1-s lead time and a 2-s time headway), with steering actions coupled to gaze control (Wilkie, Kountouriotis, Merat, & Wann, 2010; Lappi et al., 2020; Tuhkanen et al., 2021; Mole et al., 2021). But do humans really need to bring steering points into "foveal vision" to use them for visual guidance? ...
Article
Full-text available
When steering a trajectory, we direct our gaze to locations (1-3 s ahead) that we want to steer through. How and why are these active gaze patterns conducive to successful steering? While various sources of visual information have been identified that could support steering control, the role of stereotypical gaze patterns during steering remains unclear. Here, experimental and computational approaches are combined to investigate a possible direct connection between gaze and steering: Is there enough information in gaze direction that it could be used in isolation to steer through a series of waypoints? For this, we test steering models using waypoints supplied from human gaze data, as well as waypoints specified by optical features of the environment. Steering-by-gaze was modeled using a "pure-pursuit" controller (computing a circular trajectory toward a steering point), or a simple "proportional" controller (yaw-rate set proportional to the visual angle of the steering point). Both controllers produced successful steering when using human gaze data as the input. The models generalized using the same parameters across two scenarios: (a) steering through a slalom of three visible waypoints located within lane boundaries and (b) steering a series of connected S bends comprising visible waypoints without a visible road. While the trajectories on average broadly matched those generated by humans, the differences in individual trajectories were not captured by the models. We suggest that "looking where we are going" provides useful information and that this can often be adequate to guide steering. Capturing variation in human steering responses, however, likely requires more sophisticated models or additional sensory information.
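The "proportional" controller in this abstract admits a compact statement: yaw rate is set proportional to the visual angle of the steering point. A minimal sketch under a simple unicycle vehicle model; the gain, time step, and function names are assumptions, and this illustrates the control law rather than reproducing the authors' code:

```python
import numpy as np

def proportional_steering_step(pos, heading, steering_point, speed,
                               gain=2.0, dt=0.05):
    """One update of a proportional steering law: yaw rate proportional to
    the visual angle of the steering point (the signed angle between the
    current heading and the bearing to the point)."""
    bearing = np.arctan2(steering_point[1] - pos[1],
                         steering_point[0] - pos[0])
    visual_angle = (bearing - heading + np.pi) % (2 * np.pi) - np.pi
    yaw_rate = gain * visual_angle
    heading = heading + yaw_rate * dt
    pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return pos, heading

# Steer from the origin toward a waypoint 10 m ahead and 3 m to the left.
pos, heading = np.array([0.0, 0.0]), 0.0
for _ in range(30):
    pos, heading = proportional_steering_step(
        pos, heading, np.array([10.0, 3.0]), speed=8.0)
print(pos, heading)
```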
... Instead, eye-gaze coordination is sometimes influenced by the task performed by observers. However, there is substantial evidence suggesting that visual behavior is strongly determined by motion in the visual stimulus [13], [8], [36], [37], rather than top-down factors such as task demand or mental models when observers focus on dynamic scenes [38]. Although the area of gaze patterns decreased as deceleration increased, the difference in gaze patterns did not reach statistical significance. ...
Article
Automated driving essentially transforms the role of human drivers from active decision-makers to passive supervisors. The change in gaze patterns elicited by such a transformation may influence vehicle control during manual takeover, thereby altering visual risk perception. However, the specific impact of automated driving on visual risk perception remains unclear. Therefore, this study aims to evaluate the visual risk perception of automated driving tasks by analyzing gaze pattern dispersion, which reflects the coverage of the visual attention distribution. Ten participants performed manual and automated driving tasks. Each driving task included acceleration, constant-speed, and deceleration phases. The constant speeds were set to 40, 60, and 80 km/h, and the deceleration rates were set to -2.5, -5.0, and -7.5 m/s². A probability density estimation method is proposed to calculate gaze density regions that reflect gaze pattern dispersion. The results indicate that automated driving causes more dispersed gaze patterns in the initial acceleration and lower-speed phases. However, gaze patterns are not significantly dispersed in the highest-speed and deceleration phases during automated driving. Furthermore, gaze patterns are highly constrained by increasing deceleration rates during manual driving. Conversely, automated driving does not constrain gaze patterns across the three deceleration rates. The dispersed gaze patterns during automated driving suggest that visual risk perception may be decreased, potentially resulting in weakened takeover control. Therefore, it is necessary to judiciously design the takeover control of automated driving to ensure driving safety during initial acceleration and lower speeds.
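Gaze density regions of the kind described here are commonly obtained with kernel density estimation over two-dimensional gaze coordinates. A minimal sketch using scipy's gaussian_kde; the paper's exact estimator, bandwidth, and coverage level are not specified here, so those choices are assumptions:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Simulated gaze samples (degrees of visual angle); real data would come
# from the eye tracker.
rng = np.random.default_rng(1)
gaze = rng.normal(loc=0.0, scale=[3.0, 1.5], size=(1000, 2))

kde = gaussian_kde(gaze.T)  # 2-D kernel density estimate, dataset is (d, N)

# Evaluate the density on a grid and find the highest-density region
# covering ~90% of the probability mass: a simple "gaze density region".
xs, ys = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))
density = kde(np.vstack([xs.ravel(), ys.ravel()]))
level = np.quantile(kde(gaze.T), 0.10)  # ~90% of samples lie above this
region_area = (density.reshape(xs.shape) >= level).mean() * 20 * 20  # deg^2
print(region_area)  # larger area = more dispersed gaze pattern
```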
... Eye tracking in naturalistic tasks, outside the confines of typical laboratory behavior, has begun to reveal consistent patterns of gaze behavior that tend to be remarkably regular and repeatable within and across participants doing the same task. Gaze patterns and visual strategies have been studied in numerous tasks, such as making tea (Land et al., 1999), making sandwiches (Hayhoe et al., 2003), foot placement in rugged terrain (Matthis et al., 2018), steering a car (Land, 1992; Land and Lee, 1994; Lappi et al., 2017, 2020), and sports such as batting in cricket (Land and McLeod, 2000; Mann et al., 2013) and squash (Hayhoe et al., 2012). ...
Article
Full-text available
Human performance in natural environments is deeply impressive, and still much beyond current AI. Experimental techniques, such as eye tracking, may be useful to understand the cognitive basis of this performance, and "the human advantage." Driving is a domain where these techniques may be deployed, in tasks ranging from rigorously controlled laboratory settings through high-fidelity simulations to naturalistic experiments in the wild. This research has revealed robust patterns that can be reliably identified and replicated in the field and reproduced in the lab. The purpose of this review is to cover the basics of what is known about these gaze behaviors, and some of their implications for understanding visually guided steering. The phenomena reviewed will be of interest to those working on any domain where visual guidance and control with similar task demands is involved (e.g., many sports). The paper is intended to be accessible to the non-specialist, without oversimplifying the complexity of real-world visual behavior. The literature reviewed will provide an information base useful for researchers working on oculomotor behaviors and physiology in the lab who wish to extend their research into more naturalistic locomotor tasks, or researchers in more applied fields (sports, transportation) who wish to bring aspects of the real-world ecology under experimental scrutiny. As part of a Research Topic on gaze strategies in closed, self-paced tasks, this aspect of the driving task is discussed. In particular, it is emphasized why it is important to carefully separate the visual strategies of driving (quite closed and self-paced) from visual behaviors relevant to other forms of driver behavior (an open-ended menagerie of behaviors). There is always a balance to strike between ecological complexity and experimental control. One way to reconcile these demands is to look for natural, real-world tasks and behavior that are rich enough to be interesting yet sufficiently constrained and well-understood to be replicated in simulators and the lab. This ecological approach to driving as a model behavior, and the way the connection between "lab" and "real world" can be spanned in this research, is of interest to anyone keen to develop more ecologically representative designs for studying human gaze behavior.
... Recent work has focused on model systems like zebrafish, mouse, and healthy as well as impaired human subjects (e.g. Dieterich et al. 2009; Huang and Neuhauss 2008; Naumann et al. 2016; Agarwal et al. 2016; Kretschmer et al. 2017; Lappi et al. 2020). A quantitative behavioral study on owls is missing. ...
Article
Full-text available
Barn owls, like primates, have frontally oriented eyes, which allow for a large binocular overlap. While owls have similar binocular vision and visual-search strategies as primates, it is less clear whether reflexive visual behavior also resembles that of primates or is more similar to that of closer related, but lateral-eyed bird species. Test cases are visual responses driven by wide-field movement: the optokinetic, optocollic, and optomotor responses, mediated by eye, head and body movements, respectively. Adult primates have a so-called symmetric horizontal response: they show the same following behavior, if the stimulus, presented to one eye only, moves in the nasal-to-temporal direction or in the temporal-to-nasal direction. By contrast, lateral-eyed birds have an asymmetric response, responding better to temporal-to-nasal movement than to nasal-to-temporal movement. We show here that the horizontal optocollic response of adult barn owls is less asymmetric than that in the chicken for all velocities tested. Moreover, the response is symmetric for low velocities (< 20 deg/s), and similar to that of primates. The response becomes moderately asymmetric for middle-range velocities (20–40 deg/s). A definitive statement for the complex situation for higher velocities (> 40 deg/s) is not possible.
... Over the past 25 years, studies examining the control of steering have demonstrated that there is tight linkage between the information available from the environment, where drivers look (Land & Lee, 1994), and what kinds of eye movement strategies are used to retrieve that information (Lappi et al., 2020; for review, see Lappi, 2014; Lappi & Mole, 2018). It has also been shown experimentally that instructing people to keep to a particular lane position biases where they look, and having them adopt a specific gaze strategy biases the steering responses produced (Wilkie & Wann, 2003b; Kountouriotis et al., 2013; Mole et al., 2016), indicating that there is a natural coupling between steering and gaze. ...
... While a variety of sources of information have been identified across different environments, the precise relationship between the gaze behaviors exhibited (where you look and when) and the sampling of each source is still not fully understood. It has been shown that in many everyday locomotor contexts, such as driving (Lappi et al., 2013, 2020), bicycling (Vansteenkiste et al., 2014), and walking (Grasso et al., 1998; Matthis et al., 2018), gaze appears to land on and track fixed "waypoints" that may (or may not) be specified by some visible marker. Recent evidence has demonstrated that the gaze behaviors produced when steering along a path defined using only a series of marked waypoints are comparable to those generated when steering along a winding road (Tuhkanen et al., 2019). ...
Article
Full-text available
Skillful behavior requires the anticipation of future action requirements. This is particularly true during high-speed locomotor steering where solely detecting and correcting current error is insufficient to produce smooth and accurate trajectories. Anticipating future steering requirements could be supported using "model-free" prospective signals from the scene ahead or might rely instead on model-based predictive control solutions. The present study generated conditions whereby the future steering trajectory was specified using a breadcrumb trail of waypoints, placed at regular intervals on the ground to create a predictable course (a repeated series of identical "S-bends"). The steering trajectories and gaze behavior relative to each waypoint were recorded for each participant (N = 16). To investigate the extent to which drivers predicted the location of future waypoints, "gaps" were included (20% of waypoints) whereby the next waypoint in the sequence did not appear. Gap location was varied relative to the S-bend inflection point to manipulate the chances that the next waypoint indicated a change in direction of the bend. Gaze patterns did indeed change according to gap location, suggesting that participants were sensitive to the underlying structure of the course and were predicting the future waypoint locations. The results demonstrate that gaze and steering both rely upon anticipation of the future path consistent with some form of internal model.
... While these high-level descriptions can be useful, they stand in contrast with the fine-grained descriptions of the time-course of gaze patterns that are produced during active steering. A key characteristic of active steering gaze seems to be the 'move-dwell-move' sequence, consistent with an oculomotor pattern of tracking points that lie along the future path 9,11,42. Recently, Tuhkanen et al. 11 detailed the sequencing of gaze to a series of 'waypoints': drivers first generate a saccade to look at a point in the world 1-3 s in the future (speed dependent), then the eyes track this point for around 0.4 s, before the next saccade is generated to a new waypoint on the future path 11. ...
... When these move-dwell-move patterns remain within the guiding fixation region for an extended period (i.e. there are few LAFs or other glances beyond the GF region), the vertical gaze angle trace resembles a 'sawtooth' pattern (also called opto-kinetic nystagmus) [42][43][44]. Though the precise parameters for how far ahead the saccade lands and the duration of tracking may vary according to the task, the sawtooth pattern itself appears to be a common gaze behaviour produced during curve driving (9,11,45,46; Fig. 1B). ...
... videos, which are loosely analogous to our automated driving conditions) gaze behaviour is strongly determined by motion in the visual stimuli, rather than top-down factors such as task instructions or different mental models of a scene [54][55][56][57][58]. When driving a vehicle, eye-movement patterns during waypoint pursuit often appear to follow the motion of optic flow 42,46. Given that at least some aspects of gaze behaviour seem to be produced as a result of the visual signals generated by self-motion through the world, there may be few differences between the gaze patterns produced when the visual stimuli are kept identical but drivers are simply no longer in active control of steering. ...
Article
Full-text available
Automated vehicles (AVs) will change the role of the driver, from actively controlling the vehicle to primarily monitoring it. Removing the driver from the control loop could fundamentally change the way that drivers sample visual information from the scene, and in particular, alter the gaze patterns generated when under AV control. To better understand how automation affects gaze patterns this experiment used tightly controlled experimental conditions with a series of transitions from ‘Manual’ control to ‘Automated’ vehicle control. Automated trials were produced using either a ‘Replay’ of the driver’s own steering trajectories or standard ‘Stock’ trials that were identical for all participants. Gaze patterns produced during Manual and Automated conditions were recorded and compared. Overall the gaze patterns across conditions were very similar, but detailed analysis shows that drivers looked slightly further ahead (increased gaze time headway) during Automation with only small differences between Stock and Replay trials. A novel mixture modelling method decomposed gaze patterns into two distinct categories and revealed that the gaze time headway increased during Automation. Further analyses revealed that while there was a general shift to look further ahead (and fixate the bend entry earlier) when under automated vehicle control, similar waypoint-tracking gaze patterns were produced during Manual driving and Automation. The consistency of gaze patterns across driving modes suggests that active-gaze models (developed for manual driving) might be useful for monitoring driver engagement during Automated driving, with deviations in gaze behaviour from what would be expected during manual control potentially indicating that a driver is not closely monitoring the automated system.
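As a sketch of how a gaze measure such as time headway can be decomposed into two categories with a mixture model, the following uses a two-component Gaussian mixture in scikit-learn on simulated data; the paper's own "novel mixture modelling method" is not reproduced here, and all values are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated gaze time headway samples (seconds): a mixture of nearer
# waypoint-tracking fixations and farther look-ahead glances.
rng = np.random.default_rng(2)
headway = np.concatenate([rng.normal(1.5, 0.3, 700),
                          rng.normal(3.0, 0.5, 300)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(headway)
print(gmm.means_.ravel())        # centers of the two gaze categories
print(gmm.predict(headway[:5]))  # category label per gaze sample
```

Comparing the fitted component means between Manual and Automated conditions would then show whether gaze shifts further ahead under automation, in the spirit of the analysis described above.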