Eli Brenner’s research while affiliated with Vrije Universiteit Amsterdam and other places

Publications (433)


Congruency between viewers’ movements and the region of the display being sampled speeds up search through an aperture
Article · February 2025 · 5 Reads · Perception
Danai T. Vorgia · Eli Brenner

Searching for a target amongst distractors is faster when moving an aperture over the search display than when moving the search display beneath an aperture. Is this because when moving the aperture, each item is sampled at a different position, while when moving the search display, all items are sampled at the same position? When moving the aperture, it might therefore be easier to keep track of where one has already searched. Experiment 1 showed that, when the extent of the search display is visible to provide an additional reference frame, participants still found targets faster when moving the aperture. Experiment 2 showed that, even when the aperture and search display constantly moved around the screen together, so that remembering where on the screen one had already searched is less useful, participants still found targets faster when moving the aperture. Experiment 3 showed that inverting the mapping between the movements of the mouse and those of the item it controlled reversed the outcome: with the inverted mapping, search was faster when moving the search display than when moving the aperture. We conclude that the congruency between the user's movements and the spatial region of the search display that they are sampling from is critical for speeding up search.
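
The congruency claim can be made concrete with a small coordinate sketch. The following hypothetical Python snippet is not the authors' code; the function names, the coordinate convention, and the `inverted` flag standing in for Experiment 3's manipulation are assumptions. It shows why moving the aperture shifts the sampled region of the display in the same direction as the hand, whereas moving the display shifts it in the opposite direction.

```python
# Hypothetical sketch of the two control modes; not the authors' implementation.
# The region of the display being sampled (in display coordinates) is
# aperture_pos - display_pos. Moving the aperture shifts that region in the
# same direction as the hand; moving the display shifts it in the opposite
# direction. The inverted flag stands in for Experiment 3's reversed mapping.

def update_positions(mode, mouse_dx, mouse_dy, aperture_pos, display_pos,
                     inverted=False):
    sign = -1 if inverted else 1
    if mode == "move_aperture":
        aperture_pos = (aperture_pos[0] + sign * mouse_dx,
                        aperture_pos[1] + sign * mouse_dy)
    elif mode == "move_display":
        display_pos = (display_pos[0] + sign * mouse_dx,
                       display_pos[1] + sign * mouse_dy)
    return aperture_pos, display_pos


def sampled_region(aperture_pos, display_pos):
    # Position of the aperture relative to the display: this is the part of
    # the search display currently visible through the aperture.
    return (aperture_pos[0] - display_pos[0], aperture_pos[1] - display_pos[1])


# A rightward hand movement (mouse_dx > 0) shifts the sampled region rightward
# when moving the aperture, but leftward when moving the display.
ap, dp = update_positions("move_aperture", 10, 0, (0, 0), (0, 0))
print(sampled_region(ap, dp))   # (10, 0)
ap, dp = update_positions("move_display", 10, 0, (0, 0), (0, 0))
print(sampled_region(ap, dp))   # (-10, 0)
```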


Figure previews: Figure 1, schematic top view of the room with the open door; Figure 2, fraction of time spent looking at each cup while approaching the table; Figure 6, closest cup to where participants were looking while approaching the table.
Gaze when walking to grasp an object in the presence of obstacles
Preprint · File available · November 2024 · 9 Reads

People generally look at positions that are important for their current actions, such as objects they intend to grasp. What if there are obstacles on their path to such objects? We asked participants to walk into a room and pour the contents of a cup placed on a table into another cup elsewhere on the table. There were two small obstacles on the floor between the door and the table. There was a third obstacle on the table near the target cup. Participants mainly looked at the items on the table, but as they approached and entered the room they often looked at the floor near the obstacles, although there was nothing particularly informative to see there. They relied on peripheral vision and memory of where they had seen obstacles to avoid kicking the obstacles. From well before participants crossed the obstacles, they primarily looked at the object that they intended to grasp. We conclude that people look at positions at which they plan to interact with the environment in a specific manner, rather than at items that constrain such interactions.


When knowing the activity is not enough to predict gaze

July 2024 · 17 Reads · 3 Citations · Journal of Vision
Daan Amelink · Eli Brenner · [...] · Roy S Hessels

It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.


Figure previews: Figure 1, experimental design (setup sketch and single-trial timeline); Figure 2, examples of gaze deployment, with saccades classified as predictive or reactive relative to target presentation; Figure 4, reduction in target saccade latency for younger and older adults across conditions; Figure 5, first saccade latencies (negative values indicate eye movements initiated before target presentation); Figure 8, effects of aging and target-location predictability on hand movement latency and movement time.
Age effects on predictive eye movements for action

June 2024 · 39 Reads · Journal of Vision

When interacting with the environment, humans typically shift their gaze to where information that is useful for the upcoming action is to be found. With increasing age, people become slower both in processing sensory information and in performing their movements. One way to compensate for this slowing down could be to rely more on predictive strategies. To examine whether we could find evidence for this, we asked younger (19–29 years) and older (55–72 years) healthy adults to perform a reaching task wherein they hit a visual target that appeared at one of two possible locations. In separate blocks of trials, the target could appear always at the same location (predictable), mainly at one of the locations (biased), or at either location randomly (unpredictable). As one might expect, saccades toward predictable targets had shorter latencies than those toward less predictable targets, irrespective of age. Older adults took longer to initiate saccades toward the target location than younger adults, even when the likely target location could be deduced. Thus, we found no evidence that older adults rely more on predictive gaze. Moreover, both younger and older participants performed more saccades when the target location was less predictable, but again no age-related differences were found. Thus, we found no tendency for older adults to rely more on prediction.


Running together influences where you look

February 2024 · 35 Reads · Perception

To read this article, you have to constantly direct your gaze at the words on the page. If you go for a run instead, your gaze will be less constrained, so many factors could influence where you look. We show that you are likely to spend less time looking at the path just in front of you when running alone than when running with someone else, presumably because the presence of the other runner makes foot placement more critical.


Methods matter: Exploring how expectations influence common actions

February 2024 · 42 Reads · 3 Citations · iScience

Behavior in controlled laboratory studies is not always representative of what people do in daily life. This has prompted a recent shift toward conducting studies in natural settings. We wondered whether expectations raised by how the task is presented should also be considered. To find out, we studied gaze when walking down and up a staircase. Gaze was often directed at steps before stepping on them, but most participants did not look at every step. Importantly, participants fixated more steps and looked around less when explicitly asked to navigate the staircase than when navigating the same staircase after having been asked to walk outside. Presumably, expecting the staircase to be important made participants direct their gaze at more steps, despite the identical requirements when on the staircase. This illustrates that behavior can be influenced by expectations, such as expectations resulting from task instructions, even when studies are conducted in natural settings.


Eye Tracking to Assess the Functional Consequences of Vision Impairment: A Systematic Review

December 2023 · 168 Reads · 1 Citation · Optometry and vision science: official publication of the American Academy of Optometry

BACKGROUND: Eye tracking is a promising method for objectively assessing functional visual capabilities, but its suitability remains unclear when assessing the vision of people with vision impairment. In particular, accurate eye tracking typically relies on a stable and reliable image of the pupil and cornea, which may be compromised by abnormalities associated with vision impairment (e.g., nystagmus, aniridia).
OBJECTIVES: This study aimed to establish the degree to which video-based eye tracking can be used to assess visual function in the presence of vision impairment.
DATA SOURCES: A systematic review was conducted using PubMed, EMBASE, and Web of Science databases, encompassing literature from inception to July 2022.
STUDY ELIGIBILITY CRITERIA, PARTICIPANTS, AND INTERVENTIONS: Studies included in the review used video-based eye tracking, included individuals with vision impairment, and used screen-based tasks unrelated to practiced skills such as reading or driving.
STUDY APPRAISAL AND SYNTHESIS METHODS: The included studies were assessed for quality using the Strengthening the Reporting of Observational Studies in Epidemiology assessment tool. Data extraction and synthesis were performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.
RESULTS: Our analysis revealed that five common tests of visual function were used: (i) fixation stability, (ii) smooth pursuit, (iii) saccades, (iv) free viewing, and (v) visual search. The studies reported considerable success when testing individuals with vision impairment, yielding usable data from 96.5% of participants.
LIMITATIONS: There was an overrepresentation of conditions affecting the optic nerve or macula and an underrepresentation of conditions affecting the anterior segment or peripheral retina.
CONCLUSIONS AND IMPLICATIONS OF KEY FINDINGS: The results offer promise for the use of eye tracking to assess the visual function of a considerable proportion of those with vision impairment. Based on the findings, we outline a framework for how eye tracking can be used to test visual function in the presence of vision impairment.


Citations (57)


... In sum, for tasks for which a task structure is evident or for which formal models exist, and where subtasks have different relevant fixation locations in the world, one can predict gaze behavior. However, not all activities that humans encounter on a daily basis may fit this scheme (see e.g., Ghiani et al., 2024). ...

Reference: The fundamentals of eye tracking part 1: The link between theory and research question

When knowing the activity is not enough to predict gaze
Citing Article · July 2024 · Journal of Vision

... The challenges associated with these conditions underscore the limited research on eye tracking in individuals with such eye conditions. 44 Rather than excluding participants based on potential eye tracking difficulties, it is crucial to inform the scientific field about the specific conditions and severities for which eye tracking was or was not feasible. New forms of eye tracking based on machine learning methods may offer promise as a means of better collecting data in individuals with impairment. ...

Eye Tracking to Assess the Functional Consequences of Vision Impairment: A Systematic Review

Optometry and vision science: official publication of the American Academy of Optometry

... It also reduces the potential damage caused by errors. Similarly, increasing the vigour of responses to errors as the movement progresses (including 'errors' that are artificially imposed by shifting the target) is beneficial, because as the movement progresses, there is less and less time in which to make the necessary adjustment (Brenner et al. 2022, 2023; Oostwoud Wijdenes et al. 2011; Zhang et al. 2018). Consequently, except at the very end of the movement, the vigour of responses to target shifts is inversely related to the remaining time (Brenner et al. 2022). ...

Online updating of obstacle positions when intercepting a virtual target

Experimental Brain Research

... Thus, visual feedback has a longer time delay compared to proprioceptive feedback, primarily due to the extended time required to process signals in the visual cortex and their subsequent integration into higher brain centers (49). In general, sensory feedback delays can vary between 30 and 250 milliseconds (50,51). ...

How the timing of visual feedback influences goal-directed arm movements: delays and presentation rates

Experimental Brain Research

... The experimental findings do not align with the prevailing notion of an overarching, domain-general body schema. The notion that our cognitive system attempts to entertain domain-general and consistent representations or models of the world has recently been challenged based on multiple examples in which perceptual judgements about specific aspects of a given situation are perceived incongruously (Smeets & Brenner, 2023). For example, in the renowned waterfall illusion, participants perceive an object as moving, yet simultaneously fail to perceive a change in its location. ...

The cost of aiming for the best answers: Inconsistent perception

Frontiers in Integrative Neuroscience

... Land et al. (1999) and Pelz and Canosa (2001) concluded that most fixations were on locations in the world immediately relevant to the task, with some fixations related to upcoming actions. In recent years, the study of visually guided task execution has been extended to, e.g., foot control in rough terrain (Matthis et al., 2018), crowd navigation (Hessels et al., 2020b), assembling a camping tent (Sullivan et al., 2021) or stair walking (Ghiani et al., 2023). ...

Where do people look when walking up and down familiar staircases?

Journal of Vision

... Adapted from a simple target tap task [40], participants are required to quickly and accurately tap falling circular stimuli, tapping as many of them as possible, to assess their basic reaction and visual-motor integration abilities. Stimuli are generated at random from 5 spawning points at the top of the screen, with an interval of 500 ms and a falling speed of 400 screen pixels per second. ...
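
The parameters quoted above (five spawn points, a 500 ms spawn interval, a fall speed of 400 pixels per second) are concrete enough to sketch the stimulus logic. Below is a minimal, hypothetical Python/pygame sketch of such a falling-target tap task; the window size, target radius, hit criterion, and frame rate are assumptions, and this is not the cited study's actual implementation.

```python
# Minimal sketch of a falling-target tap task. Only the spawn-point count,
# spawn interval, and fall speed come from the description above; everything
# else (window size, radius, hit rule, frame rate) is an assumption.
import random
import pygame

WIDTH, HEIGHT = 800, 600
SPAWN_POINTS = [WIDTH * (i + 1) // 6 for i in range(5)]  # 5 evenly spaced x-positions
SPAWN_INTERVAL_MS = 500
FALL_SPEED_PX_S = 400
RADIUS = 20

pygame.init()
screen = pygame.display.set_mode((WIDTH, HEIGHT))
clock = pygame.time.Clock()

targets = []            # each target is a mutable [x, y] position
time_since_spawn = 0
hits = misses = 0
running = True

while running:
    dt_ms = clock.tick(60)          # milliseconds elapsed since the last frame
    time_since_spawn += dt_ms

    # Spawn a new target every 500 ms at a randomly chosen spawn point.
    if time_since_spawn >= SPAWN_INTERVAL_MS:
        targets.append([random.choice(SPAWN_POINTS), 0])
        time_since_spawn = 0

    # Move targets down at 400 px/s; count and drop those that leave the screen.
    for t in targets:
        t[1] += FALL_SPEED_PX_S * dt_ms / 1000
    misses += sum(t[1] > HEIGHT for t in targets)
    targets = [t for t in targets if t[1] <= HEIGHT]

    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN:
            # Count a hit if the click (or tap) lands within a target's radius.
            mx, my = event.pos
            for t in targets:
                if (t[0] - mx) ** 2 + (t[1] - my) ** 2 <= RADIUS ** 2:
                    hits += 1
                    targets.remove(t)
                    break

    screen.fill((128, 128, 128))
    for t in targets:
        pygame.draw.circle(screen, (255, 255, 255), (int(t[0]), int(t[1])), RADIUS)
    pygame.display.flip()

pygame.quit()
print(f"hits: {hits}, misses: {misses}")
```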

Tapping on a target: dealing with uncertainty about its position and motion

Experimental Brain Research

... Second, we identified gaze-dependent modulation of leg LLRs, with enhanced responses during tracking of approaching objects with smooth pursuit eye movements (SPEM) compared to central target fixation. This gaze dependency aligns with extensive research on SPEM's role in motion processing (25) and interception movements (15), where SPEM demonstrates performance advantages and is preferentially selected in free-choice paradigms (16). Our previous work established that smooth pursuit eye movements (SPEM) lead to different patterns of anticipatory upper limb control when a seated subject must exert a matching force pulse against a colliding virtual object. ...

Pursuing a target with one's eyes helps judge its velocity
Citing Article · November 2022 · Perception

... Other research has shown that subjects' responses are also sensitive to the magnitude of the background motion; that is, the greater that motion, the larger the changes in the course of the hand's response (Crowe et al., 2022). Brenner and Smeets (2015), meanwhile, assessed whether the number of moving squares on a checkerboard (background) had a proportional influence on the interception of a moving target. ...

How similar are responses to background motion and target displacements?

Experimental Brain Research