Taylor R Hayes

University of California, Davis · UC Davis Center for Mind and Brain

PhD

About

66 Publications · 6,803 Reads · 1,427 Citations
Introduction
I don't check ResearchGate very often. All my papers can be found on my website: https://trhayes.org

Publications (66)
Article
As infants view visual scenes every day, they must shift their eye gaze and visual attention from location to location, sampling information to process and learn. Like adults, infants' gaze when viewing natural scenes (i.e., photographs of everyday scenes) is influenced by the physical features of the scene image and a general bias to look more cen...
Preprint
Full-text available
Humans rapidly process and understand real-world scenes with ease. Our stored semantic knowledge gained from experience is thought to be central to this ability by organizing perceptual information into meaningful units to efficiently guide our attention in scenes. However, the role stored semantic representations play in scene guidance remains dif...
Article
Semantic guidance theories propose that attention in real-world scenes is strongly associated with semantically informative scene regions. That is, we look where there are recognizable and informative objects that help us make sense of our visual environment. In contrast, image guidance theories propose that local differences in semantically uninte...
Article
Full-text available
As we age, we accumulate a wealth of information about the surrounding world. Evidence from visual search suggests that older adults retain intact knowledge for where objects tend to occur in everyday environments (semantic information) that allows them to successfully locate objects in scenes, but may over-rely on semantic guidance. We investigated...
Preprint
Full-text available
As we age, we accumulate a wealth of information about the surrounding world. Evidence from visual search suggests that older adults retain intact knowledge for where objects tend to occur in everyday environments (semantic information) that allows them to successfully locate objects in scenes, but may over-rely on semantic guidance. We investigate...
Article
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informati...
Article
Full-text available
As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention wh...
Article
Meaning mapping uses human raters to estimate different semantic features in scenes, and has been a useful tool in demonstrating the important role semantics play in guiding attention. However, recent work has argued that meaning maps do not capture semantic content, but like deep learning models of scene attention, represent only semantically-neut...
Preprint
As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention wh...
Article
Full-text available
Physically salient objects are thought to attract attention in natural scenes. However, research has shown that meaning maps, which capture the spatial distribution of semantically informative scene features, trump physical saliency in predicting the pattern of eye movements in natural scene viewing. Meaning maps even predict the fastest eye movement...
Article
Full-text available
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target...
Article
Full-text available
Deep saliency models represent the current state-of-the-art for predicting where humans look in real-world scenes. However, for deep saliency models to inform cognitive theories of attention, we need to know how deep saliency models prioritize different scene features to predict where people look. Here we open the black box of three prominent deep...
Article
The visual world contains more information than we can perceive and understand in any given moment. Therefore, we must prioritize important scene regions for detailed analysis. Semantic knowledge gained through experience is theorized to play a central role in determining attentional priority in real-world scenes but is poorly understood. Here, we...
Article
Little is known about the development of higher-level areas of visual cortex during infancy, and even less is known about how the development of visually guided behavior is related to the different levels of the cortical processing hierarchy. As a first step toward filling these gaps, we used representational similarity analysis (RSA) to assess lin...
Article
Full-text available
We extend decades of research on infants' visual processing by examining their eye gaze during viewing of natural scenes. We examined the eye movements of a racially diverse group of 4- to 12-month-old infants (N = 54; 27 boys; 24 infants were White and not Hispanic, 30 infants were African American, Asian American, mixed race and/or Hispanic) as t...
Preprint
Full-text available
Deep saliency models represent the current state-of-the-art for predicting where humans look in real-world scenes. However, for deep saliency models to inform cognitive theories of attention, we need to know how deep saliency models predict where people look. Here we open the black box of deep saliency models using an approach that models the assoc...
Article
Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution...
Preprint
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target...
Preprint
Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution...
Preprint
The visual world contains more information than we can perceive and understand in any given moment. Therefore, we must prioritize important scene regions for detailed analysis. Semantic knowledge gained through experience is theorized to play a central role in determining attentional priority in real-world scenes but is poorly understood. Here we...
Preprint
Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution...
Article
Full-text available
Low-level visual saliency is widely thought to control the allocation of overt attention within natural scenes. However, recent research has shown that the presence of meaningful information at a given location may trump saliency. Here we used representational similarity analysis (RSA) of ERP responses to natural scenes to examine the time course o...
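
For readers unfamiliar with the method, the core RSA computation can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for each measurement space, then rank-correlate the two geometries. This is a minimal illustration of the general technique, not the paper's actual pipeline; the array shapes and names are assumptions.

    # Minimal RSA sketch; data are simulated, shapes are illustrative.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_scenes = 40
    erp_patterns = rng.normal(size=(n_scenes, 64))     # scenes x electrodes
    model_features = rng.normal(size=(n_scenes, 128))  # scenes x model features

    # Representational dissimilarity matrices (RDMs): 1 - Pearson r between
    # every pair of scenes, kept as condensed upper-triangle vectors.
    erp_rdm = pdist(erp_patterns, metric="correlation")
    model_rdm = pdist(model_features, metric="correlation")

    # Second-order similarity: rank-correlate the two representational geometries.
    rho, p = spearmanr(erp_rdm, model_rdm)
    print(f"RSA correlation: rho = {rho:.3f}, p = {p:.3f}")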
Article
Full-text available
The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new ev...
Article
Full-text available
Studies assessing the relationship between high-level meaning and low-level image salience on real-world attention have shown that meaning better predicts eye movements than image salience. However, it is not yet clear whether the advantage of meaning over salience is a general phenomenon or whether it is related to center bias: the tendency for vi...
Article
Full-text available
During real-world scene perception, viewers actively direct their attention through a scene in a controlled sequence of eye fixations. During each fixation, local scene properties are attended, analyzed, and interpreted. What is the relationship between fixated scene properties and neural activity in the visual cortex? Participants inspected photog...
Article
Full-text available
The complexity of the visual world requires that we constrain visual attention and prioritize some regions of the scene for attention over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in atten...
Article
Full-text available
The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new ev...
Preprint
Full-text available
The world is visually complex, yet we can efficiently describe it by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new ev...
Article
Full-text available
The present study examines eye movement behavior in real-world scenes with a large (N = 100) sample. We report baseline measures of eye movement behavior in our sample, including mean fixation duration, saccade amplitude, and initial saccade latency. We also characterize how eye movement behaviors change over the course of a 12 s trial. These basel...
Preprint
The present study examines eye movement behavior in real-world scenes with a large (N = 100) sample. We report baseline measures of eye movement behavior in our sample, including mean fixation duration, saccade amplitude, and initial saccade latency. We also characterize how eye movement behaviors change over the course of a 12-second trial. These ba...
Article
How do we determine where to focus our attention in real-world scenes? Image saliency theory proposes that our attention is ‘pulled’ to scene regions that differ in low-level image features. However, models that formalize image saliency theory often contain significant scene-independent spatial biases. In the present studies, three different viewin...
Preprint
Full-text available
The complexity of the visual world requires that we constrain visual attention and prioritize some regions of the scene for attention over others. The current study investigated whether verbal encoding processes influence how attention is allocated in scenes. Specifically, we asked whether the advantage of scene meaning over image salience in atten...
Article
During scene viewing, is attention primarily guided by low-level image salience or by high-level semantics? Recent evidence suggests that overt attention in scenes is primarily guided by semantic features. Here we examined whether the attentional priority given to meaningful scene regions is involuntary. Participants completed a scene-independent v...
Preprint
During scene viewing, is attention primarily guided by low-level image salience or by high-level semantics? Recent evidence suggests that overt attention in scenes is primarily guided by semantic features. Here we examined whether the attentional priority given to meaningful scene regions is involuntary. Participants completed a scene-independent v...
Article
Full-text available
Perception of a complex visual scene requires that important regions be prioritized and attentionally selected for processing. What is the basis for this selection? Although much research has focused on image salience as an important factor guiding attention, relatively little work has focused on semantic salience. To address this imbalance, we hav...
Article
During real-world scene viewing, humans must prioritize scene regions for attention. What are the roles of low-level image salience and high-level semantic meaning in attentional prioritization? A previous study suggested that when salience and meaning are directly contrasted in scene memorization and preference tasks, attentional priority is assig...
Article
Full-text available
Intelligent analysis of a visual scene requires that important regions be prioritized and attentionally selected for preferential processing. What is the basis for this selection? Here we compared the influence of meaning and image salience on attentional guidance in real-world scenes during two free-viewing scene description tasks. Meaning was rep...
Article
Full-text available
We compared the influence of meaning and of salience on attentional guidance in scene images. Meaning was captured by "meaning maps" representing the spatial distribution of semantic information in scenes. Meaning maps were coded in a format that could be directly compared to maps of image salience generated from image features. We investigated t...
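
The methodological point that makes this comparison possible is the shared format: both a meaning map and a saliency map assign a spatial weight to every image location, so each can be correlated with the same fixation data. A minimal sketch of that kind of comparison follows; the maps are random placeholders, and the squared-correlation measure is one common choice rather than necessarily the paper's exact analysis.

    # Sketch: compare two attention maps against a fixation density map.
    import numpy as np

    def normalize(m):
        # Shift to non-negative values and scale to sum to 1.
        m = m - m.min()
        return m / m.sum()

    # Placeholder maps on a shared grid; in practice these would be a
    # rater-derived meaning map, a feature-derived saliency map, and an
    # empirical fixation density map for the same scene.
    rng = np.random.default_rng(1)
    h, w = 48, 64
    meaning_map = normalize(rng.random((h, w)))
    saliency_map = normalize(rng.random((h, w)))
    fixation_density = normalize(rng.random((h, w)))

    def r_squared(predicted, observed):
        r = np.corrcoef(predicted.ravel(), observed.ravel())[0, 1]
        return r ** 2

    print("meaning  R^2:", r_squared(meaning_map, fixation_density))
    print("saliency R^2:", r_squared(saliency_map, fixation_density))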
Article
Full-text available
The relationship between viewer individual differences and gaze control has been largely neglected in the scene perception literature. Recently we have shown a robust association between individual differences in viewer cognitive capacity and scan patterns during scene viewing. These findings suggest other viewer individual differences may also be...
Data
Determination of sample size. (PDF)
Data
First-order transition model results. Goodness-of-fit (R^2) and leave-one-out cross-validated (R^2_cv) performance for predicting individual differences (ID) in clinical trait measures from scan patterns using first-order transition frequencies instead of the successor representation. A comparison with the SRSA performance in Table 1 shows that successor representati...
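
As a point of reference, a first-order transition model reduces a scanpath to the empirical probabilities of moving from one scene region to the next, discarding longer-range sequential structure. A minimal sketch, with the region coding and all names as illustrative assumptions:

    # Sketch: first-order transition-frequency model of a scanpath.
    import numpy as np

    def transition_matrix(regions, n_regions):
        # Count transitions between consecutive fixated regions, then
        # row-normalize to get first-order transition probabilities.
        counts = np.zeros((n_regions, n_regions))
        for a, b in zip(regions[:-1], regions[1:]):
            counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums,
                         out=np.zeros_like(counts), where=row_sums > 0)

    scanpath = [0, 3, 3, 1, 0, 2, 3, 0]   # region index of each fixation
    P = transition_matrix(scanpath, n_regions=4)
    print(P)   # flattened entries can serve as regression predictors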
Data
Squared correlation between clinical and cognitive measures. (PDF)
Data
Traditional and transition probability models. (PDF)
Data
Squared correlation between clinical and cognitive measures. Clinical trait and cognitive capacity squared correlation matrix. The matrix shows the squared correlation (R2) between each clinical and cognitive measure followed by the number of subjects in parentheses. The abbreviated measures Ospan and Rspan indicate operation span and reading span...
Data
Traditional model results. Goodness-of-fit and leave-one-out cross-validated performance for predicting clinical individual difference measures using traditional eye metrics. The traditional eye metric model included the mean and standard deviation of fixation duration, saccade amplitude, and fixation number as predictors in a multiple regression m...
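
The modeling approach described here, traditional summary statistics entered into a regression and evaluated with leave-one-out cross-validation, can be sketched as follows. The data are simulated and all names are illustrative, not the paper's code.

    # Sketch: leave-one-out cross-validated regression from traditional
    # eye-movement metrics to an individual-difference score.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    # Simulated subjects: 5 traditional eye metrics (e.g., mean and SD of
    # fixation duration, mean and SD of saccade amplitude, fixation number)
    # predicting a trait score with added noise.
    rng = np.random.default_rng(2)
    n_subjects = 40
    X = rng.normal(size=(n_subjects, 5))
    y = X @ rng.normal(size=5) + rng.normal(scale=2.0, size=n_subjects)

    # Hold out each subject in turn, fit on the rest, predict the held-out one.
    preds = np.empty(n_subjects)
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = LinearRegression().fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])

    r_cv = np.corrcoef(preds, y)[0, 1]
    print(f"Leave-one-out cross-validated R^2: {r_cv ** 2:.3f}")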
Preprint
Full-text available
We compared the influences of meaning and salience on attentional guidance in scene images. Meaning was captured by “meaning maps” representing the spatial distribution of semantic information in scenes. Meaning maps were coded in a format that could be directly compared to maps of image salience generated from image features. We investigated the d...
Article
Full-text available
Real-world scenes comprise a blooming, buzzing confusion of information. To manage this complexity, visual attention is guided to important scene regions in real time [1–7]. What factors guide attention within scenes? A leading theoretical position suggests that visual salience based on semantically uninterpreted image features plays the critical ca...
Article
Full-text available
From the earliest recordings of eye movements during active scene viewing to the present day, researchers have commonly reported individual differences in eye movement scan patterns under constant stimulus and task demands. These findings suggest viewer individual differences may be important for understanding gaze control during scene viewing. How...
Article
Full-text available
An important and understudied area in scene perception is the degree to which individual differences influence scene-viewing behavior. The present study investigated this issue by predicting individual differences from regularities in sequential eye movement patterns. Seventy-nine participants completed a free-view memorization task for 40 real-wor...
Article
Full-text available
The ability to adaptively shift between exploration and exploitation control states is critical for optimizing behavioral performance. Converging evidence from primate electrophysiology and computational neural modeling has suggested that this ability may be mediated by the broad norepinephrine projections emanating from the locus coeruleus (LC) [A...
Article
Full-text available
Pupil size is correlated with a wide variety of important cognitive variables and is increasingly being used by cognitive scientists. Pupil data can be recorded inexpensively and non-invasively by many commonly used video-based eye-tracking cameras. Despite the relative ease of data collection and increasing prevalence of pupil data in the cogniti...
Article
In a rich, ever-evolving sensory environment taking note of incongruent events can be a powerful tool. Emerging evidence from animal electrophysiology and computational modeling suggests that norepinephrine may play a key role in how our brains process this type of unexpected uncertainty (Yu & Dayan, 2003). The locus coeruleus (LC) serves as the pr...
Article
Full-text available
Eye movements are an important data source in vision science. However, the vast majority of eye movement studies ignore sequential information in the data and utilize only first-order statistics. Here, we present a novel application of a temporal-difference learning algorithm to construct a scanpath successor representation (SR; P. Dayan, 1993) tha...
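
Dayan's successor representation lends itself to a compact temporal-difference sketch: treat each fixated scene region as a state and learn a matrix M whose row i estimates the discounted expected future occupancy of every region after region i is fixated. The region coding, learning rate, and discount below are illustrative assumptions, not the paper's settings.

    # Sketch: learn a scanpath successor representation (SR) with TD(0).
    import numpy as np

    def scanpath_sr(regions, n_regions, alpha=0.1, gamma=0.9, passes=20):
        # M[i, j] estimates the discounted expected future occupancy of
        # region j given that region i was just fixated (Dayan, 1993).
        M = np.zeros((n_regions, n_regions))
        I = np.eye(n_regions)
        for _ in range(passes):               # sweep the sequence repeatedly
            for s, s_next in zip(regions[:-1], regions[1:]):
                # TD(0) update toward the one-step bootstrapped target
                M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])
        return M

    scanpath = [0, 1, 1, 2, 0, 3, 2, 1]   # region index of each fixation
    M = scanpath_sr(scanpath, n_regions=4)
    print(np.round(M, 2))   # rows are per-region predictive profiles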
Article
Full-text available
Perceptual learning was used as a tool for studying motion perception. The pattern of transfer of learning of luminance- (LM) and contrast-modulated (CM) motion is diagnostic of how their respective processing pathways are integrated. Twenty observers practiced fine direction discrimination with either additive (LM) or multiplicative (CM) mixtures...
Article
Perceptual learning was used as a tool for studying motion perception. The pattern of transfer of learning of luminance- (LM) and contrast-modulated (CM) motion is diagnostic of how their respective processing pathways are integrated. Twenty observers practiced fine direction discrimination with either additive (LM) or multiplicative (CM) mixtures o...
