Jonathan Winawer’s research while affiliated with New York University and other places


Publications (224)


Figure 3. Measured and predicted time courses. A. Time courses of magnetometer (MAG) data (top) and model (bottom) for each condition (traveling-out in yellow, traveling-in in red, and standing in blue) for an occipital sensor (highlighted in white in the topomaps). Topomaps for each condition are plotted at the time point indicated by the vertical dashed line below the topomaps. Note that amplitude values in the model are arbitrary (see Methods). B. Time courses of the EEG signal. Same conventions as in A.
Figure 5. The model is specific to the stimulus-induced traveling waves. A. Using paired, corrected t-tests, we compared the correlation coefficients between matched and crossed comparisons for different values of temporal and spatial frequency. The bold-outlined bars correspond to the temporal/spatial frequency values used in the experimental conditions. Stars indicate significant p-values (*** p < 0.001, ** p < 0.01, * p < 0.05). Error bars represent the standard error of the mean (SEM). B. Paired t-tests on coefficients between matched and crossed comparisons were performed separately for each condition.
Traveling Waves in the Human Visual Cortex: an MEG-EEG Model-Based Approach
  • Preprint
  • File available

October 2024 · 51 Reads · 1 Citation · Jonathan Winawer · [...]
Brain oscillations might be traveling waves propagating in cortex. Studying their propagation within single cortical areas has mostly been restricted to invasive measurements. Their investigation in healthy humans, however, requires non-invasive recordings, such as MEG or EEG. Identifying traveling waves with these techniques is challenging because source summation, volume conduction, and low signal-to-noise ratios make it difficult to localize cortical activity from sensor responses. The difficulty is compounded by the lack of a known ground truth in traveling wave experiments. Rather than source-localizing cortical responses from sensor activity, we developed a two-part model-based neuroimaging approach: (1) the putative neural sources of a propagating oscillation were modeled within primary visual cortex (V1) via retinotopic mapping from functional MRI recordings (encoding model); and (2) the modeled sources were projected onto MEG and EEG sensors to predict the resulting signal using a biophysical head model. We tested our model by comparing its predictions against the MEG-EEG signal obtained when participants viewed visual stimuli designed to elicit either fovea-to-periphery or periphery-to-fovea traveling waves, or standing waves, in V1, in which ground-truth cortical waves could be reasonably assumed. Correlations on within-sensor phase and amplitude relations between predicted and measured data revealed good model performance. Crucially, the model predicted sensor data more accurately when the input to the model was a traveling wave going in the stimulus direction than when the input was a standing wave or a traveling wave in a different direction. Furthermore, model accuracy peaked at the spatial and temporal frequency parameters of the visual stimulation. Together, our model successfully recovers traveling wave properties in cortex when they are induced by traveling waves in stimuli. This provides a sound basis for using MEG-EEG to study endogenous traveling waves in cortex and to test hypotheses about their role in cognition.

Author Summary: Brain oscillations, thought to be crucial for many cognitive processes, might actually be waves that travel across the brain's surface. Understanding these traveling waves is notoriously difficult because current non-invasive methods like magneto- and electro-encephalography (MEG-EEG) face significant technical limitations. To address this challenge, we developed a new approach that combines brain imaging techniques and computational modeling. We focused on the primary visual cortical area (V1) of the brain and created a model that simulates traveling activity across the cortex and predicts how these traveling waves should appear in EEG and MEG recordings. We tested our model by comparing its predictions with brain data collected while participants viewed visual patterns specifically designed to induce traveling waves in the visual system. The results show that our model accurately captures the direction and pattern of the traveling waves, as well as the specific parameters of the visual stimuli. This novel modeling tool offers a promising method for studying endogenous traveling waves and will enable neuroscientists to explore hypotheses about the spatiotemporal organization of brain activity and its role in cognition.
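
As a rough illustration of the two-part approach described in this abstract, the sketch below simulates sinusoidal sources on V1 whose phase varies with eccentricity and projects them to sensors through a leadfield matrix. It is not the authors' pipeline: the source count, sensor count, leadfield, stimulus parameters, and "measured" data are all stand-ins, and the wave parameterization is a simplification of the retinotopy-based phase model described above.

```python
# Minimal sketch (not the authors' code): simulate a traveling wave on V1
# source locations and project it to MEG/EEG sensors through a leadfield
# matrix. All names, sizes, and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_sources = 500          # V1 vertices (hypothetical count)
n_sensors = 64           # MEG/EEG sensors (hypothetical count)
fs = 1000                # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Eccentricity of each V1 source (deg), e.g. from an fMRI retinotopy fit.
ecc = np.sort(rng.uniform(0.5, 10.0, n_sources))

temporal_freq = 5.0      # Hz, matched to the stimulus temporal frequency
spatial_freq = 0.5       # cycles per degree of eccentricity
direction = +1           # +1: fovea-to-periphery, -1: periphery-to-fovea, 0: standing

# Source time courses: a sinusoid whose phase advances with eccentricity
# (traveling wave), or an eccentricity-dependent amplitude profile that
# oscillates in place (standing wave).
phase = 2 * np.pi * spatial_freq * ecc
if direction == 0:
    sources = np.cos(phase)[:, None] * np.sin(2 * np.pi * temporal_freq * t)[None, :]
else:
    sources = np.sin(2 * np.pi * temporal_freq * t[None, :]
                     - direction * phase[:, None])

# Leadfield (gain) matrix from a biophysical head model; random stand-in here.
leadfield = rng.normal(size=(n_sensors, n_sources))

# Predicted sensor time courses (arbitrary units, as noted in the abstract).
predicted = leadfield @ sources

# Model evaluation: correlate predicted and measured sensor time courses.
measured = predicted + rng.normal(scale=5.0, size=predicted.shape)  # stand-in data
r = [np.corrcoef(predicted[i], measured[i])[0, 1] for i in range(n_sensors)]
print(f"median sensor-wise correlation: {np.median(r):.2f}")
```

In the same spirit as the matched/crossed comparisons in Figure 5, one would generate predictions for several values of `direction`, `temporal_freq`, and `spatial_freq` and ask which prediction correlates best with the measured sensor data.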


Temporal dynamics of short-term neural adaptation across human visual cortex

May 2024 · 36 Reads · 3 Citations

Neural responses in visual cortex adapt to prolonged and repeated stimuli. While adaptation occurs across the visual cortex, it is unclear how adaptation patterns and computational mechanisms differ across the visual hierarchy. Here we characterize two signatures of short-term neural adaptation in time-varying intracranial electroencephalography (iEEG) data collected while participants viewed naturalistic image categories varying in duration and repetition interval. Ventral- and lateral-occipitotemporal cortex exhibit slower and more prolonged adaptation to single stimuli and slower recovery from adaptation to repeated stimuli compared to V1-V3. For category-selective electrodes, recovery from adaptation is slower for preferred than non-preferred stimuli. To model neural adaptation, we augment our delayed divisive normalization (DN) model by scaling the input strength as a function of stimulus category, enabling the model to accurately predict neural responses across multiple image categories. The model fits suggest that differences in adaptation patterns arise from slower normalization dynamics in higher visual areas interacting with differences in input strength resulting from category selectivity. Our results reveal systematic differences in temporal adaptation of neural population responses between lower and higher visual brain areas and show that a single computational model of history-dependent normalization dynamics, fit with area-specific parameters, accounts for these differences.
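
The delayed divisive normalization (DN) mechanism referred to in this abstract can be sketched numerically. The code below is a simplified, assumed parameterization rather than the published implementation: the linear drive is divided by a delayed, low-pass-filtered copy of itself, which yields transient-sustained dynamics and adaptation to prolonged stimuli, and a category-dependent scale factor stands in for the input-strength augmentation described above.

```python
# Minimal sketch (assumptions, not the published implementation) of a delayed
# divisive normalization (DN) model: the linear response is divided by a
# delayed, low-pass-filtered copy of itself. Parameter values are illustrative.
import numpy as np

fs = 1000                                  # samples per second
t = np.arange(0, 1.2, 1 / fs)

def gamma_irf(t, tau=0.05):
    """Gamma-shaped impulse response (hypothetical parameterization)."""
    irf = (t / tau) * np.exp(-t / tau)
    return irf / irf.sum()

def dn_response(stimulus, tau=0.05, tau_delay=0.1, n=2.0, sigma=0.1, scale=1.0):
    """Delayed divisive normalization of a 1-D stimulus time course.

    `scale` stands in for the category-dependent input strength used to
    extend the model to multiple image categories.
    """
    drive = scale * np.convolve(stimulus, gamma_irf(t, tau))[: len(t)]
    delay_filter = np.exp(-t / tau_delay)
    delay_filter /= delay_filter.sum()
    pool = np.convolve(drive, delay_filter)[: len(t)]   # delayed normalization pool
    return drive ** n / (sigma ** n + pool ** n)

# A 500-ms stimulus: the response rises transiently, then sags as the delayed
# normalization pool catches up (adaptation to a prolonged stimulus).
stim = (t < 0.5).astype(float)
resp_preferred = dn_response(stim, scale=1.0)      # e.g. preferred category
resp_nonpreferred = dn_response(stim, scale=0.4)   # weaker drive for non-preferred
print(resp_preferred.max(), resp_preferred[400])   # transient peak vs sustained level
```

Slower normalization dynamics in higher areas would correspond, in this toy parameterization, to larger `tau_delay`, which prolongs adaptation and slows recovery between repeated stimuli.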


Figure 1. Main fMRI task design.
Figure 2. Prescan long-term memory training.
Figure 4. Memory has broader spatial tuning than perception in earlier visual cortex.
Feedback scales the spatial tuning of cortical responses during visual memory

April 2024 · 32 Reads · 2 Citations

Perception, working memory, and long-term memory each evoke neural responses in visual cortex. While previous neuroimaging research on the role of visual cortex in memory has largely emphasized similarities between perception and memory, we hypothesized that responses in visual cortex would differ depending on the origins of the inputs. Using fMRI, we quantified spatial tuning in visual cortex while participants (both sexes) viewed, maintained in working memory, or retrieved from long-term memory a peripheral target. In each condition, BOLD responses were spatially tuned and aligned with the target's polar angle in all measured visual field maps, including V1. As expected given the increasing sizes of receptive fields, polar angle tuning during perception increased in width up the visual hierarchy from V1 to V2, V3, hV4, and beyond. In stark contrast, the tuned responses were broad across the visual hierarchy during long-term memory (replicating a prior result) and during working memory. This pattern is consistent with the idea that mnemonic responses in V1 stem from top-down sources, even when the stimulus was recently viewed and is held in working memory. Moreover, in long-term memory, trial-to-trial biases in these tuned responses (clockwise or counterclockwise of the target) predicted matched biases in memory, suggesting that the reinstated cortical responses influence memory-guided behavior. We conclude that feedback widens spatial tuning in visual cortex during memory, with earlier visual maps inheriting broader tuning from later maps, thereby impacting the precision of memory.
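
One simple way to quantify the kind of spatial tuning width compared in this abstract is to fit a circular tuning function to responses binned by polar-angle distance from the target. The sketch below uses synthetic data and a von Mises fit purely for illustration; it is not the study's analysis code, and all parameter values are invented.

```python
# Illustrative sketch (not the study's analysis): estimate polar-angle tuning
# width by fitting a von Mises function to responses binned by angular
# distance from the target. Data here are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def von_mises(theta, amp, kappa, baseline):
    """Circular tuning function; theta is angular distance from the target (rad)."""
    return baseline + amp * np.exp(kappa * (np.cos(theta) - 1.0))

angles = np.linspace(-np.pi, np.pi, 32)          # polar-angle distance bins
rng = np.random.default_rng(1)

# Synthetic "BOLD" profiles: narrow tuning for perception, broad for memory.
perception = von_mises(angles, 1.0, 6.0, 0.0) + rng.normal(0, 0.05, 32)
memory = von_mises(angles, 0.6, 1.5, 0.0) + rng.normal(0, 0.05, 32)

def fwhm_deg(kappa):
    """Full width at half maximum of the von Mises peak, in degrees."""
    # Solve exp(kappa * (cos(theta) - 1)) = 0.5 for theta.
    half_angle = np.arccos(1.0 + np.log(0.5) / kappa)
    return np.degrees(2 * half_angle)

for label, y in [("perception", perception), ("memory", memory)]:
    popt, _ = curve_fit(von_mises, angles, y, p0=[1.0, 2.0, 0.0])
    print(f"{label}: FWHM ≈ {fwhm_deg(popt[1]):.0f} deg")
```

Comparing the fitted widths across conditions (perception vs. working memory vs. long-term memory) and across visual field maps is the kind of contrast summarized in Figure 4 above.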


Conservation of cortical crowding distance across individuals in human V4

April 2024 · 39 Reads · 1 Citation

Visual recognition is limited by both object size (acuity) and spacing. The spacing limit, called "crowding", is the failure to recognize an object in the presence of other objects. Here, we take advantage of individual differences in crowding behavior to investigate its biological basis. Crowding distance, the minimum object spacing needed for recognition, varies 2-fold among healthy adults. We test the conjecture that this variation in psychophysical crowding distance is due to variation in cortical map size. To test this, we made paired measurements of brain and behavior in 50 observers. We used psychophysics to measure crowding distance and calculate λ, the number of letters that fit into each observer's visual field without crowding. In the same observers, we used fMRI to measure the surface area A (mm^2) of retinotopic maps V1, V2, V3, and V4. Across observers, λ is proportional to the surface area of V4 but is uncorrelated with the surface area of V1 to V3. The proportional relationship of λ to area of V4 indicates conservation of cortical crowding distance across individuals: letters can be recognized if they are spaced by at least 1.4 mm on the V4 map, irrespective of map size and psychophysical crowding distance. We conclude that the size of V4 predicts the spacing limit of visual perception.
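
The relationship among the Bouma law, λ, and a conserved cortical crowding distance can be illustrated with a back-of-the-envelope calculation. The values below (the Bouma factor, the eccentricity range, and the assumption that each crowding-free letter occupies an area proportional to the squared crowding distance) are assumptions for illustration, not the paper's analysis.

```python
# Back-of-the-envelope sketch (assumed values, not the paper's analysis) of the
# quantities in this abstract: the Bouma law gives crowding distance in visual
# degrees, lambda counts how many letters fit in the visual field without
# crowding, and a conserved cortical spacing links lambda to V4 surface area.
import numpy as np

bouma_factor = 0.25            # crowding distance ≈ 0.25 × eccentricity (typical value)
ecc_min, ecc_max = 0.5, 10.0   # eccentricity range considered (deg); illustrative

# Treat each crowding-free letter as occupying roughly (bouma_factor * e)^2 deg^2
# and integrate letter density 2*pi*e / (bouma_factor * e)^2 over the visual
# field (a deliberate simplification that ignores radial/tangential asymmetry).
ecc = np.linspace(ecc_min, ecc_max, 10_000)
density = 2 * np.pi * ecc / (bouma_factor * ecc) ** 2
lam = np.sum(density[:-1] * np.diff(ecc))
print(f"lambda (letters without crowding) ≈ {lam:.0f}")

# If crowding distance is a fixed cortical length d on the V4 map, each letter
# needs roughly d^2 of cortex (up to a geometric factor), so A_V4 ~ lambda * d^2.
d_mm = 1.4                     # conserved cortical crowding distance from the abstract
print(f"implied V4 area on the order of {lam * d_mm**2:.0f} mm^2")
```

The point of the sketch is only the scaling: with a fixed cortical spacing, observers with larger V4 maps can accommodate more letters without crowding, which is the conservation result described above.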


Citations (51)


... However, one ubiquitous neural phenomenon that has not yet been investigated is short-term visual adaptation: the adjustment of neural responses over time when exposed to static visual inputs that are either prolonged or directly repeated. Here, we examine whether PredNet exhibits two neural signatures of temporal adaptation previously observed in intracranial recordings of human participants viewing prolonged and repeated stimuli (Brands et al., 2024). We find that, like human visual cortex, PredNet adapts to static images, evidenced by subadditive temporal response summation: a non-linear accumulation of response magnitudes when prolonging stimulus durations, which results from neurally plausible transient-sustained dynamics in the unit activation time courses. ...

Reference:

Deep predictive coding networks partly capture neural signatures of short-term temporal adaptation in human visual cortex
Temporal dynamics of short-term neural adaptation across human visual cortex

... For example, when information enters WM via perception or LTM, there are broad similarities in neural load signals and representational formats (Vo et al., 2022; Sutterer, Foster, Serences, Vogel, & Awh, 2019; Fukuda & Woodman, 2017; Lewis-Peacock & Postle, 2008). Yet, recent work has highlighted how WM representations may transform with long-term learning and retrieval (Miller et al., 2022), and how the spatial tuning of mnemonic representations across retinotopic regions differs from the feedforward perceptual signals for the same feature (Woodry, Curtis, & Winawer, 2024; Favila, Kuhl, & Winawer, 2022). Parsing what neural machinery and algorithms are shared versus distinct between WM and LTM, and under which circumstances, will be critical to understand how the continuum of memory guides adaptive behavior. ...

Feedback scales the spatial tuning of cortical responses during visual memory

... There is debate over the neurobiological processes and brain areas that underlie crowding; some supporting the receptive field view (Freeman & Simoncelli, 2011; Greenwood et al., 2023) and others the cortical distance view (Kurzawski et al., 2024; Pelli, 2008). The sampling bias towards the more peripheral element, as we show here, has implications for neurobiological accounts of crowding. ...

Conservation of cortical crowding distance across individuals in human V4

... In the three decades since, it has been invoked to account for many other properties of neural response (Carandini & Heeger, 2012). In recent years, feature-specific (or narrowband) normalization has been shown to improve accounts of neural responses to natural images (Burg et al., 2021; Coen-Cagli et al., 2012; Coen-Cagli et al., 2015; Fang et al., 2023; Goris et al., 2024), and to improve latent-variable encoding-decoding from natural images (Burge & Geisler, 2014; Jaini & Burge, 2017; Iyer & Burge, 2019). ...

Normalization by orientation-tuned surround in human V1-V3

... Across 3 experiments, acuity for gender judgements showed a horizontal-vertical anisotropy for both upright and inverted faces, where recognition was possible with smaller faces on the horizontal vs. vertical meridian, and a small-but-reliable upper-lower difference, with better acuity in the lower vs. upper field. The presence of both these anisotropies, and the smaller magnitude of the upper-lower difference, matches the patterns of low-level vision [8,11,12,42] and demonstrates that the resolution of face perception varies predictably across the visual field, rather than uniquely or idiosyncratically [3][4][5]. This suggests that spatial properties are preserved throughout the visual hierarchy, including in higher-level face-selective systems. ...

An enhanced Bouma model fits fifty people’s visual crowding
  • Citing Article
  • August 2023

Journal of Vision

... Crowding can occur within the fovea (Lev et al., 2014; Siman-Tov et al., 2019), and even the foveola (Clark et al., 2020), the central 1° of the retinal region where visual acuity peaks, but its effects are more pronounced outside the fovea. The critical spacing required to avoid crowding scales with eccentricity (Bouma, 1970; Kurzawski et al., 2023). Yet, the relation between crowding and the visual periphery is complex. ...

The Bouma law accounts for crowding in 50 observers

Journal of Vision

... Overall, it is important to note that the stimuli we used were presented only in the lower visual field. Although we did not find any differences in the duration discrimination of the tested spatial positions, it is possible that a finer spatial manipulation, including the upper visual hemifield, could reveal behavioral asymmetries [44] as well as differences in neural tuning properties [45,46]. ...

Polar angle asymmetries in visual perception and neural architecture

Trends in Neurosciences

... Importantly, in this study, even when descending targets were presented in the upper visual field (Upper-Descend), the accuracy of the arrival time estimation was not impaired. Consistent with this result, Ezzo, Winawer, Carrasco, and Rokers (2023) observed a similar accuracy for vertical motion detection in 7° of the upper and lower visual fields. These results indicate that the superiority of visual motion processing is not limited to the lower visual field but extends at least to approximately 8.7° in the upper visual field. ...

Asymmetries in the discrimination of motion direction around the visual field

Journal of Vision

... In areas V1-V3, smaller pRFs have been found along the horizontal (vs. vertical) meridian (Silson et al., 2018; Silva et al., 2018), though an upper-lower anisotropy in pRF size is less consistent (Silva et al., 2018; Himmelberg, Tuncok, et al., 2023). V1 also has a larger surface area and a corresponding increase in the number of pRFs along both the horizontal vs. the vertical meridian and in the lower vs. ...

Comparing retinotopic maps of children and adults reveals a late-stage change in how V1 samples the visual field